A massive 18-month investigation reveals why Sam Altman’s former allies—from Elon Musk to Mira Murati—don't trust him. Read the full "enemy list" and the shocking allegations of deception at the heart of OpenAI.
It’s often said that "it’s lonely at the top," but in Sam Altman’s case, it’s getting crowded in the room of people who used to believe in him.
By now, we’ve all heard the rumors, the cryptic tweets, and the sudden boardroom coups. But a massive, 18-month investigation has just laid everything bare. This isn't just a few disgruntled employees or "haters" on the internet—this is a documented list of the very people who built OpenAI, funded it, and sat in the trenches with Sam.
And their verdict? They don’t trust him.
The People Who Knew Him Best
If you want to know who someone really is, don't look at their press releases—look at their co-founders.
Elon Musk, who helped kickstart OpenAI’s original mission as a nonprofit, hasn't held back. He’s gone as far as calling him "Scam Altman," claiming that Sam "lies as easily as he breathes." Elon’s core gripe is a big one: OpenAI was supposed to be a transparent gift to humanity, and under Sam, it morphed into a closed, profit-hungry machine.
But Elon isn’t the only one. Ilya Sutskever, the brilliant Chief Scientist who was once Sam’s closest technical partner, reportedly compiled over 70 pages of evidence. We’re talking memos, Slack logs, and internal messages that allegedly proved Sam was bypassing board oversight and lying about safety protocols. It’s a chilling thought that the man responsible for the technical "brain" of AI didn’t think Sam should be the one with his finger on the button.
A Pattern, Not an Accident
When you look at the names of people who have walked away, a pattern starts to emerge. It’s not just one bad breakup; it’s a trail of broken trust.
Dario Amodei (Anthropic CEO): He left to start a rival company after witnessing what he described as the "mendacious" nature of OpenAI's leadership. He's compared the company's current path to Big Tobacco—knowingly selling a dangerous product anyway.
Mira Murati (Former CTO): For years one of Sam's most loyal lieutenants, she eventually hit a breaking point. Before her exit, she reportedly told insiders she didn't feel comfortable with Sam leading the charge toward AGI, describing his playbook as one of manipulation: "Say whatever you need to get what you want, and if that fails, destroy their credibility."
The Safety Leaders: Jan Leike resigned publicly, stating that the company was "losing its way" and prioritizing "shiny products" over the literal safety of the human race.
It Goes Back to the Beginning
The most telling part of this investigation is that this isn’t a "new" Sam Altman. This behavior apparently stretches back to his first startup, Loopt. Employees twice tried to get him fired for a lack of honesty. Even at Y Combinator, the legendary Paul Graham reportedly told colleagues that Sam had been "lying to us all the time."
Even Microsoft, the company that bailed OpenAI out during the firing drama, is feeling the heat. Senior executives have expressed concerns that they've been misled on deals, with one even warning that Sam might eventually be remembered in the same breath as Sam Bankman-Fried or Bernie Madoff.
What Happens When Trust Runs Out?
Silicon Valley is a place built on "fake it 'til you make it," but there is a line between vision and deception. When your board members—people like Helen Toner and Tasha McCauley—publicly state that you created a "toxic culture of lying," the industry needs to stop and listen.
We are currently building the most powerful technology in human history. If the people closest to the engine are screaming that the driver can’t be trusted, we might want to think twice before we let him accelerate.
As of April 2026, the AI industry is facing a "Regulation Reckoning." With OpenAI's latest models under scrutiny over data ethics and internal governance, Sam Altman looks less like a tech hero and more like a liability to the safety and stability of the AGI race.