There’s a detail that stuck with me from Ronan Farrow’s account in The New Yorker. Back in 2019, Ilya Sutskever — one of the most respected minds at OpenAI — officiated Greg Brockman’s wedding. He was, by all accounts, a friend. The ceremony took place at OpenAI’s own offices, with a robotic hand serving as ring bearer. It’s a charming, slightly absurd image.
But by 2023, the same Sutskever was quietly circulating memos to board members, convinced that Sam Altman should not be the man with his finger on the proverbial button. Something had changed — not about the technology, but about the stakes.
| Field | Details |
|---|---|
| Full Name | Samuel Harris Altman |
| Born | April 22, 1985 · Chicago, Illinois, USA |
| Current Title | CEO, OpenAI |
| Education | Stanford University (dropped out, 2005) |
| Previous Role | President, Y Combinator (2014–2019) |
| Company Founded | OpenAI (co-founded 2015) |
| Notable Products | ChatGPT, GPT-4, Sora, DALL·E |
| Company Valuation (2026) | $852 billion |
| Firing & Reinstatement | Fired Nov 2023 · Reinstated 5 days later |
| Projected Loss (2026) | ~$14 billion |
| Reference | The New Yorker Investigation (Farrow & Marantz) |
When OpenAI’s board fired Altman in November 2023, its public explanation was famously thin: he had not been “consistently candid” in his communications with directors. But the fuller reasoning, pieced together from Farrow and Andrew Marantz’s reporting, seems to trace back to a fundamental unease. As the company inched closer to building a genuinely transformative intelligence — one that could match or exceed human cognition — certain people inside it grew less confident that their CEO shared their sense of caution. Altman, for his part, moved fast. He reportedly assembled a crisis communications team within hours, bringing in investors and allies. Five days later, he was reinstated. Microsoft’s backing and a threatened staff walkout were apparently decisive. It’s still unclear whether any of the original concerns were resolved, or simply overridden.
The company Altman now leads sits at an almost incomprehensible scale. Its valuation reached $852 billion earlier this year, even as it posted an estimated $14 billion in projected losses for 2026 — a figure that tripled earlier estimates. Investors seem to believe, despite everything, that what OpenAI is building justifies the arithmetic. Datacenters spreading across continents, defense contracts now involving classified military operations, products embedded in smartphones and law enforcement systems. The commercial momentum has become something almost untouchable.

What makes all of this genuinely unsettling is the military dimension. OpenAI quietly concluded a deal with the U.S. Department of Defense to use its technology in classified operations. This came not long after Anthropic had raised alarms that AI tools risked enabling mass surveillance and fully autonomous weapons. The Trump administration walked away from Anthropic’s agreement — and OpenAI stepped in. Altman later called the original arrangement “opportunistic and sloppy.” The company then issued a statement claiming its Pentagon deal contained more safeguards than any previous agreement of its kind. Watching this unfold, it’s hard not to wonder what exactly those safeguards look like inside a classified operation that, by definition, cannot be publicly scrutinized.
There’s a political thread here that grows stranger the further you follow it. Greg Brockman, OpenAI’s co-founder and president, was revealed in January as a $25 million donor to a Trump fundraising vehicle. He also participates in an AI-focused “Super PAC” that raised $125 million in 2025 to back candidates favoring a single national AI standard over state-level rules. Within months, Trump signed an executive order limiting state AI regulations in favor of a lighter national standard. Whether the sequence is coincidence or consequence, it is at minimum a structural conflict worth naming plainly.
Sam Altman controls the future of humanity in some meaningful sense — not because he declared it, but because the infrastructure he’s building is already woven into defense, medicine, labor markets, and legal systems. His own researchers have, at various points, described the technology as a potential threat to humanity. Activist historian Rutger Bregman launched a worldwide boycott campaign — QuitGPT — arguing that ChatGPT subscriptions are effectively funding authoritarianism. Meanwhile, Farrow’s piece raises questions about national security entanglements in the Gulf that remain largely unanswered.
It’s possible that Altman is, in the fullest sense, acting in good faith. Ambitious people often are, even when they’re doing things that should concern the rest of us. The deeper problem isn’t really his character — it’s the absence of any meaningful external brake. History offers few examples of self-regulating enterprises restraining themselves in anyone’s interest but their own commercial survival. The rubble of a girls’ school in Minab, bombed amid questions about AI tools used in U.S. strikes on Iran, is a grim reminder of what the stakes actually are when powerful technology meets institutional impunity. Can Sam Altman be trusted? That might be the wrong question entirely. The better one is: why are we relying on trust at all?
