The release of OpenAI’s most recent policy document has an almost theatrical quality. Beautifully typeset and formatted for gloss printing, these thirteen pages are the kind of thing you could see being passed around a posh lounge in Washington with one hand while the other holds an expensive mocktail.
According to reports, lobbyists fresh from new leases around Dupont Circle, wearing suits that still smell like the store, have been putting copies on lawmakers’ desks throughout the capital. Whether deliberate or not, the optics reveal something crucial about the target audience for this document.
| Field | Detail |
|---|---|
| Organization | OpenAI |
| CEO | Sam Altman (reinstated after 2023 board controversy) |
| Founded | 2015, originally as a nonprofit to manage AI risks |
| Current Structure | Hybrid for-profit / nonprofit entity |
| Policy Document Title | “Industrial Policy in the Age of AI” (13-page PDF brief) |
| Key Proposals | Public Wealth Fund, four-day workweek subsidy, higher capital gains taxes, automation levies, expanded healthcare & retirement coverage |
| Flagship Model | GPT-4o and successors; powering ChatGPT (used by hundreds of millions globally) |
| CFO (current) | Sarah Friar (reported tensions with Altman over 2026 IPO readiness) |
| Historical Reference | Altman compared AI transition to the Progressive Era and New Deal |
| Notable Critics | Former OpenAI board members, ex-colleagues, Anthropic leadership (Dario Amodei) |
| Key Concern | Eroding payroll tax base as corporate profits rise and labor income falls |
By most technical standards, the paper, “Industrial Policy in the Age of AI,” is a well-considered work. It recognizes that AI has the potential to cause massive disruption, including the loss of jobs, the overnight transformation of industries, and the collapse of payroll tax revenues as automation replaces the labor that once paid for Social Security and Medicare.
Sam Altman urges societies to take collective action before the window closes, framing the moment as a historical turning point and drawing comparisons to the Progressive Era and the original New Deal. This seems responsible at first glance. It’s almost admirable. However, as you read carefully, you get the impression that something is being carefully avoided.

In and of themselves, the suggestions are not irrational:

- A public wealth fund that would give every American a stake in AI-driven growth.
- Subsidies for a four-day workweek with no pay reductions, justified by increased productivity.
- Higher taxes on corporate income, capital gains, and automation-specific returns to cover the potential loss of payroll tax revenue.
- Healthcare and expanded retirement benefits tied to business obligations rather than to personal employment alone.
These are genuine concepts, many of which have been in circulation in policy circles for years, but they have stalled for reasons that have nothing to do with intellectual plausibility and everything to do with political will. As its contribution to the discussion, OpenAI has essentially put together a greatest-hits compilation of center-left economic ideas.
The discrepancy between what OpenAI suggests and what it pledges of itself is what critics are pointing out, and it is something to take seriously. As you go through the document, you’ll notice the same hedging language keeps recurring: “should,” “could discuss,” “may consider.” The wording is suggestive rather than binding. OpenAI has made no legally enforceable promises regarding its own labor practices, tax obligations, or willingness to give up any of the massive wealth it has the potential to create. It is a document about what governments and societies should do, written by the business that would gain the most if they fail to act first.
This tension is more difficult to ignore in light of history. The New Deal that Altman so casually refers to was not the result of friendly policy discussions. It resulted from years of political strife, widespread unemployment, labor unrest, and a degree of social pressure that made institutions give in.
Concessions made under duress, not out of kindness, shaped the regulatory frameworks built around railroads, energy companies, and telecommunications—the industries that became infrastructure, just as OpenAI now aspires to. This is not really addressed in the document. It makes the assumption that good ideas will become law if intelligent people agree on them. Power has never operated quite like that.
The timing is also worth sitting with. In the same week that this policy brief went viral, a lengthy New Yorker profile raised pointed questions about Sam Altman himself, including his history with the OpenAI board that fired and then reinstated him, his colleagues’ descriptions of him as someone not given to direct candor, and the rival company founded specifically by people who decided they no longer wanted to work with him.
There have also been reports of conflict between Altman and CFO Sarah Friar regarding the company’s preparedness for an impending IPO. To be precise, none of this renders the policy document incorrect. However, it does make avoiding the trust issue more difficult.
Asking the public to reshape its social contract around AI development requires faith in the sincerity of those spearheading that development. Right now, that trust is neither clearly earned nor freely given.
What makes the situation truly complex is that the underlying concern is genuine. AI is highly likely to cause labor market disruptions for which the current safety nets are unprepared. As automation increases, the payroll tax base will almost certainly come under pressure. These risks aren’t invented for rhetorical effect. They are already starting to be felt in certain places, by certain workers, in communities far from the opulent lounges where this document is being passed around. The costs are spreading swiftly.
The benefits, meanwhile, remain highly concentrated. Local opposition to data centers, state-level legislation, and community organizing rarely make the front pages of the publications that cover AI most enthusiastically, but all three demonstrate how this asymmetry is gradually becoming a political reality rather than merely an academic concern.
It’s still unclear whether a document like this reflects true institutional conscience or sophisticated preemption, a means of shaping the policy discourse before someone else does. Both can be true at once. The window Altman refers to appears to be real. And if the AI sector hopes to be treated as infrastructure, with all the public trust and social license that goes along with it, it will eventually need to act like infrastructure rather than just write about it.
