Somewhere in Cambridge, in the kind of fluorescent-lit office that produces papers the rest of the world spends months trying to interpret, researchers at MIT have been grappling with a question most people have already decided they know the answer to: Will you lose your job to AI? Scroll through any feed or walk into any dinner party right now and you will hear the answer given with absolute certainty: either everything is fine or nothing is. According to MIT, the reality is more nuanced, and more unsettling, than either side wants to acknowledge.
The issue is not a lack of data. It's that these forecasts are most likely built on data that measures the wrong thing. For years, researchers have relied on a concept known as "exposure": roughly, the proportion of the tasks that make up your job that an AI model could, in principle, complete. Since its creation in 1998, a U.S. government database has cataloged thousands of these tasks, and it has served as the foundation for almost every significant study of AI displacement. OpenAI used it. Anthropic used it. It sounds rigorous and is frequently cited. It isn't, or at least not in the way people believe.
| Category | Details |
|---|---|
| Institution | Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts |
| Key Researcher Referenced | Alex Imas, Behavioral Economist, University of Chicago |
| Study Focus | Whether AI exposure of job tasks actually predicts displacement, and why current tools fall short |
| MIT Finding (2025) | AI can already replace 11.7% of the U.S. labor market across finance, healthcare, and professional services |
| Government Data Used | U.S. task catalogue first launched in 1998, updated regularly; used by OpenAI and Anthropic for exposure analysis |
| OpenAI Estimate | 19% of U.S. workers could see 50% of their tasks "impacted" by GPT-4-level systems |
| Key Problem Identified | "Exposure alone is a completely meaningless tool for predicting displacement" (Alex Imas) |
| What's Missing | Real-world data on whether automating a task is economically viable for employers, not just technically possible |
| Atlantic Coverage | AI will "put too many people out of work permanently"; a growing consensus among economists and researchers |
| Anthropic CEO's View | Dario Amodei has called AI a "general labor substitute" capable of doing all jobs within five years |
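The exposure metric these studies rely on is, at heart, simple arithmetic: rate each task in a job as AI-capable or not, then take the fraction. A minimal sketch of that calculation (the job names, task ratings, and the 50% threshold here are invented for illustration, not drawn from the government database or any real study):

```python
# Illustrative "exposure" scoring. All data below is hypothetical;
# real studies rate thousands of cataloged tasks per occupation.

def exposure(task_ratings):
    """Fraction of a job's tasks rated as something an AI model could do.

    task_ratings: list of booleans, one per task (True = AI-capable).
    """
    if not task_ratings:
        return 0.0
    return sum(task_ratings) / len(task_ratings)

def highly_exposed(jobs, threshold=0.5):
    """Jobs where at least `threshold` of tasks are rated AI-capable,
    echoing the '50% of tasks impacted' framing in exposure studies."""
    return [name for name, ratings in jobs.items()
            if exposure(ratings) >= threshold]

# Hypothetical occupations, each a list of per-task capability ratings.
jobs = {
    "paralegal": [True, True, True, False],    # 0.75 exposure
    "plumber":   [False, False, True, False],  # 0.25 exposure
}

print(highly_exposed(jobs))  # ['paralegal']
```

Notice what the number cannot say: nothing in this arithmetic captures whether automating those tasks is affordable, whether an employer will actually do it, or what happens to the worker afterward. That gap is exactly the critique that follows.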
In an interview with MIT Technology Review earlier this month, University of Chicago economist Alex Imas stated bluntly that "exposure alone is a completely meaningless tool for predicting displacement." For an economist to say that about the main tool his field has been using is startling. But the reasoning is sound. Knowing that AI can perform a task is not the same as knowing whether a business will actually decide to automate it, whether automating it makes financial sense, or what happens to the business and the employee afterward.
Consider a developer building high-end applications. If AI lets that developer finish in a fraction of the time tasks that previously took three days, the employer isn't necessarily laying anyone off; the developer is producing more for the same wage. Maybe the next hire never happens, maybe the team shrinks in two years, maybe the productivity gains fuel expansion. Exposure analysis cannot predict which of these outcomes will occur. The jobs most in danger may not be the ones people are most worried about. Or they may be. That uncertainty is uncomfortable to sit with.

Watching this discussion unfold in real time, I get the impression that the people making the loudest predictions are the least tolerant of uncertainty. Dario Amodei, the CEO of Anthropic, has said that AI could serve as a general substitute for human labor within five years. Depending on your temperament, that sounds either like a clear-eyed warning or like someone dramatically sawing off the branch he's sitting on. Meanwhile, a societal-impacts researcher at the same company has publicly predicted that a recession and a "breakdown of the early-career ladder" are likely before any of the purported benefits materialize. That kind of institutional candor is rare. It is also unsettling.
Imas and others are advocating for a completely different kind of data: real-world monitoring of what actually happens inside businesses when AI tools are deployed. Not which jobs are technically replicable, but which ones employers are actually automating, what it costs, and what becomes of the people who used to do them. "We need a Manhattan Project for this," one economist stated. The framing is dramatic, but the underlying point is hard to dismiss. There is still a huge gap between what AI can theoretically accomplish and what it will actually change, and how quickly, and there is currently no trustworthy way to measure it. The data that could finally make this legible, MIT suggests, simply doesn't exist in any usable form.
The traditional playbook for technological disruption, retrain the workers, adjust the policies, give it time, rests on the assumption that there will be time to see what's coming. Whether that assumption holds this time is still an open question.
