There is something quietly noteworthy about a company publishing a study that assesses the potential harm its own product may be doing to the workforce. In early March 2026, Anthropic, the San Francisco-based AI company behind Claude, released a labor market report that is meticulous, methodologically serious, and, depending on how you read it, either deeply concerning or reassuring. If you’re being honest, it’s probably both.
The headline finding, that there has been no consistent rise in unemployment among workers in AI-exposed occupations since late 2022, was widely and favorably repeated. As far as it goes, it is accurate. But the deeper you read into the report, the more that headline starts to look like the opening of an incomplete story. A few paragraphs in, Anthropic found that hiring of younger workers in those same exposed occupations has quietly slowed. Not collapsed; slowed. It is worth understanding why that distinction matters so much.
| Item | Detail |
|---|---|
| Publishing Organization | Anthropic (creator of Claude AI) |
| Report Title | Labor Market Impacts of AI: A New Measure and Early Evidence |
| Published | March 5, 2026 |
| New Metric Introduced | “Observed exposure”: combines theoretical LLM capability with real-world usage data |
| Most At-Risk Sectors | Computer & Mathematical; Office & Administrative Support; Business & Financial; Sales |
| Workers Most Exposed | Older, female, more educated, higher-paid workers |
| Young Worker Impact | 6–16% fall in employment in exposed occupations among workers aged 22–25 (Brynjolfsson et al., 2025) |
| Unemployment Finding | No systematic increase in overall unemployment since late 2022 — but hiring of younger workers has slowed |
| AI Capability Gap | Actual AI usage remains a fraction of theoretical capability; 97% of observed tasks are theoretically feasible |
| BLS Projection | Occupations with higher AI exposure projected to grow less through 2034 |
| Industry Context | Korn Ferry (2025): 4 in 10 companies planned to replace roles with AI; back-office (58%) and entry-level (37%) most at risk |
| Reference / Further Reading | Anthropic — Labor Market Impacts of AI (Full Report) |
Based on actual usage data from Claude, the report introduces a framework Anthropic calls “observed exposure”: a metric that tries to measure not only what AI could theoretically do to a given job but what it is actually doing. The gap between those two measures turns out to be substantial. By Anthropic’s own data, AI is operating at only a small fraction of its potential.
Many tasks that a language model could, in theory, complete faster have not yet been handed over, owing to institutional inertia, workflow requirements, regulatory restrictions, or the need for human verification. For example, a pharmacist authorization task was identified as entirely within Claude’s theoretical reach, yet Anthropic noted it had never observed Claude performing it. Capability and deployment remain far apart. That might be comforting. Or it might simply mean the disruption is still loading.
A sizable portion of the professional workforce is employed in the four industries that Anthropic found to be most exposed: computer and mathematical, office and administrative support, business and financial, and sales. These are not jobs on the periphery. These are the jobs that support middle-class stability for millions of households and occupy downtown office buildings. According to separate projections from the Bureau of Labor Statistics, occupations with greater exposure to AI are predicted to grow less than those with less exposure through 2034. This report comes after that projection. When combined, the two data points present a picture that is hard to write off as conjecture.
One specific detail of the findings merits more consideration than it has received. When researchers examined employment trends for workers aged 22 to 25 in exposed occupations, they found a decline of roughly 6 to 16 percent, a range driven mainly by a slowdown in hiring rather than by workers losing jobs they already held.
Anthropic was careful to note that many young workers enter the labor market without a specified occupation, which means some of this group may simply be leaving the workforce or returning to school without ever appearing in unemployment statistics. How many young people are in this situation, and what they are doing instead, remains unknown. That ambiguity is a signal in itself.
It’s difficult to watch this unfold in real time without thinking about the entry-level job pipeline and what happens to it when businesses determine that a language model can handle the lower levels of administrative and analytical work. In the past, entry-level positions have done more than just produce results; they are where people learn about industries, form networks, and hone the professional judgment that will eventually make them valuable at higher levels.
According to a 2025 Korn Ferry report, over four in ten businesses already planned to replace existing positions with AI, with back-office jobs (58 percent) and entry-level jobs (37 percent) topping the list. It is worth asking what the workforce looks like ten years from now if the pipeline narrows at the bottom, and the people who should have spent their twenties honing their skills in these roles simply never did.
In the report, Anthropic acknowledged that the effects of AI on the labor market might be more akin to the internet or the China trade shock than the abrupt shock of a pandemic: gradual, dispersed, easily explained away in any given quarter, and only apparent in hindsight. That’s an open admission from a business that stands to gain financially from its product’s continued use. Depending on how you feel about corporate research, the report’s commitment to periodically reviewing these analyses can be interpreted as either a methodological hedge or a sign of true accountability.
The study’s results are tentative and appropriately hedged, but that is not what makes it unusual. What makes it unusual is that it exists at all: a company building one of the most widely used AI systems in the world took the time to measure that system’s effects on workers and published the findings before the story became unavoidable. Whether that counts as accountability or something closer to careful reputation management is a question worth keeping open.
