Anthropic has accused three Chinese artificial intelligence companies of conducting large-scale distillation attacks to steal its AI technology, claiming the firms created approximately 24,000 fraudulent accounts to conceal their efforts. The AI safety company named DeepSeek, Moonshot, and MiniMax in a recent blog post, alleging these laboratories generated over 16 million exchanges with its Claude AI system in violation of terms of service and regional access restrictions.
According to Anthropic, the distillation attacks represent an “industrial-scale campaign” to illicitly extract Claude’s capabilities and improve competing models. The company explicitly characterized the attacks as a national security issue, highlighting the growing tensions between U.S. and Chinese AI development efforts.
Understanding Distillation Attacks on AI Models
Distillation attacks involve repeatedly running variations of prompts through an AI system and harvesting its responses, which can then be used as training data for a competing "student" model that mimics the target's behavior. While distillation is a legitimate training technique that AI companies use to create smaller, more efficient versions of their own models, it can also be weaponized to copy competitors' intellectual property.
Anthropic explained in its statement that distillation allows competitors to acquire powerful AI capabilities at a fraction of the time and cost required for independent development. The company noted, however, that while the activity violated its terms of service, the legal framework governing such attacks remains unsettled.
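To make the mechanics concrete, here is a minimal, illustrative sketch of the data-harvesting step at the core of distillation. The `teacher_respond` function is a hypothetical stand-in for calls to any large model's API (it is not Anthropic's or anyone else's actual interface); a campaign at the alleged scale would simply issue millions of such queries and train a student model on the collected pairs.

```python
def teacher_respond(prompt: str) -> str:
    """Hypothetical stand-in for a large "teacher" model's API.
    In a real pipeline this would be a network call to the target model."""
    return f"answer::{prompt.lower()}"


def harvest_pairs(base_prompt: str, variations: list[str]) -> list[tuple[str, str]]:
    """Run prompt variations through the teacher and collect
    (prompt, response) pairs -- the dataset a student model trains on."""
    pairs = []
    for suffix in variations:
        prompt = f"{base_prompt} {suffix}"
        pairs.append((prompt, teacher_respond(prompt)))
    return pairs


# Each variation of the base prompt yields one training example.
dataset = harvest_pairs("Explain photosynthesis", ["briefly", "in detail", "for a child"])
print(len(dataset))  # one (prompt, response) pair per variation
```

The same loop, scaled up across many base prompts and accounts, is what turns ordinary API access into the "industrial-scale campaign" Anthropic describes.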
Pattern of Intellectual Property Concerns
This accusation follows similar claims made by OpenAI against DeepSeek in January, when the ChatGPT creator alleged the Chinese company engaged in distillation attacks to steal its technology. However, those allegations were met with widespread criticism and mockery from observers who noted the irony of AI companies claiming intellectual property protection.
Many AI firms, including OpenAI, have argued they possess the right to train their models on copyrighted works without permission or compensation. President Donald Trump echoed this sentiment at an AI event in July 2025, suggesting that requiring payment for training data would handicap American AI development while China faces no such restrictions.
The contradiction creates an awkward position for U.S. AI companies now claiming their own intellectual property deserves protection. Meanwhile, critics point out that these firms have built their businesses on using others’ copyrighted content without authorization or payment.
Chinese Competition and Technology Transfer
Chinese companies have historically shown willingness to ignore international intellectual property treaties and copyright laws, according to industry observers. Reuters previously reported on China’s efforts to rival Western AI chip technology through reverse-engineering and other methods.
Additionally, if Chinese AI laboratories can cheaply recreate advanced language model technology through distillation, they would gain significant advantages over U.S. competitors currently spending tens of billions of dollars on AI infrastructure and research. This dynamic has intensified concerns among American technology companies and policymakers about maintaining competitive positioning.
Calls for Coordinated Response to AI Theft
In response to the alleged attacks, Anthropic called for cooperation between AI companies, government agencies, and international stakeholders to address the growing threat. The company warned that these campaigns are increasing in both intensity and sophistication, requiring urgent action.
However, Anthropic did not specify what legal remedies might be available beyond suspending the fraudulent accounts used in the alleged distillation attacks. The company emphasized that addressing the threat extends beyond any single organization or geographic region.
Anthropic has not yet provided additional details about potential next steps or whether it plans to pursue legal action against the named companies. The effectiveness of any coordinated industry response remains uncertain as the legal and regulatory frameworks governing AI model training continue to evolve.
