AI developer says three Chinese companies generated millions of queries to extract capabilities from its system.
AI company Anthropic said in a recent blog post that three Chinese AI developers used its Claude chatbot to improve their own models through a technique known as “distillation.”
Anthropic said the companies, DeepSeek, Moonshot AI and MiniMax, generated large volumes of automated queries designed to extract knowledge from Claude and use the responses as training data for their own systems.
According to Anthropic, the activity involved more than 16 million interactions with Claude across roughly 24,000 fraudulent user accounts, which the company said violated its terms of service and regional access restrictions.
The company said the campaigns appeared designed to copy high-level capabilities from Claude, including reasoning, coding, and other advanced functions, to accelerate development of competing AI models.
Anthropic described the tactic as a “distillation attack,” a method in which developers train a smaller or less advanced model on the outputs of a more capable system. While distillation can be used legitimately in AI development, the company said the campaigns it detected were intended to replicate Claude’s capabilities without authorization.
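In legitimate distillation, the smaller "student" model is typically trained to match the larger "teacher" model's temperature-softened output distribution rather than hard labels. A minimal sketch of that core loss computation is below; the logit values and function names are illustrative assumptions, not anything from Anthropic's report:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, softened by a temperature > 1."""
    z = logits / temperature
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the student's distribution to the teacher's
    temperature-softened distribution -- the standard distillation objective."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Hypothetical logits for a single training example:
teacher = np.array([4.0, 1.0, 0.5])
student = np.array([3.0, 1.5, 1.0])
loss = distillation_loss(teacher, student)  # positive; shrinks as student matches teacher
```

In an extraction campaign of the kind Anthropic describes, the attacker lacks the teacher's internal logits and instead harvests text outputs at scale to serve as training targets, which is why the activity shows up as massive volumes of automated queries.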
Anthropic said it identified the activity by analyzing request metadata, IP addresses, and patterns in prompt traffic that differed from normal user behavior. The company also said the campaigns involved repeated structured prompts aimed at extracting specific capabilities from the model.
In a statement accompanying the report, Anthropic said the activity represented “industrial-scale distillation attacks on our models.”
Anthropic said it has since implemented additional safeguards, including monitoring for automated query patterns, tightening account verification, and sharing threat data with other AI developers.
The company said the findings highlight the need for stronger protections around deployed AI systems and greater industry coordination to detect large-scale model extraction attempts.