StreamLineCrypto.com

Anthropic Exposes 16M Query Theft Campaign by Chinese AI Labs

February 23, 2026 · Updated: February 24, 2026 · 3 Mins Read
Tony Kim
Feb 23, 2026 18:32

Anthropic reveals that DeepSeek, Moonshot, and MiniMax ran industrial-scale distillation attacks, using 24,000 fake accounts to steal Claude AI’s capabilities.

Anthropic dropped a bombshell Tuesday, publicly naming three Chinese AI labs (DeepSeek, Moonshot, and MiniMax) as perpetrators of coordinated campaigns to steal Claude’s capabilities through more than 16 million fraudulent API exchanges.

The attacks used roughly 24,000 fake accounts to circumvent Anthropic’s regional access restrictions and terms of service. One proxy network alone managed more than 20,000 simultaneous fraudulent accounts, blending distillation traffic with legitimate requests to evade detection.
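How would a provider spot thousands of individually rate-limited accounts acting as one customer? A minimal sketch of one plausible approach is to aggregate request volume by a shared network origin instead of by account; the grouping key and threshold below are illustrative assumptions, not details Anthropic has disclosed:

```python
from collections import defaultdict

def flag_proxy_clusters(requests, cluster_limit=5000):
    """Sum request counts per shared network origin (here a coarse IP
    prefix standing in for an ASN or proxy fingerprint).

    Each fake account can stay under its own rate limit, but the
    combined volume of a 20,000-account proxy network cannot.
    Thresholds and grouping keys are illustrative assumptions only.
    """
    per_cluster = defaultdict(int)
    for account_id, ip_prefix in requests:
        per_cluster[ip_prefix] += 1
    # Flag any origin whose aggregate volume exceeds the cluster limit
    return {prefix for prefix, count in per_cluster.items() if count > cluster_limit}
```

Here each request is a `(account_id, ip_prefix)` pair; a real pipeline would use richer signals, but the aggregation principle is the same.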

The Numbers Tell the Story

MiniMax led the assault with 13 million exchanges targeting agentic coding and tool orchestration. Moonshot followed with 3.4 million exchanges focused on computer-use agent development and reasoning capabilities. DeepSeek’s campaign, though smaller at 150,000 exchanges, employed notably sophisticated techniques, including prompts designed to make Claude articulate its internal reasoning step by step, essentially generating chain-of-thought training data on demand.

Anthropic traced several DeepSeek accounts directly to specific researchers at the lab through request metadata analysis.

Why This Matters Beyond Corporate Espionage

The timing here is not coincidental. OpenAI publicly accused DeepSeek of distilling ChatGPT just three days earlier, on February 21. Google’s Threat Intelligence Group flagged elevated distillation activity on February 16, including a campaign that used over 100,000 prompts to replicate Gemini’s reasoning abilities.

What makes this particularly concerning? Anthropic argues these attacks undermine U.S. export controls on advanced chips. Foreign labs can effectively bypass innovation requirements by extracting capabilities from American models, and they need those restricted chips to run distillation at scale anyway.

“Illicitly distilled models lack critical safeguards,” Anthropic warned, noting that stripped-out protections could enable “offensive cyber operations, disinformation campaigns, and mass surveillance” by authoritarian governments.

The Hydra Problem

Anthropic described the infrastructure enabling these attacks as “hydra cluster” architectures: sprawling networks with no single point of failure. Ban one account, and another spawns instantly. The proxy services reselling Claude access made detection exponentially harder by distributing traffic across Anthropic’s API and third-party cloud platforms simultaneously.
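The “hydra” framing suggests the useful unit of detection is the cluster, not the account. A hedged sketch of that idea: link accounts that share any infrastructure signal (a proxy IP, a device fingerprint, a payment hash) with a union-find structure, so that banning one member exposes the rest. The signal types here are assumptions for illustration:

```python
from collections import defaultdict

class DSU:
    """Minimal union-find with path halving."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def cluster_accounts(accounts):
    """accounts: dict of account_id -> set of infrastructure signals.

    Any shared signal links two accounts into one cluster; returns the
    list of account clusters. A heuristic sketch, not a real pipeline.
    """
    dsu = DSU()
    signal_owner = {}
    for acct, signals in accounts.items():
        dsu.find(acct)  # register the account even if it shares nothing
        for s in signals:
            if s in signal_owner:
                dsu.union(acct, signal_owner[s])
            else:
                signal_owner[s] = acct
    clusters = defaultdict(set)
    for acct in accounts:
        clusters[dsu.find(acct)].add(acct)
    return list(clusters.values())
```

The design choice is that a shared signal is transitive evidence: if A shares an IP with B, and B shares a payment hash with C, all three land in one cluster even though A and C share nothing directly.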

When Anthropic released a new Claude model during MiniMax’s active campaign, the lab pivoted within 24 hours, redirecting nearly half of its traffic to capture the latest capabilities. That kind of operational agility suggests these are not opportunistic attacks but sustained, well-resourced operations.

Anthropic’s Countermeasures

The company outlined several defensive measures: behavioral fingerprinting systems to detect distillation patterns, strengthened verification for educational and startup accounts (the most commonly exploited pathways), and model-level safeguards designed to degrade output quality for illicit extraction without affecting legitimate users.
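Anthropic has not published how its behavioral fingerprinting works, but a toy version of the idea might score an account’s prompt stream on distillation-like signals, such as how often prompts explicitly demand step-by-step reasoning and how templated the prompts look. Both signals and weights below are illustrative assumptions:

```python
def distillation_score(prompts):
    """Score how distillation-like an account's prompt stream looks (0 to 1).

    Two illustrative signals: (1) the fraction of prompts explicitly
    requesting chain-of-thought reasoning, and (2) 'templatedness',
    i.e. low vocabulary diversity across many prompts. Heuristic only.
    """
    if not prompts:
        return 0.0
    cot_markers = ("step by step", "show your reasoning", "explain your thinking")
    cot_frac = sum(
        any(m in p.lower() for m in cot_markers) for p in prompts
    ) / len(prompts)
    # Templated prompt streams reuse the same words over and over
    vocab = {w for p in prompts for w in p.lower().split()}
    total_words = sum(len(p.split()) for p in prompts)
    diversity = len(vocab) / max(total_words, 1)
    templatedness = 1.0 - diversity
    return 0.5 * cot_frac + 0.5 * templatedness
```

A stream of near-identical chain-of-thought requests scores high; a varied stream of ordinary queries scores near zero. A production system would use far richer features, but the scoring shape is the same.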

Anthropic is sharing technical indicators with other AI labs, cloud providers, and government authorities. The message is clear: this requires industry-wide coordination.

For investors tracking AI infrastructure plays, this escalation adds another variable to the competitive landscape. Labs that can’t defend their models risk watching their R&D investments walk out the door, 16 million queries at a time.

Image source: Shutterstock

