Anthropic Accuses Rival AI Labs of Using 24,000 Fake Accounts to Access Claude

Competition between AI labs is no longer limited to model benchmarks. It is increasingly about access — access to models, training signals, and user interactions.

Anthropic says several rival AI companies used more than 24,000 fake accounts to access its Claude models, according to reporting from VentureBeat. The company claims the accounts were used to gather outputs and insights that could potentially help competitors improve their own systems.

The allegation highlights a growing tension in the AI industry: as models become more valuable, companies are increasingly protective of how their systems are used — and by whom.


What Anthropic Is Claiming

Anthropic says it identified a large network of suspicious accounts that were interacting with its AI platform.

According to the company, these accounts were linked to organizations connected with several Chinese AI labs, including:

  • DeepSeek
  • Moonshot AI
  • MiniMax

Anthropic claims the accounts were created to query its Claude models at scale, potentially collecting responses and system behavior patterns.

The company says the activity violated its terms of service.

Anthropic has since blocked the accounts and restricted access, though it did not disclose how long the activity had been occurring.


Why Competitors Might Do This

Large language models generate valuable information through their responses.

By systematically querying a competitor’s model, a company could potentially:

  • Analyze reasoning patterns
  • Study output structure
  • Benchmark capabilities
  • Improve prompt engineering
  • Train smaller models to imitate behavior

This process is sometimes described as model distillation, where outputs from one AI system help train another.

While distillation can be legitimate when models are licensed or open-source, scraping responses from restricted systems raises legal and ethical concerns.
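To make the distillation pipeline concrete, here is a minimal, purely illustrative sketch of how collected prompt-response pairs could be reshaped into a supervised fine-tuning dataset for a smaller "student" model. The data and record format below are stand-ins chosen for illustration, not any lab's actual scraping output or training schema.

```python
import json

# Illustrative stand-in data: prompt-response pairs as a scraper
# might collect them from a larger "teacher" model.
collected_pairs = [
    {"prompt": "Explain recursion briefly.",
     "response": "Recursion is when a function calls itself on a smaller input."},
    {"prompt": "What is a hash table?",
     "response": "A hash table maps keys to values using a hash function."},
]

def to_finetune_records(pairs):
    """Convert raw prompt/response pairs into chat-style training records,
    the common shape used for supervised fine-tuning of a student model."""
    records = []
    for pair in pairs:
        records.append({
            "messages": [
                {"role": "user", "content": pair["prompt"]},
                {"role": "assistant", "content": pair["response"]},
            ]
        })
    return records

# Write the dataset as JSONL, one training example per line.
records = to_finetune_records(collected_pairs)
dataset_jsonl = "\n".join(json.dumps(r) for r in records)
print(dataset_jsonl)
```

The point of the sketch is that each teacher response becomes a supervised target, which is why large-scale querying of a restricted model can translate directly into training value.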


The Growing Issue of Model Scraping

Anthropic’s claims point to a broader issue emerging across the AI industry.

Foundation models are expensive to build. Training advanced systems requires:

  • Massive datasets
  • GPU infrastructure
  • Long development cycles

Because of these costs, companies are increasingly sensitive to competitors extracting value from their models without authorization.

Several AI labs have already taken steps to prevent large-scale scraping by:

  • Monitoring unusual usage patterns
  • Limiting automated queries
  • Blocking suspicious accounts
  • Tightening API access controls
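One common building block behind the "limiting automated queries" step is a token-bucket rate limiter. The sketch below is a generic illustration of that technique, not any provider's actual implementation; the rate and capacity values are arbitrary.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: allow roughly `rate` requests per
    second, with short bursts up to `capacity`. A generic sketch of one
    common throttling approach, not a specific provider's code."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A burst of 15 back-to-back requests against a bucket of capacity 10:
bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
print(results.count(True))  # requests beyond the burst capacity are rejected
```

The design choice worth noting is that a token bucket tolerates normal bursts while making sustained high-volume scraping impossible without tripping the limit.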

As the AI market grows, protecting model outputs is becoming part of competitive strategy.


The Geopolitical Layer

The companies mentioned in the allegations — DeepSeek, Moonshot AI, and MiniMax — are part of China’s rapidly expanding AI ecosystem.

Chinese labs have been developing increasingly sophisticated models that compete with Western systems such as:

  • Claude
  • GPT-series models
  • Gemini

The global AI race now involves not just startups and tech giants, but also national technology strategies.

That context makes accusations like this more sensitive.

Anthropic did not publicly state whether it believes the activity was coordinated or simply conducted by individuals affiliated with the organizations.

None of the companies named in the report have publicly confirmed the allegations.


Why This Matters for the AI Industry

The situation highlights a structural challenge in AI development.

Unlike traditional software, large language models reveal a great deal about how they work through their outputs.

Every response provides clues about:

  • Training style
  • Reasoning patterns
  • Model capabilities

That makes it difficult for companies to fully protect their intellectual property.

At the same time, AI research culture has historically valued openness and experimentation.

The tension between open experimentation and commercial protection is becoming more visible as AI becomes a multi-billion-dollar industry.


The Security Challenge for AI Platforms

Anthropic’s experience also reflects a technical challenge.

AI systems designed for open access — via APIs or chat interfaces — are vulnerable to automated querying.

Detecting coordinated usage patterns requires monitoring signals such as:

  • Abnormal request volumes
  • Similar prompt patterns
  • Repeated account creation
  • Shared infrastructure origins
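As a toy illustration of two of those signals, abnormal request volume and many accounts sharing one network origin, the sketch below flags suspicious accounts from a hypothetical request log. The log entries, thresholds, and account names are invented for the example.

```python
from collections import defaultdict

# Hypothetical request log: (account_id, source_ip, request_count).
request_log = [
    ("acct-1", "203.0.113.5", 12),
    ("acct-2", "203.0.113.5", 15),
    ("acct-3", "203.0.113.5", 11),
    ("acct-4", "198.51.100.9", 14),
    ("acct-5", "198.51.100.7", 9000),   # abnormal request volume
]

VOLUME_THRESHOLD = 1000   # assumed cutoff for "abnormal request volume"
SHARED_IP_THRESHOLD = 3   # assumed cutoff for "shared infrastructure"

accounts_by_ip = defaultdict(set)
flagged = set()

# Signal 1: flag any account with an abnormal request volume.
for account, ip, count in request_log:
    accounts_by_ip[ip].add(account)
    if count > VOLUME_THRESHOLD:
        flagged.add(account)

# Signal 2: flag groups of accounts sharing one source IP.
for ip, accounts in accounts_by_ip.items():
    if len(accounts) >= SHARED_IP_THRESHOLD:
        flagged.update(accounts)

print(sorted(flagged))  # → ['acct-1', 'acct-2', 'acct-3', 'acct-5']
```

Real detection systems combine many more signals and use statistical baselines rather than fixed thresholds, but the core idea is the same: correlate per-account behavior with shared infrastructure to surface coordinated networks.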

As models become more widely used, AI companies will likely invest more in usage monitoring and platform security.


What Happens Next

It remains unclear whether Anthropic will pursue legal action or regulatory complaints.

However, the incident may lead to several industry shifts:

  • Stronger access controls for AI APIs
  • Stricter account verification
  • Improved detection systems for automated queries
  • Tighter contractual restrictions on model usage

The issue may also attract attention from policymakers interested in AI competition and intellectual property protection.


Conclusion: The AI Race Is About More Than Models

The accusations from Anthropic show that the AI race is evolving.

It is not just about building better models. It is also about protecting the value those models create.

As competition intensifies globally, disputes over data access, model behavior, and intellectual property may become more common.

In the early days of AI development, collaboration was the norm. Today, the stakes are higher — and so is the scrutiny.


Key Takeaways

  • Anthropic says more than 24,000 fake accounts were used to access its Claude AI models.
  • The accounts were allegedly linked to organizations connected with DeepSeek, Moonshot AI, and MiniMax.
  • Anthropic believes the activity may have been used to study or replicate model behavior.
  • The incident highlights growing concerns about AI model scraping and intellectual property protection.

As the AI industry grows more competitive, platform security and access controls are becoming critical.