Anthropic logo is seen in this illustration taken May 20, 2024.
Image: REUTERS/Dado Ruvic/Illustration
In a tumultuous early 2026, Anthropic, once hailed as a leader in ethical artificial intelligence, finds itself at the centre of converging debates over national security, global competition, corporate ethics, and data governance.
At stake is more than the future of one AI startup: it’s a flashpoint in how generative AI will be shaped by geopolitics, military strategy, and public trust.
At the heart of the most visible dispute is Anthropic’s simmering conflict with the U.S. Department of Defense. CEO Dario Amodei is set to meet with Defense Secretary Pete Hegseth to negotiate how Anthropic’s flagship AI model, Claude, may be employed by the U.S. military.
The Pentagon has grown impatient with Anthropic’s insistence on safety-oriented restrictions, especially limits on uses such as autonomous weapons systems and mass surveillance, which it views as impediments to defence readiness.
Anthropic has consistently positioned itself as a cautious voice in the industry, prioritising what it calls responsible deployment and constitutional AI principles over rapid adoption for military systems.
This contrasts with rivals such as OpenAI, Google and Elon Musk’s xAI, which have reportedly agreed to broader terms with the Pentagon.
Defense officials have hinted they may designate Anthropic a “supply chain risk,” a label that could limit its participation in U.S. defence contracts, an extraordinary step for a firm at the forefront of model development.
The controversy underscores a larger question: should safety guardrails be negotiable when national security is invoked? The outcome could reverberate across AI policy globally, particularly as Western governments grapple with balancing oversight against technological advantage.
Anthropic’s troubles go well beyond Washington’s corridors.
In a detailed public announcement in late February, the company accused three Chinese AI developers (DeepSeek, Moonshot AI and MiniMax) of running industrial-scale “distillation attacks” on its Claude models.
According to Anthropic, these campaigns used roughly 24,000 fraudulent accounts to generate more than 16 million interactions with Claude, allegedly extracting capabilities to train rival systems in violation of terms of service and regional access restrictions.
Distillation is a widely understood AI training technique in which the outputs of a large model teach a smaller one.
But Anthropic argues that, done illicitly at scale, distillation can effectively bypass the long, costly process of developing frontier AI systems independently, and could strip out safety guardrails embedded in the original system.
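In broad strokes, distillation trains a small “student” model to reproduce a large “teacher” model’s full output distribution rather than just its final answers. The sketch below is a minimal, generic illustration of that idea using a temperature-softened softmax and a KL-divergence loss; it is not Anthropic’s or anyone else’s actual training code, and the function names and values are illustrative only.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # relative confidence across all options, not just its top choice.
    z = [x / temperature for x in logits]
    m = max(z)  # subtract the max for numerical stability
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student
    # distributions: the quantity a student minimises to mimic
    # the teacher's behaviour.
    p = softmax(teacher_logits, temperature)  # teacher "soft targets"
    q = softmax(student_logits, temperature)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))

# A student that matches the teacher exactly incurs (near) zero loss;
# a mismatched student incurs a positive loss that training would
# reduce by nudging it toward the teacher's outputs.
teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))       # ~0.0
print(distillation_loss(teacher, [0.1, 1.5, 0.3]) > 0.0)  # True
```

In a legitimate pipeline the teacher and student belong to the same organisation; the alleged attacks instead harvest the teacher’s outputs through an API at scale, which is why access volume and account patterns are the telltale signals.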
The accusations have broader geopolitical resonance. Some commentators and policymakers see them as evidence of organised attempts to bridge the AI gap between U.S. firms and rapidly improving Chinese competitors, feeding into ongoing debates over export controls on advanced AI chips and technology.
Anthropic says it has begun building stronger detection and defence measures, including behavioural fingerprinting and traffic classifiers, and is sharing indicators with others in the industry to make similar attacks harder to execute and easier to detect.
Rather than consolidating sympathy, Anthropic’s claims sparked a contentious backlash online.
High-profile figures, including entrepreneur Elon Musk, have seized on the Chinese distillation accusations to turn scrutiny inward, calling out Anthropic for hypocrisy regarding its own data and training practices.
Critics, Musk among them, allege that Anthropic itself has previously trained its models on datasets containing material scraped without explicit permission, echoing broader industry debates about copyright and ‘data theft’ in AI training.
While these charges mix technical dispute with social media hyperbole, they reflect a real tension: as AI firms decry illicit use of their APIs, they also face unresolved questions around what constitutes ethical training data usage in an era of open information.
This puts a spotlight on a sector that remains largely unregulated when it comes to sourcing and using vast swaths of internet content, with legal precedent still unsettled.
The latest saga comes as Anthropic continues to grow as a business: the company has announced new product innovations, cybersecurity initiatives, and global expansion efforts throughout early 2026.
Yet the competing pressures, from the Pentagon’s demands to China-linked technology battles and public disputes over ethics, pose difficult questions about the role of AI developers in a world where technology, governance and geopolitics are deeply entangled.
One thing is certain: how the company navigates these crosswinds will have implications far beyond Silicon Valley boardrooms.
FAST COMPANY (SA)