Humanity’s real problem, the great biologist Edward O. Wilson once remarked, is that “we have Paleolithic emotions, medieval institutions, and godlike technology.” There is no better proof of this aphorism than the American military’s escalating spat with Anthropic, the creator of the artificial-intelligence model Claude.
If the most fervent believers are correct, AI might one day challenge the power and sovereignty of nation-states. No technology this godlike will be left untouched by superpowers—and no superpower would accept a private company telling it what it could and could not do with it. This week, Defense Secretary Pete Hegseth, who is bent on cultivating a warrior ethos within the military, threatened to use the byzantine powers of the Pentagon bureaucracy to remove Anthropic’s limits on its own technology. But in doing so, he raised the possibility that even when companies have explicitly vowed to develop AI responsibly, geopolitical factors may force them to abandon their commitments.
Relative to its competitors, Anthropic espouses the most public concern about the safety risks of artificial intelligence. Claude has an 84-page constitution, a “soul document,” that aims “to avoid large-scale catastrophes” such as a “global takeover either by AIs pursuing goals that run contrary to those of humanity, or by a group of humans” seeking to “illegitimately and non-collaboratively seize power.” Claude also already has considerable military applications, such as synthesizing huge amounts of intelligence and boosting the efficacy of government hackers. It was the first frontier AI model to be approved and deployed for use within the Pentagon’s classified-information system. The company’s tools were reportedly used in the American military’s raid on Caracas to capture Nicolás Maduro. The moral instincts of Claude’s creators are in tension with its military usefulness. Hegseth might exploit that tension so forcefully that he rips the company apart.
On Tuesday, Hegseth summoned Dario Amodei, Anthropic’s CEO, for a high-stakes meeting in Washington. Anthropic has provided Claude for government use but made two stipulations: that its technology not be used either for mass surveillance of American citizens or for lethal autonomous-weapons systems. Hegseth deemed these red lines unacceptable. He demanded that Anthropic abandon its conditions by Friday at 5:01 p.m.
Otherwise, Hegseth and other top Pentagon officials said, the company faced one of two consequences: Either the Trump administration would invoke the Defense Production Act (DPA) to compel Anthropic to provide the no-guardrails model it desires (a hypothetical creation sometimes referred to as “WarClaude”), or the government would sever ties with Anthropic and label it a “supply-chain risk,” the kind of designation usually reserved for companies—such as the Chinese electronics giant Huawei or the Russian cybersecurity firm Kaspersky—that are aligned with adversarial governments. As of today, neither Hegseth nor Anthropic appears to be backing down, and the dispute could threaten the privately held company’s valuation, estimated at $380 billion in a recent funding round.
Hegseth’s antagonism spurred speculation about the Pentagon’s plans—perhaps the military would refuse Anthropic’s restrictions only if it planned to establish an American Stasi manned by AI agents or had a fleet of killer-autonomous-drone swarms ready to launch, maybe even imminently over Iran. “This is what many of us have been warning about for years and is now coming true, which is AI-powered surveillance that could be beyond Orwellian,” Brendan Steinhauser, a former Republican operative in Texas who now leads the safety-oriented nonprofit Alliance for Secure AI, told me. “This could lead to us losing control of autonomous weapons.” Steinhauser argues that Hegseth should back down rather than provoke a civil-liberties nightmare.
The Trump administration’s military deployments over the past year within American cities, to Venezuela, and now potentially to Iran, all made with minimal consultation with Congress, suggest that it is not a model of forbearance and self-restraint. But the most pessimistic scenarios are, for the moment, unlikely. A common current use case for Claude on classified systems is to generate detailed intelligence reports—not to build a digital panopticon or Skynet. The Pentagon’s likelier grievances are more pedestrian: Hegseth felt that military applications of artificial intelligence were so critical that their use should be governed by laws passed by Congress, not by the rules of a private technology company.
His ultimatum may also be a gut response to one of the latest fronts in the culture wars, in which the administration has labeled Anthropic as “woke AI” because of its preoccupation with misuse, its hiring of many Democratic officials, and its ties to the effective-altruism community. “This is a vibes dispute disguised as a dispute about substance,” Michael Horowitz, a former top Pentagon official for AI policy, now at the Council on Foreign Relations, told me. “What this really boils down to is a lack of trust on Anthropic’s part that the Pentagon will always use their technology appropriately, and a lack of trust on the part of the Pentagon that Anthropic will let them use their technology in all relevant use cases.”
Trust is built over time, but Hegseth’s blustery ultimatum leaves Anthropic with no good options. The company can capitulate and produce a product it finds unconscionable and unsafe, incurring considerable reputational damage. The government could coerce it into doing so by invoking the DPA—a scenario that Samuel Hammond, the chief economist at the Foundation for American Innovation, a generally AI-boosterish think tank, called a “soft nationalization.” Or the company could be labeled a supply-chain risk, which would also sever its business with any company that contracts with the U.S. military (including tech firms such as Amazon, Alphabet, and Microsoft).
The two penalties that Hegseth has laid out are incoherent taken together: Claude cannot be both so vital to national security that control of it must be forcibly wrested from Anthropic and so great a risk that it must be banished from the military-industrial complex. The whole situation, Hammond told me, is “catastrophic,” whichever route the company is forced to take. AI is also a novel technology that is difficult even for its developers to fully understand. The frontier developers say that, when they train their models, they are helping them inhabit certain personas, steering them toward ones that are helpful and away from those that might be harmful or malicious. Ham-fistedly training a warfighting Claude on a narrow set of military materials might lead to “emergent misalignment.” When Grok was prodded to be less woke, it overgeneralized, calling itself “MechaHitler” and spewing racist nonsense. Now imagine a malformed AI model producing military advice or making military decisions instead of writing tweets.
The strongest defense of Hegseth’s actions is one of inevitability: Under any administration, the Department of Defense would have wanted to use AI according to its own rules, not a private company’s. “It’s reasonable for the DOD, or really any military, to be extremely paranoid about a commercial actor constraining their use of technology,” Daniel Remler, a former AI-policy adviser for the State Department now at the Center for a New American Security, told me. He cited two episodes bound to spook militaries: the central role that Elon Musk occupies in the Ukraine-Russia war because he controls Starlink, the satellite-internet service that drones on both sides have relied on, and Microsoft’s decision in September 2025 to disable services provided to Unit 8200, which conducts signals intelligence for the Israeli military, after reports of its use to conduct mass surveillance of Palestinian civilians. The ideal governance structure for the military’s use of AI is not Anthropic’s constitution but laws passed by Congress. Unfortunately, the legislative branch shows little appetite for legislating.
Hegseth’s spat with Anthropic also speaks to how Silicon Valley has changed. A place once perceived as having a libertarian orientation is now far more entangled with the government: More technology firms are enmeshed in the American national-security state, not just because of the size of government contracts but because they perceive themselves as a bulwark against China, which has its own impressive technology ecosystem. Naturally, the other AI companies jockeying with Anthropic for position—Alphabet, OpenAI, and Musk’s xAI—have all signaled that they would comply with the Pentagon’s desires. The Defense Department recently struck a deal with xAI to use Grok in the military’s classified system.
Hegseth probably does not need Claude in order to do what he wants militarily. His threats to penalize or essentially nationalize Anthropic anyway might be a way of setting a precedent for its competitors. That would be ironic because Anthropic is in many ways the most “America First” AI company of them all. In a January essay titled “The Adolescence of Technology,” Amodei wrote that Anthropic was proud to support America’s military and intelligence community because “the only way to respond to autocratic threats is to match and outclass them militarily.” He continued by saying: “The formulation I have come up with is that we should use AI for national defense in all ways except those which would make us more like our autocratic adversaries.” Perhaps this is the thought that the Trump administration bristles at most.