How exactly does the Pentagon evict Claude?
Swapping out one AI model on a classified network for another takes minutes. Retraining the people who’ve learned to rely on it will take much longer

The Department of Defense is phasing Anthropic’s Claude out of its classified networks within six months, triggering a complex transition for military personnel.
The Pentagon has put Anthropic on the clock. On Thursday the Department of Defense formally notified the company that it has been deemed a “supply chain risk,” a label that turns its artificial intelligence systems, including its flagship model, Claude, into a liability.
The move escalates a dispute that has been brewing for weeks over Anthropic’s safety-first ethos—its commitment to limit how its technology is deployed—and the DOD’s demand for unfettered control.
The Pentagon is phasing out Claude, one of the world’s most advanced AI models, from its classified networks within six months. On paper, swapping one model for another appears quick. “It’s simple to swap out the models and to install new ones,” according to a source close to Palantir—a defense-tech giant that has partnered with Anthropic to host Claude inside secure military networks.
The hardest part begins after the model is gone: rewiring everything that’s been built around it.
Claude is what’s known as a frontier model, an AI capable of executing complex, multistep tasks on its own. That’s not how the DOD currently uses it. Lauren Kahn, a researcher at Georgetown University’s Center for Security and Emerging Technology and a former Pentagon official, describes its deployment as more like a chatbot than a free-roaming agent. Claude sits “on top” of existing software, she says, and shows up only in certain places—tightly controlled corners of a classified environment. And it isn’t connected to “effectors,” she says, meaning that it can’t “launch an effect”—a weapon command, for example—“in the real world.”
In late 2024, Anthropic became the first AI company to clear the Pentagon’s classified hurdles. Until recently, Claude was the only large language model publicly known to be operating in that environment. Accessed through tools like Claude Gov—which became a preferred option for some defense personnel, according to Bloomberg—the system taps into enormous data pipelines to turn a flood of unstructured information into readable intelligence. In other words, Claude summarizes information for the Department of Defense, but it can’t pull a trigger.
Once people rely on a tool, it can be hard to let it go. Each integration must be offboarded piece by piece. And whatever replaces Claude must clear strict security reviews and approvals before it touches a classified system. Software changes inside the Pentagon can be “excruciating,” Kahn says. Even something as simple as installing Microsoft Office “takes months and months and months.”
At press time, Anthropic did not respond to multiple requests for comment from Scientific American. The Department of Defense declined to discuss the specifics of the transition.
Unlearning Claude
Every AI model fails in its own characteristic ways. Operators who’ve spent months using Claude learn those quirks through trial and error: which prompts land badly, which outputs require a second look.
Kahn studies automation bias, the tendency of human operators to overdelegate to machines. “I worry about a slightly heightened risk of automation bias in the early stages as they’re working out the kinks,” she says. People will check for Claude’s mistakes while the replacement model makes new ones. The personnel most exposed to the transition will be the power users who built the most customized work flows and learned the model’s downsides well enough to exploit its strengths.
While Pentagon personnel brace for the operational transition, the messy details of the political standoff have spilled into public view. Late on Thursday, Anthropic CEO Dario Amodei published a blog post vowing to challenge the government’s “supply chain risk” designation in court, arguing that the statute is typically reserved for foreign adversaries. Behind the scenes, the standoff appears to have devolved into a game of chicken. Emil Michael, the Pentagon official who has led the department’s negotiations with Anthropic, posted on X that talks with the company are dead. And Amodei is reportedly scrambling to resuscitate them.
Meanwhile the DOD is already moving on. Within hours of Anthropic’s official blacklisting, OpenAI announced it had signed a deal to deploy its models on the military’s classified networks, securing the contract its rival had just lost.
Anthropic was willing to risk eviction from the U.S. government rather than compromise its safety-first ethos. Its replacement initially accepted the Pentagon’s demand for unfettered operational flexibility, only to hastily add, after OpenAI CEO Sam Altman faced massive internal and public backlash, the very surveillance guardrails that Anthropic had advocated. The swap may not be so simple after all.
