
DARPA built an AI to fact-check enemy weapons claims



In late 2022 Chinese researchers claimed something extraordinary: using today’s relatively basic quantum computers, they could hypothetically unlock encrypted information—the kind that you might send from messaging app Signal or that spy satellites might beam groundward. Encoded, that information remains safe. But if quantum computers could crack that code, secrets would not stay secret. Obviously this is bad news for spicy Signal chats, but it would be worse news for the defense community.

In fact, experts said, the development would be disastrous, if it were real. But some of those same experts were skeptical. Was it hype? Bluster? Either way, the claim stoked fear that China's quantum and code-breaking capabilities could, at the very least, have sped past those of the U.S.

This kind of fear does its own geopolitical dirty work, pushing the U.S. to chase scientific shadows—a distraction the military can ill afford while waging a war of choice with Iran.




No one wants that, especially not the country’s premier military R&D organization, the Defense Advanced Research Projects Agency (DARPA). To assess the truth of scientific developments, it launched SciFy—the Scientific Feasibility program. Currently doing its first demonstrations, the program builds tools that can ingest a wild scientific claim and quickly call BS—or flag it as a breakthrough the Department of Defense should pay attention to.

DARPA’s job, says SciFy program manager Erica Briscoe, is to prevent technological surprise by making sure American military tech always stays ahead of whatever else is out there. In fact, the agency was founded after the U.S.S.R. surprise-launched its first satellite, Sputnik, so that there would never be another Sputnik.

In some ways, then, SciFy is kind of a canonical program for DARPA. If its software tools work, they could evaluate whether rumors of a proverbial Sputnik are greatly exaggerated. “Feasibility goes beyond the commonly heard concepts of validation and replication,” Briscoe says, “and really gets into this speculative space that is a bit of judgment and a bit of art.”

SciFy’s tools may also allow defense and intelligence agencies to predict whether and through what steps another country could, say, develop a code-breaking quantum computer in five years. “We might not think they have it, but we want to know how they might get there,” Briscoe says.


The AI-led analysis can also be prescriptive, revealing where to put military R&D dollars—something DARPA cares about. “If you’re an organization like that, you’re just trying to figure out, over all the range of crazy ideas out there in the world, what’s going to be valuable,” says Clayton Kerce of the Georgia Tech Research Institute, who is part of a SciFy team. In other words, in addition to calling BS, SciFy tools could give domestic out-there projects a green light.

Kerce’s team is working toward that goal, building a set of tools called Farscape. Right now, he says, you can input a scientific claim, and the system will spawn AI agents that go gather relevant information, reason across that body of knowledge, rank the importance of evidence and construct a BS-or-green-light evaluation for the original claim. Farscape compares the results from those agents, synthesizes them and gives an overall rating.
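Farscape's internals have not been published, but the workflow described here, parallel agents that gather evidence, weigh it, reach individual verdicts and then get synthesized into an overall rating, can be sketched in miniature. Everything below is hypothetical: the `Evidence` fields, the weighted-vote scoring and the rating thresholds are illustrative stand-ins, not Farscape's actual design.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str
    supports: bool   # does this piece of evidence support the claim?
    weight: float    # the agent's importance ranking, 0..1

def gather(claim: str) -> list[Evidence]:
    # Stand-in for an agent searching the literature; returns canned evidence.
    return [
        Evidence("paper A", supports=True, weight=0.9),
        Evidence("simulation B", supports=False, weight=0.6),
        Evidence("preprint C", supports=True, weight=0.3),
    ]

def agent_verdict(evidence: list[Evidence]) -> float:
    # One agent's reasoning over ranked evidence: a weighted vote in [-1, 1].
    total = sum(e.weight for e in evidence)
    score = sum(e.weight if e.supports else -e.weight for e in evidence)
    return score / total if total else 0.0

def synthesize(verdicts: list[float]) -> str:
    # Compare the agents' results and emit an overall rating.
    mean = sum(verdicts) / len(verdicts)
    if mean > 0.25:
        return "green light"
    if mean < -0.25:
        return "BS"
    return "uncertain"

claim = "self-healing armor is feasible"
verdicts = [agent_verdict(gather(claim)) for _ in range(3)]  # three parallel agents
print(synthesize(verdicts))  # prints "green light"
```

The point of the structure is that evidence gathering, per-agent judgment and cross-agent synthesis are separable stages, so each can be audited on its own.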

The analysis involves, in part, Farscape’s attempts to reason using thinking patterns similar to those humans use, such as deduction and induction. The reasoning bit is one of Kerce’s interests as an applied mathematician, and it makes Farscape field-agnostic: it doesn’t care whether you’re asking about immortal batteries or impermeable armor. As an internal test, the team asked it to determine when Chinese chip manufacturers would be as good as the U.S.’s Nvidia.

Computer scientist Frank Ferraro of the University of Maryland, Baltimore County—a member of another SciFy team—thinks of the system his group works on like one of his hobbies: woodworking. “We approached it, at least from my perspective, that way,” he says, “in terms of this balance between building out your shop, building out your tools that you have available, and then learning how best to use them.”

That team has more than 25 people constructing their own tools. The overall system (the woodshop) can then decide which tools are most useful to bang against a specific claim. One tool—say, a hammer—breaks down a claim into its verifiable components. For instance, what if Rival Country A claims it has made armor out of a material that can repair itself? For that to be true, the material would need to be solid in the equatorial jungles as well as in the polar tundra. But maybe the scientific literature or a simulation of the material’s properties says it would be a liquid on a tropical summer day. No liquid armor allowed; ergo, infeasible.
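The self-healing-armor logic above, decompose a claim into verifiable components and reject the whole claim if any component fails, reduces to a universally quantified check. This sketch is purely illustrative: the melting-point test and the two environments are assumptions standing in for whatever literature searches or simulations the real tools run.

```python
# A claim is feasible only if every verifiable component holds in every
# required environment; one failed component sinks the whole claim.

def check_solid_at(melting_point_c: float, ambient_c: float) -> bool:
    # Armor must stay solid: its melting point must exceed ambient temperature.
    return melting_point_c > ambient_c

def feasible(melting_point_c: float) -> bool:
    environments = {"polar tundra": -40.0, "equatorial jungle": 45.0}
    return all(check_solid_at(melting_point_c, t) for t in environments.values())

print(feasible(30.0))   # melts on a 45 degree C tropical day -> False
print(feasible(900.0))  # solid everywhere -> True
```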

Right now DARPA, in collaboration with the Johns Hopkins Applied Physics Laboratory, is evaluating the teams' work, putting it through the wringer of three "technical sprints." In the first, the teams were given 48 hours to have their AI assess scientific claims. Their results, and the reasoning behind them, were compared with analyses by human experts in the respective fields, a process that will be repeated in the final two sprints.

DARPA recently finished running the materials-science sprint and is in the midst of evaluating the results of a sprint about AI. After that will come quantum computing—all topics, Briscoe says, that are on the DOD’s mind.

The assertions the groups had to evaluate can seem a little boring but have military relevance. For example, one read “fluorine-containing additives in a liquid electrolyte enable Li-ion battery chemistries to cycle up to 10 V”—meaning that adding a little something extra to a battery could, say, make drones lighter and able to fly farther.

In the recently concluded first sprint, all teams achieved “moderate” agreement with the human materials-science experts, which was DARPA’s goal. On top of that, according to Kerce, sometimes the AI analysis led the experts to reconsider their views. “They changed their assessment 19 percent of the time,” he recollects (DARPA is collating official numbers for the first sprint), because the AI was able to connect the dots between huge volumes of information that humans can’t hold in their heads at once.
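DARPA has not said which agreement statistic it used, but "moderate" agreement is the conventional label for a Cohen's kappa between 0.41 and 0.60, so kappa is a plausible measure of how often the AI and the human experts agreed beyond chance. The ratings below are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    # Agreement beyond chance: (observed - expected) / (1 - expected).
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

ai    = ["feasible", "feasible", "infeasible", "feasible", "infeasible"]
human = ["feasible", "infeasible", "infeasible", "feasible", "feasible"]
print(round(cohens_kappa(ai, human), 2))  # prints 0.17; 0.41-0.60 would be "moderate"
```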

In the coming months, these teams will throw their AI at quantum computing claims, potentially like the one the Chinese researchers made—and, later, those same kinds of tools could tell the U.S. military how to develop its own quantum tricks without venturing into an unproductive sci-fi land.
