The Pentagon’s Anthropic ultimatum and the case for the ‘separation of power and truth’
Once the state gets its grubby mitts on the operating system for knowledge, the downstream damage is to reality itself.
Control over the tools of knowledge is control over knowledge itself. That’s the principle behind every press licensed, every book banned, every broadcaster fined. Power wants control over truth. Power has always wanted that. It will never stop wanting that. And once it gets even a finger on that lever — whether it’s through regulation, contracting cascades, “supply chain” designations, or emergency authorities — it will no longer need to announce that it’s shaping reality. It will just…shape reality. We will see when the powerful want us to see, and we will be blind when they want us to be blind.
Right now, AI is the most powerful tool of knowledge creation we have. Many of us are cynical about its promise and its products — so cynical that we’d be willing to let the government reach out and take control of it. And last week, we saw the government do just that.
On Friday, the Trump administration picked a fight with one of the biggest AI companies in the world. On Saturday, the U.S. launched major combat operations against Iran, Iran hit back, and the whole country’s attention turned to a larger fire. By the end of the weekend, Friday’s story was effectively buried.
That’s the tempo now. Friday’s “This has real ramifications” becomes Monday’s “Wait, that was Friday!?”
So let me plant a flag in the ground at the start, because this piece is headed somewhere bigger than the daily headlines.
Friday’s story is a reminder of something we’re going to have to say out loud, over and over, whether it’s fashionable or not: The separation of truth and power — the separation of government from the tools we use to ask and answer questions — just became more important than ever.
If we’re talking about AI as a truth-seeking tool, as the interface between human curiosity and attempted answers, then the worst outcome of the current wave of AI cynicism is that we build — out of fear and fashionable pessimism — a government backdoor into it.
What happened between the Pentagon and Anthropic, before it gets buried again
Late last week, the Pentagon delivered Anthropic a “best and final” ultimatum: Give the military full “lawful” access to its AI chatbot Claude without corporate carve-outs, or lose the relationship.
The deadline was Friday.
Anthropic declined. Publicly, its line has been consistent. It has said it will support national security work, including in classified environments, but it draws two bright boundaries: no mass domestic surveillance of Americans, and no fully autonomous weapons. And while Anthropic did choose to work with a department that oversees the military (and employs people to make weapons of war), companies don’t sign their beliefs away when contracting with the government.
It’s easy to hear those two boundaries and think this is mostly a debate about scary sci-fi endpoints: killer robots and a surveillance state. And yes, those are real, concrete moral red lines. But they’re also the kind of red lines that hide the deeper categorical mistake people keep making about AI. Claude isn’t just a weapons component, and it’s not like Microsoft telling the Department of Defense that its word processor can only be used for messages of peace. It’s a system that talks, explains, summarizes, argues, and refuses — a system that sits in the middle of human inquiry. So when the government pressures a company to erase its carve-outs and treat the model as fully “lawful-accessible,” it isn’t only pushing for capability. It’s pushing for control over the rules of speech inside a truth tool.
And that’s why I’m talking about “truth” here: because the fight underneath the fight is about who gets to set the boundaries for what questions can be asked, what answers can be given, what gets declined, and what becomes professionally unsafe to say out loud.
In other words, mass surveillance and autonomous weapons are the obvious dangers. The less obvious danger is what happens when the state learns it can use its contracting power — and the threat of emergency authority — to reach into the epistemic layer itself: the layer at which truth is discovered and decided. Once you accept the premise that government gets to dictate the internal policies of a general-purpose system that mediates inquiry, you’ve already accepted the principle that power can steer truth-seeking. The rest is just paperwork and precedent.
Once Anthropic refused to play ball, President Trump ordered federal agencies to stop using its products (with a phase-out period). Soon after, Defense Secretary Pete Hegseth branded Anthropic a national-security “supply chain risk.”
In the defense world, this is the kind of designation meant for situations like foreign state control of a platform, compromised update channels, coerced access, and hidden dependencies — problems that can turn a vendor into a vulnerability.
And it doesn’t just hit the company wearing the label, either. The moment the Pentagon uses that phrase, every prime contractor and subcontractor hears the same message: touch this, and you may be torching your own contracts. People don’t need a formal prohibition to start backing away. Risk departments and lawyers do that work automatically. This designation is treated as something closer to blacklisting than ordinary contract drama.
Hovering over the exchange was the detail that should have dominated the weekend: reporting that Pentagon officials discussed using the Defense Production Act as leverage in the dispute. The DPA was passed in 1950, in response to the start of the Korean War, and grants the president power over American industries in the name of national security. The idea was to ensure the prioritization and production of necessary materials during times of emergency, when those materials may be scarce. It was certainly not intended to let the president bully private companies into doing whatever he wants. But that is, in effect, how it is being wielded here.
Adam Thierer has been warning for years about the DPA getting repurposed into a general tech-policy lever, turning an emergency production statute into an expansive tool for steering algorithmic development.
The government’s threatened use of the DPA last week should have been an outrage, but then the Iran strikes swallowed the news cycle. The thing is, though, the Anthropic story didn’t get less consequential after war began. It just got easier to miss.
As I said on X on Thursday, the government had the very simple and fair option of buying from someone else if Anthropic didn’t meet its requirements. Instead, it’s trying to make Anthropic radioactive. And to accomplish what? The government quickly contracted with OpenAI, which maintains similar red lines. The switch doesn’t make sense unless the purpose was to punish Anthropic.
That’s where the free-speech issue shows up in plain English. Not because the government declined to sign a contract, but because the government aimed its power at the rulebook for what a private system will and won’t say.
I’ve discussed this before on ERI, but it bears repeating: With AI, “speech” isn’t a metaphor. It’s the outputs. It’s the refusals. It’s the internal rules that decide what the model will help with and what it will decline to do.
The combination of “radioactive” labeling and emergency-authority threats teaches the entire industry that independence is conditional. If the government is simply making ordinary procurement decisions based on performance, cost, or mission alignment, there is little constitutional issue. But if it is using contracting power as leverage to punish protected expression, that should be taken very seriously.
Even companies nowhere near the Department of Defense will start asking themselves a quiet, dangerously chilling question: Could they do this to us next?
We want Reno v. ACLU, not a Federal AI Commission
We’ve already faced this choice once with a transformative communications technology.
In the 1990s, the internet triggered panic. Congress responded with the Communications Decency Act. The impulse was familiar: a new communications tool creates some real harms and a lot of moral hysteria, so Washington reaches for the master switch.
The Supreme Court refused. In Reno v. ACLU, it struck down the CDA’s core censorship provisions and treated the internet as what it actually was: a vast forum for speech, not a broadcast medium requiring paternal supervision. That decision didn’t deny that the internet would create problems. It recognized something more important: giving government general authority over the infrastructure of expression is a cure that turns into a permanent disease.
That’s the model we need for AI.
The model we do not need is the FCC model. The FCC began with an ostensibly bounded problem: allocating scarce broadcast spectrum. Over time, it became a durable bureaucracy with elastic authority and the predictable tendency to treat every new communications controversy as justification for more discretion, more oversight, more leverage, more power.
Now imagine that “public interest” logic aimed not at the airwaves, but at the interface people use to ask questions. The tool people use to think with. The operating system for truth.
That’s how you end up with the Federal AI Commission. Do we want that? FAIC no!
We can’t give the government control of AI without giving up control of truth, too
AI is quickly becoming our society’s epistemic infrastructure: the tool people use to ask questions, find information, summarize, argue, test, translate, and decide what they think. In other words, it is increasingly woven into how people make sense of information and, ultimately, how they understand the world around them. That’s why the separation that matters most now is the separation between truth-seeking and power.
When government gains leverage over the systems that mediate inquiry, it gains leverage over inquiry itself. You don’t need overt censorship. You don’t need show trials. You don’t even need a “Ministry of Truth.” You just need a backdoor that lets you nudge the operating system for truth toward whatever the people in charge find convenient, with actual truth-seekers none the wiser.
That’s why the current wave of AI cynicism worries me. And I don’t mean skepticism here. Skepticism can be healthy. I really do mean cynicism. In particular, I mean the kind of cynicism that serves as a pretext for building permanent government authority over AI. This is authority that will not be used sparingly, and will not stay limited to the administration that created it.
This, however, is not the popular take right now. The fashionable mood is “tech caused all our problems, therefore tech should be supervised by the state.” And while that mood is understandable, it’s also how you sleepwalk into the worst possible outcome.
AI creates real, distinct, and difficult problems. It can, for example, be used to mislead and deceive people far more efficiently and effectively, with deepfakes and the like. We can talk about them. We should talk about them. Companies should fight about them loudly, both in public and inside the industry. Competition should remain fierce. Consumer pressure should matter. Liability should be the consequence of real wrongdoing. And we shouldn’t lose sight of the tools we already have available to counteract those problems. FIRE Senior Fellow Jacob Mchangama wrote a fantastic article for Persuasion recently about how powerful the truth still is, even amid our new technology.
The line I don’t want us to blur is the line between government and the epistemic layer itself. The irony is that the headline fears — killer robots and mass surveillance — are already bad enough to keep you up at night. But even those nightmares don’t justify building a permanent government grip on the epistemic layer. Once that grip exists, it won’t stay confined to weapons systems or intelligence programs; it will seep outward into whatever becomes politically urgent next — health, elections, “misinformation,” dissent, you name it.
Once the state gets its grubby mitts on the operating system for truth — once it has a reliable way to steer what models can say, what they can refuse to say, and what questions become professionally unsafe — the downstream damage is not limited to AI. It becomes a problem with knowledge creation itself.
And that brings me to the principle underneath this whole thing, the one that made me care about it in the first place: Free speech is not only how knowledge gets made. It’s also knowledge itself.
This ties into what I call the Pure Information Theory of free speech. Sincere speech tells you what someone believes. And yes, that belief may be mistaken. It may be confused. It may even be destructive. But the belief still exists in the world, and a truth-seeking society needs to see it to understand and respond to it. Even lies are evidence about how the world is. They point to incentives, fear, pressure, corruption, status games, and more — all of which is valuable information about the minds operating in the world around us.
Freedom of speech isn’t just a moral principle. It’s an epistemic principle. It’s how error gets exposed and corrected. It’s how truth gets separated from falsehood. It’s how a society avoids becoming blind to itself.
The philosopher Karl Popper described knowledge as progress through error elimination — conjectures and refutations. John Stuart Mill warned that silencing dissenting opinion rests on the assumption of infallibility, and that even true ideas decay into “dead dogma” without challenge. Supreme Court Justice Robert Jackson emphasized that our freedom to differ is not limited to things that do not matter much.
Taboo is the enemy of knowledge. Think about that in the context of AI — perhaps the most powerful potential tool for truth-seeking and knowledge creation we will ever invent — and you start to see why this moment is so important. Hopefully, you’ll also start to see why the separation of this tool from the influence of power is the only way to safeguard the future.
SHOT FOR THE ROAD
FIRE Executive Vice President and host of the So to Speak podcast Nico Perrino hopped across the pond recently to participate in a debate at the Oxford Union, the world’s most prestigious debating society. The resolution was “This House Believes In The Right To Offend,” and Nico did an incredible job emphasizing how fundamental free speech is to all the things we hold dear, even if that sometimes means hearing things we hate.