March 26, 2026
Weekly Update

Bipartisan Translation

The series that translates national security arguments across partisan lines, because the stakes are too high for tribal shorthand.

The Department of War Tried to Muzzle an AI Company. A Judge Noticed.

What Happened and Why It Matters

The Department of War demanded that Anthropic, the AI company behind the Claude models, remove safety guardrails so the military could deploy its AI for any "lawful use." That included autonomous weapons systems and mass domestic surveillance of Americans. Anthropic said no. The Trump administration responded by designating Anthropic a "supply chain risk," killing its $200 million government contract, and ordering all federal agencies to stop using the company's products.

Yesterday, a federal judge in San Francisco called the Department of War's actions "an attempt to cripple Anthropic." That's a federal judge saying the quiet part out loud.

Here's What You Need to Know in 30 Seconds

The Department of War wants unrestricted AI deployment, including capabilities that could surveil American citizens and make lethal targeting decisions without direct human authorization. An AI company refused. The government retaliated. A judge flagged the retaliation. And Congress, the body that's supposed to write the rules for all of this, has done nothing. They funded $14.2 billion in military AI for FY2026 without passing a single law defining what it can do, who it can target, or who's accountable when it goes wrong.

That's your tax dollars building weapons systems that operate in a legal vacuum. Both sides of the aisle should be furious about that. For different reasons. Which is the whole point of this series.

The Hawk Case: We're in an Arms Race and You Want to Read the Manual

The People's Liberation Army is not waiting for this conversation to conclude. China is pouring resources into AI-enabled warfare, and every restriction the United States places on its own military AI is a capability gap that an adversary will exploit. The Department of War has requested a record $14.2 billion for AI and autonomous research in FY2026. Not because generals are reckless, but because they understand what's coming.

When a private company, accountable to its board and not to the American people, gets to unilaterally decide what the U.S. military can and cannot do with its own contracts... something has gone very wrong. The national security apparatus can't afford to have its operational posture held hostage to corporate ethics policies that weren't written with warfighters in mind. Human-on-the-loop systems, where a machine acts autonomously but a human can theoretically intervene, may be the only realistic option at the speed of modern conflict. The question isn't whether to use AI in warfare. The question is whether America will have rules, or whether our adversaries will write them for us by outpacing us while we deliberate.

The hawk's bottom line: Legal clarity isn't a luxury. It's an operational necessity. If a U.S. autonomous weapons system kills civilians and no law governs its deployment, the United States loses allies, faces international prosecution, and hands adversaries a propaganda weapon. The absence of rules doesn't mean freedom of action. It means accountability chaos. Hawks need legislation too. They just need different legislation than the reformers want.

The Reformer Case: Read That Part About Surveilling Americans Again

Read that sentence again: mass domestic surveillance of Americans.

That's not a foreign intelligence question. That's a Fourth Amendment question. The United States military does not have a legal mandate to conduct surveillance on its own citizens, and the fact that this was buried inside a contract dispute, rather than debated in Congress, should alarm every civil libertarian, left and right. This isn't a hypothetical. The Department of War explicitly demanded removal of the safeguard against domestic mass surveillance as a condition of the contract.

On autonomous weapons: there's a reason 156 nations voted for a legally binding treaty on lethal autonomous weapons systems. When an AI decides to kill without a human authorizing that specific decision, who's accountable? Under international humanitarian law, somebody has to be. An AI targeting system can't be court-martialed. The general who deployed it can claim the algorithm decided. And the legal vacuum that results isn't an abstraction. It's the kind of thing that leads to war crimes with no one responsible.

A federal judge this week described the Department of War's behavior as "troubling" and questioned whether the "supply chain risk" designation was retaliation for Anthropic's ethical stance. Using national security tools to punish companies for maintaining ethical limits on their products is a chilling precedent. Today it's Anthropic. Tomorrow it's any AI firm that declines to build a surveillance tool.

The reformer's bottom line: The alternative to law is corporate discretion. And corporations change their minds, get acquired, and respond to stock prices. Anthropic holding the line today doesn't guarantee some future AI company holds the same line. Rights embedded in contracts are not rights. Rights embedded in law are. Reformers need legislation too. They just need different legislation than the hawks want.

Where They Actually Agree (Whether They'll Admit It or Not)

Here's the part neither side wants to say out loud. They agree on more than they think.

The current situation, where a $200 million contract dispute between a tech company and the Department of War gets litigated in federal court with no legislation governing what military AI can do, is a disaster for everyone. Hawks and reformers are both stuck with the worst possible outcome: no framework, improvised by a court case, with the actual policy questions still unanswered.

They both need legal clarity. They disagree on what the law should say. That's fine. That's what legislatures are for. What's not fine is that Congress wrote the check for billions in military AI and then left the room before anyone could ask what it's for.

Where They Don't (And Shouldn't Pretend To)

I'm not going to pretend the convergence papers over genuine disagreement. It doesn't.

On autonomous weapons: The hawk position, that human-on-the-loop oversight is sufficient and requiring human-in-the-loop authorization for each lethal decision is operationally unworkable at machine speed, is not irrational. The reformer position, that autonomous lethal targeting is a categorical moral line that can't be crossed regardless of operational convenience, is also not irrational. These are sincere values disagreements about the nature of human agency in life-and-death decisions. Legislative convergence on "some oversight" won't resolve the underlying question. It'll postpone it.

On domestic surveillance: Hawks argue that the domestic/foreign intelligence distinction has eroded in an era of transnational threats, and that AI-enabled pattern analysis of domestic communications may be the only way to detect certain threats in time. Reformers argue that this is exactly the argument used to justify every surveillance abuse in American history, and the Fourth Amendment exists precisely to hold that line. They're both right. Welcome to America. This has been our core tension since the founding and neither side is wrong to hold their position.

What a good law does is not eliminate this tension. It makes the tension visible, creates a mechanism for democratic accountability, and forces the people who make these decisions to defend them publicly. That's not what's happening now.

Here's My Two Cents

I'm going to be direct about what frustrates me most about this story, and it isn't Anthropic or the Department of War. It's that we can't get the people in the room to be in the same room.

The hawk and the reformer are both right about what they're scared of. The hawk is right that China isn't waiting. The reformer is right that the Fourth Amendment isn't optional. These aren't contradictory positions. They're the two guardrails of a road that Congress is supposed to build. Instead, Congress funded the car, skipped the road, and now a federal judge in San Francisco is trying to figure out where the lanes go. That's not governance. That's abdication.

And the taxpayer is footing every dollar of it.

$14.2 billion in military AI funding with no legal framework. $200 million in a contract that collapsed because nobody wrote the rules before signing the deal. Untold millions more in legal fees, court proceedings, emergency designations, and agency scrambles to find replacement AI vendors because the government torched its own contract in a fit of political retaliation. That's your money. That's my money. And none of it had to be spent this way.

The questions Congress hasn't answered aren't hard to articulate. What level of human control is legally required before an AI system takes a lethal action? What domestic surveillance activities are categorically off-limits regardless of who's asking? What happens when an autonomous system causes civilian casualties and the contractor says "the algorithm decided"?

These aren't unanswerable. Other democracies are working on them right now. The EU has legal frameworks for high-risk AI in defense contexts. Proposals for binding international law are under debate at the UN. The information exists. The drafting resources exist. What's missing is the political will to do the hard work of defining what we believe... and then being accountable to that definition.

I think the reason Congress hasn't acted is that this issue doesn't break cleanly along party lines, and issues that don't break cleanly don't get fundraising emails. There's no neat villain. The hawk case and the reformer case are both legitimate. That means legislation would require actual negotiation, actual compromise, actual governance. And we've gotten very bad at that.

So instead we get a proxy war. Anthropic vs. the Department of War, litigated in federal court, with a judge doing the work that 535 elected representatives were sent to Washington to do. A federal judge shouldn't be the mechanism by which the United States decides whether AI can autonomously kill people or surveil its own citizens. That's a legislative question. The fact that it's arrived in a courtroom isn't anyone's failure but Congress's.

Whoever is reading this in a congressional office: the time for an AI-in-warfare legislative framework was three years ago. The second-best time is now. You know where the drafting resources are. Use them. The taxpayers who fund both the weapons and your salary would appreciate it.

Anna R. Dudley writes on national security, intelligence policy, and the places where hawks and reformers need to find each other. Bipartisan Translation is a weekly series. Subscribe at annardudley.substack.com.
