April 29, 2026
Weekly Update

Bipartisan Translation II

The series that translates national security arguments across partisan lines, because the stakes are too high for tribal shorthand.

Google Got the Contract. The White House Wants a Do-Over.

What Happened and Why It Matters

Thirty-four days after the appeals court rejected Anthropic's emergency bid to block the Pentagon's blacklisting, Google signed the deal that Anthropic refused. Google will provide the Department of War unrestricted access to Gemini AI for use in classified networks—"all lawful purposes," the exact language that Anthropic said no to. The military has a new vendor. The White House is now drafting an executive action to bring Anthropic back anyway. And 950 Google employees are asking their company to reconsider the deal.

This isn't a normal government contract fight. This is the Pentagon trying to acquire the capability to do things that one AI company explicitly refused, succeeding by finding another one willing, and then the executive branch pivoting to undo both outcomes by changing the definition of who gets blacklisted. The amount of institutional whiplash required to navigate this story is itself the story.

Here's What You Need to Know in 30 Seconds

On April 8, a federal appeals court denied Anthropic's request for a temporary block on the Pentagon's blacklist. Anthropic is now officially a "supply chain risk" barred from Department of War contracts. On April 28, Google signed a deal granting the Pentagon unrestricted Gemini access for classified use—the same terms Anthropic rejected. OpenAI and xAI had already filled the void months earlier. Now the White House is drafting executive guidance to allow federal agencies to bypass Anthropic's blacklist designation and bring the company onboard with its newest model. Trump said the company is "shaping up." CEO Dario Amodei met with Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent. Oral arguments in Anthropic's case are scheduled for May 19. Congress has still passed no legislation defining what military AI can do.

That's three different policy positions operating simultaneously with no legal framework to resolve them. That's your government in the era of military AI.

The Hawk Case: A Vendor Already Signed. Why Are We Rewriting the Contract?

The Pentagon set out to acquire a critical capability: artificial intelligence that can operate at scale in classified defense networks without arbitrary corporate guardrails. That's a legitimate national security need. Google has now provided it. The contract is signed. The vendor is vetted. The capability is available. What exactly does revoking Anthropic's blacklist designation accomplish except to create redundant contracts, confuse the acquisition process, and signal that the government rewards companies that initially say no?

If the issue was never actually about military AI use—if it was always about corporate defiance—then say that. But don't dress it up as acquisition strategy. Google proved that there are vendors willing to provide the exact same capability at the same price point. The hawk position on this is simple: Google solved the problem. The White House wanting to un-solve it isn't governance. It's political revenge. And it's expensive.

The bigger hawk concern: if the next company the Pentagon wants to contract with makes the same "ethical" objections Anthropic did, and the government is now publicly committed to eventually bringing them back into the fold anyway, then what's to stop every vendor from saying no first and negotiating better terms later? The Pentagon needs to be able to communicate that a blacklist is final. Because if it's not, then every company will learn to treat the government's requirements as opening bids in a negotiation, not non-negotiable terms.

The hawk's bottom line: Google is the vendor. The capability exists. The 950 employees who signed the letter asking Google to pull out are not your acquisition process. If the White House wants to bring Anthropic back, that's a policy decision that should come with a contractual framework that doesn't let vendors dictate the terms by refusing them. Right now it looks like the government is rewarding the initial refusal.

The Reformer Case: Google Proved Anthropic Was Right

950 Google employees signed an open letter opposing the deal. Think about that. Almost a thousand people who work for the company that just signed the Pentagon contract are publicly asking the company to undo it. That's not ordinary corporate dissent. That's a statement about what just happened.

Google accepted the exact terms Anthropic rejected. The Department of War wanted unrestricted use of AI in classified networks. That means no constraints on autonomous weapons. That means no guardrails against domestic mass surveillance. That means every application that Anthropic said no to is now available through Google's infrastructure. And a significant portion of Google's own workforce is saying: we believe this is wrong.

The White House pivot—drafting executive guidance to bypass Anthropic's blacklist while Google's own employees are protesting the deal—is a tacit admission that Anthropic's position has credibility. If the White House truly believed the Pentagon's original position was correct, it wouldn't need to change the legal designation. It could just defend Google's choice. Instead, it's simultaneously keeping Google's deal in place and trying to bring Anthropic back. That's contradictory policy.

The deeper reformer concern: if the White House can draft executive guidance to override a "supply chain risk" designation, then what was the point of the designation? If it's reversible through administrative action whenever political winds shift, then it's not a security designation. It's a political lever that can be turned against any vendor the current administration has a grievance with. Today it's used to bring Anthropic back. Next administration, it could be used to exclude someone else.

The 950 Google employees aren't wrong. Anthropic said no to something that matters. Google said yes. And now the government is trying to have both.

The reformer's bottom line: Anthropic held the line against something the Pentagon explicitly wanted: unrestricted military use of AI without human authorization requirements or domestic surveillance prohibitions. That took political capital. Anthropic burned it. And if the White House is now going to use executive action to bring Anthropic back anyway, then the company's refusal didn't actually change anything. The Pentagon got the capability from Google. The administration is just trying to stock the shelves with every option. That's not policy correction. That's hedging.

Where They Actually Agree (Whether They'll Admit It or Not)

The appeals court decision, the Google contract, and the White House pivot have created a weird consensus nobody's talking about: the courts, the executive branch, and the industry itself have now tacitly agreed that unrestricted military AI is going to happen.

The appeals court didn't enjoin the Pentagon's blacklist. It let it stand. That's the judiciary saying: the executive has the power to designate companies as supply chain risks. The Pentagon used that power. The White House is now using the reverse power—executive guidance to override that designation. And Google, in the middle of a company-wide revolt, signed the contract anyway. What all three positions have in common is that they have accepted the premise: AI for military use without Congressional oversight will proceed.

The disagreement isn't about whether it's going to happen. It's about how fast, and who profits from the transition. Hawks want it immediately and efficiently. Reformers want it legally constrained. The White House wants it available from multiple vendors so the government never has to depend on one company saying no. None of these positions require Congress to pass a law.

That's the real problem nobody's solving.

Where They Don't (And Shouldn't Pretend To)

The fundamental disagreement hasn't moved an inch in the five weeks since the appeals court ruling.

On whether "all lawful purposes" is a meaningful constraint: Hawks argue that "lawful" is the operative word—if Congress hasn't prohibited it, it's lawful, and the military has the authority to pursue lawful objectives. Reformers argue that "lawful" is circular if Congress hasn't actually passed a law defining what military AI can do domestically or in autonomous targeting. Each side is right about its own premise. The problem is they're arguing about different meanings of the same word.

On the White House executive action: Hawks see it as reasonable course correction—once you've found a vendor, why needlessly exclude another? Reformers see it as an end-run around the security designation that just proves the security designation was political rather than technical. The hawk argument assumes the designation was a mistake. The reformer argument assumes it was deliberate. Both cannot be true simultaneously, but the White House is acting as if both are.

On the Google employee letter: Hawks see it as normal corporate dissent—people often disagree with company contracts. Reformers see it as a legitimate moral objection to the specific terms of that contract. One frame sees principled disagreement as noise. The other sees it as a warning sign nobody's heeding. Neither side is going to move the other by debating whose interpretation is more reasonable. They're using different moral languages.

Here's My Two Cents

The most damning thing about this story is that it's now a story at all. By April 29, the question of whether the Pentagon can use AI to surveil American citizens and make lethal targeting decisions without human authorization should not still be pending oral arguments. It should be settled law.

Instead, we have four pieces moving simultaneously: a Pentagon that got what it wanted from Google anyway, a White House trying to un-exclude Anthropic despite no legal framework to do so, a company's workforce protesting a contract its executives already signed, and a federal court system deciding the policy questions that Congress was supposed to answer three years ago.

What strikes me is that the White House's executive action approach is actually a cleaner read than anything else in this timeline. The executive is saying: we want options. We want Anthropic's Mythos model available. We want Google's Gemini available. We want OpenAI's models available. Let's build a portfolio rather than fighting over which vendor is ideologically pure. That's not a policy position. That's procurement hedging. It's what you do when you haven't actually decided what problem you're solving, so you acquire every possible solution.

But it also means the Pentagon wins either way: it has Google now, and if the White House succeeds in bringing Anthropic back, it has both. Anthropic's refusal to sign the original contract bought the company political credibility with exactly the constituency the 950 Google employees now represent. But it didn't change what the Pentagon acquired. It just changed which vendor would supply it.

The hawks are right that this is about operational capability. The reformers are right that the capability matters because of what it enables. And neither side is wrong. But both are operating in the absence of law. So they're left negotiating the shape of the problem in federal court instead of writing legislation.

The 950 Google employees are asking their company to do something it won't do: reconsider a signed contract based on the employees' moral objection to its terms. That question—whether a company should be able to refuse a government contract on grounds of ethical principle—is exactly the kind of question a legislature should resolve. Should contractors have that right? Under what circumstances? What are the national security exceptions? What audit trail is required?

These are not unanswerable questions. They're uncomfortable questions. Because "yes, contractors can refuse military work on ethical grounds" undermines military procurement. And "no, they can't" means the government can always find someone willing, and the refusal doesn't actually change anything; it just makes the refuser look pure while someone else does the work.

Congress could pass a law that settles this. Congress could require AI-in-warfare impact assessments. Congress could mandate domestic surveillance audits. Congress could define which uses of military AI require explicit human authorization. Congress could create liability for autonomous weapons deployment. Congress could make the tradeoff visible and force elected representatives to vote on it publicly.

Congress is not doing this. The White House is drafting executive guidance. The Pentagon is signing contracts. Google's employees are signing letters. The court system is hearing oral arguments. And somewhere in that ecosystem, the policy decision is being made. It's just being made everywhere except where it's supposed to be made.

That's the real scandal. Not that Anthropic said no. Not that Google said yes. Not that the White House wants a second option. The scandal is that none of these are decisions being made through the legislative process. They're negotiations being conducted by press release and court filing and executive guidance because Congress abdicated.

The oral arguments on May 19 will matter. But they'll matter because a judge will be forced to answer a question that should have been answered by legislation. That's not the judiciary working as intended. That's the judiciary picking up after the legislature stopped showing up.

Related Briefings

Weekly Update · March 26, 2026
Bipartisan Translation: The Department of War Tried to Muzzle an AI Company
The first briefing in this series. Anthropic refused. The government retaliated. A judge flagged it. This April 29 update is the sequel.
Weekly Update · April 23, 2026
Connecticut Picks the Fight
The same federal-vs-state-vs-courts arc, applied to AI regulation rather than military AI procurement.
Weekly Update · April 30, 2026
702 Got the Substance. Then Got the Poison Pill.
Same Congressional dysfunction, surveillance lane. Real reforms held hostage to an unrelated CBDC fight.

Anna R. Dudley writes on national security, intelligence policy, and the places where hawks and reformers need to find each other. This is the second briefing in the Bipartisan Translation series. Read the first briefing. Subscribe at annardudley.substack.com.
