The series that translates national-security and AI-policy arguments across partisan lines, because the stakes are too high for tribal shorthand.
Connecticut Did the Thing the Federal EO Was Supposed to Stop
What Happened This Week
On Tuesday, April 21, the Connecticut Senate passed Senate Bill 5 by a vote of 32 to 4. The bill establishes the most comprehensive state-level frontier-AI regulatory framework in the country: developer disclosure requirements for any model above a defined compute threshold, an Artificial Intelligence Policy Office overseen by an AI Policy Director, a regulatory sandbox for testing AI systems in supervised conditions, and explicit rules governing youth interactions with AI chatbots. The bill is now in the House. Senate Majority Leader Bob Duff, who managed the floor debate, framed the vote as "Connecticut deciding it cannot wait for Washington." Governor Ned Lamont's office signaled, in advance of the vote, that he intends to sign whatever final bill emerges from the conference process.
The bill passed exactly four months and ten days after President Trump's December 11, 2025 Executive Order titled "Ensuring a National Policy Framework for Artificial Intelligence," which directed federal agencies to challenge state AI laws inconsistent with the federal policy statement, established a Department of Justice AI Litigation Task Force, called for evaluating conditions on federal funding to states with conflicting laws, and asked the Federal Communications Commission and the Federal Trade Commission to consider preemptive federal disclosure standards. The EO was followed, on March 20, 2026, by the White House National Policy Framework for Artificial Intelligence, the document supplying the substantive policy the EO directs agencies to enforce.
Connecticut's Senate passed SB 5 with full awareness of all of this. The 32–4 margin includes every Democrat and all but four Republicans; three of the Republicans voting yes specifically referenced the federal preemption EO in their floor remarks. SB 5 is, by design, on a collision course. The collision is the point. The state is not asking the executive branch's permission. It is testing whether the federal preemption EO can actually preempt anything.
Here's What You Need to Know in 30 Seconds
Federal AI legislation has failed to pass in three years of trying. The December 2025 EO does not preempt state AI laws by itself: it directs agencies to challenge such laws and threatens conditional funding, but it carries no statutory preemption authority. The DOJ AI Litigation Task Force is preparing test cases, and Connecticut just gave it one. SB 5 will become law in Connecticut by mid-July if the House moves on its current schedule; expect DOJ to file a challenge within days of the governor's signature. The case will reach the Second Circuit by 2027 and likely the Supreme Court by 2028. Until then, the operative federal-state line on AI regulation is going to be drawn by district judges interpreting an executive order whose statutory grounding is itself contested. Eighteen other states have AI bills in committee, and their sponsors are watching this case to decide whether to advance.
The Hawk Case: Fifty Different Regimes Is a Disaster
The hawk position on Connecticut SB 5 is not anti-regulation. It is anti-fragmentation. The hawk's claim is that frontier-AI development requires either no regulation or one regulation, and the absolute worst outcome is the one Connecticut just made more likely: a patchwork of state-level rules, varying in scope, compliance burden, and enforcement intensity, that frontier-AI developers must navigate simultaneously to remain in the U.S. market. The compliance cost of fifty regimes is non-linear. It is not fifty times the cost of one regime; it is much more, because the regimes will conflict, and resolving the conflicts requires legal infrastructure that only the largest developers can sustain. The result, the hawk says, is regulatory capture by default. The big labs absorb the cost. Smaller competitors do not. American AI consolidation accelerates. And every other country watches and concludes the U.S. cannot govern itself.
The hawk further argues that state-level AI regulation, even when well-intentioned, is operating outside its competence. Frontier AI is not local. The decisions a developer makes about training-data composition, evaluation methodology, and deployment limits are made centrally and apply globally. A state-level regulator demanding documentation in a particular format is, in practice, demanding that the developer reformat globally for one market. The cost of that reformatting falls disproportionately on developers without the capacity to maintain parallel regulatory tracks. Connecticut, the hawk says, is not protecting its citizens from AI risk — it is preventing them from accessing the products developers ship to other markets, while imposing compliance costs that will be passed through to every Connecticut resident regardless of whether they ever interact with an AI product.
The hawk reads the December 2025 federal preemption EO as the right answer in spirit and the wrong instrument in detail. The EO is the correct response to the fragmentation problem. It identifies state-level patchwork as a national-security and economic-competitiveness issue. It directs federal agencies to push back. It signals to the states that the federal government will challenge conflicting laws. The instrument problem is that the EO does not have statutory backing — Congress has not passed a federal AI law that the EO can claim preemption from. The DOJ Litigation Task Force is therefore working in unusually thin legal territory: arguing field preemption from an executive order is not how preemption doctrine has historically functioned. The hawk's frustration is that the right policy posture is being executed through the wrong constitutional channel, and the result will be a series of mixed court rulings that leave the fragmentation problem worse than before.
The hawk's bottom line: Connecticut's bill is a state government opportunistically filling a vacuum the federal government created by failing to legislate. The fix is for Congress to pass a federal frontier-AI law that displaces the state patchwork through the ordinary operation of preemption doctrine. The fix is not for the executive branch to litigate the patchwork into submission, because the executive branch will lose some of those cases, and the result will be a more durable patchwork than if the executive had simply allowed Congress to do its job. Connecticut is not the villain. Congress is the absentee.
The Reformer Case: Connecticut Is the Only Lawmaker Doing Its Job
The reformer position starts from a different premise: federal inaction is a federal policy choice. Three years of frontier-AI bills have died in committee. The EU has passed a comprehensive framework. The UK has stood up an AI Safety Institute. China has its own deployment-licensing regime. The U.S. Congress, by contrast, has produced hearings and discussion drafts, and nothing else. The reformer's claim is that this is not deliberation. It is paralysis dressed up as deliberation. And paralysis at the federal level has consequences: while Congress decides what it would do if it could decide, the AI deployment frontier is moving, and somebody has to govern.
States have always been the laboratories. The 14th Amendment was tested in state-level civil-rights cases for a century before federal civil-rights legislation passed. Same-sex marriage was legal in more than thirty states before Obergefell. The minimum wage was a state-level invention. The reformer points out that federal preemption arguments tend to surface, in U.S. legal history, exactly when states begin doing things the federal government has declined to do. The pattern is not new. The pattern is the system working. Connecticut is testing the limits of federalism on AI policy because no other branch of government is willing to.
On the EO specifically, the reformer makes a more targeted argument. The December 2025 EO does not preempt anything by its own terms. Preemption requires either an act of Congress, a treaty, or a federal regulation issued under valid statutory authority. The EO directs agencies to push back against state AI laws, but the agencies are not actually granted preemption authority by the EO — they are granted, at best, the authority to file suit and to evaluate conditional funding leverage. The DOJ AI Litigation Task Force is, in legal-doctrinal terms, an ordinary plaintiff. Its theory of the case will be that state AI laws conflict with federal interests in interstate commerce or national security. That is a real theory. It is also a hard theory to win, because the same theory would, in principle, preempt nearly any state regulation of any technology. The reformer reads the EO not as a serious preemption play but as a bargaining posture, designed to deter states from acting while Congress works things out. Connecticut just decided not to be deterred.
The reformer further argues that fifty different state regimes is not the disaster the hawk describes. It is, in many domains, the standard U.S. regulatory equilibrium. State insurance commissioners regulate insurance. State attorneys general enforce consumer protection. State environmental agencies set air-quality standards above the federal floor. The frontier-AI labs are not facing fifty different regimes. They are facing the same dynamic every multistate enterprise has managed for the entire post-New Deal period. The compliance cost is real. It is not unprecedented. And it has historically pushed federal legislation forward, not backward.
The reformer's bottom line: Connecticut's bill is what state-level governance looks like when the federal government has, for three years, not produced a frontier-AI statute. The hawk's preferred answer — federal preemptive legislation — is also the reformer's preferred answer. The reason it has not happened is that Congress has not been able to assemble the votes for it. Blaming the states for legislating is blaming the states for doing their job. The fix is to pass the federal law, not to deny that fifty states would otherwise legislate in its absence.
Where They Actually Agree
Strip the rhetoric and the hawk and the reformer agree on the central diagnosis: federal AI legislation is overdue, the December 2025 EO is not a substitute for federal legislation, and the U.S. is heading into a multi-year period in which the operative federal-state line on AI regulation will be drawn by federal courts rather than by the legislative branch. Both sides agree that this is not the optimal path. Both sides agree that this is the path we are on.
Both sides agree that the courts will draw the line incrementally, through the cases that make it to the Second, Ninth, and Sixth Circuits first. Both sides agree that the Supreme Court is unlikely to rule on AI preemption this term and is likely to rule on it within three terms. Both sides agree that the rulings will not produce a clean preemption doctrine — they will produce a patchwork of holdings that vary by which state regulation is being challenged, what federal interest is being asserted, and which judge is on the panel. Both sides agree that this is, in policy terms, a poor way to resolve the question.
Both sides also agree, privately, that Connecticut's SB 5 was carefully drafted to maximize its likelihood of surviving a preemption challenge. The bill avoids a frontier-model definition that would conflict with any plausible federal definition. It uses compute-threshold language that mirrors the EU AI Act and most extant federal discussion drafts. Its chatbot rules are tied to data-protection authorities states have used for two decades. The reformer reads this as careful drafting. The hawk reads it as legal sandbagging. They are describing the same drafting choices.
Where They Don't (And Shouldn't Pretend To)
On whether the federal EO can actually preempt. The hawk treats the EO as a strong signal of federal intent that should be respected by states until Congress acts. The reformer treats the EO as a non-binding political statement with no preemption authority. Both are reading the same legal text. The disagreement is about what kind of text it is — a bargaining position or a legal claim. The courts will eventually decide. Until they do, the disagreement is irreducible.
On the cost of fragmentation. The hawk argues the cost is non-linear and disastrous for smaller developers. The reformer argues the cost is linear, and novel only in degree, not in kind. There is no reliable empirical study of multi-state compliance costs in the AI domain because the laws are too new. Both positions are defensible. They are not equally testable.
On states as laboratories. The hawk treats state-level AI legislation as opportunism filling a federal vacuum. The reformer treats it as the structural function the federalist system was designed to perform. Both readings are consistent with the historical record on different policy domains. The relevant question is which historical analogy applies, and that is a normative judgment, not a factual one.
On the courts as the eventual arbiter. Neither side is happy about this. The hawk would prefer Congress to legislate; the reformer would also prefer Congress to legislate. Both are forced into court-drawn lines because Congress has not. The disagreement here is over which side bears the larger share of the blame for that, and the answer cuts very differently along partisan lines than the underlying policy disagreement does.
Here's My Two Cents
Connecticut SB 5 is not the story. The story is that the operative AI-governance line in this country is going to be drawn by a district judge in the District of Connecticut sometime in the next eighteen months, and that line will then be tested in the Second Circuit, and then again in the Supreme Court, and the eventual doctrine that emerges will be the framework every other state and every other federal agency operates under for the next decade. That is what is at stake in the case the DOJ Litigation Task Force is now drafting. The case will not be styled as The Future of AI Governance. It will be styled as something like United States v. Connecticut. The fact that the styling is mundane is not a sign that the stakes are. It is a sign that we have arrived at the point where the most consequential policy questions get decided in pleadings and oral arguments rather than in floor speeches.
Connecticut is not the protagonist of this story. Connecticut is the catalyst. The protagonist is the federal court system, which is about to be asked to define the limits of executive-branch preemption authority in a domain where Congress has refused, for three years, to legislate. Federal courts have done this kind of work before, most famously on environmental regulation, immigration, and labor relations. The pattern is consistent. When the executive branch tries to do the work the legislature has declined to do, the courts eventually narrow the executive's authority. The courts do not love the role. They take it because nobody else will.
That makes my reading of the political dynamics more cynical than either the hawk or the reformer would put on the record. The December 2025 EO was issued knowing that Congress was unlikely to pass federal AI legislation in 2026. The DOJ Litigation Task Force was stood up knowing that the test cases would be hard. The administration's strategy, if I read it correctly, is to use litigation to create a federal preemption posture that Congress does not have the votes to formalize. This is not a uniquely Republican strategy. The Biden administration used a similar posture on student-loan forgiveness, executive immigration relief, and Title IX enforcement when Congress would not move. The pattern is the executive branch using litigation and rulemaking to advance policies the legislative branch will not pass. The courts have, in most of those cases, eventually ruled against the executive. The reason they ruled against the executive is that the executive was operating beyond its statutory authority, not because the policy was wrong.
The same pattern is going to apply here. The DOJ AI Litigation Task Force is going to win some cases and lose more. The cases it loses will narrow executive preemption authority on AI specifically and on emerging-technology regulation generally. The cases it wins will be narrow holdings that do not establish a coherent doctrine. The fragmentation the hawk fears will not be solved by the litigation. It will be made messier by the litigation. The only thing that solves it is federal legislation.
So the policy question that matters, the one neither the hawk nor the reformer is willing to answer in public, is what federal AI legislation would actually look like, and whether the votes for it could be assembled if Connecticut and several other states force the issue through the courts. My reading is that the votes are closer than they look. The Senate has a working bipartisan group on frontier-model disclosure that has been quietly drafting since January. The House has a parallel effort that has been less public but more advanced. Neither group has been willing to go public with a bill because the political cost of being the sponsor is high and the political cost of being the opposition is currently low. Connecticut may have just changed the cost of opposition. If a Connecticut judge rejects the DOJ's preemption challenge, the political cost of opposing a federal bill goes up sharply, because the alternative is now the patchwork the hawk has spent two years warning against. The legislation that does not exist today may exist in eighteen months, not because Congress wants it but because the alternative is the patchwork doing the work the legislation should have done.
That is where I land on this. Connecticut's SB 5 is a forcing function. Whether it forces what its sponsors want (a state-level laboratory model that survives preemption challenge) or what its opponents want (a federal preemptive law that supersedes the state framework) depends on factors none of us control. Either way, the action moves to the courts, and from the courts, eventually, back to Congress. The fastest path to a coherent federal AI framework, ironically, may run directly through the patchwork the hawks have been warning about. Sometimes the only way to make Washington legislate is to make Washington realize that the alternative is worse.
Position yourself accordingly. The companies waiting for "regulatory clarity" should expect the next two years to deliver the opposite of what they are asking for. The state attorneys general watching to see whether to follow Connecticut's lead will have an answer by the end of summer 2027. The federal courts that are about to take ownership of this question will not enjoy doing so, and they will spend the next half-decade producing a doctrine that none of the parties to the original dispute would have written if they had been allowed to write it. That doctrine will, eventually, become the operative AI policy of the United States. The doctrine will be the work of judges. The judges will be the people Congress declined to be.
I do not know if that is a problem. I know it is what is happening.
Anna R. Dudley writes on national security, intelligence policy, and the places where hawks and reformers need to find each other. Bipartisan Translation is the weekly series for the conversation that is not happening on cable news. Subscribe at annardudley.substack.com.