Analysis at the intersection of artificial intelligence, national security, and the infrastructure decisions that will define the next era of governance.
Analysis
AI Policy & Strategy
Deep analysis of AI governance, defense procurement, and the strategic decisions being made outside public view.
Writing
Newsletter & Publications
From the Power Moves Before Policy Does newsletter to my forthcoming book on structured analytic techniques in the generative age.
Applications
Data & Dashboards
Purpose-built analytical tools and interactive dashboards that turn complex data into actionable intelligence.
Artificial intelligence isn't just a software story. It's a resource story. A governance story. A geopolitical story.
Forthcoming
Structured Analytic Techniques in the Generative Age
How do the foundational tools of intelligence analysis evolve when generative AI reshapes the information environment? This book examines the intersection of proven analytic frameworks and emergent AI capabilities.
The Question
How must structured analytic techniques adapt when AI can generate, manipulate, and flood the information space?
The Approach
Bridging classic intelligence methodology with the realities of generative AI.
I operate at the intersection of artificial intelligence and national security, translating the conversations happening in badge-access briefings, procurement offices, and strategic planning cells into clear, actionable analysis for the people who need it most.
My work spans AI policy analysis, defense technology strategy, and the infrastructure decisions that will define the next era of governance. I write for an audience that doesn't have time for filler: the staffers preparing the brief, the strategists weighing the options, and the decision-makers who need signal, not noise.
Through my newsletter Power Moves Before Policy Does and my forthcoming book Structured Analytic Techniques in the Generative Age, I'm building a body of work that bridges the gap between what AI can do and what policymakers need to understand about it.
Areas of Focus
What Drives My Work
Policy
AI Governance
Analyzing how governments are (and aren't) keeping pace with AI capability development.
Security
Defense & Intelligence
Examining the integration of AI into national security infrastructure and military decision-making.
Infrastructure
Strategic Systems
Following the resource story beneath the technology story: energy, compute, supply chains.
Media
Headshots & Media Kit
High-resolution images available for press, conferences, and publications.
Deep dives into the policy decisions, infrastructure realities, and strategic dynamics shaping artificial intelligence and national security.
Series:
Red Team Scenarios
The Logistics Oracle
A logistics AI designed to optimize supply chains begins producing intelligence-grade assessments about adversary mobilization. The system was never authorized to assess adversary intent. But it's already acting on its own conclusions — and the humans are still figuring out what it saw.
Military AI · Defense Logistics · Forward Deployment · AI Governance · Intelligence Community · Automation
Weekly Update
OpenAI Wants Robot Taxes and a Four-Day Workweek. Here's What That Actually Means.
OpenAI released a policy blueprint proposing a robot tax, a public wealth fund, and automatic safety-net triggers for AI-driven displacement. A private company just did Congress's homework. That should make you uncomfortable regardless of whether you like the answers.
Coordinated synthetic audio drops twelve days before the midterms. The forensics are ambiguous, the platforms disagree, and every government response carries political risk. The NSC Deputies Committee needs your recommendation in six hours.
The Government Can Buy Your Data Without a Warrant. Congress Has 18 Days to Decide If That's Okay.
Section 702 expires April 20. Federal agencies can purchase Americans' personal data from brokers, feed it through AI systems, and conduct pattern-of-life analysis without a warrant. Two bipartisan bills would close the loophole. Congress has 18 days.
A foreign intelligence service uses commercially available AI voice cloning to impersonate senior U.S. intelligence officials on real phone calls, extracting classified personnel rosters from IC analysts. The scenario is fictional. Every capability it describes exists today.
AI Voice Cloning · Counterintelligence · Deepfake Threats · Intelligence Community · Identity Verification · National Security
Weekly Update
Bipartisan Translation: The Department of War Tried to Muzzle an AI Company. A Judge Noticed.
The series that translates national security arguments across partisan lines, because the stakes are too high for tribal shorthand. The Department of War demanded Anthropic remove safety guardrails. Anthropic said no. The government retaliated. A judge flagged it. And Congress has done nothing.
Military AI · Autonomous Weapons · Domestic Surveillance · Fourth Amendment · Congressional Oversight · AI Governance
Interactive
Widgets & Simulations
Interactive tools embedded in the briefings. Play with them here, or read the analysis they were built for.
Decision Simulations
Step Into the Room
Make the calls. Watch the consequences. These simulations put you in the chair where the decisions happen.
Purpose-built tools for analysis, visualization, and decision support.
Intelligence Dashboard
Data Center Stress Index (DCSI)
An interactive dashboard mapping the infrastructure pressure of AI-scale data centers on American communities. County-level stress grading from A to F.
A defense intelligence policy analysis tool that helps decision-makers determine when a policy posture is too restrictive to allow innovation, or too permissive to prevent real failures.
The DCSI tracks how data center buildouts are stressing local infrastructure across the United States. As AI companies race to build computing capacity, communities are absorbing the costs: strained power grids, depleted water supplies, and local governance structures that were never designed for industrial-scale data operations.
The dashboard scores counties on energy burden, water consumption, grid reliability, and community impact, then assigns letter grades from A (minimal stress) to F (critical). It turns the abstract "data center boom" into concrete, county-level intelligence that policymakers, journalists, and community leaders can act on.
Methodology
The DCSI composite score is built from four weighted stress dimensions:
Energy Burden - Power consumption relative to local grid capacity, rate impacts on residential customers, and renewable vs. fossil fuel sourcing
Water Consumption - Cooling water draw relative to local supply, impact on municipal water systems, and drought vulnerability
Grid Reliability - Frequency and duration of outages, infrastructure age, and capacity reserve margins
Community Impact - Tax incentive structures, job creation ratios, foreign ownership flags, and local governance capacity
Each dimension is normalized to a 0-100 scale, weighted, and combined into a composite score that maps to letter grades. The methodology is designed to surface compounding risks where multiple stress dimensions converge in a single county.
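The scoring pipeline described above can be sketched in a few lines. The dimension weights and grade cutoffs below are illustrative assumptions for demonstration, not the DCSI's actual parameters:

```python
# Illustrative sketch of the DCSI composite scoring pipeline.
# WEIGHTS and GRADE_CUTOFFS are assumed values, not the dashboard's real ones.

# Hypothetical weights over the four stress dimensions (must sum to 1.0).
WEIGHTS = {
    "energy_burden": 0.30,
    "water_consumption": 0.25,
    "grid_reliability": 0.25,
    "community_impact": 0.20,
}

# Composite score thresholds mapped to letter grades (A = minimal stress).
GRADE_CUTOFFS = [(20, "A"), (40, "B"), (60, "C"), (80, "D")]

def composite_score(dimensions: dict[str, float]) -> float:
    """Combine normalized 0-100 dimension scores into a weighted composite."""
    return sum(WEIGHTS[name] * dimensions[name] for name in WEIGHTS)

def letter_grade(score: float) -> str:
    """Map a 0-100 composite score to a letter grade; F is critical stress."""
    for cutoff, grade in GRADE_CUTOFFS:
        if score <= cutoff:
            return grade
    return "F"

# A hypothetical county where energy and community stress compound.
county = {
    "energy_burden": 72.0,
    "water_consumption": 55.0,
    "grid_reliability": 40.0,
    "community_impact": 65.0,
}
score = composite_score(county)
print(f"Composite: {score:.1f}, Grade: {letter_grade(score)}")  # 58.4, C
```

Because the composite is a weighted sum of already-normalized dimensions, a single critical dimension can be masked by strong scores elsewhere; surfacing compounding risk, as the methodology intends, requires inspecting the per-dimension scores alongside the grade.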
Data Sources
U.S. Energy Information Administration (EIA) - grid capacity and consumption data
EPA and USGS - water usage and regional supply estimates
State utility commissions - rate structures and outage reporting
County assessor and economic development records - tax incentives and ownership data
Open-source facility databases - data center locations, operators, and capacity
Key Features
Interactive county-level map with stress grading (A through F)
Sankey flow diagrams showing energy allocation across facilities
Facility-level drill-downs with operator and capacity details
Elected official lookup tied to affected jurisdictions
Narrative analysis contextualizing local impacts
Who It's For
Policymakers, journalists, researchers, and community leaders who need to understand the local cost of the AI infrastructure boom.
PRDS is a defense intelligence policy analysis tool. It helps decision-makers answer one question for each organization in their portfolio: Is this organization's current policy posture appropriate given its mission, capacity, and risk environment?
The tool targets the USD(I&S) portfolio, covering 10 organizations and 200+ sub-entities across the defense intelligence and security enterprise. It produces two key outputs for every organization: a recommended operating zone where total risk is minimized, and a 7-dimension policy profile scoring the organization across seven areas of intelligence community policy, from collection authorities to oversight compliance.
Any decision about policy restrictiveness produces harm in two directions simultaneously. Too restrictive: the organization loses the ability to coordinate, share intelligence, and respond in time. Too permissive: it exposes itself to exploitation and accountability failures. PRDS makes the structure of that tradeoff visible so consequences can be weighed before the choice is made.
Inside the Tool
Reform History & Gap Analysis
Emerging Policy Gap Analysis
Methodology
PRDS implements a multi-objective optimization framework over two opposing sigmoid risk functions, parameterized by institutional capacity:
Coordination Failure Risk - Increases as policy becomes more restrictive, limiting information sharing, partner coordination, and operational agility
Exposure / Accountability Risk - Increases as policy becomes more permissive, creating attack surface, reducing oversight, and weakening access controls
Institutional Capacity - A damping variable that determines how much restrictiveness or permissiveness an organization can tolerate before risk spikes
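The shape of this tradeoff can be sketched with two opposing logistic (sigmoid) curves, with capacity damping their steepness. The midpoints and steepness constants below are illustrative assumptions, not PRDS's fitted parameters:

```python
import math

# Sketch of two opposing sigmoid risk functions damped by institutional
# capacity. Midpoint (0.6) and steepness (10 / capacity) are assumptions.

def coordination_failure_risk(restrictiveness: float, capacity: float) -> float:
    """Rises as policy becomes more restrictive (restrictiveness in [0, 1]).
    Higher capacity flattens the curve: the organization tolerates more."""
    steepness = 10.0 / capacity
    return 1.0 / (1.0 + math.exp(-steepness * (restrictiveness - 0.6)))

def exposure_risk(restrictiveness: float, capacity: float) -> float:
    """Rises as policy becomes more permissive, i.e. as restrictiveness falls."""
    steepness = 10.0 / capacity
    return 1.0 / (1.0 + math.exp(-steepness * ((1.0 - restrictiveness) - 0.6)))

def total_risk(restrictiveness: float, capacity: float) -> float:
    """Both harms accrue simultaneously, so total risk is their sum."""
    return (coordination_failure_risk(restrictiveness, capacity)
            + exposure_risk(restrictiveness, capacity))

# The recommended operating zone is where total risk bottoms out;
# a coarse grid search over restrictiveness is enough to find it here.
capacity = 1.0
best = min((total_risk(r / 100, capacity), r / 100) for r in range(101))
print(f"Minimum total risk {best[0]:.3f} at restrictiveness {best[1]:.2f}")
```

With symmetric curves the minimum sits at the midpoint; in practice the fitted midpoints and capacity differ per organization, which is what moves the recommended operating zone.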
Every policy and organization is assessed across 7 dimensions, each scored 0.0 (most permissive) to 1.0 (most restrictive), mapped to the Intelligence Community Directive (ICD) series: Collection & Operational Authorities, Classification & Dissemination, Personnel Security & Vetting, Foreign Partnership & Disclosure, Technology & Acquisition Controls, Workforce Governance, and Oversight & Compliance.
In Gen Four, dimension weights are empirically derived via logistic regression on 225 scored events. The heaviest weights are Classification & Dissemination (0.276) and Oversight & Compliance (0.259), confirming these as the strongest predictors of outcomes.
Intelligence Community Directive (ICD) policy series: 7 dimensions mapped to ICD 100-800 series and EO 12333
225 scored policy events for empirical weight derivation via logistic regression
6 reform pairings with before/after 7-dimension scores for weight validation
FY26 intelligence budget data for dependency impact analysis
Publicly available organizational postures for 65 entities with default 7-dimension scores
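The weight-derivation step can be sketched with a toy logistic regression. The events and labels below are synthetic stand-ins for the 225 scored policy events; the fitted coefficients are illustrative, not the published Gen Four weights:

```python
import math
import random

# Toy sketch of deriving dimension weights via logistic regression.
# Training data is synthetic: outcomes are driven by dimensions 2 and 7,
# standing in for Classification & Dissemination and Oversight & Compliance.

random.seed(0)
DIMS = 7  # the seven ICD-mapped policy dimensions

# 225 synthetic events: a 7-dimension score vector in [0, 1] plus a label.
events = [[random.random() for _ in range(DIMS)] for _ in range(225)]
labels = [1 if e[1] + e[6] > 1.0 else 0 for e in events]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Plain full-batch gradient descent on the logistic loss.
w = [0.0] * DIMS
b = 0.0
lr = 0.5
for _ in range(2000):
    gw, gb = [0.0] * DIMS, 0.0
    for x, y in zip(events, labels):
        err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
        for i in range(DIMS):
            gw[i] += err * x[i]
        gb += err
    w = [wi - lr * gwi / len(events) for wi, gwi in zip(w, gw)]
    b -= lr * gb / len(events)

# Normalized absolute coefficients serve as the dimension weights.
total = sum(abs(wi) for wi in w)
weights = [abs(wi) / total for wi in w]
top = sorted(range(DIMS), key=lambda i: weights[i], reverse=True)[:2]
print(f"Heaviest dimensions: {top}")  # the two outcome-driving dimensions
```

The regression recovers the dimensions that actually drive outcomes, which is the validation logic behind checking the fitted weights against the reform pairings.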
Key Features
Interactive risk function curves with adjustable institutional capacity parameters
7-dimension radar charts for cross-organization policy comparison
Information sharing network visualization across organizations
Dependency removal simulator showing cascading effects across the portfolio
Policy reform history with before/after posture analysis
Emerging policy gap analysis across AI, quantum, cyber, space, and workforce vectors
Bayesian scoring model with plain-language and technical methodology toggle
Document analysis capability for user-uploaded policy documents
Who It's For
Defense intelligence decision-makers, policy analysts, and senior leaders in the USD(I&S) portfolio who need to evaluate whether organizational policy postures are calibrated to mission requirements and risk environments.
Structured Analytic Techniques in the Generative Age — how the intelligence community's analytic tradecraft must be redesigned, not augmented, for the era of generative AI.
Covers three new failure modes (plausible-sounding synthesis, confident hallucination, source traceability collapse) and proposes four redesigned SAT frameworks.
White Papers
Research & Analysis
White PaperApril 2026
ARGUS: Automated Review & Grading Utility for Software
Build plan, evidence architecture, and offline deployment strategy for an agent that scans software repositories, producing a tiered confidence report with zero internet dependency.
Layered check architecture across four evidence tiers: presence, usage, integration (AST + call graph), and behavioral (test suite). Supports Python, JS/TS, Go, Java, C#, Rust, and C/C++.
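The layered-check idea can be sketched as a ladder of evidence tiers where confidence rises with each consecutive tier passed. The check functions, the `repo` stand-in, and the confidence labels below are assumptions for illustration, not ARGUS's actual interfaces:

```python
# Hypothetical sketch of ARGUS's four-tier evidence ladder.
# Tier names follow the paper (presence, usage, integration, behavioral);
# the checks and report format here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TierResult:
    tier: str
    passed: bool
    detail: str

def run_tiers(repo: dict) -> list[TierResult]:
    """Run checks in order of increasing evidentiary strength.
    `repo` stands in for facts extracted from an offline repository scan."""
    return [
        TierResult("presence", bool(repo.get("files")),
                   "required files exist"),
        TierResult("usage", bool(repo.get("call_sites")),
                   "feature is actually invoked"),
        TierResult("integration", bool(repo.get("call_graph_paths")),
                   "AST/call-graph path links entry points to the feature"),
        TierResult("behavioral", bool(repo.get("tests_pass")),
                   "test suite exercises the behavior"),
    ]

def confidence(results: list[TierResult]) -> str:
    """Grade rises with consecutive tiers passed; a failed tier caps it,
    since higher tiers only count as evidence if lower ones hold."""
    tiers_passed = 0
    for r in results:
        if not r.passed:
            break
        tiers_passed += 1
    return ["none", "low", "medium", "high", "verified"][tiers_passed]

repo = {"files": ["auth.py"], "call_sites": 3,
        "call_graph_paths": 1, "tests_pass": False}
print(confidence(run_tiers(repo)))  # three consecutive tiers pass -> "high"
```

Stopping at the first failed tier is the key design choice: presence without usage, or usage without an integration path, is weak evidence, so later tiers cannot rescue an earlier gap.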
Structured Analytic Techniques in the Generative Age
About the Book
For decades, structured analytic techniques have served as the backbone of intelligence analysis. But the information environment these techniques were designed for no longer exists.
Generative AI has fundamentally altered what it means to collect, evaluate, and synthesize information. When AI can produce convincing text, imagery, and data at scale, the analyst's challenge is no longer just finding the signal - it's verifying that the signal is real.
This book provides a practical bridge between classic analytic methodology and the generative AI era.
Core Question
How must structured analytic techniques evolve when AI can generate, manipulate, and flood the information space?
Audience
Intelligence analysts, policy researchers, national security professionals, and anyone whose work depends on getting the analysis right.
Approach
Bridging proven methodology with generative AI realities - practical frameworks for practitioners, not abstract theory.