
Faster Than Thought
Nine hundred strikes in twelve hours. Everyone's debating what started this war. Nobody's asking if it was the AI.
The Brief
This article examines how AI is compressing military decision-making timelines from days to minutes, using the March 2026 Iran strikes as a case study. It traces the paradox of the US government banning, then using, Anthropic's Claude AI, explores the concept of cognitive off-loading, and asks what happens to human oversight when planning outpaces reflection.
- What is decision compression in military AI?
- Decision compression describes AI collapsing the military planning cycle from days or weeks to minutes. AI systems simultaneously analyze drone footage, telecommunications intercepts, and human intelligence, then identify targets, recommend weapons, and generate legal justifications for strikes far faster than human planners could.
- Was Anthropic's Claude AI used in the Iran strikes despite being banned?
- According to the Wall Street Journal, Anthropic's Claude was used in the March 2026 Iran strikes through Palantir Technologies, despite President Trump ordering all federal agencies to stop using Anthropic three days earlier. A six-month phase-out provision in the ban allowed continued use.
- What is cognitive off-loading in warfare?
- Cognitive off-loading, described by Professor David Leslie of Queen Mary University of London, occurs when AI presents strike recommendations that humans approve without performing the same depth of analysis. The decision-maker remains in the loop but carries less of the analytical burden that historically slowed the process.
- How was Claude AI used in the Venezuela Maduro raid?
- In January 2026, US special operations forces used Anthropic's Claude during the raid to capture Venezuelan President Maduro, deployed through Palantir's classified military platform. An Anthropic employee's reported concern about this use triggered the broader Pentagon-Anthropic dispute over AI safety guardrails.
I watched Andrew Bustamante break down the Iran war the other night. Former CIA operative, years inside the intelligence machine.[1] He was laying out the reasons behind the strikes, and what struck me was how they didn't cohere. Nuclear ambitions, regime change, influence campaigns, legacy politics. Each made sense alone. Together, they read like competing memos nobody reconciled before the missiles left.
Then it hit me. Maybe nobody had to.
Intelligence communities still publish their threat assessments on paper. The ODNI's annual report gets compiled by committees, reviewed by analysts, structured as narrative because that's how humans process complexity. You need the reasons to tell a story. AI doesn't.
The systems processing that same raw intelligence don't need the signals to cohere. They don't need a story. They need data points. What the machine actually sees is drone footage and telecommunications intercepts and satellite imagery and human intelligence, all at once, none of it needing to agree. Contradictions aren't a problem when the output isn't a narrative. It's a target list.
The academics call the result "decision compression."[2] Planning that used to take days or weeks now happens in minutes. Craig Jones at Newcastle studies kill chains for a living, and his description stuck with me. "The AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought."[2]
Nine hundred strikes in twelve hours. Faster than thought.
The tool at the center of this war is Anthropic's Claude. The same Claude the government publicly banned three days before the strikes began.[3] Anthropic had refused to let its AI be used for mass domestic surveillance or fully autonomous weapons. The government ordered all agencies to stop using it. Three days later, a war launched with that very technology still running. A six-month phase-out clause meant the tool outlasted the politics. Nobody turned it off because nobody could.
The backstory writes itself. In January, Claude was used during the raid to capture Venezuela's President Maduro, deployed through Palantir's classified platform.[4] It worked. Then, during a routine check-in, an Anthropic employee reportedly expressed concern to a Palantir executive. Palantir told the Pentagon. That one conversation, someone wondering aloud whether their tool should help capture a head of state, triggered everything that followed.
The tool was too woke to keep. Not too woke to plan nine hundred strikes in half a day.
There's a term for what happens to the humans in this loop. David Leslie at Queen Mary University of London has watched live demonstrations of military AI, and he calls it "cognitive off-loading."[2] The machine presents its recommendation. The human approves it. But the thinking already happened inside the algorithm. The decision-maker is still in the chair. The weight of the decision isn't.
We've been arguing all week about what started this war. I haven't heard anyone ask if it was the AI. When the planning cycle was measured in days, a lawyer reviewed targeting criteria and an analyst questioned the assumptions underneath. A commander could sleep on it. When the cycle compresses to minutes, those checkpoints still exist on paper. Whether anyone has bandwidth to use them is another question.
Somewhere in the paperwork banning Anthropic, there's a line granting a six-month transition period. It might be the most consequential sentence written about this war, and the person who wrote it thought it was about procurement.
References
1. Bilyeu, T. & Bustamante, A. (2026). "Ex-CIA Spy Andrew Bustamante Breaks Down The Iran War." Impact Theory, iHeart.
2. Hern, A. (2026). "Iran war heralds era of AI-powered bombing quicker than 'speed of thought.'" The Guardian.
3. TechPolicy.Press. (2026). "A Timeline of the Anthropic-Pentagon Dispute."
4. Christou, W. (2026). "US military used Anthropic's AI model Claude in Venezuela raid, report says." The Guardian.