Overview
In February 2026, the Pentagon demanded that Anthropic remove all safety guardrails from its Claude AI model for military use, specifically the prohibitions against mass domestic surveillance and against fully autonomous weapons without human oversight. Anthropic's CEO, Dario Amodei, publicly refused. On February 27, Trump ordered all federal agencies to cease using Anthropic's technology, and Defense Secretary Pete Hegseth designated the company a "supply chain risk to national security" — a classification normally reserved for companies like Huawei with ties to hostile foreign governments.
Within hours, the Pentagon signed a new contract with OpenAI that included the same safety red lines Anthropic had been designated a national security threat for insisting upon. The next day, the Wall Street Journal reported that U.S. Central Command had used Claude — integrated into the Pentagon's Maven Smart System — for intelligence assessments, target identification, and battle scenario simulation during military strikes against Iran, hours after the ban took effect.
Significance for Agre's Framework
This event crystallizes virtually every major concept in Agre's intellectual framework:
Critical Technical Practice: Anthropic's refusal represents institutional-scale enactment of something like CTP — a technical organization refusing to separate "doing" from questioning what the doing enables. The retaliation mirrors Agre's description of how technical fields suppress internal critique.
The Capture Model: The dispute centered on whether AI would be used for mass domestic surveillance — not visual surveillance (Foucault's panopticon) but computational capture of activity patterns. The "all lawful purposes" language in the OpenAI contract embodies Agre's insight that capture operates through the redesign of categories rather than overt watching. "Publicly available information" collection is capture, not surveillance.
Selective Amplification: The government's enforcement was selectively applied — identical safety commitments were accepted from one company but treated as grounds for a national security designation when asserted by another. This is the mechanism Agre described: institutional power amplifying preferred positions while suppressing others.
Infrastructural Warfare: The domestic surveillance infrastructure (ImmigrationOS, 135,000 detention beds, geofenced targeting) represents the permanent wartime footing Agre predicted in "Imagining the Next War" — where the distinction between war and policing collapses and security becomes a rationale for unlimited institutional expansion.
Security Through Design vs. Protection: Anthropic's position (safety built into the contract) represents redesign. The Pentagon's position (unrestricted tools for threat detection) represents protection. The government's choice to punish redesign and reward the appearance of protection confirms Agre's argument about which approach prevails when conservative institutional capture is operative.
The Cascade Series Connection
Mark Ramm's "The Pentagon Banned Claude as a National Security Threat" (March 2, 2026) analyzed this event as part of the Cascade Series, drawing on a timeline database of 4,300+ events spanning 1142 to 2026. The analysis demonstrates the Agrean method of institutional genealogy applied to contemporary events — tracing the institutional circuitry that produced the confrontation rather than treating it as an isolated incident.