AI Ethics, Anthropic & The Department of War
This morning I was listening to my regular podcasts: BBC World news updates, NPR headlines, and The AI Daily Brief. The latter devoted the first third of one episode to widespread astonishment that Anthropic’s model Claude had reportedly been used in “Operation Epic Fury.” I was surprised that this had become major news at all. Curious, I looked through mainstream news sites and saw coverage everywhere from The Wall Street Journal to The New York Times framing the development as something close to a betrayal of public trust.
As I understand it from the AI scene in NYC (and elsewhere), Anthropic has a strong reputation as something like the missionaries of AI: as a futurist in the arts once put it to me, they are building “AI for good!” I smiled politely at this oxymoronic phrase and moved on from the topic.
Did we not expect governments to use the most advanced AI systems available, especially those optimized for coding, to write code for the DoW? And not only to code, but to help dominate an AI-driven battlespace? Claude, particularly Claude 4.6 Sonnet and Opus, is widely considered top-tier, and often best-in-class, for coding tasks. Palantir Technologies has integrated Anthropic’s Claude models into its platforms since around 2024, reportedly including defense applications connected to the Department of Defense, an agency whose business is, after all, warfare.
So why is it now, in 2026, that Anthropic CEO Dario Amodei has said the company “cannot in good conscience” allow the use of its models in these conditions?
“I think it [Anthropic] probably made a mistake,” the FCC’s Carr told CNBC. “There’s obviously rules of the road that are in place that are going to apply to every technology that the Department of War contracts with.” - CNBC
Even though Anthropic’s dispute played out publicly, on stage, just before the killing of Iran’s supreme leader, and President Donald Trump reportedly ordered U.S. agencies to stop using Anthropic technology, it remains difficult to deny that a modern technical toolkit is incomplete without advanced coding systems like Claude.
While almost everyone in the AI industry, Sam Altman included, is currently embracing the language of “guardrails,” there is reason to question how meaningful those constraints can ultimately be. (AI ethicists I know have been posting on social media in a frenzy.)
Artificial intelligence cannot be fully bent to interpret subjective human ethics, particularly when those ethics are themselves debated and contested, whether between people or across borders. It is even harder to control systems that evolve rapidly in environments that change minute by minute, such as active conflict.
A note on the business of war, and a hint of what is to come with AI weapons: