AI Ethics & Iran
This morning I was listening to my regular podcasts: BBC World News updates, NPR headlines, and The AI Daily Brief. The latter devoted the entire first third of one episode to the widespread astonishment that Anthropic’s model Claude had reportedly been used in “Operation Epic Fury.” I was surprised that this had become such major news. Curious, I looked through mainstream news sites and saw coverage everywhere from The Wall Street Journal to The New York Times framing the development as something close to a betrayal of public trust.
Did we not expect governments to use the most advanced AI systems available, especially those optimized for coding, to write code for the Department of War? Not only to write code, but to help dominate an AI-driven battlespace? Claude is widely considered a top-tier, often best-in-class, model for coding tasks. And Palantir Technologies has integrated Anthropic’s Claude models into its platforms since around 2024, reportedly including defense applications connected to the Department of Defense, an institution that specifically deals in warfare.
So why is it only now, in 2026, that Anthropic’s head of AI security has stepped down and its CEO, Dario Amodei, has said the company “cannot in good conscience” allow its models to be used under these conditions?
“I think it [Anthropic] probably made a mistake,” the FCC’s Brendan Carr told CNBC. “There’s obviously rules of the road that are in place that are going to apply to every technology that the Department of War contracts with.” - CNBC
Even though Anthropic’s dispute played out publicly on the world’s stage (a.k.a. social media) just before the killing of Iran’s supreme leader, and even though President Donald Trump reportedly ordered U.S. agencies to stop using Anthropic technology, it remains difficult to deny that a modern technical toolkit is incomplete without an advanced coding system like Claude.
Anthropic is now seeking to revise its contract with the U.S. Department of Defense. And while almost everyone in the AI industry, Sam Altman included, is currently embracing the language of “guardrails,” there is reason to question how meaningful those constraints can ultimately be. (Questioning this sends the AI ethicists posting on social media into a frenzy!)
Artificial intelligence cannot be fully bent to subjective human ethics, particularly when those ethics are themselves debated and contested, whether between people or between borders. It is even harder to control systems that evolve rapidly in environments that change minute by minute, such as active conflict. If you enter into a contract for war, you must accept the rules of war, which, in reality, often means there are no rules at all.
A side note on the business of war, and a hint of what is to come with AI weapons: