The US military used Claude, an AI model developed by Anthropic, in its raid to kidnap Nicolás Maduro from Venezuela, the Wall Street Journal revealed on Saturday. The raid involved bombing in Caracas and killed 83 people, according to Venezuela's defense ministry.

Anthropic's terms of service forbid using Claude for violence, weapons, or surveillance. The company declined to confirm that Claude was used in the raid but said all uses of its models comply with its rules. The US Defense Department also declined to comment on the report.

Sources told the WSJ that Claude was accessed through Anthropic's partnership with Palantir Technologies, a defense and federal law-enforcement contractor. Palantir declined to comment.

Military use of AI is growing: Israel operates autonomous drones in Gaza, and the US has deployed AI for targeting strikes in Iraq and Syria. Critics warn that AI in weapons systems can cause targeting errors and raises ethical concerns.

Anthropic's CEO, Dario Amodei, has called for AI regulation and is cautious about using AI in lethal military operations, a stance that worries some in the US defense sector. In January, US War Secretary Pete Hegseth said the military won't "employ AI models that won't allow you to fight wars." The Pentagon has also announced cooperation with Elon Musk's xAI and uses custom versions of Google's Gemini and OpenAI systems for research support.