Is an AI “Chatbot” actually running the war in the Middle East?
It sounds like something straight out of a sci-fi movie, but the U.S. military is reportedly using Claude AI (made by the company Anthropic) to help pick targets in the war against Iran. According to recent reports, the AI system helped identify and strike a staggering 1,000 targets within the first 24 hours of the conflict. By plugging the AI into a massive data system called “Maven,” the military was reportedly able to process satellite photos and intelligence at “machine speed,” launching hundreds of missiles in a single day, a process that once took human planners weeks to finish.
The twist? There is a massive “civil war” happening behind the scenes between the U.S. government and the AI’s creators. President Trump recently labeled Anthropic a “supply chain risk” because the company refused to let its AI be used for things like mass surveillance or fully autonomous “killer robots.” Yet even as the government tries to ban the company, the Pentagon admits it can’t stop using Claude yet because the system is too deeply embedded in its war infrastructure. It’s a messy mix of cutting-edge technology and high-stakes ethics, and it’s changing how wars are fought in real time.