Hundreds of civilians died in the Iran war. Is the US military’s AI making fatal mistakes?
Are there guardrails on the use of AI in the Iran war?
The AI tools used by the military are extremely sophisticated, but they have not yet reached the point where human judgment is no longer required.
- Experts and former officials say military artificial intelligence systems are at the heart of Operation Epic Fury
- As the war drags on, the role of AI could increase
- In mid-March, more than 100 members of the House and Senate signed a letter to Defense Secretary Pete Hegseth asking whether the Maven Smart System was involved in the school strike.
The deaths of hundreds of Iranian civilians in the war have put the U.S. military’s new AI systems in the spotlight, with lawmakers raising concerns that the systems are making fatal mistakes.
Experts and former officials say the military’s artificial intelligence systems are at the heart of Operation Epic Fury, a new phase in the deployment of AI on the battlefield.
“After years of saying we were moving too slowly, I am now concerned about how fast we are moving,” said retired Lt. Gen. Jack Shanahan, who led efforts to develop and integrate AI into the military.
“At some point, it may become increasingly difficult to define what advanced AI systems should not do, as opposed to defining what humans want them to do.”
During a closed session of the House Armed Services Committee on March 25, Pentagon officials told lawmakers that AI is being used for data management, but not for final target selection, according to people familiar with the briefing.
Gen. Brad Cooper, commander of U.S. Central Command, said in a March 11 video update on the war that U.S. soldiers are “utilizing a variety of advanced AI tools.” “While humans will always make the final decisions about what to strike, what not to strike, and when to strike it, advanced AI tools can reduce processes that previously took hours or even days to seconds.”
The military has struck more than 12,000 targets in the month-long Iran war, including more than 1,000 in the first 24 hours after the war began on February 28. Among the sites bombed that day was a school in Iran; the strike killed at least 175 people, most of them children.
Earlier in the war, the U.S. military attacked Iran from afar with longer-range, more expensive missiles, but now that Iran’s air defenses have weakened, it has switched to shorter-range gravity bombs dropped from aircraft, according to Chairman of the Joint Chiefs of Staff Dan Caine and others.
Emelia Probasco, a senior researcher at Georgetown University’s Center for Security and Emerging Technology who studies military uses of AI, said the initial targets likely came from the Pentagon’s long-standing attack plans for Iran.
But as the war drags on, Probasco said, AI could play an increasing role, including “prioritizing” targets, that is, telling soldiers where to strike first.
“We are now entering a phase where these targets are being attacked and we may see an even greater impact of AI,” she said. “What you’re looking for are time-critical targets, moving targets, and targets we didn’t know about before.”
20 soldiers with AI do the work of 2,000
For nearly a decade, the military has been integrating an AI tool known as the Maven Smart System into its computer systems. Often shortened to “Maven,” the system fuses the military’s many disparate channels of data, intelligence reports, satellite imagery, and asset movements into a single software platform. Military leaders say the system enables faster and more effective decision-making in the heat of battle.
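As a rough illustration of what that kind of fusion involves (an illustration only: the feed names, fields, and logic below are hypothetical, not the Maven Smart System’s actual design), merging source-specific reports into one track per object might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    """One fused view of an object, assembled from separate feeds."""
    track_id: str
    reports: list = field(default_factory=list)  # raw sightings from each source

def fuse(feeds: dict[str, list[dict]]) -> dict[str, Track]:
    """Merge reports that share an object ID into a single track, the way a
    fusion platform presents one picture instead of many separate screens."""
    tracks: dict[str, Track] = {}
    for source, reports in feeds.items():
        for report in reports:
            track = tracks.setdefault(report["id"], Track(track_id=report["id"]))
            track.reports.append({"source": source, **report})
    return tracks

# Hypothetical feeds standing in for separate intelligence channels.
feeds = {
    "satellite_imagery": [{"id": "obj-17", "lat": 35.69, "lon": 51.42}],
    "movement_log":      [{"id": "obj-17", "speed_kph": 40}],
}
print(fuse(feeds)["obj-17"].reports)  # one combined record instead of two feeds
```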
The system has already significantly increased the number of targets a given number of operators can strike. According to Probasco’s 2024 study of Army exercises using the system, roughly 20 personnel using Maven could match the work of the Iraq War targeting cell of more than 2,000 soldiers, which at the time was considered the most efficient in U.S. military history.
And, she added, the system’s development in the two years since her study has been “dramatic.”
In a demonstration of the Maven Smart System at a March 12 conference, Cameron Stanley, the Pentagon’s chief digital and artificial intelligence officer, showed how users can easily turn structures into fireballs by simply “left-clicking, right-clicking, left-clicking.”
On a screen behind Stanley, a cursor hovered over an overhead image of a row of cars, displaying numbers representing their dimensions, location coordinates and other data. With just a few clicks, Stanley said, an object “detection” can be moved into a “targeting workflow.”
The system offered options for which metrics the AI should prioritize, including time to target, distance, and ammunition. Sleek graphics showed on a map the circular blast radius each strike would create and the arc each weapon would travel. After a few clicks on the blue “Approve” and green “Task Execution” bars, a dark cloud of explosions filled the screen.
“When we started this, it literally took hours to implement what we saw there,” Stanley said.
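Probasco’s description of “prioritizing” targets and the metrics shown in the demo suggest a weighted-scoring step somewhere in the workflow. The sketch below is a guess at what such a ranking could look like; the field names, weights, and formula are invented for illustration and are not drawn from Maven:

```python
# Candidate targets scored on the metrics shown in the demo: time to
# target, distance, and ammunition. All fields and values are hypothetical.
candidates = [
    {"name": "site-A", "time_to_target_min": 12, "distance_km": 40, "ammo_cost": 3},
    {"name": "site-B", "time_to_target_min": 5,  "distance_km": 90, "ammo_cost": 1},
]

def priority(c: dict) -> float:
    # Lower is better on every metric, so a simple weighted sum stands in
    # for whatever the real workflow computes; the weights are arbitrary.
    return 1.0 * c["time_to_target_min"] + 0.5 * c["distance_km"] + 2.0 * c["ammo_cost"]

# Rank so the lowest-cost option is surfaced to the operator first.
for c in sorted(candidates, key=priority):
    print(c["name"], round(priority(c), 1))
```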
Iran school strike raises questions about AI
Despite official claims that AI improves military accuracy, Iran’s civilian death toll has raised concerns that AI is contributing to targeting errors.
Lawmakers have asked whether AI played a role in the school strike. Investigations by The New York Times and others found that the United States was likely behind the attack, which used American-made Tomahawk missiles. Those reports said the school may have been on an old target list that the military failed to re-review. The Pentagon said its own investigation into the attack is ongoing.
In mid-March, more than 100 members of the House and Senate signed a letter to Defense Secretary Pete Hegseth asking for details on whether the Maven Smart System was involved in the school strike and how the military verifies the workings of its AI.
Shanahan said there was “no indication” that AI was involved in the attack, “but we need to recognize that while future AI will be able to discover more targets than ever before, humans must remain responsible for the decisions to attack those targets.”
Past military exercises have shown that AI can be far less accurate than humans. In the Army exercises Probasco studied, the Maven Smart System correctly identified tanks about 60% of the time, compared with 84% accuracy for human soldiers, and that figure dropped to just 30% in snowy weather. An AI targeting system tested by the Air Force in 2021 reached only 25% accuracy under imperfect conditions.
In 2023, the Department of Defense issued a directive stating that soldiers and commanders using AI systems must be able to exercise “appropriate levels of human judgment over the use of force.”
“Our military operates in full compliance with all laws and established policies of the United States, including ensuring human participation in critical operational decisions at all times,” the Pentagon said in a statement to USA TODAY.
“The responsibility for the lawful use of any AI tool lies with the human operator and chain of command, not with the software itself.”
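In software, a requirement like that is commonly enforced as an explicit approval gate: the system can nominate a target, but nothing executes without a recorded human decision. A minimal sketch of the pattern, with every name hypothetical:

```python
class ApprovalRequired(Exception):
    """Raised when execution is attempted without a recorded human decision."""

def execute_strike(target: dict, approved_by: str | None = None) -> None:
    # The gate leaves no path from AI nomination to execution that skips
    # the human check, per the directive's "human judgment" requirement.
    if approved_by is None:
        raise ApprovalRequired(f"no human sign-off recorded for {target['name']}")
    print(f"{target['name']} tasked; approved by {approved_by}")  # audit trail

execute_strike({"name": "site-A"}, approved_by="duty officer")
```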
Defense Department pressures company behind AI chatbot
The Trump administration has generally moved to remove regulations on AI in the name of innovation and reducing bureaucracy, and the Pentagon has followed suit. In a Jan. 9 memo outlining the military’s AI strategy, Hegseth directed the Pentagon to “unleash experimentation” with AI models and work to “actively identify and eliminate bureaucratic barriers to deeper integration” of AI.
“We must accept that the risks of not moving fast enough outweigh the risks of incomplete coordination,” the memo said.
In recent months, this approach has put the Department of Defense at odds with Anthropic, the Silicon Valley company behind Claude, the only AI chatbot currently configured to run on the Maven Smart System.
Anthropic sought an agreement from the Department of Defense that its technology would not be used for mass surveillance or to attack targets without human approval. The Pentagon refused those terms, saying Claude must be available to the military for “all lawful uses,” and Pentagon officials publicly criticized the company on social media. The Pentagon also moved to designate the company a “supply chain risk,” a label meant for companies vulnerable to sabotage or exploitation by U.S. adversaries, but a federal judge blocked that move in a March 26 ruling.
“The military will not allow vendors to insert themselves into the chain of command by restricting the lawful use of critical capabilities,” the Pentagon said in a statement. “It is the military’s sole responsibility to ensure that our warfighters have the tools they need to prevail in crises, without interference from corporate policy.”
Anthropic said in a statement that it believes the Pentagon has not yet used Claude in violation of its terms. However, the dispute reportedly arose after Anthropic learned that the military had used Claude in an operation to capture Venezuelan President Nicolás Maduro. “Anthropic currently has no confidence that Claude will function reliably or safely when used in support of lethal autonomous warfare,” the company argued in court documents.
Heidy Khlaaf, chief AI scientist at the AI Now Institute, said that while AI built for military purposes “already has a lot of accuracy issues,” large language models like Claude “are actually even more inaccurate.”
“They’re not very good at solving tasks outside of what they’re trained for,” she said. “That’s fine in non-critical settings, like writing emails, but it’s very different when dealing with new scenarios like the fog of war.”
The Claude controversy is not the first time Silicon Valley’s growing business with the Pentagon on high-tech weapons and military tools has drawn protests from within the companies themselves. Google originally held the contract for the Maven Smart System in its early stages of development, but dropped it in 2018 after employee protests. Google and Amazon employees have also protested those companies’ AI contracts with the Israeli military in recent years, as well as Google’s work with immigration and border security authorities.
“If any technology company yields to the Department of Defense’s demands, Mr. Hegseth will have the power to build and deploy AI-powered drones that kill people without human approval,” a group representing workers at Amazon, Google and Microsoft said in a statement about the Anthropic dispute.
Shanahan said human control of AI in military applications is a “non-negotiable starting point,” but that human involvement could ultimately be limited to the design and development of systems that increasingly operate on their own.
“At some point, we’re going to operate under the assumption that autonomous weapons will be released and humans won’t be able to call them back.”

