Does AI really reduce casualties in war? “That’s highly questionable,” says Lauren Gould
FWO Grant to investigate algorithmic warfare and the role of AI in conflicts

Military experts and developers often claim that artificial intelligence (AI) reduces civilian casualties in war. Lauren Gould, an expert in contemporary warfare, challenges this claim. “AI accelerates the kill chain, which is likely to increase civilian harm. In Gaza, we saw this happening daily.” To investigate the use of AI in wars, Gould and Marijn Hoijtink (University of Antwerp) recently secured a Research Project Grant from the Research Foundation Flanders (FWO).
How Israel uses AI in the Gaza War

Many questions exist regarding the role of AI in contemporary conflicts. In what ways do governments and commercial parties collaborate? How are the algorithms programmed? How reliable is the data? And what are the lines of responsibility for the various forms of harm caused? One thing is certain, however: AI is rapidly becoming a key element in modern military operations.
“In wars like those in Gaza and Ukraine, we’re seeing increased cooperation between defence forces and commercial players like Google, Microsoft, and Palantir,” Gould explains. “This enables the use of AI to identify enemies and potential targets – with far-reaching consequences.”
“For instance, Palestinian-Israeli journalists revealed last year that the Israeli military was using at least three AI systems. One system, called ‘Lavender,’ is reportedly designed to identify Hamas militants; another, ‘Gospel,’ to map out where they live; and a third, ‘Where’s Daddy?,’ to predict when they will be at home. Attacks then followed.”
Does AI actually lead to more civilian casualties?
“Proponents argue that AI enables more precise targeting and therefore reduces civilian deaths, but that’s highly questionable,” says Gould. “In practice, AI is accelerating the kill chain — the process from identifying a target to launching an attack.”
“For example, during recent conflicts, Israel identified significantly more targets thanks to AI systems. Before the Gaza War, they might identify around 50 targets a year. During the war, that number increased drastically to as many as 100 targets per day. And Israeli officers had just 20 seconds to verify the AI-generated information and decide whether a target was legitimate.”

The human cost of AI-driven warfare
An accelerated kill chain inevitably leads to more deaths, injuries, and destruction, Gould asserts, and possibly to new forms of civilian harm. “We will explore this further, but we’re already observing what I would call a kind of ‘perpetual psychic imprisonment’. Civilians know that they’re being surveilled on a massive scale, day and night, but they have no idea what behaviour or physical characteristics might make them a target.”
Moreover, the lines of responsibility for civilian harm become increasingly blurred, Gould explains. “Military officers using AI often shift responsibility onto the technology itself. The commercial companies developing these systems argue they have no control over how their technology is used, and governments often deny using them altogether.”
“The research we are now undertaking will shed light on how AI is transforming the nature of war and what effects this has on ordinary people,” Gould concludes. “This isn’t some futuristic technology. It’s happening right now – and the consequences are immense.”
Researching AI’s impact: fieldwork in Gaza and Ukraine
With funding from the FWO, Gould and Hoijtink will tackle the key questions surrounding algorithmic and data-driven warfare. Their research, called Tracing the Realities of Algorithmic Warfare, forms part of Gould’s project Realities of Algorithmic Warfare and Hoijtink’s Platform Wars.
The grant will allow them to hire a PhD candidate and collaborate with societal partners to conduct fieldwork among technologically advanced militaries and in Ukraine and Gaza. The researchers also aim to raise public awareness about the promises, perils, and political dilemmas of AI-driven warfare.