Israel's AI-enabled targeting of Hamas members jeopardizes moral and legal standards of warfare
Speed and scale of AI-targeting compromise human judgement

Israel’s military campaign in Gaza is one of several conflicts worldwide in which the military makes use of AI-enabled decision-support systems (AI-DSS). These systems use algorithms to process large quantities of data from many sources in order to identify and track military targets (Hamas leaders or members) quickly and precisely. Since the decision to take out a target ultimately lies with human operators – unlike with lethal autonomous weapon systems (LAWS) – this should theoretically safeguard the process against dehumanization and uphold the moral and legal standards of warfare. In practice, though, the increased speed and scale of targeting seem to diminish moral restraint among the Israeli operators and to lead to inaccuracies and unnecessary civilian casualties, making compliance with international humanitarian law highly questionable, as Jessica Dorsey and Marta Bo point out in their article on Opinio Juris.
These AI-based systems (going by names such as ‘the Gospel’ and ‘Lavender’) process data (such as human intelligence, drone footage and intercepted communications) and constantly feed the military decision-making process with the identification – and even nomination – of possible targets. The inclusion of human judgement in the AI-DSS decision-making loop (distinguishing it from fully autonomous lethal weapon systems) should – in theory – provide the necessary moral and legal safeguards.
While a human theoretically remains in the loop, there are uncertainties regarding the extent to which humans truly maintain meaningful control or exercise judgement within these military decision-making processes.
Are human operators in control?
The main appeal of AI-enabled targeting lies in the speed and scale of operation: generating large numbers of potential targets for elimination. The big question is whether meaningful human control within such a process is possible, given that human operators are unlikely to have a clear overview of what data such systems draw on, what they are trained on, what the specific parameters of the algorithmic calculations are, and how accurate they are. In a related article on Opinio Juris (AI-targeting and the erosion of moral restraint), the authors argue that the systematic mode of killing facilitated by AI-enabled systems leads to an erosion of targeting standards and to the moral devaluation of both victims and perpetrators.
International humanitarian law at stake
Given the moral implications of AI-enabled warfare outlined above, it should come as no surprise that there are legal concerns as well, as Dorsey and Bo point out. International humanitarian law (IHL) – also known as 'the law of war' – is made up of treaties (mainly the Geneva Conventions and their Additional Protocols) and a body of customary international law. These include the basic principles and rules of precaution (to avoid civilian casualties), distinction and proportionality. Distinction requires military personnel at all times to distinguish between the civilian population and combatants, and between civilian objects and military objectives, and to direct their operations only against military objectives. Proportionality requires weighing the expected civilian casualties against the anticipated military advantage.
The speed and scale of production or 'nomination' of targets, coupled with the complexity of data processing, may make human judgement impossible or, de facto, meaningless.
Precaution, distinction and proportionality are key
The IHL principles of precaution, distinction and proportionality seem irreconcilable with the AI-DSS practice of generating as many targets as quickly as possible. Although a human formally remains in the loop, the large volume of targets continuously presented by the system may induce so-called cognitive action bias: the human tendency to take action rather than carefully judge each individual situation. Time pressure may also lead to automation bias, an over-reliance on AI-driven recommendations about targets.
Civilian casualties of AI-enabled systems
As mentioned above, the military appeal of AI-DSS lies in speeding up the so-called OODA (Observe, Orient, Decide, Act) decision loop. But Dorsey and Bo point out that the OODA decision model was developed for specific tactical environments (such as air-to-air combat) and cannot be directly translated to the urban combat domain in Gaza. Careful judgement, not rushing to decisions, is essential to minimise civilian harm in densely populated areas. In practice, the system often links Hamas targets to family homes, resulting in significantly higher civilian casualties, and the requisite weighing of civilian casualties against anticipated military advantage appears not to take place.
Furthermore, the AI systems used by the Israel Defense Forces (IDF) are not very accurate: only 90 per cent of the targets identified by the system turn out to be legitimate military targets, meaning that roughly one in ten is not. And since human operators are unlikely to understand exactly how the AI system generates the target 'proposals', this raises many questions about the transparency, accountability and, ultimately, legitimacy of the whole process.
In situations like those described in Gaza, it is very difficult to see how military personnel, assisted by these AI-DSS, are presently complying with their legal obligations, or whether legal compliance is possible at all.
Getting AI-DSS on the regulatory agenda
The article concludes with a plea for regulation of AI-assisted systems, which until now have been more or less overshadowed by the more acute threat of fully autonomous weapon systems. The latter are currently addressed by the Group of Governmental Experts on Lethal Autonomous Weapon Systems (GGE LAWS) within the framework of the United Nations Convention on Certain Conventional Weapons. By comparison, the risks associated with AI-DSS have been neglected, likely because AI-DSS retain a form of human-machine interaction, which has contributed to their legitimisation and normalisation in warfare – although, as we have seen, the level of human control is probably overestimated.
Putting AI-DSS on the agenda could start within the UN General Assembly First Committee on Disarmament and International Security, a forum that could potentially assume a leading role in drafting a regulatory framework. Another forum that could bring this issue more prominently to the fore is the upcoming second Summit on Responsible AI in the Military Domain (REAIM Summit), to take place in September 2024 in Seoul.
About Jessica Dorsey
Jessica Dorsey is an international lawyer with expertise in international humanitarian law, human rights law and public international law. Her current doctoral research project focuses on the legitimacy of military operations in light of increasing autonomy in warfare. She teaches international law courses at University College Utrecht and the Conflict and Security track of the Public International Law master's program.
Jessica is a member of the Global Commission on Responsible Artificial Intelligence in the Military Domain (GC REAIM). She leads the Realities of Algorithmic Warfare project together with Dr. Lauren Gould. She is an Executive Board Member of Airwars, an Associate Fellow at the International Centre for Counter-Terrorism, The Hague and the Managing Editor of the international law blog Opinio Juris.