AI-Driven Targeting and the Erosion of Distinction and Proportionality: A Case Study of the Israel-Hamas Conflict
- Sia Jyoti
This report is an academic policy analysis examining how emerging AI-driven targeting systems interact with the international humanitarian law principles of distinction and proportionality, drawing on publicly reported information and reputable primary sources. The goal is to inform scholarly and policy discussions on the legal and ethical implications of autonomous and semi-autonomous weapons.

In contemporary conflicts, from Ukraine to Israel, the integration of artificial intelligence (AI) into military operations has fundamentally altered modern warfare, particularly in target identification and the strikes that follow. Specifically, machine learning, a subset of AI, is increasingly deployed to analyse vast volumes of data and identify patterns potentially indicative of threats. The appeal of AI in this domain lies in its capacity to process information far more rapidly than human analysts, thereby identifying more potential targets in a condensed timeframe. In the context of drone warfare, machine learning algorithms digest data from sources such as satellite imagery, intercepted communications, and social media activity, detecting spatial or behavioural patterns that may correspond to hostile activity. The U.S. Department of Defense has highlighted that AI-enabled systems can significantly accelerate decision-making and reduce the cognitive burden on human operators (U.S. DoD, 2023). Nonetheless, while technologically impressive, these systems lack the contextual understanding and legal reasoning capacities essential for lawful targeting under international humanitarian law (IHL). Reliance on statistical correlations risks generating false positives, particularly where civilian behaviour mimics known militant patterns. This report explores how this reliance on probabilistic reasoning challenges the IHL principles of distinction and proportionality, with a primary focus on the use of such systems in the Israel-Gaza context.
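To see why statistical correlation alone is a fragile basis for targeting, consider a simple base-rate calculation. The figures in the sketch below are purely hypothetical assumptions chosen for illustration, not reported performance data for any fielded system; the point is that even a seemingly accurate classifier, applied to a population in which actual combatants are rare, will flag far more civilians than combatants.

```python
# Hypothetical base-rate illustration: an apparently accurate classifier still
# misfires often when true positives are rare. All numbers are assumptions
# for illustration only, not reported figures for any real system.

population = 1_000_000        # people whose data the system screens
base_rate = 0.005             # assumed share who are actually combatants
sensitivity = 0.90            # assumed chance a combatant is flagged
false_positive_rate = 0.02    # assumed chance a civilian is flagged

combatants = population * base_rate
civilians = population - combatants

true_flags = combatants * sensitivity            # combatants correctly flagged
false_flags = civilians * false_positive_rate    # civilians wrongly flagged

precision = true_flags / (true_flags + false_flags)
print(f"Total flagged:         {true_flags + false_flags:,.0f}")
print(f"Civilians among them:  {false_flags:,.0f} ({1 - precision:.0%})")
# Under these assumptions, roughly four out of five flagged individuals are
# civilians, even though each error rate looks small in isolation.
```

Under these illustrative assumptions, the system's output would be dominated by false positives, which is precisely the failure mode that the legal presumptions discussed below are designed to guard against.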
According to Article 48 of Additional Protocol I (API) to the Geneva Conventions, parties to a conflict are obliged to distinguish between civilians and combatants, and between civilian objects and military objectives. Further, Article 50 of API establishes that in cases of doubt, a person shall be considered a civilian (API, Art. 50(1)). This high legal threshold ensures that only those with a clear and direct link to hostilities are considered lawful targets. While these safeguards may slow the process of issuing strikes, they are instrumental in ensuring civilian protection. AI systems, however, are ill-equipped to interpret such legal distinctions. Rather than making categorical legal determinations, machine learning models operate on pattern recognition and correlations in prior data. For instance, reports by +972 Magazine and Local Call indicate that during recent operations in Gaza, the Israel Defense Forces (IDF) deployed AI systems, codenamed "The Gospel" (Habsora) and "Lavender", to accelerate target identification (Abraham, 2023). According to Israeli sources cited in these investigations, these systems flagged targets based on data-driven assessments, allowing the IDF to "generate targets at a rate that was previously unimaginable." While official statements from IDF spokespeople maintain that "all targets are reviewed by human intelligence officers prior to action" (IDF Spokesperson’s Unit, 2024), the sheer volume of target designations has led some analysts to question whether such oversight can remain meaningful under time pressure.
From a legal standpoint, the core issue lies in the epistemic gap between behavioural patterns and combatant status. Frequenting a known militant’s residence, using encrypted communication apps, or living in proximity to suspicious sites may raise red flags for an AI system, but these factors do not legally transform a civilian into a combatant. As the International Committee of the Red Cross (ICRC) has consistently emphasised, direct participation in hostilities must be established with clarity, and individuals not directly engaged retain full civilian protections (ICRC, 2009). Absent an ability to distinguish with legal precision, AI-enabled targeting may inadvertently expand the category of those subject to attack, thereby eroding the principle of distinction.
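The gap can be made concrete with a deliberately simplified sketch. Every feature, weight, and profile below is a hypothetical assumption for illustration, not a description of any reported system; the point is that a proxy-based score, however it is tuned, remains a statistical artefact and not a legal determination of direct participation in hostilities.

```python
# Hypothetical proxy-based risk score. Every feature and weight here is an
# assumption for illustration; none reflects any real system's design.

FEATURE_WEIGHTS = {
    "visits_flagged_residence": 0.4,   # behavioural proxy
    "uses_encrypted_messaging": 0.2,   # also common among civilians
    "lives_near_suspect_site":  0.3,   # a fact of geography, not conduct
}

def risk_score(observations: dict[str, bool]) -> float:
    """Sum the weights of observed proxy behaviours."""
    return sum(w for f, w in FEATURE_WEIGHTS.items() if observations.get(f))

profile = {
    "visits_flagged_residence": True,   # e.g. a relative lives there
    "uses_encrypted_messaging": True,
    "lives_near_suspect_site":  True,
}

print(f"risk score = {risk_score(profile):.1f}")
# The score answers "how closely does this pattern resemble past militant
# data?" -- it does not, and cannot, answer the legal question of whether the
# person is directly participating in hostilities (API Art. 50(1): in cases
# of doubt, presume civilian).
```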
The principle of proportionality, enshrined in Article 51(5)(b) of API, prohibits attacks expected to cause incidental civilian harm that would be excessive in relation to the concrete and direct military advantage anticipated. Article 35(2) further prohibits means and methods of warfare of a nature to cause superfluous injury or unnecessary suffering. Crucially, proportionality requires contextual judgment, not only in assessing the legitimacy of the target but also in anticipating and mitigating civilian harm. The use of AI systems can shift this legal and moral evaluation into a numerical calculation. As reported by +972 Magazine, the Lavender system flagged over 37,000 individuals as suspected Hamas operatives, often on the basis of tenuous digital links, and subsequent strikes were at times reportedly carried out without additional verification of the target’s direct participation in hostilities (Abraham, 2024). While the IDF has stated that targeting decisions are always subject to legal review and human approval, sources within Israeli intelligence acknowledged that "acceptable" collateral damage thresholds were pre-defined algorithmically for certain categories of targets (Local Call, 2024).
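A minimal sketch, under assumed numbers, shows how a pre-set cap collapses this evaluation into arithmetic. The categories, thresholds, and casualty estimates below are hypothetical and describe no reported rules of engagement; they only illustrate why a fixed numeric cap is not the contextual, case-by-case weighing that Article 51(5)(b) requires.

```python
# Hypothetical illustration of a pre-defined "acceptable collateral damage"
# cap. Categories, thresholds, and estimates are assumptions for
# illustration; they describe no real system or policy.

CIVILIAN_HARM_CAP = {
    "junior_operative": 5,    # assumed pre-set cap per target category
    "senior_operative": 20,
}

def threshold_check(category: str, estimated_civilian_casualties: int) -> bool:
    """Approve the strike whenever the estimate falls under the pre-set cap."""
    return estimated_civilian_casualties <= CIVILIAN_HARM_CAP[category]

# The check reduces proportionality to a single comparison. It cannot ask the
# questions Article 51(5)(b) requires in each concrete case: how reliable is
# the casualty estimate, what is the actual military advantage here and now,
# and could timing, munition choice, or warnings reduce the expected harm?
print(threshold_check("junior_operative", 4))   # True -- approved by arithmetic alone
```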
This trend raises concerns about automation bias, whereby human operators may defer to machine output even in the absence of full verification (Ekelhof, 2019). Human Rights Watch has similarly warned that without robust oversight, algorithmic decision-making risks undermining accountability and diluting scrutiny over lethal outcomes (HRW, 2021). Importantly, the IDF has defended its use of AI by citing the complexity of urban warfare, the presence of subterranean tunnel networks, and continued rocket fire from within dense civilian areas (IDF, 2023). These conditions undeniably pose significant operational challenges. However, the heightened risks to civilians in such environments arguably demand greater, not reduced, fidelity to international legal norms. Advocates of AI integration in military operations also point to potential benefits, including increased accuracy, improved threat detection, and reduced collateral damage when systems function correctly. For example, NATO’s Emerging Security Challenges Division has argued that AI has the potential to “improve targeting precision and reduce human error” when applied under strict legal and ethical oversight (NATO, 2022). The United Nations Institute for Disarmament Research (UNIDIR) has similarly noted the importance of “retaining meaningful human control” over autonomous systems to ensure compliance with international law (UNIDIR, 2021).
Ultimately, while AI-driven intelligence systems provide states with powerful tools to manage information and identify potential threats in real time, their deployment must be handled with caution. The operational context in Gaza, while particularly illustrative, is not unique—similar challenges are likely to arise as these systems proliferate globally. These developments underscore the need to align emerging military technologies with established legal and humanitarian norms. Transparency, human oversight, and accountability must remain central. If algorithms are not bound by the principles that safeguard civilian life, the foundational framework of international humanitarian law may be seriously compromised.