How Israel Uses AI in Gaza—And What It Might Mean for the Future of Warfare

A smoke plume erupts over Khan Yunis, seen from Rafah in the southern Gaza Strip, during Israeli bombardment on Jan. 8, 2024.

AI warfare may conjure images of killer robots and autonomous drones, but a different reality is unfolding in the Gaza Strip. There, artificial intelligence has been suggesting targets in Israel’s retaliatory campaign to root out Hamas following the group’s Oct. 7, 2023, attack. A program known as “The Gospel” generates suggestions for buildings and structures in which militants may be operating. “Lavender” is programmed to identify suspected members of Hamas and other armed groups for assassination, from commanders all the way down to foot soldiers. “Where’s Daddy?” reportedly tracks their phones to follow their movements and target them—often at their homes, where their presence is regarded as confirmation of their identity. The airstrike that follows might kill everyone in the target’s family, if not everyone in the apartment building.


These programs, which the Israel Defense Forces (IDF) has acknowledged developing, may help explain the pace of the most devastating bombardment campaign of the 21st century, in which more than 44,000 Palestinians have been killed, according to the Hamas-run Gaza Health Ministry, whose count is regarded as reliable by the U.S. and the U.N. Israeli military veterans say airstrikes in earlier Gaza wars occurred at a much slower tempo.

“During the period in which I served in the target room [between 2010 and 2015], you needed a team of around 20 intelligence officers to work for around 250 days to gather something between 200 to 250 targets,” Tal Mimran, a lecturer at Hebrew University in Jerusalem and a former legal adviser in the IDF, tells TIME. “Today, the AI will do that in a week.”

Experts on the laws of war, already alarmed by the emergence of AI in military settings, say they are concerned that its use in Gaza, as well as in Ukraine, may be establishing dangerous new norms that could become permanent if not challenged.

A body of a Palestinian is retrieved from the rubble of a house destroyed in an Israeli strike at the Nuseirat refugee camp in the central Gaza Strip on Nov. 12, 2024.

The treaties that govern armed conflict are non-specific when it comes to the tools used to deliver military effect. The elements of international law covering war—on proportionality, precautions, and distinctions between civilians and combatants—apply whether the weapon being used is a crossbow or a tank—or an AI-powered database. But some advocates, including the International Committee of the Red Cross, argue that AI requires a new legal instrument, citing the need to ensure human control and accountability as AI weapons systems become more advanced.

“The pace of technology is far outstripping the pace of policy development,” says Paul Scharre, the executive vice president at the Center for a New American Security and the author of Army of None: Autonomous Weapons and the Future of War. “It’s quite likely that the types of AI systems that we’ve seen to date are pretty modest, actually, compared to ones that are likely to come in the near future.”

The AI systems the IDF uses in Gaza were first detailed a year ago on the Israeli online news outlet +972 Magazine, which shared its reporting with The Guardian. Yuval Abraham, the Israeli journalist and filmmaker behind the investigation, tells TIME he believes the decision to “bomb private houses in a systemic way” is “the number one factor for the civilian casualties in Gaza.” That decision was made by humans, he emphasizes, but he says AI targeting programs enabled the IDF “to take this extremely deadly practice and then multiply it by a very large scale.” Abraham, whose report relies on conversations with six Israeli intelligence officers with first-hand experience in Gaza operations after Oct. 7, quoted targeting officers as saying they found themselves deferring to the Lavender program, despite knowing that it produces incorrect targeting suggestions in roughly 10% of cases.

One intelligence officer tasked with authorizing a strike recalled dedicating roughly 20 seconds to personally confirming a target, which could amount to verifying that the individual in question was male.

Graves are prepared for the funeral of Palestinians killed in overnight Israeli strikes at a cemetery in Rafah, on the southern Gaza Strip on Feb. 21, 2024.

The Israeli military, responding to the +972 report, said its use of AI is misunderstood. In a June statement, it said Gospel and Lavender merely “help intelligence analysts review and analyze existing information. They do not constitute the sole basis for determining targets eligible to attack, and they do not autonomously select targets for attack.” At a conference in Jerusalem in May, a senior military official sought to minimize the importance of the tools, likening them to “glorified Excel sheets,” Mimran and another person in attendance told TIME.

The IDF did not specifically dispute Abraham’s reporting about Lavender’s 10% error rate, or that an analyst might spend as little as 20 seconds reviewing a target, but in a statement to TIME, a spokesperson said that analysts “verify that the identified targets meet the relevant definitions in accordance with international law and additional restrictions stipulated in the IDF directives.”

Converting data into target lists is not incompatible with the laws of war. Indeed, a scholar at West Point, assessing the Israeli programs, observed that more information could make for greater accuracy. By some contemporary accounts, that may have been the case the last time Israel went to war in Gaza, in 2021. That brief conflict apparently marked the first time the IDF used artificial intelligence in a war, and afterward, the then-head of UNRWA, the U.N. agency that provides health, education, and advocacy for Palestinians, remarked on “a huge sophistication in the way the Israeli military struck over the last 11 days.” But the 2021 round of combat, which produced 232 Palestinian deaths, was a different kind of war. It was fought under Israeli rules of engagement ostensibly intended to minimize civilian casualties, including “knocking on the roof”—dropping a small charge on the rooftop of a building to warn occupants that it was about to be destroyed and that they should evacuate.

In the current war, launched more than 14 months ago to retaliate for the worst attack on Jews since the Holocaust, Israeli leaders shut off water and power to all of Gaza, launched 6,000 airstrikes in the space of just five days, and suspended some measures intended to limit civilian casualties. “This time we are not going to ‘knock on the roof’ and ask them to evacuate the homes,” former Israeli military intelligence chief Amos Yadlin told TIME five days after Oct. 7, warning that the weeks ahead would be “very bloody” in Gaza. “We are going to attack every Hamas operative and especially the leaders and make sure that they will think twice before they will even think about attacking Israel.” Abraham reported that targeting officers were told it was acceptable to kill 15 to 20 noncombatants in order to kill a Hamas soldier (the number in previous conflicts, he reports, was zero), and as many as 100 civilians to kill a commander. The IDF did not comment on those figures.

Israeli army battle tank at a position along the border with Gaza, on March 19, 2024.

Experts warn that, with AI generating targets, the death toll may climb even higher. They cite “automation bias”—the presumption that information provided by AI is accurate and reliable unless proven otherwise, rather than the other way around. Abraham says his sources described instances in which they made just that assumption. “Yes there is a human in the loop,” says Abraham, “but if it’s coming at a late stage after decisions have been made by AI and if it is serving as a formal rubber stamp, then it’s not effective supervision.”

Former IDF chief Aviv Kochavi offered a similar observation in an interview with the Israeli news site Ynet six months before Oct. 7. “The concern,” he said, speaking of AI broadly, “is not that robots will take control over us, but that artificial intelligence will supplant us, without us even realizing that it is controlling our minds.”

Adil Haque, the executive editor of the national security law blog Just Security and the author of Law and Morality at War, describes the tension at play. “The psychological dynamic here pushes against the legal standard,” he says. “Legally, the presumption is that you can’t attack any person unless you have very strong evidence that they’re a lawful target. But psychologically, the effect of some of these systems can be to make you think that this individual is a lawful target, unless there’s some very obvious indication that you make independently that they are not.”

Israel is far from the only country using artificial intelligence in its military. Scores of defense tech companies operate in Ukraine, where the software developed by the Silicon Valley firm Palantir Technologies “is responsible for most of the targeting” against Russia, its CEO told TIME in 2023, describing programs that present commanders with targeting options compiled from satellites, drones, open-source data, and battlefield reports. As with Israel, experts note that Ukraine’s use of AI is in a “predominantly supportive and informational role,” and that the kinds of technology being trialed, from AI-powered artillery systems to AI-guided drones, are not yet fully autonomous. But concerns abound about potential misuse, particularly on issues related to accuracy and privacy.

A Ukrainian analyst views drone footage of Russian trenches near Bakhmut, Ukraine, on Jan. 6, 2023.

Anna Mysyshyn, an AI policy expert and director of the Institute of Innovative Governance, an NGO and watchdog of the Ministry of Digital Transformation of Ukraine, tells TIME that while “dual-use technologies” such as the facial-recognition system Clearview AI play an important role in Ukraine’s defense, concerns remain about their use beyond the war. “We’re talking about … how to balance between using technologies that have advantages on the battlefield [with] protecting human rights,” she says, noting that “regulation of these technologies is complicated by the need to balance military necessity with civilian protection.”
 

With fighting largely confined to battlefields, where both Russian and Ukrainian forces are dug in, the issues that animate the debate in Gaza have not been in the foreground. But any country with an advanced military—including the U.S.—is likely to soon confront the issues that come with machine learning. 

“Congress needs to be prepared to put guardrails on AI technologies—especially those that put international humanitarian law in question and threaten civilians,” Sen. Peter Welch of Vermont said in a statement to TIME. In September, he and Sen. Dick Durbin of Illinois wrote a letter to Secretary of State Antony Blinken urging the State Department to “proactively and publicly engage in setting international norms about the ethical deployment of AI technology.” Welch has since put forward the proposed Artificial Intelligence Weapons Accountability and Risk Evaluation (AWARE) Act, which, if passed, would require the Defense Department to catalog domestic deployments of AI systems, the risks associated with them, and any foreign sharing or exportation of these technologies.

“A more comprehensive and public approach is necessary to address the risk of AI weapons and maintain America’s leadership in ethical technology development,” Welch says, “as well as establish international norms in this critical space.”

It may seem unlikely that any government would find an incentive to introduce restrictions that curtail its own military’s advances. “We’ve done it before,” counters Alexi Drew, a technology policy adviser at the ICRC, pointing to treaties on disarmament, cluster munitions, and landmines. “Of course, it’s a very complex challenge to achieve, but it’s not impossible.”

   

