An article in “+972 Magazine” makes some very attention-getting claims about the Israeli bombardment of Gaza. On the one hand, it is based on anonymous sources, and if any of it is true, I am surprised it has not turned up in better-known media outlets. On the other hand, it makes a lot of logical sense to me, in that it seems 100% consistent with facts on the ground. So I am relaying it here as the magazine presents it, and I encourage you to consider it seriously but with a healthy dose of skepticism.
First, they describe an Israeli program, called Lavender, that uses statistical methods to identify likely Hamas associates. Each person is assigned a score from 0 to 100 reflecting their estimated likelihood of being associated with Hamas, and the model is trained on cases known for certain to be Hamas members.
The article quotes a book by an Israeli commander that offers a short guide to building a “target machine,” similar in description to Lavender, based on AI and machine-learning algorithms. Included in this guide are several examples of the “hundreds and thousands” of features that can increase an individual’s rating, such as being in a WhatsApp group with a known militant, changing cell phones every few months, and changing addresses frequently.
“The more information, and the more variety, the better,” the commander writes. “Visual information, cellular information, social media connections, battlefield information, phone contacts, photos.”
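The article does not describe the model itself, but what it sketches sounds like an ordinary supervised classifier: many hand-engineered features, each nudging a person’s rating up or down, combined into a 0-100 score. Purely as an illustration of that idea, with feature names and weights invented from the examples above, a minimal sketch might look like this:

```python
# Purely illustrative sketch of a feature-weighted "rating" on a 0-100 scale.
# Feature names, weights, and the logistic form are all invented here; the
# article only says the real system combines hundreds or thousands of features.
from math import exp

# Hypothetical features observed for one person
person = {
    "whatsapp_group_with_known_militant": 1,  # shares a WhatsApp group with a flagged person
    "phone_changes_per_year": 4,              # how often the cell phone is swapped
    "address_changes_per_year": 3,            # how often the person moves
}

# Invented weights; a real system would learn these from labeled examples
weights = {
    "whatsapp_group_with_known_militant": 2.0,
    "phone_changes_per_year": 0.4,
    "address_changes_per_year": 0.3,
}
bias = -3.0

def rating(features):
    """Weighted sum of features squashed into a 0-100 score."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 100.0 / (1.0 + exp(-z))

print(round(rating(person), 1))  # ~81.8 with these made-up numbers
```

The only point of the sketch is that the score is a statistical aggregate of circumstantial signals, not a verified fact about any individual.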
Since the October 7, 2023 attack, this story goes (I am going to stop saying this from here on – I am relaying the story as this source explains it), a few things have changed. One is the increasing automation of the process, which now produces large numbers of potential targets. The second is the lowering of the threshold from the highest-scoring targets to lower-scoring ones.
He explained that lowering Lavender’s rating threshold would mark more people as targets for strikes. “At its peak, the system managed to generate 37,000 people as potential human targets,” said B. “But the numbers changed all the time, because it depends on where you set the bar of what a Hamas operative is. There were times when a Hamas operative was defined more broadly, and then the machine started bringing us all kinds of civil defense personnel, police officers, on whom it would be a shame to waste bombs. They help the Hamas government, but they don’t really endanger soldiers.”
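The 37,000 figure is easier to interpret with that threshold mechanic in mind: the system scores essentially everyone, and the count of “operatives” is simply however many people clear whatever cutoff is in force at the time. A toy illustration, with an invented score distribution:

```python
# Toy illustration: how many people get marked depends entirely on the cutoff.
# The score distribution here is invented; only the thresholding logic matters.
import random

random.seed(0)
scores = [random.uniform(0, 100) for _ in range(1_000_000)]  # hypothetical population of scored people

for threshold in (95, 90, 80, 70):
    marked = sum(1 for s in scores if s >= threshold)
    print(f"threshold {threshold}: about {marked:,} people marked as potential targets")
```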
One source who worked with the military data science team that trained Lavender said that data collected from employees of the Hamas-run Internal Security Ministry, whom he does not consider to be militants, was also fed into the machine. “I was bothered by the fact that when Lavender was trained, they used the term ‘Hamas operative’ loosely, and included people who were civil defense workers in the training dataset,” he said.
The system is believed to be about 90% accurate, meaning roughly 10% of the people it flags have no links to Hamas, and the Israeli leadership judged this to be acceptable collateral damage. But whatever you think of the morality of that judgment, it was the tip of a very large and very cold iceberg, because the leadership also decided that to take these people out, it was acceptable to take out their extended families by leveling their homes in the middle of the night. The higher the value of the target, the greater the number of innocent civilians the leadership judged it acceptable to kill.
In an unprecedented move, according to two of the sources, the army also decided during the first weeks of the war that, for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians; in the past, the military did not authorize any “collateral damage” during assassinations of low-ranking militants. The sources added that, in the event that the target was a senior Hamas official with the rank of battalion or brigade commander, the army on several occasions authorized the killing of more than 100 civilians in the assassination of a single commander.
So it makes a very cold, calculated kind of logical sense from a certain point of view – the leadership believes it is under an existential threat, and it judges that killing 15-100 people for every combatant (whom it is only about 90% sure is actually a combatant) is acceptable to remove the threat.
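To make the arithmetic concrete, here is a back-of-the-envelope calculation using only the figures reported above (the ~90% accuracy and the low-end allowance of 15 civilians per junior target). Note that the misidentified 10% are themselves civilians, which pushes the true ratio slightly above the nominal allowance:

```python
# Back-of-the-envelope arithmetic using the figures reported above.
# Inputs are the article's reported numbers; the rest follows from them.
precision = 0.90            # ~90% of flagged targets reportedly linked to Hamas
bystanders_per_strike = 15  # reported allowance for a junior operative (15-20)
strikes = 100               # consider 100 strikes on flagged junior targets

actual_militants = strikes * precision         # 90 genuine targets
misidentified = strikes * (1 - precision)      # 10 targets who were themselves civilians
bystanders = strikes * bystanders_per_strike   # 1,500 permitted bystander deaths

ratio = (bystanders + misidentified) / actual_militants
print(round(ratio, 1))  # ~16.8 civilians killed per actual militant, at the low end
```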
I have to say this high a ratio does not seem morally acceptable to me. You can say “what about” the Allied bombings of German and Japanese cities in World War II, the body counts of “military-aged males” in Vietnam, whatever went on in Afghanistan and Iraq, the U.S.-enabled Saudi bombing of Yemen, etc. And you would be right – all of these were very likely immoral, in my view. Millions of wrongs don’t make a right.
A lot is made of the AI angle here, and that makes it a bit more chilling to me. Basically, technologies developed for marketing (by U.S. firms, in many cases) are being applied to evil ends that the Nazis, the Stasi, the KGB, or the Spanish Inquisition could only have dreamed of. I think the Israeli leadership believes what it is doing is morally justified, even if most reasonable people in the world might disagree. It’s horrible to imagine what a truly malicious, ill-intentioned regime might do with these technologies.