Zachary Kallenborn, a research affiliate who studies drone warfare, terrorism and weapons of mass destruction at the University of Maryland, said the report suggested that for the first time, a weapons system with artificial intelligence capability operated autonomously to find and attack humans.
“What’s clear is this drone was used in the conflict,” said Mr. Kallenborn, who wrote about the report in the Bulletin of the Atomic Scientists. “What’s not clear is whether the drone was allowed to select its target autonomously and whether the drone, while acting autonomously, harmed anyone. The U.N. report heavily implies, but does not state, that it did.”
But Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations, said that the report does not specify how independently the drone acted, how much human oversight or control it was subject to, and what specific impact it had in the conflict.
“Should we talk more about autonomy in weapon systems? Definitely,” Ms. Franke said in an email. “Does this instance in Libya appear to be a groundbreaking, novel moment in this discussion? Not really.”
She noted that the report stated the Kargu-2 and “other loitering munitions” attacked convoys and retreating fighters. Loitering munitions, simpler autonomous weapons designed to hover on their own in an area before crashing into a target, have been used in several other conflicts, Ms. Franke said.
“What is not new is the presence of loitering munition,” she said. “What is also not new is the observation that these systems are quite autonomous. How autonomous is difficult to ascertain — and autonomy is ill-defined anyway — but we know that several manufacturers of loitering munition claim that their systems can act autonomously.”
The report indicates that the “race to regulate these weapons” is being lost, a potentially “catastrophic” development, said James Dawes, a professor at Macalester College in St. Paul, Minn., who has written about autonomous weapons.