AI-driven drones appear to have attacked humans without authorization, according to a new report by the U.N.
Many critics view AI technology as an existential threat, imagining some variation of the Terminator franchise’s Skynet wiping humanity out. Those critics may have just been handed their strongest supporting evidence yet: AI drones reportedly attacked retreating soldiers without being instructed to.
According to the U.N. report, via The Independent, Libyan government forces were fighting Haftar Affiliated Forces (HAF).
“Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2,” read the U.N. report.
What makes the Kargu so dangerous is that it’s a “loitering” drone, designed to autonomously pick its own targets using machine learning. As if a single drone weren’t dangerous enough, the Kargu also has swarming abilities, enabling 20 of the drones to attack in coordination.
“The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” wrote the report’s experts.
The incident is sure to intensify questions about the safety of AI-driven drones, especially in military applications.