As the power of artificial intelligence grows, the likelihood of a future war filled with killer robots grows as well. Proponents suggest that lethal autonomous weapon systems (LAWs) might cause less “collateral damage,” while critics warn that giving machines the power of life and death would be a terrible mistake.
Last month’s UN meeting on ‘killer robots’ in Geneva ended in victory for the machines, as a small number of countries blocked progress towards an international ban. Opposition from Russia and Israel was to be expected, since both nations already have advanced military AI programs. More surprisingly, the U.S. sided with them.
In July, 2,400 researchers, including Elon Musk, signed a pledge not to work on robots that can attack without human oversight. Google faced an employee revolt over an artificial intelligence program to help drones spot targets for the Pentagon, and decided not to continue the work. KAIST, one of South Korea’s top universities, suffered an international academic boycott over its work on military robots until it, too, stopped. Groups like the Campaign to Stop Killer Robots are becoming more visible, and Paul Scharre’s book Army of None, which details the dangers of autonomous weapons, has been hugely successful.
But the U.S. government argues that any regulation would be premature, hindering developments that could protect civilians. The Pentagon’s current policy is that there should always be a ‘man in the loop’ controlling any lethal system, but Washington’s submission to the recent UN meeting argued otherwise:
“Weapons that do what commanders and operators intend can effectuate their intentions to conduct operations in compliance with the law of war and to minimize harm to civilians.”
So the argument is that autonomous weapons could carry out more selective strikes than fallible human judgement would allow.