08 May 2018

Drones That Kill on Their Own: Will Artificial Intelligence Reach the Battlefield?

Estimated reading time: 4 minutes

On a stage before a crowded auditorium, an executive unveils an amazing advance: a tiny drone endowed with Artificial Intelligence (AI) that fits in the palm of his hand and can select its human target and fire a three-gram explosive charge into the brain. It is impossible to shoot down, its reactions are a hundred times faster than those of a human being, and one can neither escape nor hide from it. Flying in a swarm, these drones can overcome any obstacle. “They cannot be stopped,” says the speaker.

Next on the screen are television news excerpts reporting a lethal attack by these devices on the US Senate. A woman follows the news while chatting online with her son abroad. The conversation ends abruptly when a swarm of drones strikes the young man and other students who have shared a video on their social networks. Finally, the image returns us to the presentation, where the executive boasts that the enemy can be selected even by the publication of a specific hashtag.

All this is just fiction from the short film Slaughterbots, published by the Campaign to Stop Killer Robots, an initiative promoted by the International Committee for Robot Arms Control (ICRAC) and other entities. But according to the warning at the end of the video from Stuart Russell, professor of computer science at the University of California, Berkeley (USA), this is more than mere speculation: the technology already exists, and these lethal autonomous drones could soon become a reality. In fact, last November the US Department of Defense opened a call for the development of “automatic target recognition of personnel and vehicles from an unmanned aerial system using learning algorithms.”

Drones capable of deciding for themselves

Armed drones have been on the battlefield for decades, but until now they have been simple devices controlled from a distance. US Secretary of Defense Jim Mattis recently declared that calling current drones unmanned is a mistake, since they are at all times under the control of a human pilot. The potential leap forward is profound: today the talk is about making devices the size of a domestic drone that are capable of deciding for themselves, without human supervision, who is to be attacked, and then doing so. As Paul Scharre, a former special operations officer, former Pentagon adviser and author of the new book Army of None: Autonomous Weapons and the Future of War (W. W. Norton & Company, 2018), told OpenMind, while “no country has stated that they intend to build fully autonomous weapons,” at the same time “few have ruled them out either.”

Scharre, who currently heads the Technology and National Security Program at the think tank Center for a New American Security, warns: “Many countries around the world are developing ever more advanced robotic weapons, including many non-state groups.” These advances may involve varying degrees of autonomy, and for the expert the key question is whether the line will be crossed towards the total elimination of human control, “delegating life and death decisions to machines.”

Until now, military drones have been simple devices that are controlled from a distance. Credit: U.S. Air Force/ Kemberly Groue

However, there is no doubt that this technology is now accessible. And there is no shortage of those who believe that if law-abiding states abstain from developing it, they will be defenceless against its use by aggressor nations and terrorist groups. Another question is whether AI applied to warfare will actually end up being used. Some experts suggest that it could generate a deterrent effect leading to a balance of power, as happened with the nuclear escalation during the Cold War. But Scharre doubts the viability of this scenario: nuclear missiles could be tracked via satellite, whereas the autonomy of AI-based weapons resides in their software, which makes their surveillance enormously complex. “The biggest challenge is the difficulty in verifying compliance with any kind of cooperation,” says the expert. “This makes it very likely that nations will invest in autonomous technology, if nothing else out of fear that their adversaries are doing so.”

Avoiding human errors and emotions

Those who support the development of autonomous military drones also point to their ability to avoid human errors and emotions, freeing current pilots from the moral responsibility for casualties, a position defended by robotics engineer Ronald Arkin of the Georgia Institute of Technology (USA). However, beyond the danger of suppressing any hint of humanity, other experts note that the process of refining any technology is fraught with errors, which in this case would translate into deaths caused by software bugs or recognition failures. What’s more, the companies and individuals that contribute to creating the necessary basic technologies could suddenly find themselves treated as potential military objectives.

For all of the above, organizations such as ICRAC advocate a “prohibition of the development, deployment and use of armed autonomous unmanned systems.” As Steve Wright, professor in the Politics & International Relations Group at Leeds Beckett University (United Kingdom) and a member of ICRAC, explained to OpenMind, the objective of this entity is to demand that the United Nations enact a ban under the Convention on Certain Conventional Weapons (CCW). “The negative legal, political and ethical consequences of autonomous armed drones far outweigh any temporary military utility,” writes Wright. Last September, more than a hundred senior executives of technology companies signed an open letter urging the CCW to take action on the issue, although without explicitly requesting a ban.

Some organizations advocate a prohibition of armed autonomous unmanned systems. Credit: U.S. Navy/ Daniel J. McLain

“If current negotiations fail, we can anticipate these drones rapidly proliferating to both rogue states and non-state actors, including terrorists,” Wright warns. The expert is aware that no prohibition will entirely suppress the risk, especially since, unlike nuclear weapons, a large part of the technologies involved are civilian developments that are commercially available for other purposes.

However, Wright hopes that states and international collaboration can tackle the development and smuggling of these systems and their components. At the last meeting of the CCW, held in November in Geneva (Switzerland), progress was made, such as China’s declared opposition to autonomous weapons. Awareness of the problem has penetrated deeply enough, writes Wright, for agreements to be signed aimed at preventing “a new era of push-button assassination.” “Future generations will thank us when we succeed, as we must,” he concludes.

Javier Yanes

@yanes68
