Autonomous weapon systems – also known as killer robots – may have killed humans for the first time last year, according to a recent report from the United Nations Security Council regarding the Libyan civil war.
This may also mark the beginning of the next major arms race, a development that could become humanity’s final arms race.
Killer robots could be key components in future conflicts. (Photo: Automationworld)
Autonomous weapon systems are armed robots capable of operating independently, selecting and attacking targets without human decision-making.
Militaries around the world are investing heavily in the research and development of autonomous weapons. The United States alone budgeted $18 billion for autonomous weapons between 2016 and 2020.
According to Professor James Dawes, an expert on artificial intelligence weaponry at Macalester College (US), autonomous weapons upset an already unstable balance and fragment the safeguards of the nuclear world, such as the minimal constraints on the U.S. President’s authority to launch a strike.
Unmanned military robots during a U.S. Navy exercise. (Photo: US Navy).
Professor Dawes identifies four main dangers associated with autonomous weapons.
Misidentifying Targets and Algorithmic Errors
When selecting targets, can autonomous weapons distinguish between hostile soldiers and a 12-year-old child playing with a toy gun? Can they differentiate between civilians fleeing a conflict zone and insurgents executing a tactical retreat?
The issue here is not that machines will make such mistakes while humans will not. It is the difference between human error and algorithmic error, akin to the difference between sending a letter and posting a tweet.
Killer robot systems, governed by a single targeting algorithm, can misidentify targets just as humans do, as in the recent mistaken U.S. drone strike in Afghanistan, but their scale, scope, and speed multiply the consequences of every such error.
Autonomous weapons expert Paul Scharre, from the Center for a New American Security, used the example of a runaway gun to explain this distinction. A runaway gun is a malfunctioning firearm that continues to fire after the trigger is released; it keeps firing until it runs out of ammunition because, naturally, the gun does not know it has malfunctioned.
Runaway guns are extremely dangerous, but a human is still in control and can try to point the weapon in a safe direction. Autonomous weapons, by definition, lack such a safeguard.
As many studies of algorithmic errors across different fields have shown, even the very best algorithms, operating exactly as designed, can generate internally correct results that nonetheless spread serious errors rapidly across entire populations. The problem is that when AI systems go wrong, they go wrong en masse, and their creators often do not understand why and therefore cannot fix them.
Weaponizing artificial intelligence can lead to serious risks. (Illustrative image).
Cheap Killer Robots Could Spread
The next two risks concern the proliferation of autonomous weapons at the low end and the high end.
At the low end, militaries are developing autonomous weapons on the assumption that they can contain and control their use. But if the history of weapons technology has taught the world anything, it is that weapons proliferate.
Market pressures could lead to the widespread manufacture and trade of what could be considered autonomous weapons, much like the Kalashnikov assault rifle: cheap, effective killer robots that are nearly impossible to control once they circulate globally.
Kalashnikov-style autonomous weapons could fall into the hands of actors outside government control, including international and domestic terrorist elements.
Devastating Autonomous Weapons Could Make War More Likely
At the high end, nations may race to develop ever more devastating versions of autonomous weapons, including those capable of delivering biological, chemical, radiological, and nuclear arms. The moral dangers of making weapons ever more lethal would only escalate.
Advanced autonomous weapons are also likely to make conflict and war easier to start. They reduce both the need for and the risk to a nation’s own soldiers, dramatically altering the cost-benefit calculation that parties undergo when launching and sustaining wars.
Asymmetric warfare—wars waged on the territory of nations lacking competitive technology—could become more common.
Kargu-2, a hybrid between a drone and a bomb, manufactured by a Turkish defense contractor, may have attacked civilians during the Libyan civil war. (Photo: The Conversation).
Undermining the Laws of War
Finally, autonomous weapons will undermine humanity’s last bulwark against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching back as far as the 1864 Geneva Convention, are the fragile line separating war from massacre.
The laws of war rest on the premise that people can be held accountable for their actions even in wartime, and that the right to kill enemy combatants in battle does not extend to a right to kill civilians.
But how can autonomous weapons be held accountable? Who is responsible for a robot committing war crimes? Who will be prosecuted: the weapon? The soldier? The soldier’s commanders? The weapon manufacturing corporation?
Non-governmental organizations and experts in international law are concerned that autonomous weapons will lead to a significant accountability gap.