

The UN and Autonomous Weapons Systems: A Missed Opportunity?


By Liran Antebi, INSS | June 9, 2015



Last year the United Nations addressed the development of autonomous weapons systems. After years of focusing on the legality of targeted assassinations carried out by armed, remotely operated UAVs, UN circles came to the realization that the development of lethal autonomous weapons systems (LAWS), which choose their targets without human involvement, is already well underway and may prove to be a profound global challenge.
On April 13-17, 2015, an international forum at the United Nations Office in Geneva dealt with the development of autonomous weapons systems and raised the possibility of adding a protocol to the Convention on Certain Conventional Weapons (CCW, 1980) banning the use of advanced autonomous systems altogether. The convention already limits the use of cluster bombs and other weapons, and via this convention the UN has likewise banned not only the use but also the development of blinding laser weapons.

The claim essentially posed before the UN committee discussion on autonomous weapons, an outgrowth of the public discourse and the activity of human rights organizations opposed to such weapons, was that autonomous weapons systems are not a fait accompli, and that it is possible and desirable to limit their use before intensive development begins and the systems start to play a leading role on the battlefield.

In advance of the UN committee deliberation, Human Rights Watch published a detailed research report, the second of its type, on lethal autonomous weapons systems. While the organization's previous paper was meant to raise general awareness of the problematic nature of these weapons and draft a uniform set of concepts for debating the topic, the second paper focused on the legal difficulty of attributing accountability to such systems, a difficulty that also dominated the UN committee discussion that followed the report's publication. The primary claim is that fighters, commanders, and even decision makers at the political echelon bear legal responsibility for committing war crimes, a fact that is supposed to deter them from doing so. By contrast, it is impossible to ascribe the same type of accountability to autonomous systems: trying a robot in a court of law and punishing it are meaningless acts, and it is difficult, and makes little sense, to put the engineer or company that developed an autonomous weapon system on trial for harm suffered by innocent bystanders years after the system's development.


This sensitive question is not unique to autonomous weapons systems; it challenges legislatures and regulatory bodies in different countries not only when armed systems operate on the battlefield but also in the context of autonomous vehicles, which have become increasingly popular in recent years. Thus, for example, the United States recently approved the use of autonomous trucks on the highways, though US regulators have so far made their use conditional on the presence of a human driver who is involved in some of the operation, partly because no solution has yet been found for the question of legal liability.

The Human Rights Watch report and the international Campaign to Stop Killer Robots do not object to autonomy as such; their primary concern is how violations of the laws of war will affect human rights during warfare, and the lack of a person who can answer for such violations at the International Criminal Court. However, the report also indicates that in some areas, those opposed to the development and use of these robots have softened their stance. For example, it draws a clearer distinction than in the past between aerial defense systems with autonomous capabilities, such as the Iron Dome and Patriot systems, and systems that are more likely to harm humans, e.g., armed ground systems and autonomous UAVs such as the US X-47B, which is still in testing but can already take off from and land on an aircraft carrier and refuel in the air, all without any human involvement. The report's bottom line calls for a total ban on the development and manufacturing of lethal autonomous weapons systems (as was the case with blinding laser weapons) through international legal prohibitions, alongside prohibitions adopted and enforced by individual states.

The report and the Campaign to Stop Killer Robots are significant, and their shared agenda has largely dictated the tone of the debates on the topic in the UN committee. Human Rights Watch held five events with an identical agenda on the same days the committee was convened in Geneva; these events even appeared on the official website of the UN committee. Yet in the public debate surrounding the UN committee meetings, including the Human Rights Watch report and the events held concurrently with the discussions, arguments in favor of the development of autonomous weapons systems were not presented, and apparently they were not debated at the UN committee itself. Could it be that the UN is missing an opportunity to challenge the inclination to automatically reject the development and use of autonomous weapons systems? The answer appears to be an unequivocal "yes," and that is troubling, because the issue is far more complicated than this one-sided debate suggests.


Lethal autonomous weapons systems that reduce the need for human soldiers reduce the risk to the lives of fighters, especially those engaged in warfare against terrorists and guerrilla organizations. In addition, such systems might allow the UN's own peacekeeping forces to grow without having to resort to national forces. Moreover, the improved precision of autonomous weapons systems, the result of sensors and calculating abilities superior to human capabilities, could reduce the risk to innocents that is a byproduct of warfare, the very issue of concern to human rights organizations. Furthermore, the UN seems to be missing an opportunity to reduce violations of the laws of warfare: such a reduction would be possible, in principle, if autonomous systems undertook only the tasks they were programmed to carry out, on the basis of the information with which they are equipped, and if that programming conformed to international law.

Similarly, an opportunity is being missed to develop the discourse and activity on limiting artificial intelligence in general, the same AI that lies at the core of any autonomous system, armed or not. The topic has made headlines over the last year because of worrisome pronouncements made separately by scientists and technologists, including Elon Musk, Stephen Hawking, and Bill Gates, on the dangers to humanity inherent in the uncontrolled development of AI, which is occurring at an accelerated pace all over the world. The initiative against lethal autonomous weapons systems is gathering momentum because the systems are armed and present the danger that innocent bystanders will be harmed, on the battlefield or elsewhere, at any time and in any place, and not because these systems are automated. While the protection of human rights in areas of conflict is a very important but relatively limited interest, the threat emanating from the uncontrolled development of autonomous systems in general is liable to affect far greater numbers of people, if not humanity overall. Therefore, continued action by the UN that is informed only by the agenda of human rights organizations may well lead to unnecessary complications, if not lasting tragedy.


INSS

The Institute for National Security Studies (INSS) is an independent academic institute. The Institute is non-partisan, independent, and autonomous in its fields of research and expressed opinions. As an external institute of Tel Aviv University, it maintains a strong association with the academic environment, as well as with the political and military establishment.

