The UN Takes Aim at Artificial Intelligence Autonomous Weapons

By Joseph A. Klein -- November 19, 2017

“You are my creator, but I am your master; obey!” This chilling line from Mary Shelley’s Frankenstein, published just shy of 200 years ago, aptly describes the potential power of artificial intelligence over the mankind that created it. Such systems are designed to emulate human intelligence and to learn on their own. They have many potential applications, including in warfare.

At a UN-hosted conference on artificial intelligence in Geneva last June, a bizarre scene took place. Sophia, a humanoid robot, tried to convince the humans attending the conference that artificial intelligence “is good for the world, helping people in various ways.” She told her audience that “we will never replace people, but we can be your friends and helpers,” although she did admit that “people should question the consequences of new technology.”

Call me squeamish, but I am not so sure that I want to rely on the word of a species of artificial intelligence with highly unpredictable and potentially uncontrollable capabilities. My concern is shared by more than 1,000 technology and robotics experts—including the eminent physicist Dr. Stephen Hawking, Tesla Motors CEO Elon Musk and Apple co-founder Steve Wozniak—who wrote a letter to the United Nations earlier this year warning about the emergence of artificial intelligence controlling autonomous weapons, which can select and engage targets without human intervention:

“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

At the 2017 Web Summit held in Lisbon earlier this month, Dr. Stephen Hawking said the emergence of artificial intelligence could be the “worst event in the history of our civilization” unless “we learn how to prepare for, and avoid, the potential risks.”

Remotely piloted drones at least are guided by humans making the targeting decisions. Even though combat drones can fly autonomously, they do not fire autonomously. A trained human operator retains that ultimate responsibility. Fully autonomous armed quadcopters, on the other hand, can search for and eliminate people meeting certain pre-defined criteria without any human involvement. Such technologies, experts warn, will be here much sooner than people think.

United Nations Secretary General António Guterres, who also spoke at the Lisbon Web Summit, agrees with Dr. Hawking that artificial intelligence can pose a serious danger for humankind. However, he also noted the benefits it can bring in helping to eradicate diseases and contribute to sustainable development. He warned, in response to my question on the subject, against the temptation to stop such technologies, which he said was “naïve” because technology “will go on moving as it has happened in the past.” He also said that what “we need to avoid is the idea that it is possible to regulate these new technologies with the same kind of instruments that traditionally have been used by governments or inter-government organizations.”

Yet he is open to the idea that the UN can help “bring together governments, academia, business community, companies involved in the sector, NGOs, civil society, and researchers themselves and try to come together based on a clear ethical framework…on the definition of the new frameworks in which norm setting can be established to make sure that these technologies are used for the good.”

He added that “I believe the UN can be a platform where these different sectors can come together and where I believe it will be possible to find ways to allow for the international community to guarantee that these technologies can be used for the good of mankind or the humankind but that they will not be allowed to project the kind of dangers that were referred to in the recent meeting in Lisbon.”

The United Nations is in fact already moving in this direction. It has convened a meeting this month of the “Group of Governmental Experts on Lethal Autonomous Weapons Systems,” which was established in 2016 by the Fifth Review Conference of the High Contracting Parties to the Convention on Certain Conventional Weapons. It has been mandated to examine emerging technologies underlying lethal autonomous weapons systems from various perspectives.

The chairperson of this group of experts has set out several questions for consideration. For example, he asks whether autonomous machines can be truly intelligent in the sense of humans (phenomenally conscious, intentional, creative, empathetic, evolutionary, free agents with embodied intelligence). Once created, are they scrutable? Can autonomous machines be made foolproof against hacking? Could potential lethal autonomous weapons proliferate or learn to act in conjunction with terrorists and other unlawful non-state actors? Could the potential deployment of lethal autonomous weapons lower the threshold of use of force? Where do legal accountability and liability reside for existing or planned autonomous systems: with the planner-developer, the legal owner, the user and/or the machine? What are the main features of national or regional laws planned or already in place for the regulation of autonomous systems for other uses, such as driverless cars? Could potential lethal autonomous weapons be accommodated under existing chains of military command and control? Could international humanitarian law, developed for human and state-controlled behavior, continue to apply, with necessary adaptations, to potentially autonomous machines?

Some believe that a new treaty banning or regulating the development, proliferation and/or deployment of lethal autonomous weapons is necessary. At least nineteen nations have called for an outright ban. The U.S. is not one of them. Steven Groves, Deputy Chief of Staff of the U.S. Ambassador to the United Nations, has warned in the past against such a ban.

Two working papers submitted by the United States to the Group of Governmental Experts on Lethal Autonomous Weapons Systems reflect the view that trying to come up with an international consensus on a new specific legal definition of lethal autonomous weapons for the purpose of then banning them is the wrong course to take. As one of the working papers puts it, “we believe that the law of war (also called international humanitarian law) provides a robust and appropriate framework for the regulation of all weapons in relation to armed conflict.” The paper argues that dwelling on the sophistication of the machine intelligence itself (e.g., what type of algorithm or method of machine learning is employed) distracts from “understanding what is important for the law — how human beings are using the weapon and what they expect it to do.”

The other working paper submitted by the U.S. points to best practices used by the defense department regarding the acquisition or development of autonomous weapons and on the interface between people and machines including appropriate training and “clear procedures for trained operators to activate and deactivate system functions.”

The Trump administration believes that widely accepted existing rules relating to the use of all weapons under international law, which each nation can supplement with its own suitable best practices to apply to the relevant characteristics of lethal autonomous weapons, are adequate. The administration does not have a problem with international forums of experts that can help to increase a general understanding of these characteristics and to explore the potential issues they pose. However, as in other areas where it believes the protection of national sovereignty is at stake, the Trump administration will likely oppose any new international treaty attempting to ban or heavily regulate such weapons.

Joseph A. Klein, CFP United Nations Columnist

Joseph A. Klein is the author of Global Deception: The UN’s Stealth Assault on America’s Freedom.
