Act now to avoid an automated weapons race

By Joe Dodgshun

Over 100 leading AI and robotics experts this summer called for the United Nations to outlaw the development and use of killer robots. The open letter, signed by industry leaders like Elon Musk and Mustafa Suleyman, voiced the concern that the technologies developed by its signatories could be repurposed in the development of lethal autonomous weapons.

“Once developed,” read the letter to the UN’s Convention on Certain Conventional Weapons (CCW), “they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. Once this Pandora’s box is opened, it will be hard to close.”

The Good Technology Collective spoke to one of the signatories — Dr. Kristinn Thorisson, director of the Icelandic Institute for Intelligent Machines (IIIM) — about why raising the alarm is only the start of a vital discussion around lethal autonomous weapons.

How did you come to be a signatory of this open letter to the UN?

Intelligence is power and wherever power is disproportionately distributed it can — and will — be abused. After 30 years in AI, my views align rather closely with those of the letter’s other signatories in the general belief that we must curb the unchecked development and proliferation of automated weapons built explicitly to maim, damage, kill, and destroy.

AI is my field of expertise; to me, it is the most exciting technology of both the 20th and the 21st centuries. Seeing that technology used for threatening, destroying, and killing other humans fills me with dread and disgust.

The potential for AI to do good is enormous, and I will do what I can to make it benefit everyone on this planet, irrespective of country, nation, or belief system. I think automation and AI should be developed for the benefit of all. Not to enable the killing of more people in an increasingly hands-off manner but to improve quality of life. In 2015, IIIM published an ethics policy outlining our goal of staying clear of military funding.

As far as we know, the policy was the first of its kind. It lays out, in broad strokes, an explicit commitment to limiting AI application and research exclusively to civilian use. The policy was noted by many parties, including the Future of Life Institute and others pushing for a global ban on automated weapons without humans in the loop (“killer robots”).

What are the main concerns the AI and robotics industries raise in the letter? 

The general idea is not that the demonstration of such new kinds of weapons “might” increase tensions — it will. And not just between superpowers but among a myriad of smaller players as well, pushing the development of technological prowess into the lethal autonomous weapons arena and quite possibly ushering in a new cold war (or, more likely, wars).

AI and automation are different from all previous war-related technologies because AI is less dependent on expensive equipment and facilities than traditional weapons, and is driven instead by know-how and general intelligence.

It has greater potential to level the playing field, creating more opportunities for conflict, and it can be employed at multiple levels and for multiple purposes — for anything from spying on individuals to coordinating and controlling large forces and attacks in months-long strategic warfare. Of course, automation will also make inroads into more traditional weapons, generating new concerns, threats, and complexities, and increasing tensions even further. The outcomes and eventualities of an all-out AI-weapons race are much less foreseeable than anything that’s come before.

What are some immediate risks with the development of autonomous weapons?

In the next 20 or so years, I am primarily concerned about hacking, which could turn weapons against their owners and turn everyday technologies into various forms of weapons, and about local failures, which are very likely to happen with today’s automation. For example, think of how often your laptop behaves in a way you can’t explain and would not have expected.

Then there is, of course, direct abuse of power, which is made much easier with no (or few) humans in the loop, and there are systemic failures — e.g. domino effects where local failures at one level escalate to the next. This last point is easy to miss because most people don’t realize that we don’t have very good methods for engineering large systems or ensuring their safety, and that the speed at which machines can make decisions is much, much faster than that of humans or groups of humans.

Is an arms race already underway or is there still time to prevent/restrict this?

I would say the autonomous weapons arms race is currently at a very early stage since AI itself is still at a very early stage. No current man-made system can autonomously learn cumulatively on the job, or via observation, or by analogy. The reinforcement learning and recurrent-neural-network learning demonstrated in machines are less sophisticated than even the basic learning observed in squirrels. Nevertheless, recent results in AI have convinced even the AI-doubtful of its potential.

Now is, therefore, an opportune time to keep a nightmarish future from unfolding, one where a set of military killer drones with poison darts may chase you down the street because their face recognition mistook you for a wanted criminal. How much time we have will depend on the speed of progress, and this is by no means easy to predict, although it’s clear that progress is speeding up simply because so many new researchers, research programs, and companies are now joining the fray of AI development.

What do you hope can be achieved by approaching the UN CCW?

We are hoping for a broad ban on autonomous weapons; whether this can happen this year, next year, or within the next four years depends on a number of factors. Getting the topic on the table for discussion is the first necessary step, together with getting people to understand that this is neither premature nor too late, but rather an opportune time to have this discussion and to make some unequivocal decisions in the not-too-distant future.

You signed as the director of the IIIM — what are the aims of the institute?

IIIM is a young non-profit research and development institute that develops systems for companies, institutions, governments, and others who need AI, automation, and related technologies for their business and services. We catalyze innovation by working closely with a broad range of industry players and university researchers, helping bring blue-sky research ideas to market faster through the targeted development of prototypes, opportunity analysis, and contract work.

The benefits for academia are increased exposure to real market needs and opportunities for working on applied projects; the benefits to industry are manifold, including access to cutting-edge AI know-how, reduced prototyping cost, and targeted connections with academic researchers.

How does your ethics policy compare to what is standard in the industry?

Our ethics policy was a way to put a stake in the ground on the issue by targeting AI technology exclusively for civilian use, for the betterment of mankind. And we really mean it: the policy was signed by the entire company, from the board to our admin assistant. We had hoped that others would take our ethics policy and use it as a basis for their own, but so far we have not heard of any takers. The German Research Center for Artificial Intelligence (DFKI) has followed a similar policy in all its decision making, but has never put that policy in writing or made it explicit. Hopefully they — and many others — will see that this is really a sensible thing to do.

What further measures need to be taken to mitigate this threat?

Transparency is important. One way to get started is to make it the norm, rather than the exception, for companies to explicitly state their intent to shun technologies aimed at maiming and killing. The numerous companies that already follow such a policy should say so explicitly, making the others stick out like sore thumbs and changing the thinking from “Why not take military funding for my AI technology?” to the more productive “Why would you not want your AI technology to benefit all citizens of planet Earth?”. While companies that create killing machines will probably always exist, it becomes easier to track and curb their unwanted behavior when they are thus exposed.

Another important factor is education. Discussions on these topics almost always devolve into Terminator-style thought experiments involving the future demise of humanity at the hands of conscious robots with free will. This habit must change because, in the near and medium term, we will face a host of issues related to automation and AI that have nothing to do with such sci-fi scenarios and require our full attention to solve. Politicians and the general public need to increase their understanding of AI technologies, and especially of the limitations that current AI is bogged down with. Because, truth be told, the short-term issues have much more to do with artificial stupidity than artificial intelligence.