
In 2018, Laura Nolan – a top software engineer – resigned from her role at Google in protest. First thoughts might turn to gender pay gaps, poor work/life balance, workplace harassment or similar – but you’d be wrong. She resigned over killer robots.

Faced with the prospect of working on a project that would dramatically enhance US military drone technology, Nolan instead called for a ban on autonomous war systems, stating that those not guided by human remote control should be outlawed by the same sort of international treaty that prohibits chemical weapons.


Without human control and intervention, Nolan argued, killer robots have the potential to do “calamitous things that they were not originally programmed for.”

In September 2019, The Guardian quoted Nolan expanding upon her views:

“The likelihood of a disaster is in proportion to how many of these machines will be in a particular area at once. What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed.

“There could be large-scale accidents because these things will start to behave in unexpected ways. Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous.”

The project itself focused on developing AI technology that could differentiate between people and objects far faster than the existing set-up, whereby military operatives sift through hour after hour of drone footage to accurately identify enemy targets.

The contract for the project lapsed in March 2019 after a petition protesting Google’s involvement was signed by more than 3,000 of its employees.

And they weren’t alone in their concern. 

The catchily titled Campaign to Stop Killer Robots launched in October 2012 and describes itself as “a coalition of non-governmental organizations that is working to ban fully autonomous weapons and thereby retain meaningful human control over the use of force”.

Its website states that the following all want a ban on fully autonomous weapons:

  • 30 countries

  • 110+ non-governmental organisations

  • 4,500 artificial intelligence experts

  • United Nations Secretary-General

  • The European Parliament

  • Human Rights Council rapporteurs

  • 26 Nobel Peace Laureates

In December 2018, the Campaign commissioned a survey spanning 26 countries to gauge public opinion on autonomous weaponry. The results make for interesting reading:

  • 61% of respondents said they opposed the use of lethal autonomous weapons systems (also known as fully autonomous weapons), while 22% supported such use and 17% said they were not sure

  • 66% of those who opposed the use of killer robots said their main concern was that it would “cross a moral line because machines should not be allowed to kill”, whilst 54% believed the weapons would be unaccountable

  • A majority opposed killer robots in China (60%), Russia (59%), the UK (54%) and the US (52%)

The last statistic is particularly noteworthy. Whilst the bods at the Campaign can claim a victory of sorts – that the majority of respondents from those countries agree with their sentiments – the figures themselves are far from clear cut.

Take the US: 52% is hardly a resounding “no” to killer robots. If anything, it’s more like a communal gathering atop a fence. To my mind, 52% suggests that the public don’t really know what to think. Does the eradication of dangerous enemies warrant the manufacture of what are, essentially, murder machines? Would they sleep easier at night knowing that life and death decisions were out of human hands? Could technology be trusted with such crucial decision-making?


What we do know, however, is that autonomous weapons are already in development in these “majority opposed” countries. As outlined by The Guardian:

  • The US navy’s AN-2 Anaconda gunboat, which is being developed as a “completely autonomous watercraft equipped with artificial intelligence capabilities” and can “loiter in an area for long periods of time without human intervention”.

  • The Pentagon has hailed the Sea Hunter autonomous warship as a major advance in robotic warfare. An unarmed, 40-metre-long prototype has been launched that can cruise the ocean’s surface without any crew for two to three months at a time.

  • Russia’s T-14 Armata tank, which is being worked on to make it completely unmanned and autonomous. It is being designed to respond to incoming fire independent of any tank crew inside.

The prospect of autonomous warships and gunboats roaming the oceans does little to quieten concerns around accountability. If machines take actions without human intervention, who – or what – is ultimately responsible? The machine? The maker? Somehow I can’t imagine “Ah, sorry about that missile strike last night – those robots were up to mischief again!” will fly from a legal standpoint. Nor a moral one, for that matter.

Advocates for killer robots argue that such technology would diminish the need for troops and reduce casualties. Indeed, Major Kenneth Rose of the US Army's Training and Doctrine Command outlined the advantages of robotic technology in warfare:

“Machines don't get tired. They don't close their eyes. They don't hide under trees when it rains and they don't talk to their friends ... A human's attention to detail on guard duty drops dramatically in the first 30 minutes ... Machines know no fear.”


Whilst these assertions are surely true – depicting killer robots as deadly assassins with no weaknesses – the reality of their deployment is arguably even darker…


Machines don’t answer to anybody. Machines are self-controlled – they make their own decisions and set their own targets. Machines develop and grow in knowledge and power over time. Machines feel no remorse. Machines cannot be held responsible for their actions. Machines may kill innocent children. Machines may destroy entire cities. Machines may turn against us at any moment. Machines could end mankind.

And we would have nobody but ourselves to blame... 
