Russia’s T-14 Armata tank, seen in Moscow, is being worked on to make it completely unmanned and autonomous. Photograph: Grigory Dukor/Reuters

Ex-Google worker fears 'killer robots' could cause mass atrocities


Engineer who quit over military drone project warns AI might also accidentally start a war

A new generation of autonomous weapons or “killer robots” could accidentally start a war or cause mass atrocities, a former top Google software engineer has warned.

Laura Nolan, who resigned from Google last year in protest at being sent to work on a project to dramatically enhance US military drone technology, has called for all AI killing machines not operated by humans to be banned.

Nolan said killer robots not guided by human remote control should be outlawed by the same type of international treaty that bans chemical weapons.

Unlike drones, which are controlled by military teams often thousands of miles from where the flying weapon is deployed, killer robots, Nolan said, have the potential to do “calamitous things that they were not originally programmed for”.

There is no suggestion that Google is involved in the development of autonomous weapons systems. Last month a UN panel of government experts debated autonomous weapons and found Google to be eschewing AI for use in weapons systems and engaging in best practice.

Laura Nolan resigned from Google last year in protest. Photograph: Johnny Savage/The Guardian

Nolan, who has joined the Campaign to Stop Killer Robots and has briefed UN diplomats in New York and Geneva over the dangers posed by autonomous weapons, said: “The likelihood of a disaster is in proportion to how many of these machines will be in a particular area at once. What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed.

“There could be large-scale accidents because these things will start to behave in unexpected ways. Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous.”

Google recruited Nolan, a computer science graduate from Trinity College Dublin, to work on Project Maven in 2017 after she had been employed by the tech giant for four years, becoming one of its top software engineers in Ireland.

She said she became “increasingly ethically concerned” over her role in the Maven programme, which was devised to help the US Department of Defense drastically speed up drone video recognition technology.

Instead of using large numbers of military operatives to spool through hours and hours of drone video footage of potential enemy targets, Nolan and others were asked to build a system in which AI could differentiate people and objects at a vastly faster rate.
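To illustrate the kind of task involved, the sketch below shows how a generic, off-the-shelf object detector can flag which video frames contain people, so that analysts review only those frames. It is a minimal, hypothetical Python example: the model, the label index and the confidence threshold are all assumptions, and nothing here reflects Maven’s actual design, which is not public.

  # Hypothetical sketch only: the detector, the "person" label index and the
  # score threshold are generic assumptions, not anything from Project Maven.
  import torch
  import torchvision
  from torchvision.transforms.functional import to_tensor

  # Off-the-shelf detector pretrained on COCO, where class 1 is "person".
  model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
  model.eval()

  def frames_with_people(frames, threshold=0.8):
      """Return the indices of video frames where a person is detected."""
      hits = []
      with torch.no_grad():
          for i, frame in enumerate(frames):  # frames: iterable of PIL images
              detections = model([to_tensor(frame)])[0]
              for label, score in zip(detections["labels"], detections["scores"]):
                  if label.item() == 1 and score.item() >= threshold:
                      hits.append(i)
                      break
      return hits

Even in this toy form, the system only outputs confidence scores against a threshold; it has no way of judging who a detected person is, which is the gap Nolan goes on to describe.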

Google allowed the Project Maven contract to lapse in March this year after more than 3,000 of its employees signed a petition in protest against the company’s involvement.

“As a site reliability engineer my expertise at Google was to ensure that our systems and infrastructures were kept running, and this is what I was supposed to help Maven with. Although I was not directly involved in speeding up the video footage recognition I realised that I was still part of the kill chain; that this would ultimately lead to more people being targeted and killed by the US military in places like Afghanistan.”

Although she resigned over Project Maven, Nolan has predicted that autonomous weapons being developed pose a far greater risk to the human race than remote-controlled drones.

She outlined how external forces ranging from changing weather systems to machines being unable to work out complex human behaviour might throw killer robots off course, with possibly fatal consequences.

“You could have a scenario where autonomous weapons that have been sent out to do a job confront unexpected radar signals in an area they are searching; there could be weather that was not factored into its software or they come across a group of armed men who appear to be insurgent enemies but in fact are out with guns hunting for food. The machine doesn’t have the discernment or common sense that the human touch has.

“The other scary thing about these autonomous war systems is that you can only really test them by deploying them in a real combat zone. Maybe that’s happening with the Russians at present in Syria, who knows? What we do know is that at the UN, Russia has opposed any treaty, let alone a ban, on these weapons, by the way.

“If you are testing a machine that is making its own decisions about the world around it then it has to be in real time. Besides, how do you train a system that runs solely on software how to detect subtle human behaviour or discern the difference between hunters and insurgents? How does the killing machine out there on its own flying about distinguish between the 18-year-old combatant and the 18-year-old who is hunting for rabbits?”

The ability to convert military drones, for instance, into autonomous weapons that operate without human guidance “is just a software problem these days and one that can be relatively easily solved”, said Nolan.

She said she wanted the Irish government to take a more robust line in supporting a ban on such weapons.

“I am not saying that missile-guided systems or anti-missile defence systems should be banned. They are after all under full human control and someone is ultimately accountable. These autonomous weapons however are an ethical as well as a technological step change in warfare. Very few people are talking about this but if we are not careful one or more of these weapons, these killer robots, could accidentally start a flash war, destroy a nuclear power station and cause mass atrocities.”

Autonomous threat?

Some of the autonomous weapons being developed by military powers around the world include:

  • The US navy’s AN-2 Anaconda gunboat, which is being developed as a “completely autonomous watercraft equipped with artificial intelligence capabilities” and can “loiter in an area for long periods of time without human intervention”.

  • Russia’s T-14 Armata tank, which is being worked on to make it completely unmanned and autonomous. It is being designed to respond to incoming fire independent of any tank crew inside.

  • The US Pentagon has hailed the Sea Hunter autonomous warship as a major advance in robotic warfare. An unarmed 40 metre-long prototype has been launched that can cruise the ocean’s surface without any crew for two to three months at a time.
