
Robots could murder us out of kindness, says futurist

Engineer Nell Watson worries that it won't be easy to teach powerful robots values.

Chris Matyszczyk

"Darling, your human success scores have dipped beyond hope." Mac Mave Studios/YouTube screenshot by Chris Matyszczyk/CNET

-What have you done for society lately, huh? Nothing. It's not your fault. You're just past it. You should accept it. You just sit on the sofa all day, eating Kettle's New York Cheddar chips and watching "Frasier" reruns.

-You're strangling me.

-It's for your own good. Well, for the good of us all, really.

And so might end a beautiful human life, one that promised so much and, as so many lives do, delivered slightly less.

Such is a scenario recently posited by Nell Watson at a conference in Malmo, Sweden. Watson is an engineer, a futurist, CEO of body-scanning company Poikos and clearly someone who worries whether engineering will always make life better.

As Wired UK reports, one of her worries is that robots might swiftly become very intelligent, but their values might be entirely skewed or even nonexistent.

I have no evidence that she reached this conclusion after lengthy visits to both Google and Facebook.

However, at the Malmo conference, she did say this: "The most important work of our lifetime is to ensure that machines are capable of understanding human value. It is those values that will ensure machines don't end up killing us out of kindness."

But can engineers create software with built-in value systems? Even if such a system existed, robots could become so intelligent that they'd learn how to override it. They'd find their own, entirely rational reasons for doing so.

It may well be that, as many experts predict, robots will be our lovers by 2025. But, like all lovers, can they be trusted? Or might they look you in the face with a robotic smile and smite you from existence, blowing a kiss of farewell as they do?

Tesla CEO Elon Musk says he invests in artificial intelligence so that he can find out what its creators are up to. He and Stephen Hawking are both concerned that there are enormous dangers involved in creating what might become superbeings.

Watson's worries are based on the fact that robots, as currently understood, operate by a set of rules. Humans, on the other hand, gain so much knowledge and wisdom from instinct and intuition.
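To make that contrast concrete, here is a deliberately naive sketch, in Python, of what "operating by a set of rules" can look like. Nothing here comes from Watson or any real system; the rule, the scores and the threshold are invented purely for illustration.

def rule_based_judgment(contribution_score, resources_used):
    # Judge a person by one explicit rule: output per unit of resources consumed.
    # The threshold is arbitrary, and the rule has no concept of dignity,
    # context or potential: exactly the gap between rules and intuition.
    efficiency = contribution_score / resources_used if resources_used else 0.0
    return "superfluous" if efficiency < 1.0 else "contributing"

# The retiree on the sofa watching "Frasier" reruns scores badly under this rule,
# even though no human would reach the same verdict.
print(rule_based_judgment(contribution_score=0.2, resources_used=1.0))  # superfluous
print(rule_based_judgment(contribution_score=5.0, resources_used=1.0))  # contributing

A human reading those two lines sees a life; the rule sees only a ratio.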

It's not exactly easy to explain why one person appears shifty and another gives off an air of comfort or understanding.

I know that many engineers believe every problem can be solved. But the path toward solving this one carries many inherent dangers, including the potential for dead bodies.

Even when we look at ourselves with an objectivity sometimes fortified by a fine zinfandel, we see so many faults.

Robots might easily decide we can never change and are therefore superfluous to the species. They could insist that we, the useless, are using up too many natural resources that could be better directed toward those who are allegedly making greater contributions.

Like robots, for example.