I was recently having coffee with a very good friend, an affluent data scientist, at the Institute of Directors in London. We share many pastimes, including reading and writing, and our conversation turned to ethical and moral values in robots. My question was: can we teach robots right from wrong?
The fact is that artificial intelligence (AI) is outperforming humans in a range of areas, but can we program or teach robots to behave ethically and morally?
Twitter has admitted that as many as 23 million (8.5 percent) of its user accounts are autonomous Twitterbots. Many exist to increase productivity, conduct research, or even have some fun. Yet many have been created with harmful intentions. In both cases, the bots have been known to behave with questionable ethics – maybe because they’re merely minor specimens of AI.
Humans are now building far more sophisticated machines that will face ethical questions on a monumental scale, including questions of human life and death. So how do we make sure they make the right choices when the time comes?
Now, all of this is based on the idea that a computer can even comprehend ethics.
If cognitive machines don’t have the capacity for ethics, who is responsible when they break the law?
Currently, no one seems to know. Ryan Calo of the University of Washington School of Law notes: “Robotics combines, for the first time, the promiscuity of data with the capacity to do physical harm; robotic systems accomplish tasks in ways that cannot be anticipated in advance; and robots increasingly blur the line between person and instrument.”
The legislative process is arduously slow, while technology advances at an exponential pace.
The crimes can be quite serious, too. Dutch developer Jeffry van der Goot had to defend himself, and his Twitterbot, when police knocked on his door, inquiring about a death threat sent from his Twitter account. Then there’s Random Darknet Shopper, a shopping bot with a weekly allowance of $100 in Bitcoin to make purchases on the darknet for an art exhibition. Swiss officials weren’t amused when it purchased ecstasy, which the artist put on display. (Though, in support of artistic expression, they didn’t confiscate the drugs until the exhibition ended.)
In both of these cases, authorities did what they could within the law, but ultimately pardoned the human proprietors because they hadn’t explicitly or directly committed crimes. But how does that translate when a human being unleashes an AI with malicious intent?
Legions of robots now carry out our instructions unreflectively. How do we ensure that these creatures, regardless of whether they’re built from clay or silicon, always work in our best interests?
Should we teach them to think for themselves? And if so, how are we to teach them right from wrong?
In 2017, this is an urgent question. Self-driving cars have clocked up millions of miles on our roads while making autonomous decisions that might affect the safety of other human road-users. Roboticists in Japan, Europe and the United States are developing service robots to provide care for the elderly and disabled. One such robot carer, which was launched in 2015 and dubbed Robear (it sports the face of a polar-bear cub), is strong enough to lift frail patients from their beds; if it can do that, it can also, conceivably, crush them. Since 2000 the US Army has deployed thousands of robots equipped with machine guns, each one able to locate targets and aim at them without the need for human involvement (they are not, however, permitted to pull the trigger unsupervised).
Public figures have also stoked the sense of dread surrounding the idea of autonomous machines. Elon Musk, a tech entrepreneur, has claimed that artificial intelligence is the greatest existential threat to mankind. Last summer the White House commissioned four workshops for experts to discuss this moral dimension to robotics. As Rosalind Picard, director of the Affective Computing Group at MIT, puts it: “The greater the freedom of a machine, the more it will need moral standards.”
Teaching robots how to behave on the battlefield may seem straightforward, since nations create rules of engagement by following internationally agreed laws. But not every potential scenario on the battlefield can be foreseen by an engineer, just as not every ethically ambiguous situation is covered by, say, the Ten Commandments. Should a robot, for example, fire on a house in which a high-value target is breaking bread with civilians? Should it provide support to a group of five low-ranking recruits on one side of a besieged town, or one high-ranking officer on the other? Should the decision be made on a tactical or moral basis?
So should we teach robots right from wrong? Think of science fiction, or a James Bond film: the moment at which a robot gains sentience is typically the moment at which we believe we have ethical obligations toward our creations. The idea of formalising ethical guidelines is not new.
More than seven decades ago, science-fiction writer Isaac Asimov described the “three laws of robotics”: a moral compass for artificial intelligence. The laws required robots to:
- protect humans
- obey instructions
- preserve themselves
(in that order).
The fundamental premise behind Asimov’s laws was to minimize conflicts between humans and robots. In Asimov’s stories, however, even these simple moral guidelines often lead to disastrous unintended consequences: by receiving conflicting instructions, or by exploiting loopholes and ambiguities in the laws, Asimov’s robots ultimately tend to cause harm or lethal injury to humans. Today, robotics requires a much more nuanced moral code than Asimov’s “three laws.”
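To make the “in that order” idea concrete, here is a minimal, purely hypothetical sketch of the laws as an ordered checklist. The dictionary fields and law wording are my own illustration, not anything from Asimov or the discussion above; the point is simply to show how a strict priority ordering works, and why it can still fail to give an answer.

```python
# Illustrative sketch only: Asimov's three laws written as an ordered list of
# vetoes. Each "law" inspects a candidate action (a plain dict of hypothetical
# flags); the first law the action breaks is the one that counts.

LAWS = [
    ("Law 1: never harm a human",     lambda a: a["harms_human"]),
    ("Law 2: never disobey a human",  lambda a: a["disobeys_order"]),
    ("Law 3: never destroy yourself", lambda a: a["destroys_self"]),
]

def first_violation(action):
    """Return the highest-priority law an action breaks, or None if permitted."""
    for name, breaks in LAWS:
        if breaks(action):
            return name
    return None

# The dilemma Asimov's stories exploit: sometimes every available action,
# including doing nothing, breaks the top law, so the ordering decides nothing.
swerve     = {"harms_human": True, "disobeys_order": False, "destroys_self": True}
do_nothing = {"harms_human": True, "disobeys_order": False, "destroys_self": False}

for label, action in [("swerve", swerve), ("do nothing", do_nothing)]:
    print(label, "->", first_violation(action))
# swerve -> Law 1: never harm a human
# do nothing -> Law 1: never harm a human
```

Both options trip the first law, so the ordering alone gives no verdict; that kind of deadlock, and the loopholes around it, is exactly what Asimov’s plots turn on, and it is why a real moral code for machines needs far more nuance than three ranked rules.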
An iPhone or a laptop may be inscrutably complex compared with a hammer or a spade, but each object belongs to the same category: tools. And yet, as robots begin to gain the semblance of emotions, as they begin to behave like human beings, and learn and adopt our cultural and social values, perhaps the old stories need revisiting. At the very least, we have a moral obligation to figure out what to teach our machines about the best way in which to live in the world. Once we have done that, we may well feel compelled to reconsider how we treat them…
As Thomas Bangalter once said:
“The concept of the robot encapsulates both aspects of technology. On one hand it’s cool, it’s fun, it’s healthy, it’s sexy, it’s stylish. On the other hand it’s terrifying, it’s alienating, it’s addictive, and it’s scary. That has been the subject of much science-fiction literature.”
I don’t know if we can teach robots ethics but I bet we can teach them to pronounce the letter R correctly.
Such a topical area per the impact on industry, workforce and H2H Geoff. Also more interesting to myself personally as this area is a fine culmination/combination of many of your blog subjects capturing tech and the impact on the human experience, e-commerce +++ all of them.
Keep em’ comin’ G-Miester (Y) you’ve got the mustard so keep walking it 🙂
Drinks soon
Best, DP