Guest-blog: David Priseman – The future of technology in home-care for the elderly

David Priseman

Technology is currently critical to home health care. Future advances in home health care technologies have the potential not only to facilitate the role of home health care within the overall health care system but also to help foster community-based independence for individuals.

Today I have the pleasure of introducing another Guest Blogger, David Priseman, who is an accomplished Executive Director. David had a career in consultancy and banking, including spells abroad with two major European banks, and has worked for several years in the field of private equity and alternative finance, as well as acting as an advisor to SMEs. He has considerable board experience: he currently chairs a mid-sized care home group and is a non-executive director of a small but ambitious technology company. He has a particular interest in how technology can address the challenges of the care sector, which is often slow to adopt innovation.

David is going to discuss with us today the future of technology in home-care for the elderly.

Both councils and families strive to keep the elderly living in their own home for as long as possible. Councils see a simple cost advantage in doing so, whilst families also like the idea that mum (statistically, it is usually mum) can still live at home.

However the reality of a single elderly person living at home on her own can be far from the rosy ideal. There is an alternative image of a harassed care worker rushing into an elderly person’s home, quickly heating up a tin of baked beans then 15 minutes later rushing out of the door. Yet this might be the only contact the person has with anyone until the same or a different care worker rushes by the next day.

Domiciliary care, like residential care, is difficult to provide effectively and profitably. Companies are handing back council care contracts as they cannot operate at the fee levels on offer (1). Staff recruitment and retention is a permanent challenge.

Councils are reluctant or unable to pay more than £15/hour, which is not financially viable for home-care providers, who now have to pay employees a higher minimum wage as well as their travel costs. However it can be viable at £20/hour. With care home costs around the £1,000/week level, half this amount would buy 25 hours of home-care per week. As the number of residential care beds is in slight decline whilst the number of elderly people is projected to rise steeply, this implies that the number of elderly people living at home will also rise. With this could come a significant growth in the self-payer home-care market.
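
As a quick sanity check of the arithmetic above (the figures are taken directly from this paragraph; the variable names are mine):

```python
care_home_cost_per_week = 1000  # £ per week, approximate residential care cost
home_care_rate = 20             # £ per hour, the rate at which home-care is viable

budget = care_home_cost_per_week / 2      # half the residential cost
hours_per_week = budget / home_care_rate  # £500 / £20 per hour
print(hours_per_week)  # 25.0 hours of home-care per week
```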

People living at home are exposed to the risk of physical vulnerability, slow and inappropriate care delivery and social isolation. However the recent development of new technologies may in combination significantly improve the social and care experience for such people.

The unpredictability of the number of hours worked, together with the short-term notice of rotas and sudden changes to them, is a major cause of high home-care worker turnover (2) and a headache for domiciliary care providers. However a range of competing software and apps has now been developed to mitigate (though not remove) this challenge. These can improve the efficiency of staff scheduling from a provider’s viewpoint, addressing one of the main sources of employee dissatisfaction whilst also introducing flexibility for the elderly resident.

Many elderly people have traditionally had a regular, perhaps weekly, phone call with their children. Some now conduct this through Skype. In addition, some families have installed a videocam or webcam in their parent’s home, usually in the kitchen or lounge/dining room, so they can see mum. This helps to maintain social contact and give reassurance about mum’s safety and wellbeing.

The development of ‘wearable technology’ should become more widespread. Currently the dominant application is fitness monitoring during exercise; however, it will increasingly move over to healthcare monitoring. This can be a watch or a monitor which is worn as an arm panel, or in the future may be embedded in clothing; in all cases it measures certain of the wearer’s vital signs.
At present, these are mostly used in hospitals to reduce the requirement of nurses, of whom there is a well-documented shortage, to conduct routine patient checks. Instead, the data are transmitted to a cloud-based server and if a vital sign reading crosses a warning threshold this immediately signals an alert. In time, these devices will migrate to the residential setting.
This will speed up the awareness and treatment of a wearer’s condition. Major medical devices companies such as Medtronic and GE are active in this area, which has also seen technology start-ups enter the market, such as EarlySense and Snap40. (3)
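
As a rough illustration of the threshold-based alerting described above (the vital-sign names and warning limits here are hypothetical, not taken from any particular product), the server-side logic might look something like this:

```python
# Hypothetical warning thresholds: each vital sign has an
# acceptable (low, high) range; readings outside it raise an alert.
THRESHOLDS = {
    "heart_rate_bpm": (40, 120),
    "spo2_percent": (92, 100),
    "temperature_c": (35.0, 38.5),
}

def check_vitals(reading):
    """Return a list of alerts for any vital sign outside its range."""
    alerts = []
    for sign, value in reading.items():
        low, high = THRESHOLDS[sign]
        if not (low <= value <= high):
            alerts.append(f"ALERT: {sign} = {value} outside {low}-{high}")
    return alerts

# Example: a reading with low blood oxygen triggers a single alert.
reading = {"heart_rate_bpm": 72, "spo2_percent": 89, "temperature_c": 36.8}
print(check_vitals(reading))
```

In a real deployment the reading would arrive from the wearable via a cloud service, and the alert would notify a carer or clinician rather than being printed.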

The internet of things (IoT) is rapidly increasing the number of internet-connected devices in the home. This can be used in a number of ways to improve the safety of elderly people living at home. For example, many people get up, go to the toilet, have a cup of tea and open the curtains. Sensors can detect whether or not the toilet has been flushed, the kettle boiled and the curtains opened, and if any of these things has not happened by, say, 9am then an alert would be triggered. (4)
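
A minimal sketch of that morning-routine check (the sensor event names and the 9am cut-off follow the example above; the function and variable names are my own invention):

```python
from datetime import time

# Events the home's sensors are expected to report each morning.
EXPECTED_EVENTS = {"toilet_flushed", "kettle_boiled", "curtains_opened"}
CUTOFF = time(9, 0)  # alert if the routine hasn't happened by 9am

def morning_check(observed_events, now):
    """Return the set of missing events once past the cut-off, else empty."""
    if now < CUTOFF:
        return set()  # too early to worry
    return EXPECTED_EVENTS - set(observed_events)

# Example: by 9:15 only the kettle has been used, so two events are missing.
missing = morning_check({"kettle_boiled"}, time(9, 15))
print(sorted(missing))  # ['curtains_opened', 'toilet_flushed']
```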

One of the main problems facing the elderly living alone is loneliness and the lack of contact with others. Here, a combination of technologies is emerging to provide at least a partial solution. Awareness has recently increased of Amazon’s Alexa voice-controlled system, which can search the internet, answer questions and respond to simple commands. Apple’s Siri and Microsoft’s Cortana are similar, rival systems.
Owing to improvements in voice recognition and AI, it will increasingly be possible to have an interactive ‘conversation’ with such devices. At some point, it may be possible to combine this with the face of a person on a screen or even a hologram of a person in the room to create the impression that a human is having a conversation with and maybe even developing a relationship with an intelligent machine-based ‘person’.
This idea has been explored in television and film, for example the science-fiction drama Her, in which a man develops a romantic relationship with his computer’s feminised operating system (5). Soon, it may become reality and even commonplace.

Finally, more than one of these technologies may combine in a way that provides care monitoring, practical assistance and companionship. Developed countries all have ageing populations, so the need to find solutions is urgent, and many companies and universities are conducting research in this area, such as robotics with AI (6). New market opportunities are emerging to integrate and package appropriate technology solutions.

The vulnerable elderly living on their own at home have often been poorly served to date. Yet the number of such people is poised to continue to rise steeply. However a number of technologies are now being developed in parallel to tackle the problems they face. The result may be an improved care environment for the elderly at home: safer, more reliable, better supported and less isolated. Such a future could be with us sooner than we think.

You can contact David Priseman on LinkedIn or by email: davidpriseman @ btconnect.com (remove spaces).

References

1. http://www.bbc.co.uk/news/uk-39321579
2. http://timewise.co.uk/wp-content/uploads/2014/02/1957-Timewise-Caring-by-Design-report-Under-200MB.pdf
3. http://www.earlysense.com/ and http://www.snap40.com/
4. https://www.ibm.com/blogs/internet-of-things/internet-caring/ and https://www.ibm.com/blogs/internet-of-things/elderly-independent-smart-home/
5. http://www.herthemovie.com/#/about
6. http://www.bbc.co.uk/news/business-39255244

Robots are surely not going to destroy the planet, or are they?

Elon Musk, the mastermind behind SpaceX and Tesla, believes that artificial intelligence is “potentially more dangerous than nukes,” imploring all of humankind “to be super careful with AI,” unless we want the ultimate fate of humanity to closely resemble Judgment Day from Terminator. Personally, I think Musk is being a little futuristic in his thinking; after all, we have survived more than 60 years of the threat of thermonuclear mutually assured destruction. Still, it is worth considering Musk’s words in greater detail, and clearly he has a point.

Musk made his comments on Twitter back in 2014, after reading Superintelligence by Nick Bostrom. The book deals with the eventual creation of a machine intelligence (artificial general intelligence, AGI) that can rival the human brain, and our fate thereafter. While most experts agree that human-level AGI is all but inevitable at this point (it’s just a matter of when), Bostrom contends that humanity still has a big advantage up its sleeve: we get to make the first move. This is what Musk is referring to when he says we need to be careful with AI: we’re rapidly moving towards a Terminator-like scenario, but the actual implementation of these human-level AIs is down to us. We are the ones who will program how the AI actually works. We are the ones who can imbue the AI with a sense of ethics and morality. We are the ones who can implement safeguards, such as Asimov’s three laws of robotics, to prevent an eventual robot holocaust.

In short, if we end up building a race of super-intelligent robots, we have no one but ourselves to blame. And Musk, sadly, is not too optimistic about humanity putting the right safeguards in place. In a second tweet, Musk says: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” Here he’s referring to humanity’s role as the precursor to a human-level artificial intelligence: after the AI is up and running, we’ll be deemed superfluous to AI society and quickly erased.

Stephen Hawking warned that technology needs to be controlled in order to prevent it from destroying the human race.
The world-renowned physicist, who has spoken out about the dangers of artificial intelligence in the past, believes we all need to establish a way of identifying threats quickly, before they have a chance to escalate.

“Since civilisation began, aggression has been useful inasmuch as it has definite survival advantages,” he told The Times.

“It is hard-wired into our genes by Darwinian evolution. Now, however, technology has advanced at such a pace that this aggression may destroy us all by nuclear or biological war. We need to control this inherited instinct by our logic and reason.”

In a Reddit AMA back in 2015, Mr Hawking said that AI would grow so powerful it would be capable of killing us entirely unintentionally.

“The real risk with AI isn’t malice but competence,” Professor Hawking said. “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.

“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”
The theoretical physicist Stephen Hawking, who died recently aged 76, said last year that he wanted to “inspire people around the world to look up at the stars and not down at their feet”. Hawking, who until 2009 held a chair at Cambridge University once occupied by Isaac Newton, was uniquely placed to encourage an upwards gaze.

Enfeebled by amyotrophic lateral sclerosis, a form of motor neurone disease, he displayed extraordinary clarity of mind. His ambition was to truly understand the workings of the universe and then to share the wonder.

Importantly, he warned of the perils of artificial intelligence and feared that the rise of the machines would be accompanied by the downfall of humanity. Not that he felt that human civilisation had particularly distinguished itself: our past, he once said, was a “history of stupidity”.

Here are 10 interesting insights into the life and viewpoints of Stephen Hawking. Sure, Stephen Hawking was a brilliant, groundbreaking scientist, but that’s not all …

Stephen Hawking had much to say on the future of tech; after all, he was an expert: Hawking was one of the first people to become connected to the internet.

“So, we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.
“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilisation.”
While he saw many benefits to artificial intelligence – notably, the Intel-developed computer system ACAT that allowed him to communicate more effectively than ever – he echoed entrepreneurial icons like Elon Musk by warning that the full realisation of AI’s potential could also “spell the end of the human race.”

Stephen Hawking co-authored an ominous editorial in the Independent warning of the dangers of AI.

The theories for oblivion generally fall into the following categories (and they miss the true danger):
– Military AIs run amok: AIs decide that humans are a threat and set out to exterminate them.
– The AI optimization apocalypse: AIs decide that the best way to optimize some process – their own survival, spam reduction, whatever – is to eliminate the human race.
– The resource race: AIs decide that they want more and more computing power, and the needs of meager Earthlings are getting in the way. The AIs destroy humanity and convert all the resources, biomass – all the mass of the Earth, actually – into computing substrate.
– Unknowable motivations: AIs develop some unknown motivation that only supremely intelligent beings can understand, and humans are in the way of their objective, so they eliminate us.
I don’t want to discount these theories. They’re all relevant and vaguely scary. But I don’t believe any of them describe the actual reason why AIs will facilitate the end of humanity.

As machines take on more jobs, many find themselves out of work or with raises indefinitely postponed. Is this the end of growth? No, says Erik Brynjolfsson.

Final thought: Artificial Intelligence will facilitate the creation of artificial realities – custom virtual universes – that are so indistinguishable from reality, most human beings will choose to spend their lives in these virtual worlds rather than in the real world. People will not breed. Humanity will die off.

It’s easy to imagine. All you have to do is look at a bus, subway, city street or even restaurant to see human beings unplugging from reality (and their fellow physical humans) for virtual lives online.

AIs are going to create compelling virtual environments which humans will voluntarily immerse themselves in. At first these environments will be for part-time entertainment and work. The first applications of AI will be for human-augmentation. We’re already seeing this with Siri, Indigo, EVA, Echo and the proliferation of AI assistants.

AI will gradually become more integrated into human beings, and virtual platforms like Oculus and Vive will become smaller, much higher quality and integrated directly into our brains.

AIs are going to facilitate tremendous advances in brain science. Direct human-computer interfaces will become the norm, probably not with the penetrative violation of the Matrix’s I/O ports, but more with the elegance of a neural lace. It’s not that far off.
In a world with true general AI, machines are going to get orders of magnitude smarter very quickly as they learn how to optimize their own intelligence. Human and AI civilisation will quickly progress to a post-scarcity environment.

And as the fully integrated virtual universes become indistinguishable from reality, people will spend more and more time plugged in.
Humans will not have to work; there will be no work for humans. Stripped of the main motivation most people have for doing anything, people will be left to do whatever they want.

Want to play games all day? Insert yourself into a Matrix quality representation of Game of Thrones where you control one of the great houses. Go ahead. Play for years with hundreds of friends.

Want to spend all day trolling through the knowledge of the world in a virtual, fully interactive learning universe? Please do. Every piece of human knowledge can be available, and you can experience recreations of historical events first-hand.

Want to explore space? Check out this fully immersive experience from an unmanned Mars space-probe. Or just live in the Star Wars or Star Trek universe.

Want to have a month-long orgasm with the virtual sex hydra of omnisport? Enjoy, we’ll see you in thirty days. Online of course. No one dates anymore.

Well, some people will date. They will date AIs. Scientists are already working on AI sex robots. What happens when you combine the intelligence, creativity and sensitivity embodied by Samantha in the movie Her with an android that is anatomically indistinguishable from a perfect human (Ex Machina, Humans, etc.)?

Deep learning algorithms will find out your likes, dislikes and how to charm your pants off. The AIs will be perfect matches for your personality. They can choose your most desirable face and body type, or design their own face and attire for maximum allure.
Predicting the future is always a difficult matter. We can only rely on the predictions of experts and on observations of the technology already in existence; even so, it’s impossible to rule anything out.

We do not yet know whether AI will usher in a golden age of human existence, or if it will all end in the destruction of everything humans cherish. What is clear, though, is that thanks to AI, the world of the future could bear little resemblance to the one we inhabit today.

An AI takeover is a hypothetical scenario in which AI becomes the dominant form of intelligence on Earth, with computers or robots effectively taking control of the planet away from the human species. Yet according to the Astronomer Royal Sir Martin Rees, who believes machines will replace humanity within a few centuries, a robot uprising could be closer than ever predicted.

Possible scenarios include replacement of the entire human workforce, takeover by a super-intelligent AI, and the popular notion of a robot uprising. Some public figures that we have discussed in this blog, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future super-intelligent machines remain under human control.

We need to watch this space…

As Masayoshi Son once said:

“I believe this artificial intelligence is going to be our partner. If we misuse it, it will be a risk. If we use it right, it can be our partner.”

Can you really fall in love with a Robot?

Our company has just started to work with a new client who has developed a humanised robot, which they describe as a ‘social robot’. It is clear from my work to date with this company that advances in robotics and AI are starting to gain some real momentum. In the coming decades, scientists predict, robots will take over more and more jobs, including white-collar ones, and gain ubiquity in the home, school and work spheres.

Because of this, roboticists, AI experts, social scientists, psychologists and others are speculating about the impact this will have on us and our world. Google and Oxford have teamed up to make a kill switch should AI initiate a robot apocalypse.

One way to overcome this is to imbue AI with emotions and empathy, to make them as human-like as possible, so much so that it may become difficult to tell robots and real people apart. In this vein, scientists have wondered if it might be possible for a human to fall in love with a robot, considering we are moving toward fashioning them after our own image. Spike Jonze’s Her and the movie Ex Machina touch on this.

Can you fall in love with a robot?
http://edition.cnn.com/videos/cnnmoney/2017/04/10/can-you-all-in-love-with-a-robot.cnn

Interestingly enough, the film ‘Ex Machina’, in which a computer programmer falls in love with a droid, may not be as far-fetched as you think.

A new study has found that humans have the potential to empathise with robots, even while knowing they do not have feelings.
It follows previous warnings from experts that humans could develop unhealthy relationships with robots, and even fall in love with them.

The discovery was made after researchers asked people to view images of human and humanoid robotic hands in painful situations, such as being cut by a knife. After studying their electrical brain signals, they found humans responded with similar immediate levels of empathy to both humans and robots.

But the beginning phase of the so-called ‘top-down’ process of empathy was weaker toward robots.

The study was carried out by researchers at Toyohashi University of Technology and Kyoto University in Japan, and provides the first neurophysiological evidence of humans’ ability to empathise with robots.

These results suggest that we empathise with humanoid robots in a similar way to how we empathise with other humans.
Last month, a robot ethicist warned that AI sex dolls could ‘contribute to detrimental relationships between men and women, adults and children, men and men, and women and women’.

Scientists suggest that we’re unable to fully take the perspective of robots because their body and mind – if it exists – are very different from ours.

‘I think a future society including humans and robots should be good if humans and robots are prosocial,’ study co-author Michiteru Kitazaki told Inverse.

‘Empathy with robots as well as other humans may facilitate prosocial behaviors. Robots that help us or interact with us should be empathised by humans.’

Experts are already worried about the implication of humans developing feelings for robots.

The question we all need to ask is whether we would be happy for love with a real human to be substituted by love with a robot: the idea that a real, living, breathing human could be replaced by something that is almost, but not exactly, the same thing.

By now you’ve probably heard the story of Tay, Microsoft’s social AI experiment that went from “friendly millennial girl” to genocidal misogynist in less than a day. At first, Tay’s story seems like a fun one for anyone who’s interested in cautionary sci-fi. What does it mean for the future of artificial intelligence if a bot can embody the worst aspects of digital culture after just 16 hours online?

If any AI is given the vastness of human creation to study at lightning speed, will it inevitably turn evil?

Will the future be a content creation battle for their souls?

Society is now driven by the social connections you hold, your likes and your preferences. The movie Her explores this landscape through a man who has been inconsolable since he and his wife separated. Theodore is a lonely man in the final stages of his divorce. When he’s not working as a letter writer, his down time is spent playing video games and occasionally hanging out with friends. He decides to purchase the new OS1, which is advertised as the world’s first artificially intelligent operating system: “It’s not just an operating system, it’s a consciousness,” the ad states. Theodore quickly finds himself drawn in by Samantha, the voice behind his OS1. As they start spending time together they grow closer and closer and eventually find themselves in love. Having fallen in love with his OS, Theodore finds himself dealing with feelings of both great joy and doubt. As an OS, Samantha has powerful intelligence that she uses to help Theodore in ways others hadn’t, but how does she help him deal with his inner conflict of being in love with an OS?

Though technically unfeasible by today’s AI standards, the broad premise of the movie is more realistic than most people may think. Indeed, in the past 10 years our lives have been transformed by technology and love is no exception. With Valentine’s Day around the corner, there’s no better time to examine some of the recent developments in this area.

Taobao, China’s version of Amazon, offers virtual girlfriends and boyfriends for around $2 (£1.20) per day. These are real humans, but they only relate with their paying customers via the phone – calls or text – in order to perform fairly unromantic tasks such as wake up calls, good night calls, and (perhaps the most useful service) “sympathetically listen to clients’ complaints”. If this is all you expect from a relationship, it at least comes at a cheap price.

Similar services already exist in India, where biwihotohaisi.com helps bachelors “practice” for married life with a virtual wife, and Japan, where “romance simulation games” are popular with men and women, even when they feature animated avatars rather than human partners.

In many of today’s most fascinating visions of future love, the body itself seems like a relic of the past. In Her, for example, we encounter a social landscape where love between humans and machines doesn’t require a physical body at all. Instead we watch as Theo shares his most personal moments with an AI who he never actually touches, but who conveys intimacy through talking, sharing messages, drawings, ideas and sexual fantasies. In our current social climate, where dating often means scrolling through photos and written bios rather than interacting with people in person, the idea that you could fall in love with your computer doesn’t seem so far-fetched. After all, we are already used to more disembodied forms of communication, and, as many older generations continue to lament, many young people today are more likely to text or sext than actually establish in-person kinds of intimacy.

AI is the perfect sounding board for these modern anxieties about human connection, and 20th- and 21st-century films are filled with dystopian landscapes that showcase the loneliness of a world where intimacy is something you can buy. In many of these films, from classics such as Fritz Lang’s Metropolis to more modern movies like Alex Garland’s Ex Machina, the creators and consumers of AI are male, while the AIs themselves are female. The patriarchal underpinning of this is vividly explored in sci-fi such as The Stepford Wives and Cherry 2000, where we are ushered into worlds where compliant and submissive female robots are idealized by their male creators as the epitome of perfection, and always exist completely under their thumb. The female robots we meet in these films cook, clean, are unfailingly supportive and are always sexually available, in addition to being exceptionally beautiful. These sex-bots have also become a mainstay of humor, from the sexy goofiness of 80s films such as Weird Science and Galaxina, to the cheeky and slightly more socially aware comedies of the 90s, with the frilly, busty fembots of Austin Powers and Buffy the Vampire Slayer’s charmingly dippy ‘Buffy-bot’.

Serge Tisseron, a French psychiatrist who studies the relationships between youth, the media and images, and the effect of information and communication technology on young people, reminds us that, despite signs of attachment from the robot, the relationship can and will always be one-way.

Serge insists on the importance of reflection around the ethical issues, to avoid the destruction of human relations. Because of their interactions with efficient, high-performing and helpful robots, humans could end up being disappointed with other humans altogether, especially on a professional level. Or, we could eventually abandon our responsibilities completely and rely solely on robots to take care of our loved ones. In the end, this could result in a serious withdrawal from the human world and could affect our ability to live in society.

A final thought: no one knows what the future holds, or whether robots will manage to develop consciousness and emotions, but in any case there needs to be sufficient preparation for their development and integration into society.

A great quote by Colin Angle:

“In the smart home of the future, there should be a robot designed to talk to you. With enough display technology, connectivity, and voice recognition, this human-interface robot or head-of-household robot will serve as a portal to the digital domain. It becomes your interface to your robot-enabled home.”

Can we really teach robots ethics and morality…

I was recently having coffee with a very good friend, an affluent data scientist, at the Institute of Directors in London. We share many pastimes, including reading and writing, and at our meeting we discussed ethical and moral values in robots. My question was: can we teach robots right from wrong?

The fact is that artificial intelligence (AI) is outperforming humans in a range of areas, but can we program or teach robots to behave ethically and morally?

Twitter has admitted that as many as 23 million (8.5 percent) of its user accounts are autonomous Twitterbots. Many are there to increase productivity, conduct research, or even have some fun. Yet many have been created with harmful intentions. In both cases, the bots have been known to behave with questionable ethics – maybe because they’re merely minor specimens of artificial intelligence (AI).

Humans are currently building far more sophisticated machines which will face questions of ethics on a monumental scale, even in matters of human life and death. So how do we make sure they make the right choices when the time comes?

Now, all of this is based on the idea that a computer can even comprehend ethics.

If cognitive machines don’t have the capacity for ethics, who is responsible when they break the law?

Currently, no one seems to know. Ryan Calo of the University of Washington School of Law notes: “Robotics combines, for the first time, the promiscuity of data with the capacity to do physical harm; robotic systems accomplish tasks in ways that cannot be anticipated in advance; and robots increasingly blur the line between person and instrument.”

The process for legislation is arduously slow, while technology, on the other hand, makes exponential haste.

The crimes can be quite serious, too. Dutch developer Jeffry van der Goot had to defend himself — and his Twitterbot — when police knocked on his door, inquiring about a death threat sent from his Twitter account. Then there’s Random Darknet Shopper: a shopping bot with a weekly allowance of $100 in Bitcoin to make purchases on the Darknet for an art exhibition. Swiss officials weren’t amused when it purchased ecstasy, which the artist put on display. (Though, in support of artistic expression, they didn’t confiscate the drugs until the exhibition ended.)

In both of these cases, authorities did what they could within the law, but ultimately pardoned the human proprietors because they hadn’t explicitly or directly committed crimes. But how does that translate when a human being unleashes an AI with malicious intent?

Legions of robots now carry out our instructions unreflectively. How do we ensure that these creatures, regardless of whether they’re built from clay or silicon, always work in our best interests?

Should we teach them to think for themselves? And if so, how are we to teach them right from wrong?

In 2017, this is an urgent question. Self-driving cars have clocked up millions of miles on our roads while making autonomous decisions that might affect the safety of other human road-users. Roboticists in Japan, Europe and the United States are developing service robots to provide care for the elderly and disabled. One such robot carer, which was launched in 2015 and dubbed Robear (it sports the face of a polar-bear cub), is strong enough to lift frail patients from their beds; if it can do that, it can also, conceivably, crush them. Since 2000 the US Army has deployed thousands of robots equipped with machine guns, each one able to locate targets and aim at them without the need for human involvement (they are not, however, permitted to pull the trigger unsupervised).

Public figures have also stoked the sense of dread surrounding the idea of autonomous machines. Elon Musk, a tech entrepreneur, has claimed that artificial intelligence is the greatest existential threat to mankind. Last summer the White House commissioned four workshops for experts to discuss this moral dimension to robotics. As Rosalind Picard, director of the Affective Computing Group at MIT, puts it: “The greater the freedom of a machine, the more it will need moral standards.”

Teaching robots how to behave on the battlefield may seem straightforward, since nations create rules of engagement by following internationally agreed laws. But not every potential scenario on the battlefield can be foreseen by an engineer, just as not every ethically ambiguous situation is covered by, say, the Ten Commandments. Should a robot, for example, fire on a house in which a high value target is breaking bread with civilians? Should it provide support to a group of five low-ranking recruits on one side of a besieged town, or one high-ranking officer on the other? Should the decision be made on a tactical or moral basis?

In considering whether or not we should teach robots right from wrong, think of science fiction or a James Bond film: the moment at which a robot gains sentience is typically the moment at which we believe that we have ethical obligations toward our creations. The idea of formalising ethical guidelines is not new.
More than seven decades ago, science-fiction writer Isaac Asimov described the “three laws of robotics”: a moral compass for artificial intelligence. The laws required robots to:

  • protect humans
  • obey instructions
  • preserve themselves

(in that order).

The fundamental premise behind Asimov’s laws was to minimize conflicts between humans and robots. In Asimov’s stories, however, even these simple moral guidelines lead to often disastrous unintended consequences. Either by receiving conflicting instructions or by exploiting loopholes and ambiguities in these laws, Asimov’s robots ultimately tend to cause harm or lethal injuries to humans. Today, robotics requires a much more nuanced moral code than Asimov’s “three laws.”
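
As a toy illustration of the strict priority ordering behind Asimov’s laws (the action attributes and rule checks here are invented for the example, and deliberately simplistic), one can encode the laws as ordered constraints, where the first violated rule decides the outcome:

```python
# Asimov's three laws as an ordered list of constraints: a proposed
# action is vetoed by the first (highest-priority) law it violates.
LAWS = [
    ("protect humans", lambda a: not a["harms_human"]),
    ("obey instructions", lambda a: a["obeys_order"]),
    ("preserve self", lambda a: not a["destroys_self"]),
]

def evaluate(action):
    """Return 'permitted', or name the first law the action violates."""
    for name, is_satisfied in LAWS:
        if not is_satisfied(action):
            return f"vetoed by '{name}'"
    return "permitted"

# An order that would harm a human is vetoed by the first law,
# even though carrying it out would satisfy the second.
order = {"harms_human": True, "obeys_order": True, "destroys_self": False}
print(evaluate(order))  # vetoed by 'protect humans'
```

The difficulty Asimov exploited in his stories is precisely that real situations rarely reduce to clean boolean flags like these: the ambiguity lives in deciding whether an action "harms a human" at all.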

An iPhone or a laptop may be inscrutably complex compared with a hammer or a spade, but each object belongs to the same category: tools. And yet, as robots begin to gain the semblance of emotions, as they begin to behave like human beings, and learn and adopt our cultural and social values, perhaps the old stories need revisiting. At the very least, we have a moral obligation to figure out what to teach our machines about the best way in which to live in the world. Once we have done that, we may well feel compelled to reconsider how we treat them…

As Thomas Bangalter once said:

“The concept of the robot encapsulates both aspects of technology. On one hand it’s cool, it’s fun, it’s healthy, it’s sexy, it’s stylish. On the other hand it’s terrifying, it’s alienating, it’s addictive, and it’s scary. That has been the subject of much science-fiction literature.”