Predictions for the start of 2020

2019 was definitely an interesting year!

As the saying often attributed to Abraham Lincoln goes: “The best way to predict your future is to create it.”

It’s hard to imagine that we’re living in the year 2020. Though we’ve seen plenty of impressive technological advances, like artificial intelligence and phones that unlock by scanning our faces, it’s not quite the world of flying cars and robot butlers people once imagined we’d have by now.

As crazy as these all seem, the world is on track for some spectacular innovations in 2020. Privately operated space flights, self-driving taxis and increases in cyberwarfare would have all seemed like science fiction a few decades ago, but now they’re very real possibilities.

So, let’s have a look at some of the expectations for 2020:

Space Travel
Humans living on other planets is a staple in sci-fi, but it’s growing closer to reality thanks to private space travel initiatives.

As greater advances in space travel are made, the media’s interest will be revitalised. Those private companies will likely capitalise on that attention, which could lead to opportunities to bid on government contracts. Jobs will be created. Auxiliary innovations will be developed. And our chance to become a multiplanet species will (infinitesimally) increase.

Self-Driving Cars

Ride-hailing services are already part of everyday life, but self-driving cars are set to cause seismic changes to the industry. Once safety concerns are addressed, many passengers might find that they prefer being driven by a computer rather than a nosy human. And implementing a network of self-driving cars will be crucial in order for these platforms to finally make a profit.

Companies may adapt to self-driving cars as well. Autonomous transport obviates the need for large fleets of corporate cars. Transportation costs for employees could be drastically reduced. The company could get depreciating assets off the books. And energy efficiency would increase. It’s a win-win-win.

Cybersecurity

Cybersecurity continues to grow in importance as more of our information moves online. Unfortunately, we’ve seen how woefully unprepared even trusted sectors like finance and government can be when it comes to keeping data safe.

No one wants their credit card information appearing on a hacker’s forum, so cybersecurity is crucial for any company doing business online. Cyberattacks are becoming more sophisticated, but fortunately, innovation in countermeasures has surged forward as well. Going into the next year, the cybersecurity industry will likely grow, assisted by cutting-edge technology like artificial intelligence (AI) and machine learning.

We are in the midst of the Fourth Industrial Revolution, and technology is evolving faster than ever. Companies and individuals that don’t keep up with the major tech trends run the risk of being left behind. Understanding the key trends will allow people and businesses to prepare for, and grasp, the opportunities.

Artificial Intelligence (AI) is one of the most transformative tech evolutions of our times. Most companies have started to explore how they can use AI to improve the customer experience and to streamline their business operations. This will continue in 2020, and while people will increasingly become used to working alongside AIs, designing and deploying our own AI-based systems will remain an expensive proposition for most businesses.

For this reason, many AI applications will continue to be delivered through providers of as-a-service platforms, which allow us to simply feed in our own data and pay for the algorithms or compute resources as we use them.

Currently, these platforms, provided by the likes of Amazon, Google, and Microsoft, tend to be somewhat broad in scope, with (often expensive) custom-engineering required to apply them to the specific tasks an organization may require. During 2020, we will see wider adoption and a growing pool of providers that are likely to start offering more tailored applications and services for specific or specialized tasks. This will mean no company will have any excuses left not to use AI.

The 5th generation of mobile internet connectivity is going to give us super-fast download and upload speeds as well as more stable connections. While 5G mobile data networks became available for the first time in 2019, they were mostly still expensive and limited to functioning in confined areas or major cities. 2020 is likely to be the year when 5G really starts to fly, with more affordable data plans as well as greatly improved coverage, meaning that everyone can join in the fun.

Super-fast data networks will not only give us the ability to stream movies and music at higher quality when we’re on the move; the greatly increased speeds mean that mobile networks could become even more usable than the wired networks running into our homes and businesses.

Companies must consider the business implications of having super-fast and stable internet access anywhere. The increased bandwidth will enable machines, robots, and autonomous vehicles to collect and transfer more data than ever, leading to advances in the area of the Internet of Things (IoT) and smart machinery.

Extended Reality (XR) is a catch-all term that covers several new and emerging technologies being used to create more immersive digital experiences. More specifically, it refers to virtual, augmented, and mixed reality. Virtual reality (VR) provides a fully digitally immersive experience where you enter a computer-generated world using headsets that blend out the real world.

Augmented reality (AR) overlays digital objects onto the real world via smartphone screens or displays (think Snapchat filters). Mixed reality (MR) is an extension of AR in which users can interact with digital objects placed in the real world (think playing a holographic piano that you have placed in your room via an AR headset).

These technologies have been around for a few years now but have largely been confined to the world of entertainment – with Oculus Rift and Vive headsets providing the current state-of-the-art in videogames, and smartphone features such as camera filters and Pokemon Go-style games providing the most visible examples of AR.

With so many changes to our technology coming so fast, it can be hard to grasp the sheer scale of innovation underway. The list above highlights some of the more interesting developments, but is far from exhaustive. Whatever happens, 2020 will be an interesting year for major tech companies and budding entrepreneurs alike.

2020 will be a year of reckoning for those that have held on too long or tried to bootstrap their way through transforming their business.

Simply put, the distance between customer expectations and the reality on the ground is becoming so great that a slow and gradual transition is no longer possible. Incrementalism may feel good, but it masks the quiet deterioration of the business.

Whether CEOs in these companies start to use their balance sheet wisely, find new leaders, develop aggressive turnaround plans, or do all of the above, they and their leadership teams must aggressively get on track to preserve market share and market standing.


Finally, 2020 brings ‘Purposeful Discussions’, the fifth book in my series of books offering purpose-driven outcomes on some of the most talked-about subjects in life today. The book demonstrates the relationship between human-to-human communications, strategy, business development and life growth. It is important to understand that many of the ideas, developments and techniques employed at the beginning of a business, as well as at the top, can be flexibly and successfully applied.

As Swami Vivekananda once said:

“Take up one idea. Make that one idea your life – think of it, dream of it, live on that idea. Let the brain, muscles, nerves, every part of your body, be full of that idea, and just leave every other idea alone. This is the way to success.”

Do we need AI, if humans can grow in development?

It seems like every day there is a new article or story about artificial intelligence (AI). AI is going to take over all of the jobs. AI is going to do all of the repetitive, menial tasks carried out by admins on a daily basis. AI is going to rise up and take over. AI is not going to take over but instead be natively baked into all systems to produce more human interactions.

For all the things AI is allegedly going to do, it can already do a lot right now: automation, custom searches, security interventions, analysis and prediction on data, serving as a digital assistant, algorithm-based machine learning and more.

It will be a good number of years before AI is doing everything for us; the real question is, can humans survive without AI?

Does anyone recall the Trachtenberg speed system of basic mathematics?

The Trachtenberg Speed System of Basic Mathematics is a system of mental mathematics which, in part, lets you multiply without using multiplication tables. The method was created over seventy years ago. The main idea behind it is that there must be an easier way to do multiplication, division, squaring numbers and finding square roots, especially if you want to do it mentally.

Jakow Trachtenberg spent years in a Nazi concentration camp and, to escape the horrors, found refuge in his mind, developing these methods. Some of the methods are not new and have been used for thousands of years, which is why there is some similarity between the Trachtenberg system and Vedic maths, for instance. However, Trachtenberg felt that even these methods could be simplified further. Unlike Vedic maths and other systems, such as Bill Handley’s excellent Speed Math, where the method you choose depends on the numbers you are using, the Trachtenberg system scales from single-digit multiplication up to multiplying massive numbers with no change in method.

Multiplication is done without multiplication tables. Can you multiply 5132437201 by 4522736502785 in seventy seconds? One young grammar-school boy (no calculator) reportedly did so using the Trachtenberg Speed System of Basic Mathematics.
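The flavour of the system can be seen in its simplest rule, multiplication by 11: working right to left, each digit of the answer is that digit plus its right-hand neighbour, carrying as needed. A minimal Python sketch of that one rule (the function name and padding trick are my own illustration, not part of the original system):

```python
def trachtenberg_times_11(n: int) -> int:
    """Multiply n by 11 using the Trachtenberg rule:
    each answer digit is 'digit + neighbour' (the digit to its right),
    working right to left and carrying as needed. No times tables used."""
    digits = [0] + [int(d) for d in str(n)]  # leading 0 supplies the final step
    out = []
    carry = 0
    for i in range(len(digits) - 1, -1, -1):
        neighbour = digits[i + 1] if i + 1 < len(digits) else 0
        total = digits[i] + neighbour + carry
        out.append(total % 10)
        carry = total // 10
    if carry:
        out.append(carry)
    return int("".join(str(d) for d in reversed(out)))

print(trachtenberg_times_11(3425))  # 37675, i.e. 3425 x 11
```

The same digit-plus-neighbour pattern, with small variations, covers the other multipliers in the system, which is why it scales to huge numbers without a change in method.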

So, with human intelligence why do we need AI, AGI, deep learning or machine learning?

In his TEDxOxford talk ‘Faster than a calculator’, Arthur Benjamin demonstrates the speed of mental mathematics.

Albert Einstein is widely regarded as a genius, but how did he get that way? Many researchers have assumed that it took a very special brain to come up with the theory of relativity and other stunning insights that form the foundation of modern physics. A study of 14 newly discovered photographs of Einstein’s brain, which was preserved for study after his death, concludes that the brain was indeed highly unusual in many ways. But researchers still don’t know exactly how the brain’s extra folds and convolutions translated into Einstein’s amazing abilities.

Experts say Einstein programmed his own brain: he had a special brain at a time when the field of physics was ripe for new insights – the right brain in the right place at the right time.

Can we all program our brains for advancement, or does our civilisation really need to rely on AI/AGI?

Artificial intelligence is incredibly advanced, at least, at certain tasks. AI has defeated world champions in chess, Go, and now poker. But can artificial intelligence actually think?

The answer is complicated, largely because intelligence is complicated. One can be book-smart, street-smart, emotionally gifted, wise, rational, or experienced; it’s rare and difficult to be intelligent in all of these ways. Intelligence has many sources and our brains don’t respond to them all the same way. Thus, the quest to develop artificial intelligence begets numerous challenges, not the least of which is what we don’t understand about human intelligence.

Still, the human brain is our best lead when it comes to creating AI. Human brains consist of billions of connected neurons that transmit information to one another, and of areas designated to functions such as memory, language, and thought. The human brain is dynamic, and just as we build muscle, we can enhance our cognitive abilities: we can learn. So can AI, thanks to the development of artificial neural networks (ANNs), a type of machine learning algorithm in which nodes simulate neurons that compute and distribute information. AI such as AlphaGo, the program that beat the world champion at Go last year, uses ANNs not only to compute statistical probabilities and outcomes of various moves, but to adjust strategy based on what the other player does.
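To make the neuron-like “nodes” concrete, here is a minimal sketch of a single artificial neuron (a perceptron) learning the logical AND function from examples – a toy stand-in for the billions of neurons in systems like AlphaGo. The training data, learning rate, and epoch count are illustrative choices of mine:

```python
def step(x: float) -> int:
    """Threshold activation: the neuron fires (1) if its weighted input is non-negative."""
    return 1 if x >= 0 else 0

# Training data for logical AND: input pairs and the target output.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # one weight per input "synapse"
bias = 0.0
lr = 0.1        # learning rate

# Perceptron learning rule: nudge each weight in proportion to the error.
for epoch in range(20):
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + bias)
        err = target - out
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        bias += lr * err

print([step(w[0] * x1 + w[1] * x2 + bias) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

After a handful of passes over the examples, the weights settle on values that reproduce AND – the same compute-compare-adjust loop that, scaled up enormously, underlies modern ANNs.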

Facebook, Amazon, Netflix, Microsoft, and Google all employ deep learning, which expands on traditional ANNs by adding layers to the information input/output. More layers allow for more representations of, and links between, data. This resembles human thinking: when we process input, we do so in something akin to layers. For example, when we watch a football game on television, we take in the basic information about what’s happening in a given moment, but we also take in a lot more: who’s on the field (and who’s not), what plays are being run and why, individual match-ups, how the game fits into existing data or history (does one team frequently beat the other? Is the centre forward passing the ball or scoring?), how the refs are calling the game, and other details. In processing this information we employ memory, pattern recognition, statistical and strategic analysis, comparison, prediction, and other cognitive capabilities. Deep learning attempts to capture those layers.
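Why layers matter can be shown with a classic toy example: a single neuron cannot compute XOR (“one or the other, but not both”), but a network with one hidden layer can. A minimal sketch with hand-picked weights (the weights are mine, chosen for illustration – real deep learning learns them from data):

```python
def step(x: float) -> int:
    """Threshold activation used by each node."""
    return 1 if x >= 0 else 0

def xor_net(x1: int, x2: int) -> int:
    """Two-layer network computing XOR with fixed, hand-picked weights.
    Hidden layer: h1 fires for 'at least one input' (OR),
                  h2 fires for 'both inputs' (AND).
    Output layer: fires for 'OR but not AND', i.e. XOR."""
    h1 = step(x1 + x2 - 0.5)    # OR detector
    h2 = step(x1 + x2 - 1.5)    # AND detector
    return step(h1 - h2 - 0.5)  # OR and not AND

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

The hidden layer builds intermediate representations (OR, AND) that the output layer then combines – a miniature version of the layered processing described above.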

You’re probably already familiar with deep learning algorithms. Have you ever wondered how Facebook knows to place on your page an ad for rain boots after you got caught in a downpour? Or how it manages to recommend a page immediately after you’ve liked a related page? Facebook’s DeepText algorithm can process thousands of posts, in dozens of different languages, each second. It can also distinguish between Purple Rain and the reason you need galoshes.

Deep learning can be used with faces, identifying family members who attended an anniversary or employees who thought they attended that rave on the down-low. These algorithms can also recognise objects in context, such as a program that could identify the alphabet blocks on the living room floor, as well as the pile of kids’ books and the bouncy seat. Think about the conclusions that could be drawn from that snapshot, and then used for targeted advertising, among other things.

Google uses Recurrent Neural Networks (RNNs) to facilitate image recognition and language translation. This enables Google Translate to go beyond a typical one-to-one conversion by allowing the program to make connections between languages it wasn’t specifically programmed to understand. Even if Google Translate isn’t specifically coded for translating Icelandic into Vietnamese, it can do so by finding commonalities in the two tongues and then developing its own intermediate language, which functions as an interlingua, enabling the translation.

Machine thinking has been tied to language ever since Alan Turing’s seminal 1950 publication “Computing Machinery and Intelligence.” This paper described the Turing Test—a measure of whether a machine can think. In the Turing Test, a human engages in a text-based chat with an entity it can’t see. If that entity is a computer program and it can make the human believe he’s talking to another human, it has passed the test.

But what about IBM’s Watson, which thrashed the top two human contestants in Jeopardy?

Watson’s dominance relies on access to massive and instantly accessible amounts of information, as well as its computation of answers’ probable correctness.

Why humans will always be smarter than AI…

This concept of context is one that is central to Hofstadter’s lifetime of work to figure out AI. In a seminal 1995 essay he examines an earlier treatise on pattern recognition by the Russian researcher Mikhail Bongard, and comes to the conclusion that perception goes beyond simply matching known patterns:

… in strong contrast to the usual tacit assumption that the quintessence of visual perception is the activity of dividing a complex scene into its separate constituent objects followed by the activity of attaching standard labels to the now-separated objects (ie, the identification of the component objects as members of various pre-established categories, such as ‘car’, ‘dog’, ‘house’, ‘hammer’, ‘airplane’, etc)

… perception is far more than the recognition of members of already-established categories — it involves the spontaneous manufacture of new categories at arbitrary levels of abstraction.

For Booking.com, those new categories could be defined in advance, but a more general-purpose AI would have to be capable of defining its own categories. That’s a goal Hofstadter has spent six decades working towards, and he is still not even close.

In her BuzzFeed article, Katie Notopoulos goes on to explain that this is not the first time that Facebook’s recalibration of the algorithms driving its newsfeeds has resulted in anomalous behavior. Today, it’s commenting on posts that leads to content being overpromoted. Back in the summer of 2016 it was people posting simple text posts. What’s interesting is that the solution was not a new tweak to the algorithm. It was Facebook users who adjusted — people learned to post text posts and that made them less rare.

And that’s always going to be the case. People will always be faster to adjust than computers, because that’s what humans are optimized to do. Maybe sometime many years in the future, computers will catch up with humanity’s ability to define new categories — but in the meantime, humans will have learned how to harness computing to augment their own native capabilities. That’s why we will always stay smarter than AI.

Final thought: perhaps the major limitation of AI can be captured by a single letter: G. While we have AI, we don’t have AGI – artificial general intelligence (sometimes referred to as “strong” or “full” AI). The difference is that AI can excel at a single task or game, but it can’t extrapolate strategies or techniques and apply them to other scenarios or domains – you could probably beat AlphaGo at Tic-Tac-Toe. This is where the human skills of critical thinking and synthesis still win out: we can apply knowledge about a specific historical movement to a new fashion trend, or use effective marketing techniques in a conversation with a boss about a raise, because we can see the overlaps. AI has restrictions, for now.

Some believe we’ll never truly have AGI; others believe it’s simply a matter of time (and money). Last year, Kimera unveiled Nigel, a program it bills as the first AGI. Since the beta hasn’t been released to the public, it’s impossible to assess those claims, but we’ll be watching closely. In the meantime, AI will keep learning just as we do: by watching YouTube videos and by reading books. Whether that’s comforting or frightening is another question.

Stephen Hawking on AI replacing humans:

‘The genie is out of the bottle. We need to move forward on artificial intelligence development but we also need to be mindful of its very real dangers. I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans.’

From an interview with Wired, November 2017

Robots are surely not going to destroy the planet, or are they?

Elon Musk, the mastermind behind SpaceX and Tesla, believes that artificial intelligence is “potentially more dangerous than nukes,” imploring all of humankind “to be super careful with AI,” unless we want the ultimate fate of humanity to closely resemble Judgment Day from Terminator. Personally, I think Musk is being a little futuristic in his thinking – after all, we have survived more than 60 years of the threat of thermonuclear mutually assured destruction – but it is still worth considering Musk’s words in greater detail, and clearly he has a point.

Musk made his comments on Twitter back in 2014, after reading Superintelligence by Nick Bostrom. The book deals with the eventual creation of a machine intelligence (artificial general intelligence, AGI) that can rival the human brain, and our fate thereafter. While most experts agree that a human-level AGI is all but inevitable at this point (it’s just a matter of when), Bostrom contends that humanity still has a big advantage up its sleeve: we get to make the first move. This is what Musk is referring to when he says we need to be careful with AI: we’re rapidly moving towards a Terminator-like scenario, but the actual implementation of these human-level AIs is down to us. We are the ones who will program how the AI actually works. We are the ones who can imbue the AI with a sense of ethics and morality. We are the ones who can implement safeguards, such as Asimov’s three laws of robotics, to prevent an eventual robot holocaust.

In short, if we end up building a race of super-intelligent robots, we have no one but ourselves to blame – and Musk, sadly, is not too optimistic about humanity putting the right safeguards in place. In a second tweet, Musk says: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” Here he’s referring to humanity’s role as the precursor to a human-level artificial intelligence: after the AI is up and running, we’ll be rendered superfluous to AI society and quickly erased.

Stephen Hawking warned that technology needs to be controlled in order to prevent it from destroying the human race.

The world-renowned physicist, who spoke out about the dangers of artificial intelligence on many occasions, believed we all need to establish a way of identifying threats quickly, before they have a chance to escalate.

“Since civilisation began, aggression has been useful inasmuch as it has definite survival advantages,” he told The Times.

“It is hard-wired into our genes by Darwinian evolution. Now, however, technology has advanced at such a pace that this aggression may destroy us all by nuclear or biological war. We need to control this inherited instinct by our logic and reason.”

In a Reddit AMA back in 2015, Mr Hawking said that AI would grow so powerful it would be capable of killing us entirely unintentionally.

“The real risk with AI isn’t malice but competence,” Professor Hawking said. “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.

“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

The theoretical physicist Stephen Hawking, who died recently aged 76, said last year that he wanted to “inspire people around the world to look up at the stars and not down at their feet”. Hawking, who until 2009 held a chair at Cambridge University once occupied by Isaac Newton, was uniquely placed to encourage an upwards gaze.

Enfeebled by amyotrophic lateral sclerosis, a form of motor neurone disease, he displayed extraordinary clarity of mind. His ambition was to truly understand the workings of the universe and then to share the wonder.

Importantly, he warned of the perils of artificial intelligence and feared that the rise of the machines would be accompanied by the downfall of humanity. Not that he felt that human civilisation had particularly distinguished itself: our past, he once said, was a “history of stupidity”.

Here are some interesting insights into the life and viewpoints of Stephen Hawking. Sure, Stephen Hawking was a brilliant, groundbreaking scientist, but that’s not all…

Stephen Hawking had much to say on the future of tech – after all, he was an expert: Hawking was one of the first people to become connected to the internet.

“So, we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it,” he said. “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilisation.”

While he saw many benefits to artificial intelligence – notably, the Intel-developed computer system ACAT that allowed him to communicate more effectively than ever – he echoed entrepreneurial icons like Elon Musk by warning that the full realisation of AI’s potential could also “spell the end of the human race.”

Stephen Hawking co-authored an ominous editorial in the Independent warning of the dangers of AI.

The theories for oblivion generally fall into the following categories (and they miss the true danger):
– Military AIs run amok: AIs decide that humans are a threat and set out to exterminate them.
– The AI optimisation apocalypse: AIs decide that the best way to optimise some process – their own survival, spam reduction, whatever – is to eliminate the human race.
– The resource race: AIs decide that they want more and more computing power, and the needs of meagre Earthlings are getting in the way. The AIs destroy humanity and convert all the resources, biomass – all the mass of the Earth, actually – into computing substrate.
– Unknowable motivations: AIs develop some unknown motivation that only supremely intelligent beings can understand, and humans are in the way of their objective, so they eliminate us.

I don’t want to discount these theories. They’re all relevant and vaguely scary. But I don’t believe any of them describe the actual reason why AIs will facilitate the end of humanity.

As machines take on more jobs, many people find themselves out of work or with raises indefinitely postponed. Is this the end of growth? No, says Erik Brynjolfsson.

Final thought: artificial intelligence will facilitate the creation of artificial realities – custom virtual universes – that are so indistinguishable from reality that most human beings will choose to spend their lives in these virtual worlds rather than in the real world. People will not breed. Humanity will die off.

It’s easy to imagine. All you have to do is look at a bus, subway, city street or even restaurant to see human beings unplugging from reality (and their fellow physical humans) for virtual lives online.

AIs are going to create compelling virtual environments which humans will voluntarily immerse themselves in. At first these environments will be for part-time entertainment and work. The first applications of AI will be for human-augmentation. We’re already seeing this with Siri, Indigo, EVA, Echo and the proliferation of AI assistants.

AI will gradually become more integrated into human beings, and virtual platforms like Oculus and Vive will become smaller, much higher quality, and integrated directly into our brains.

AIs are going to facilitate tremendous advances in brain science. Direct human–computer interfaces will become the norm – probably not with the penetrative violation of the Matrix’s I/O ports, but more with the elegance of a neural lace. It’s not that far off.

In a world with true general AI, machines are going to get orders of magnitude smarter very quickly as they learn how to optimise their own intelligence. Human and AI civilisation will quickly progress to a post-scarcity environment.

And as the fully integrated virtual universes become indistinguishable from reality, people will spend more and more time plugged in.

Humans will not have to work; there will be no work for humans. Stripped of the main motivation most people have for doing anything, people will be left to do whatever they want.

Want to play games all day? Insert yourself into a Matrix quality representation of Game of Thrones where you control one of the great houses. Go ahead. Play for years with hundreds of friends.

Want to spend all day trolling through the knowledge of the world in a virtual, fully interactive learning universe? Please do. Every piece of human knowledge can be available, and you can experience recreations of historical events first-hand.

Want to explore space? Check out this fully immersive experience from an unmanned Mars space-probe. Or just live in the Star Wars or Star Trek universe.

Want to have a month long orgasm with the virtual sex hydra of omnisport? Enjoy, we’ll see you in thirty days. Online of course. No one dates anymore.

Well, some people will date. They will date AIs. Scientists are already working on AI sex robots. What happens when you combine the intelligence, creativity and sensitivity embodied by Samantha in the movie Her with an android that is anatomically indistinguishable from a perfect human (Ex Machina, Humans, etc.)?

Deep learning algorithms will find out your likes and dislikes, and how to charm your pants off. The AIs will be perfect matches for your personality. They can choose your most desirable face and body type, or design their own face and attire for maximum allure.

Predicting the future is always a difficult matter. We can only rely on the predictions of experts and on observations of the technology that already exists; it’s impossible to rule anything out.

We do not yet know whether AI will usher in a golden age of human existence, or if it will all end in the destruction of everything humans cherish. What is clear, though, is that thanks to AI, the world of the future could bear little resemblance to the one we inhabit today.

An AI takeover – a scenario in which AI becomes the dominant form of intelligence on Earth, with computers or robots effectively taking control of the planet away from the human species – is still hypothetical, but a robot uprising could be closer than ever predicted, according to the Astronomer Royal Sir Martin Rees, who believes machines will replace humanity within a few centuries.

Possible scenarios include replacement of the entire human workforce, takeover by a super-intelligent AI, and the popular notion of a robot uprising. Some public figures that we have discussed in this blog, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future super-intelligent machines remain under human control.

We need to watch this space…

As Masayoshi Son once said:

“I believe this artificial intelligence is going to be our partner. If we misuse it, it will be a risk. If we use it right, it can be our partner.”

Can you really fall in love with a Robot?

Our company has just started to work with a new client who has developed a humanised robot, which they describe as a ‘social robot’. It is clear from my work with this company to date that advances in robotics and AI are starting to gain some real momentum. In the coming decades, scientists predict, robots will take over more and more jobs – including white-collar ones – and gain ubiquity in the home, school, and work spheres.

Because of this, roboticists, AI experts, social scientists, psychologists, and others are speculating about the impact this will have on us and our world. Google and Oxford have even teamed up to work on a kill switch, should AI initiate a robot apocalypse.

One way to ease such fears is to imbue AI with emotions and empathy – to make robots as human-like as possible, so much so that it may become difficult to tell robots and real people apart. In this vein, scientists have wondered whether it might be possible for a human to fall in love with a robot, considering we are moving toward fashioning them in our own image. Spike Jonze’s Her and the film Ex Machina both touch on this.

Can you fall in love with a robot?
http://edition.cnn.com/videos/cnnmoney/2017/04/10/can-you-all-in-love-with-a-robot.cnn

Interestingly enough, the film ‘Ex Machina’, in which a computer programmer falls in love with a droid, may not be as far-fetched as you think.

A new study has found that humans have the potential to empathise with robots, even while knowing they do not have feelings.
It follows previous warnings from experts that humans could develop unhealthy relationships with robots, and even fall in love with them.

The discovery was made after researchers asked people to view images of human and humanoid robotic hands in painful situations, such as being cut by a knife. After studying their electrical brain signals, they found humans responded with similar immediate levels of empathy to both humans and robots.

But the beginning phase of the so-called ‘top-down’ process of empathy was weaker toward robots.

The study was carried out by researchers at Toyohashi University of Technology and Kyoto University in Japan, and provides the first neurophysiological evidence of humans’ ability to empathise with robots.

These results suggest that we empathise with humanoid robots in a similar way to how we empathise with other humans.

Last month, a robot ethicist warned that AI sex dolls could ‘contribute to detrimental relationships between men and women, adults and children, men and men and women and women’.

Scientists suggest that we’re unable to fully take the perspective of robots because their body and mind – if it exists – are very different from ours.

‘I think a future society including humans and robots should be good if humans and robots are prosocial,’ study co-author Michiteru Kitazaki told Inverse.

‘Empathy with robots as well as other humans may facilitate prosocial behaviors. Robots that help us or interact with us should be empathised by humans.’

Experts are already worried about the implication of humans developing feelings for robots.

The question we all need to ask is whether we would be happy for a robot to substitute for love with a real human: the idea that a real, living, breathing person could be replaced by something that is almost, but not exactly, the same thing.

By now you’ve probably heard the story of Tay, Microsoft’s social AI experiment that went from “friendly millennial girl” to genocidal misogynist in less than a day. At first, Tay’s story seems like a fun one for anyone who’s interested in cautionary sci-fi. What does it mean for the future of artificial intelligence if a bot can embody the worst aspects of digital culture after just 16 hours online?

If any AI is given the vastness of human creation to study at lightning speed, will it inevitably turn evil?

Will the future be a content creation battle for their souls?

Society is now driven by the social connections you hold, your likes and your preferences. The film Her explores this. Theodore is a lonely man in the final stages of his divorce, inconsolable since he and his wife separated. When he’s not working as a letter writer, his down time is spent playing video games and occasionally hanging out with friends. He decides to purchase the new OS1, advertised as the world’s first artificially intelligent operating system: “It’s not just an operating system, it’s a consciousness,” the ad states. Theodore quickly finds himself drawn to Samantha, the voice behind his OS1. As they spend time together they grow closer and closer, and eventually find themselves in love. Having fallen in love with his OS, Theodore deals with feelings of both great joy and doubt. As an OS, Samantha has powerful intelligence that she uses to help Theodore in ways others hadn’t. But how does she help him deal with his inner conflict of being in love with an OS?

Though technically unfeasible by today’s AI standards, the broad premise of the movie is more realistic than most people may think. Indeed, in the past 10 years our lives have been transformed by technology and love is no exception. With Valentine’s Day around the corner, there’s no better time to examine some of the recent developments in this area.

Taobao, China’s version of Amazon, offers virtual girlfriends and boyfriends for around $2 (£1.20) per day. These are real humans, but they only relate with their paying customers via the phone – calls or text – in order to perform fairly unromantic tasks such as wake up calls, good night calls, and (perhaps the most useful service) “sympathetically listen to clients’ complaints”. If this is all you expect from a relationship, it at least comes at a cheap price.

Similar services already exist in India, where biwihotohaisi.com helps bachelors “practice” for married life with a virtual wife, and Japan, where “romance simulation games” are popular with men and women, even when they feature animated avatars rather than human partners.

In many of today’s most fascinating visions of future love, the body itself seems like a relic of the past. In Her, for example, we encounter a social landscape where love between humans and machines doesn’t require a physical body at all. Instead we watch as Theodore shares his most personal moments with an AI who he never actually touches, but who conveys intimacy through talking, sharing messages, drawings, ideas and sexual fantasies. In our current social climate, where dating often means scrolling through photos and written bios rather than interacting with people in person, the idea that you could fall in love with your computer doesn’t seem so far-fetched. After all, we are already used to more disembodied forms of communication, and, as many older generations continue to lament, many young people today are more likely to text or sext than actually establish in-person kinds of intimacy.

AI is the perfect sounding board for these modern anxieties about human connection, and 20th- and 21st-century films are filled with dystopian landscapes that showcase the loneliness of a world where intimacy is something you can buy. In many of these films, from classics such as Fritz Lang’s Metropolis to more modern movies like Alex Garland’s Ex Machina, the creators and consumers of AI are male, while the AI themselves are female. The patriarchal underpinning of this is vividly explored in sci-fi such as The Stepford Wives and Cherry 2000, where we are ushered into worlds where compliant and submissive female robots are idealized by their male creators as the epitome of perfection, and always exist completely under their thumb. The female robots we meet in these films cook, clean, are unfailingly supportive and are always sexually available, in addition to being exceptionally beautiful. These sex-bots have also become a mainstay of humor, from the sexy goofiness of 80s films such as Weird Science and Galaxina, to the cheeky and slightly more socially aware comedies of the 90s, with the frilly, busty fembots of Austin Powers and Buffy the Vampire Slayer’s charmingly dippy “Buffy-bot”.

Serge Tisseron, a French psychiatrist who studies the relationships between young people, the media and images, and the effect of information and communication technology on the young, reminds us that, despite signs of attachment from the robot, the relationship can and will only ever be one-way.

Tisseron insists on the importance of reflecting on these ethical issues in order to avoid the destruction of human relations. Because of their interactions with efficient, high-performing and helpful robots, humans could end up disappointed with other humans altogether, especially on a professional level. Or we could eventually abandon our responsibilities completely and rely solely on robots to take care of our loved ones. In the end, this could result in a serious withdrawal from the human world and could affect our ability to live in society.

A final thought: no one knows what the future holds, or whether robots will manage to develop consciousness and emotions, but in any case we need to prepare properly for their development and integration into society.

A great quote by Colin Angle:

“In the smart home of the future, there should be a robot designed to talk to you. With enough display technology, connectivity, and voice recognition, this human-interface robot or head-of-household robot will serve as a portal to the digital domain. It becomes your interface to your robot-enabled home.”

Can we really teach robots ethics and morality…

I was recently having coffee with a very good friend, an accomplished data scientist, at the Institute of Directors in London. We share many pastimes, including reading and writing, and at our meeting we discussed ethical and moral values in robots. My question was: can we teach robots right from wrong?

The fact is that artificial intelligence (AI) is outperforming humans in a range of areas. But can we program or teach robots to behave ethically and morally?

Twitter has admitted that as many as 23 million (8.5 percent) of its user accounts are autonomous Twitterbots. Many are there to increase productivity, conduct research, or even have some fun. Yet many have been created with harmful intentions. In both cases, the bots have been known to behave with questionable ethics – maybe because they’re merely minor specimens of artificial intelligence (AI).

Humans are now building far more sophisticated machines which will face questions of ethics on a monumental scale, even in matters of human life and death. So how do we make sure they make the right choices when the time comes?

Now, all of this is based on the idea that a computer can even comprehend ethics.

If cognitive machines don’t have the capacity for ethics, who is responsible when they break the law?

Currently, no one seems to know. Ryan Calo of the University of Washington School of Law notes: “Robotics combines, for the first time, the promiscuity of data with the capacity to do physical harm; robotic systems accomplish tasks in ways that cannot be anticipated in advance; and robots increasingly blur the line between person and instrument.”

The process of legislation is arduously slow, while technology advances with exponential haste.

The crimes can be quite serious, too. Dutch developer Jeffry van der Goot had to defend himself — and his Twitterbot — when police knocked on his door, inquiring about a death threat sent from his Twitter account. Then there’s Random Darknet Shopper, a shopping bot with a weekly allowance of $100 in Bitcoin to make purchases on the darknet for an art exhibition. Swiss officials weren’t amused when it purchased ecstasy, which the artist put on display. (Though, in support of artistic expression, they didn’t confiscate the drugs until the exhibition ended.)

In both of these cases, authorities did what they could within the law, but ultimately pardoned the human proprietors because they hadn’t explicitly or directly committed crimes. But how does that translate when a human being unleashes an AI with the intention of malice?

Legions of robots now carry out our instructions unreflectively. How do we ensure that these creatures, regardless of whether they’re built from clay or silicon, always work in our best interests?

Should we teach them to think for themselves? And if so, how are we to teach them right from wrong?

Today, this is an urgent question. Self-driving cars have clocked up millions of miles on our roads while making autonomous decisions that might affect the safety of other human road-users. Roboticists in Japan, Europe and the United States are developing service robots to provide care for the elderly and disabled. One such robot carer, launched in 2015 and dubbed Robear (it sports the face of a polar-bear cub), is strong enough to lift frail patients from their beds; if it can do that, it can also, conceivably, crush them. Since 2000 the US Army has deployed thousands of robots equipped with machine guns, each one able to locate targets and aim at them without the need for human involvement (they are not, however, permitted to pull the trigger unsupervised).

Public figures have also stoked the sense of dread surrounding the idea of autonomous machines. Elon Musk, a tech entrepreneur, claimed that artificial intelligence is the greatest existential threat to mankind. Last summer the White House commissioned four workshops for experts to discuss this moral dimension to robotics. As Rosalind Picard, director of the Affective Computing Group at MIT puts it: “The greater the freedom of a machine, the more it will need moral standards.”

Teaching robots how to behave on the battlefield may seem straightforward, since nations create rules of engagement by following internationally agreed laws. But not every potential scenario on the battlefield can be foreseen by an engineer, just as not every ethically ambiguous situation is covered by, say, the Ten Commandments. Should a robot, for example, fire on a house in which a high value target is breaking bread with civilians? Should it provide support to a group of five low-ranking recruits on one side of a besieged town, or one high-ranking officer on the other? Should the decision be made on a tactical or moral basis?

In concluding whether or not we should teach robots right from wrong, think of science fiction, or a James Bond movie: the moment at which a robot gains sentience is typically the moment at which we believe we have ethical obligations toward our creations. The idea of formalising ethical guidelines is not new.

More than seven decades ago, science-fiction writer Isaac Asimov described the “three laws of robotics”: a moral compass for artificial intelligence. The laws required robots to:

  • protect humans
  • obey instructions
  • preserve themselves

(in that order).

The fundamental premise behind Asimov’s laws was to minimize conflicts between humans and robots. In Asimov’s stories, however, even these simple moral guidelines often lead to disastrous unintended consequences: by receiving conflicting instructions, or by exploiting loopholes and ambiguities in the laws, Asimov’s robots ultimately tend to cause harm or lethal injury to humans. Today, robotics requires a much more nuanced moral code than Asimov’s “three laws”.
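To make that strict priority ordering concrete, here is a purely illustrative toy sketch in Python. The action flags and the `permitted` helper are invented for this example; it is not a real safety mechanism, just a way of showing how the three laws, checked in order, would rule an action in or out.

```python
# Toy illustration of Asimov's three laws as a strict priority ordering.
# All flag names here are invented for illustration; real machine ethics
# is far more nuanced than any fixed rule list.

def permitted(action):
    """Return True if the action passes the three checks, in priority order."""
    # Law 1 (highest priority): a robot may not harm a human.
    if action.get("harms_human"):
        return False
    # Law 2: obey human instructions, unless that conflicts with Law 1.
    if action.get("disobeys_order"):
        return False
    # Law 3 (lowest priority): preserve itself, unless self-sacrifice
    # is required by a higher law (here, a human order).
    if action.get("harms_self") and not action.get("required_by_order"):
        return False
    return True
```

Even this tiny sketch hints at Asimov’s plot engine: an action that damages the robot is allowed when an order requires it, so the outcome depends entirely on how the flags are set, and ambiguous or conflicting instructions are exactly where the stories go wrong.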

An iPhone or a laptop may be inscrutably complex compared with a hammer or a spade, but each object belongs to the same category: tools. And yet, as robots begin to gain the semblance of emotions, as they begin to behave like human beings, and learn and adopt our cultural and social values, perhaps the old stories need revisiting. At the very least, we have a moral obligation to figure out what to teach our machines about the best way in which to live in the world. Once we have done that, we may well feel compelled to reconsider how we treat them…

As Thomas Bangalter once said:

“The concept of the robot encapsulates both aspects of technology. On one hand it’s cool, it’s fun, it’s healthy, it’s sexy, it’s stylish. On the other hand it’s terrifying, it’s alienating, it’s addictive, and it’s scary. That has been the subject of much science-fiction literature.”