Robots are surely not going to destroy the planet, or are they?

Elon Musk, the mastermind behind SpaceX and Tesla, believes that artificial intelligence is “potentially more dangerous than nukes,” imploring all of humankind “to be super careful with AI” unless we want the ultimate fate of humanity to closely resemble Judgment Day from Terminator. Personally, I think Musk is being a little futuristic in his thinking; after all, we have survived more than 60 years under the threat of thermonuclear mutually assured destruction. Still, his words are worth considering in greater detail, because he clearly has a point.

Musk made his comments on Twitter back in 2014, after reading Superintelligence by Nick Bostrom. The book deals with the eventual creation of a machine intelligence (artificial general intelligence, or AGI) that can rival the human brain, and our fate thereafter. While most experts agree that a human-level AGI is by this point mostly inevitable (it’s just a matter of when), Bostrom contends that humanity still has a big advantage up its sleeve: we get to make the first move. This is what Musk is referring to when he says we need to be careful with AI: we’re rapidly moving towards a Terminator-like scenario, but the actual implementation of these human-level AIs is down to us. We are the ones who will program how the AI actually works. We are the ones who can imbue the AI with a sense of ethics and morality. We are the ones who can implement safeguards, such as Asimov’s Three Laws of Robotics, to prevent an eventual robot holocaust.

In short, if we end up building a race of super-intelligent robots, we will have no one but ourselves to blame. Musk, sadly, is not too optimistic about humanity putting the right safeguards in place. In a second tweet, he said: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” Here he’s referring to humanity’s role as the precursor to a human-level artificial intelligence: once the AI is up and running, we’ll be deemed superfluous to AI society and quickly erased.

Stephen Hawking warned that technology needs to be controlled in order to prevent it from destroying the human race.
The world-renowned physicist, who spoke out about the dangers of artificial intelligence on several occasions, believed we need to establish a way of identifying threats quickly, before they have a chance to escalate.

“Since civilisation began, aggression has been useful inasmuch as it has definite survival advantages,” he told The Times.

“It is hard-wired into our genes by Darwinian evolution. Now, however, technology has advanced at such a pace that this aggression may destroy us all by nuclear or biological war. We need to control this inherited instinct by our logic and reason.”

In a Reddit AMA back in 2015, Professor Hawking said that AI would grow so powerful it would be capable of killing us entirely unintentionally.

“The real risk with AI isn’t malice but competence,” Professor Hawking said. “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.

“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”
The theoretical physicist Stephen Hawking, who died recently aged 76, said last year that he wanted to “inspire people around the world to look up at the stars and not down at their feet”. Hawking, who until 2009 held a chair at Cambridge University once occupied by Isaac Newton, was uniquely placed to encourage an upwards gaze.

Enfeebled by amyotrophic lateral sclerosis, a form of motor neurone disease, he displayed extraordinary clarity of mind. His ambition was to truly understand the workings of the universe and then to share the wonder.

Importantly, he warned of the perils of artificial intelligence and feared that the rise of the machines would be accompanied by the downfall of humanity. Not that he felt that human civilisation had particularly distinguished itself: our past, he once said, was a “history of stupidity”.


Stephen Hawking had much to say on the future of tech; after all, he was an expert: Hawking was one of the first people to become connected to the internet.

“So, we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it,” he said.

“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilisation.”
While he saw many benefits to artificial intelligence – notably, the Intel-developed computer system ACAT that allowed him to communicate more effectively than ever – he echoed entrepreneurial icons like Elon Musk in warning that the full realisation of AI’s potential could also “spell the end of the human race.”

Stephen Hawking co-authored an ominous editorial in the Independent warning of the dangers of AI.

The theories for oblivion generally fall into the following categories (and they miss the true danger):
– Military AIs run amok: AIs decide that humans are a threat and set out to exterminate them.
– The AI optimization apocalypse: AIs decide that the best way to optimize some process (their own survival, spam reduction, whatever) is to eliminate the human race.
– The resource race: AIs decide that they want more and more computing power, and the needs of meagre Earthlings are getting in the way. The AIs destroy humanity and convert all the resources (the biomass, in fact all the mass of the Earth) into computing substrate.
– Unknowable motivations: AIs develop some unknown motivation that only supremely intelligent beings can understand; humans are in the way of their objective, so they eliminate us.
I don’t want to discount these theories. They’re all relevant and vaguely scary. But I don’t believe any of them describe the actual reason why AIs will facilitate the end of humanity.

As machines take on more jobs, many people find themselves out of work or with raises indefinitely postponed. Is this the end of growth? No, says Erik Brynjolfsson.

Final thought: artificial intelligence will facilitate the creation of artificial realities (custom virtual universes) so indistinguishable from reality that most human beings will choose to spend their lives in these virtual worlds rather than in the real world. People will not breed. Humanity will die off.

It’s easy to imagine. All you have to do is look at a bus, subway, city street or even restaurant to see human beings unplugging from reality (and their fellow physical humans) for virtual lives online.

AIs are going to create compelling virtual environments in which humans will voluntarily immerse themselves. At first these environments will be for part-time entertainment and work, and the first applications of AI will be for human augmentation. We’re already seeing this with Siri, Indigo, EVA, Echo and the proliferation of AI assistants.

AI will gradually become more integrated into human beings, and virtual platforms like Oculus and Vive will become smaller, much higher quality and, eventually, integrated directly into our brains.

AIs are going to facilitate tremendous advances in brain science. Direct human-computer interfaces will become the norm: probably not the penetrative I/O ports of The Matrix, but something with the elegance of a neural lace. It’s not that far off.

In a world with true general AI, machines will get orders of magnitude smarter very quickly as they learn how to optimize their own intelligence. Human and AI civilization will quickly progress to a post-scarcity environment.

And as the fully integrated virtual universes become indistinguishable from reality, people will spend more and more time plugged in.
Humans will not have to work; indeed, there will be no work for humans. Stripped of the main motivation most people have for doing anything, people will be left to do whatever they want.

Want to play games all day? Insert yourself into a Matrix-quality representation of Game of Thrones where you control one of the great houses. Go ahead. Play for years with hundreds of friends.

Want to spend all day trawling through the knowledge of the world in a virtual, fully interactive learning universe? Please do. Every piece of human knowledge will be available, and you will be able to experience recreations of historical events first-hand.

Want to explore space? Check out this fully immersive experience from an unmanned Mars space-probe. Or just live in the Star Wars or Star Trek universe.

Want to have a month-long orgasm with the virtual sex hydra of omnisport? Enjoy; we’ll see you in thirty days. Online, of course. No one dates anymore.

Well, some people will date. They will date AIs. Scientists are already working on AI sex robots. What happens when you combine the intelligence, creativity and sensitivity embodied by Samantha in the film Her with an android that is anatomically indistinguishable from a perfect human (think Ex Machina or Humans)?

Deep learning algorithms will figure out your likes and dislikes, and how to charm your pants off. The AIs will be perfect matches for your personality. They can choose your most desirable face and body type, or design their own face and attire for maximum allure.
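The "learning your likes and dislikes" idea is easier to grasp with a toy example. The sketch below is purely illustrative and far simpler than the deep networks real assistants would use: a perceptron-style learner that nudges weights toward traits of things you liked and away from traits of things you didn't. The trait names and feedback data are invented for the example.

```python
def train_preferences(interactions, lr=0.1, epochs=20):
    """Learn one weight per trait from (traits, liked) feedback pairs."""
    weights = {}
    for _ in range(epochs):
        for traits, liked in interactions:
            # Score an item by summing the weights of its traits.
            score = sum(weights.get(t, 0.0) for t in traits)
            predicted = score > 0
            # Perceptron-style update: only correct the weights when
            # the prediction disagrees with the user's actual feedback.
            if predicted != liked:
                delta = lr if liked else -lr
                for t in traits:
                    weights[t] = weights.get(t, 0.0) + delta
    return weights

# Invented feedback: this user liked witty companions, not formal ones.
interactions = [
    ({"witty", "curious"}, True),
    ({"witty", "sarcastic"}, True),
    ({"formal", "terse"}, False),
    ({"formal", "curious"}, False),
]
weights = train_preferences(interactions)

def score(traits):
    return sum(weights.get(t, 0.0) for t in traits)

# A "witty" candidate now outscores a "formal" one.
assert score({"witty"}) > score({"formal"})
```

Scale the same feedback loop up to millions of behavioural signals and a deep model instead of a linear one, and you get the kind of personalisation the paragraph above describes.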
Predicting the future is always a difficult matter. We can only rely on the predictions of experts and on observations of the technology that already exists; it’s impossible to rule anything out.

We do not yet know whether AI will usher in a golden age of human existence, or if it will all end in the destruction of everything humans cherish. What is clear, though, is that thanks to AI, the world of the future could bear little resemblance to the one we inhabit today.

An AI takeover is a hypothetical scenario in which AI becomes the dominant form of intelligence on Earth, with computers or robots effectively taking control of the planet away from the human species. But a robot uprising could be closer than predicted: the Astronomer Royal, Sir Martin Rees, believes machines will replace humanity within a few centuries.

Possible scenarios include replacement of the entire human workforce, takeover by a super-intelligent AI, and the popular notion of a robot uprising. Some public figures that we have discussed in this blog, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future super-intelligent machines remain under human control.

We need to watch this space…

As Masayoshi Son once said:

“I believe this artificial intelligence is going to be our partner. If we misuse it, it will be a risk. If we use it right, it can be our partner.”

Share your thoughts with us