Do we need AI if humans can continue to develop their own intelligence?

It seems like every day there is a new article or story about artificial intelligence (AI). AI is going to take over all of the jobs. AI is going to do all of the repetitive, menial tasks carried out by admins on a daily basis. AI is going to rise up and take over. AI is not going to take over but instead be natively baked into all systems to produce more human interactions.

For all the things AI is allegedly going to do, it can already do a lot right now: automating tasks, customising searches, intervening in security incidents, analysing and predicting from data, serving as a digital assistant, performing algorithm-based machine learning and more.

It will be a good number of years before we get AI doing everything for us. The real question is: can humans survive without AI?

Does anyone recall the Trachtenberg Speed System of Basic Mathematics?

The Trachtenberg Speed System of Basic Mathematics is a system of mental mathematics which, among other things, does not require multiplication tables in order to multiply. The method was created over seventy years ago. The main idea behind the system is that there must be an easier way to do multiplication, division, squaring numbers and finding square roots, especially if you want to do it mentally.

Jakow Trachtenberg spent years in a Nazi concentration camp and, to escape the horrors, found refuge in his mind, developing these methods. Some of the methods are not new and have been used for thousands of years, which is why there is some similarity between the Trachtenberg System and Vedic math, for instance. However, Trachtenberg felt that even these methods could be simplified further. Unlike Vedic math and other systems such as Bill Handley’s excellent Speed Math, where the method you choose to calculate the answer depends on the numbers you are using, the Trachtenberg System scales up from single-digit multiplication to multiplying very large numbers with no change in method.

Multiplication is done without multiplication tables. Can you multiply 5132437201 by 4522736502785 in seventy seconds? One young boy (grammar school, no calculator) did so successfully using the Trachtenberg Speed System of Basic Mathematics.
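
To give a flavour of how the system works, here is a small Python sketch of one of its simplest rules: multiplying by 11 by adding each digit to its right-hand neighbour, carrying as needed. The example number and the check at the end are just a sanity test for illustration, not part of the method itself.

```python
# A minimal sketch of the Trachtenberg rule for multiplying by 11:
# working right to left, each answer digit is the digit plus its
# right-hand neighbour, carrying as needed.
def trachtenberg_times_11(n: int) -> int:
    digits = [int(d) for d in str(n)]
    result, carry, neighbour = [], 0, 0
    for d in reversed(digits):
        total = d + neighbour + carry        # "add the digit to its neighbour"
        result.append(total % 10)
        carry = total // 10
        neighbour = d
    total = neighbour + carry                # the leading digit, plus any carry
    while total:
        result.append(total % 10)
        total //= 10
    return int("".join(str(d) for d in reversed(result)))

print(trachtenberg_times_11(5132437201))     # 56456809211
print(5132437201 * 11)                       # the same answer, as a check
```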

So, with human intelligence, why do we need AI, AGI, deep learning or machine learning?

Faster than a calculator: Arthur Benjamin discusses the speed of mathematics (TEDxOxford)

Albert Einstein is widely regarded as a genius, but how did he get that way? Many researchers have assumed that it took a very special brain to come up with the theory of relativity and other stunning insights that form the foundation of modern physics. A study of 14 newly discovered photographs of Einstein’s brain, which was preserved for study after his death, concludes that the brain was indeed highly unusual in many ways. But researchers still don’t know exactly how the brain’s extra folds and convolutions translated into Einstein’s amazing abilities.

Experts say Einstein, in effect, programmed his own brain: he had a special brain at a time when the field of physics was ripe for new insights, the right brain in the right place at the right time.

Can we all program our brains for advancement? And does our civilisation really need our brains to rely on AI/AGI?

Artificial intelligence is incredibly advanced, at least at certain tasks. AI has defeated world champions in chess, Go, and now poker. But can artificial intelligence actually think?

The answer is complicated, largely because intelligence is complicated. One can be book-smart, street-smart, emotionally gifted, wise, rational, or experienced; it’s rare and difficult to be intelligent in all of these ways. Intelligence has many sources and our brains don’t respond to them all the same way. Thus, the quest to develop artificial intelligence begets numerous challenges, not the least of which is what we don’t understand about human intelligence.

Still, the human brain is our best lead when it comes to creating AI. Human brains consist of billions of connected neurons that transmit information to one another, with areas designated to functions such as memory, language, and thought. The human brain is dynamic, and just as we build muscle, we can enhance our cognitive abilities: we can learn. So can AI, thanks to the development of artificial neural networks (ANNs), a type of machine learning algorithm in which nodes simulate neurons that compute and distribute information. AI such as AlphaGo, the program that beat the world champion at Go last year, uses ANNs not only to compute statistical probabilities and outcomes of various moves, but to adjust strategy based on what the other player does.
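
To make the idea of "nodes that compute and distribute information" a little more concrete, here is a toy forward pass through a tiny artificial neural network in Python. The sizes, random weights and made-up inputs are assumptions for illustration only; this is nothing like AlphaGo's actual network.

```python
# A toy ANN forward pass: inputs are weighted, summed by each "neuron",
# squashed by a non-linearity, and passed on to the next layer.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
inputs = np.array([0.2, 0.8, 0.5])              # made-up features, e.g. of a board position
hidden_weights = rng.normal(size=(3, 4))        # connections into 4 hidden "neurons"
output_weights = rng.normal(size=4)             # connections into a single output node

hidden = sigmoid(inputs @ hidden_weights)       # each hidden node computes its own signal
score = sigmoid(hidden @ output_weights)        # e.g. "how promising is this position?"
print(score)
```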

Facebook, Amazon, Netflix, Microsoft, and Google all employ deep learning, which expands on traditional ANNs by adding layers to the information input/output. More layers allow for more representations of, and links between, data. This resembles human thinking: when we process input, we do so in something akin to layers. For example, when we watch a football game on television, we take in the basic information about what’s happening in a given moment, but we also take in a lot more: who’s on the field (and who’s not), what plays are being run and why, individual match-ups, how the game fits into existing data or history (does one team frequently beat the other? Is the centre forward passing the ball or scoring?), how the refs are calling the game, and other details. In processing this information we employ memory, pattern recognition, statistical and strategic analysis, comparison, prediction, and other cognitive capabilities. Deep learning attempts to capture those layers.
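
As a rough sketch of what "adding layers" means in code, the snippet below stacks several layers so that each one re-represents the output of the layer beneath it. The layer sizes and random numbers are invented for illustration; a real deep-learning system learns its weights from data rather than drawing them at random.

```python
# A sketch of a deep (multi-layer) network: each layer transforms the
# representation produced by the layer below it.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(1)
layer_sizes = [10, 32, 32, 16, 1]            # input -> three hidden layers -> output
weights = [rng.normal(size=(a, b)) for a, b in zip(layer_sizes, layer_sizes[1:])]

x = rng.normal(size=10)                      # raw input, e.g. pixels or match statistics
for w in weights[:-1]:
    x = relu(x @ w)                          # each layer re-represents the previous one
score = x @ weights[-1]                      # final prediction
print(score)
```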

You’re probably already familiar with deep learning algorithms. Have you ever wondered how Facebook knows to place on your page an ad for rain boots after you got caught in a downpour? Or how it manages to recommend a page immediately after you’ve liked a related page? Facebook’s DeepText algorithm can process thousands of posts, in dozens of different languages, each second. It can also distinguish between Purple Rain and the reason you need galoshes.

Deep learning can be used with faces, identifying family members who attended an anniversary or employees who thought they attended that rave on the down-low. These algorithms can also recognise objects in context: such a program could identify the alphabet blocks on the living-room floor, as well as the pile of kids’ books and the bouncy seat. Think about the conclusions that could be drawn from that snapshot, and then used for targeted advertising, among other things.

Google uses Recurrent Neural Networks (RNNs) to facilitate image recognition and language translation. This enables Google Translate to go beyond a typical one-to-one conversion by allowing the program to make connections between languages it wasn’t specifically programmed to understand. Even if Google Translate isn’t specifically coded for translating Icelandic into Vietnamese, it can do so by finding commonalities in the two tongues and then developing its own language which functions as an interlingua, enabling the translation.
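
For a feel of what "recurrent" means, here is a minimal recurrent step in Python: the same weights are reused at every position in a sequence, while a hidden state carries context forward from word to word. This is only a toy with random numbers, not Google's actual translation model.

```python
# A minimal recurrent step: a hidden state is updated word by word,
# reusing the same weights at every position in the sequence.
import numpy as np

rng = np.random.default_rng(2)
W_in = rng.normal(size=(8, 16))       # input-to-hidden weights
W_h = rng.normal(size=(16, 16))       # hidden-to-hidden (context) weights

def rnn_step(x_t, h_prev):
    # the new hidden state mixes the current input with the running context
    return np.tanh(x_t @ W_in + h_prev @ W_h)

h = np.zeros(16)                      # empty context at the start of a sentence
for x_t in rng.normal(size=(5, 8)):   # five made-up word embeddings
    h = rnn_step(x_t, h)
print(h[:4])                          # the final state summarises the whole sequence
```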

Machine thinking has been tied to language ever since Alan Turing’s seminal 1950 publication “Computing Machinery and Intelligence.” This paper described the Turing Test—a measure of whether a machine can think. In the Turing Test, a human engages in a text-based chat with an entity it can’t see. If that entity is a computer program and it can make the human believe he’s talking to another human, it has passed the test.
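
The test is simple enough to frame as code. In the sketch below, respond() and judge_verdict() are hypothetical placeholders standing in for the hidden respondent and the human judge; nothing here is a real chatbot.

```python
# A toy framing of the Turing Test: a judge chats (text only) with a
# hidden respondent, then guesses whether it was human.
def turing_test(questions, respond, judge_verdict):
    transcript = [(q, respond(q)) for q in questions]    # identity stays hidden
    return judge_verdict(transcript) == "human"          # the machine "passes" if judged human

# Illustrative stand-ins only:
canned = lambda q: "That's an interesting question."
gullible_judge = lambda transcript: "human"
print(turing_test(["Do you dream?", "What is 7 x 8?"], canned, gullible_judge))  # True
```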

But what about IBM’s Watson, which thrashed the top two human contestants in Jeopardy?

Watson’s dominance relies on access to massive and instantly accessible amounts of information, as well as its computation of answers’ probable correctness.
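
In spirit (though certainly not IBM's actual code), that amounts to scoring many candidate answers and only "buzzing in" when the best one clears a confidence threshold. The candidates, scores and threshold below are made up for illustration.

```python
# A sketch of confidence-ranked answering, loosely in the spirit of the
# description above; not Watson's real pipeline.
def best_answer(candidates, buzz_threshold=0.7):
    # candidates: list of (answer, estimated probability of being correct)
    answer, confidence = max(candidates, key=lambda pair: pair[1])
    return answer if confidence >= buzz_threshold else None   # stay silent if unsure

print(best_answer([("Toronto", 0.14), ("Chicago", 0.83), ("New York", 0.42)]))  # Chicago
```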

Why humans will always be smarter than AI…

This concept of context is central to Douglas Hofstadter’s lifetime of work on AI. In a seminal 1995 essay he examines an earlier treatise on pattern recognition by the Russian researcher Mikhail Bongard and comes to the conclusion that perception goes beyond simply matching known patterns:

… in strong contrast to the usual tacit assumption that the quintessence of visual perception is the activity of dividing a complex scene into its separate constituent objects followed by the activity of attaching standard labels to the now-separated objects (ie, the identification of the component objects as members of various pre-established categories, such as ‘car’, ‘dog’, ‘house’, ‘hammer’, ‘airplane’, etc)

… perception is far more than the recognition of members of already-established categories — it involves the spontaneous manufacture of new categories at arbitrary levels of abstraction.

For a company like Booking.com, those new categories could be defined in advance, but a more general-purpose AI would have to be capable of defining its own categories. That is a goal Hofstadter has spent six decades working towards, and he is still not even close.

In a BuzzFeed article, Katie Notopoulos explains that this is not the first time that Facebook’s recalibration of the algorithms driving its newsfeeds has resulted in anomalous behaviour. Today, it’s commenting on posts that leads to content being over-promoted; back in the summer of 2016 it was people posting simple text posts. What’s interesting is that the solution was not a new tweak to the algorithm. It was Facebook users who adjusted: people learned to post text posts, and that made them less rare.

And that’s always going to be the case. People will always be faster to adjust than computers, because that’s what humans are optimised to do. Maybe sometime, many years in the future, computers will catch up with humanity’s ability to define new categories, but in the meantime humans will have learned how to harness computing to augment their own native capabilities. That’s why we will always stay smarter than AI.

Final thought: perhaps the major limitation of AI can be captured by a single letter, G. While we have AI, we don’t have AGI, artificial general intelligence (sometimes referred to as “strong” or “full” AI). The difference is that AI can excel at a single task or game, but it can’t extrapolate strategies or techniques and apply them to other scenarios or domains: you could probably beat AlphaGo at Tic Tac Toe. Contrast this with the human skills of critical thinking and synthesis: we can apply knowledge about a specific historical movement to a new fashion trend, or use effective marketing techniques in a conversation with a boss about a raise, because we can see the overlaps. AI has restrictions, for now.

Some believe we’ll never truly have AGI; others believe it’s simply a matter of time (and money). Last year, Kimera unveiled Nigel, a program it bills as the first AGI. Since the beta hasn’t been released to the public, it’s impossible to assess those claims, but we’ll be watching closely. In the meantime, AI will keep learning just as we do: by watching YouTube videos and by reading books. Whether that’s comforting or frightening is another question.

Stephen Hawking on AI replacing humans:

‘The genie is out of the bottle. We need to move forward on artificial intelligence development but we also need to be mindful of its very real dangers. I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans.’

From an interview with Wired, November 2017

