Guest-blog: Neil Cattermull – ‘A Guide to Machine Learning’

Neil Cattermull

Because of new computing technologies, machine learning today is not like machine learning of the past. It was born from pattern recognition and the theory that computers can learn without being programmed to perform specific tasks; researchers interested in Artificial Intelligence wanted to see if computers could learn from data.

The iterative aspect of machine learning is important because as models are exposed to new data, they are able to independently adapt. They learn from previous computations to produce reliable, repeatable decisions and results. It’s a science that’s not new, but one that has gained fresh momentum.
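That idea of learning from previous computations can be sketched in a few lines. This is a purely illustrative example (not from the original post): a running estimate that refines itself with every new observation it sees, without being reprogrammed.

```python
def online_mean(stream):
    """Update a running estimate incrementally as each new data point arrives."""
    estimate, n = 0.0, 0
    for x in stream:
        n += 1
        # Nudge the old estimate toward the new observation: the model
        # adapts to new data while retaining what it learned before.
        estimate += (x - estimate) / n
    return estimate

print(online_mean([2.0, 4.0, 6.0]))  # converges to the average: 4.0
```

The same incremental pattern underlies far more sophisticated online learning algorithms, where a model’s parameters are adjusted a little with every new example.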

While many machine learning algorithms have been around for a long time, the ability to automatically apply complex mathematical calculations to big data, over and over, faster and faster, is a recent development. Here are a few widely publicized examples of machine learning applications you may be familiar with:

• The heavily hyped, self-driving Google car? The essence of machine learning.
• Online recommendation offers such as those from Amazon and Netflix? Machine learning applications for everyday life.
• Knowing what customers are saying about you on Twitter? Machine learning combined with linguistic rule creation.
• Fraud detection? One of the more obvious, important uses in our world today.

Resurging interest in machine learning is due to the same factors that have made data mining and Bayesian analysis more popular than ever: growing volumes and varieties of available data, computational processing that is cheaper and more powerful, and affordable data storage.

All of these things mean it’s possible to quickly and automatically produce models that can analyze bigger, more complex data and deliver faster, more accurate results – even on a very large scale. And by building precise models, an organization has a better chance of identifying profitable opportunities – or avoiding unknown risks.

In the second of a two-part blog series (“Digital Transformation”), we welcome back Neil Cattermull – a public speaker and commercial director based in London, United Kingdom. A prolific writer on technology and entrepreneurship, he is considered a global industry influencer and an authority on the tech scene.

Neil has travelled around the world assisting small to large firms with their business models, and is ranked as a global business influencer and technical analyst.

Neil has held directorships in technology divisions within the financial services market, including Merrill Lynch, WestLB and Thomson Financial, and has created many small to midsize organisations.

Neil is going to discuss ‘A Guide to Machine Learning’.

Thank you Geoff,
Machine learning is one of the most innovative and interesting fields of modern science today. It’s something you probably associate with the likes of Watson, Deep Blue, and even the famous Netflix algorithm.

However, as sparkly as it is, machine learning isn’t exactly something totally new. In fact, the concept and science of machine learning has been around for much longer than you think.

The beginnings of machine learning
Thomas Bayes is considered by many to be the father of machine learning, yet his theorem was pretty much left alone until the rockin’ 50s when, in 1950, famed scientist Alan Turing managed to create and develop his imaginatively named ‘Alan Turing’s Learning Machine’.

The machine itself was capable of putting into practice what Thomas Bayes had conceptualised 187 years earlier. This was a huge breakthrough for the field and along with the acceleration of computer development, the next few decades saw a gigantic rise in development of machine learning techniques such as artificial neural networks, and explanation based learning.
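For readers who haven’t met it, the theorem Bayes conceptualised can be stated as P(H|E) = P(E|H) · P(H) / P(E). Here is a minimal worked sketch with purely illustrative numbers (not drawn from the post): a test that is 90% sensitive for a condition with 1% prevalence and a 5% false-positive rate, which gives an overall positive rate of 0.90 × 0.01 + 0.05 × 0.99 = 0.0585.

```python
def posterior(prior, likelihood, evidence):
    """P(hypothesis | evidence) via Bayes' rule: P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence

# How likely is the condition given a positive test result?
p = posterior(prior=0.01, likelihood=0.90, evidence=0.0585)
print(round(p, 3))  # 0.154 – a positive result still leaves ~15% probability
```

The counter-intuitive smallness of that posterior is exactly the kind of reasoning-under-uncertainty that Bayes’ theorem made mechanical, and that later machine learning systems put into practice.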
These formed the basis of modern systems being managed by artificial intelligence. The latter being arguably the most integral to the development of systems management technology.

Explanation-based learning was primarily developed by Gerald DeJong at the University of Illinois. He essentially managed to build upon previous methods and develop a new kind of algorithm: enter the explanation-based learning algorithm!

Yes, the explanation-based learning algorithm was fairly standard in that it created new rules based on what had happened before. However, what set it apart as a breakthrough is that DeJong had managed to create something that could independently disregard older rules once they had become unnecessary.
Explanation-based learning was one of the key technologies behind chess-playing AIs such as IBM’s Deep Blue.
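That discard-when-unnecessary behaviour can be sketched as a toy rule store. This is a purely hypothetical illustration of the idea in the text, not the historical system: rules learned from past play are kept while they stay useful, and pruned once they sit idle too long.

```python
class RuleStore:
    """Toy store of learned rules that forgets rules once they go unused."""

    def __init__(self, max_idle=3):
        self.rules = {}          # rule name -> steps since last use
        self.max_idle = max_idle

    def learn(self, rule):
        self.rules[rule] = 0     # a freshly learned rule starts fresh

    def step(self, used):
        """Advance one step: reset used rules, age the rest, prune stale ones."""
        for rule in list(self.rules):
            self.rules[rule] = 0 if rule in used else self.rules[rule] + 1
            if self.rules[rule] > self.max_idle:
                del self.rules[rule]  # the rule became unnecessary: discard it

store = RuleStore(max_idle=1)
store.learn("castle-early")
store.learn("trade-queens")
store.step(used={"castle-early"})   # "trade-queens" ages one step
store.step(used={"castle-early"})   # "trade-queens" exceeds max_idle: pruned
print(sorted(store.rules))  # ['castle-early']
```

The rule names here are invented chess-flavoured placeholders; the point is only the mechanism of keeping knowledge that earns its keep and dropping the rest.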

A cold AI Winter
However, there was a period during the 70s when funding was disastrously reduced because people had started thinking that machine learning wasn’t living up to its original billing.
This was compounded when Sir James Lighthill released his independent report which stated that the grandiose expectations of what artificial intelligence and machine learning could achieve would never be fulfilled.

This report led to many projects being defunded or closed down. This was incredibly unfortunate timing, as the UK was considered a market leader when it came to machine learning. This dark period became known as the ‘AI Winter’ and, bar a momentary slip in the early 90s, was the only real time that the possibilities of machine learning were ever seriously discounted by the scientific community.

Who is pushing the technology forward now?
Machine learning has reached a level now where companies such as DataKinetics have the capability to transform legacy systems into business-driven analytics.

DataKinetics are at the forefront of their field and have been entrusted by many blue-chip companies, such as Nissan and Prudential, to streamline and optimize complex technology environments. With today’s technological advancements, IT professionals are capable of achieving far more thanks to new innovations in machine learning.
However, this is just the beginning – if funding and interest into machine learning and AI remains consistent, there’s no telling what can be achieved.

Machine learning algorithms can predict future outcomes, giving us – the humans – time to react accordingly.

In essence, the main idea behind machine learning is that a computer or a system takes a set of data created previously, applies a set of rules to it, and provides you with an output that helps you act more efficiently.
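A minimal sketch of that idea, with invented numbers: derive a rule from previously created data, then apply it to a new input the system has never seen.

```python
def fit_rule(xs, ys):
    """Learn a proportional rule y ~ w * x from past data (least squares)."""
    w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return lambda x: w * x

# Past data: outputs were always double the inputs, so the learned rule
# is "multiply by 2" – no one programmed that rule in explicitly.
predict = fit_rule([1, 2, 3], [2, 4, 6])
print(predict(10))  # 20.0
```

Real systems learn far richer rules from far messier data, but the shape is the same: past data in, a rule out, and predictions for inputs nobody has seen before.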

In much the same way, there’s a cycle between the innovators and forefathers of machine learning and the companies and groups of people doing that work today.

That’s why companies such as DataKinetics are proud to be associated with such a rich and storied period of human endeavour.
Innovators are just as important as pioneers: without innovation we have static evolution that does not progress our species further, and the tech space faces near-constant change.

DataKinetics are innovators within technology and have had the foresight to predict the evolution of mainframe computing, machine learning and analytics, with a tech roadmap spanning over 30 years!

You can contact Neil Cattermull:
– LinkedIn:
– Twitter: @NeilCattermull
– email:

Share your thoughts with us