Guest-blog: Neil Cattermull – Digital Transformation and ‘Open source on steroids’

Neil Cattermull


The phrase ‘Digital Transformation’ seems to crop up every day in discussions at executive board and C-suite level. But exactly why are these discussions important, and why should they be a priority?

Firstly, what is Digital Transformation and why is it so important?




Let’s start from the beginning. Heraclitus once said:
“The only thing that is constant is change,” – and this is very true and relevant today.

With major advances in technology and in access to digital media over the past 10 years, people now view technology, and learn, in a completely different way.
This has created a pressing need for companies to evolve and stay relevant, transforming the way they run their business and the way they train their staff.

With the attention span of millennials generally shorter than that of their predecessors, businesses must change the way they interact with millennial employees and customers.

Looked at from an internal perspective too, everything from employee training to onboarding and productivity can be improved through digital transformation done in the right way.
It is important to remember, though, that digital transformation will generate some push-back and resistance. This is entirely normal, and it is another reason why it must be implemented in the right way.

Effectively, Digital Transformation is an ongoing effort to rewire all operations for the ever-evolving digital world, by adopting the latest technologies in order to improve processes, strategies, and the bottom line.

The term ‘digital transformation’ emerged decades ago, when it largely meant digitising. Today, a company needs to leverage digital tools to become more competitive, not just more digital.

Going forward, companies will need to harness machine learning (ML), artificial intelligence (AI) and the Internet of things (IoT) to be pre-emptive in their business strategies, rather than reactive or presumptive.

And after that? We can only speculate. Technology is advancing at a faster pace than we can adapt to it. What is clear is that digital maturity is a moving target, which makes digital transformation ongoing.

Today I have the pleasure of introducing another guest blogger, Neil Cattermull. Neil is a public speaker and commercial director based in London, United Kingdom, and a public figure writing about technology and entrepreneurship. He is considered a global industry influencer and authority within the tech scene.

Neil has travelled the world assisting firms of all sizes with their business models and is ranked as a global business influencer and technical analyst. He has held directorships within technology divisions in the financial services market, at firms such as Merrill Lynch, WestLB and Thomson Financial, and he has founded many small to mid-size organisations.

Neil is going to discuss ‘Open Source on Steroids’.

The words “digital transformation” are on the lips of every person in technology and tech media, as well as many business leaders – from company CIOs and CTOs, to technology and business line managers, to writers in news publications and tech blogs.

At its core, a digital transformation is the enablement of technologies and workplaces tuned to today’s digital economy. The beating heart of this digital economy is the API, now joined by emerging technologies like IoT (the Internet of Things) and FinTech technologies like Blockchain.

Today the transformation of processes, IT services, database schemas and storage is proceeding at an exponential rate, with Cloud, AI and Big Data currently taking centre stage as new ways of working in the enterprise.

The glue to the majority of developments in the technology world is the adoption, proliferation and acceptance of Open Source technologies. Community developments such as Hadoop, Apache Spark, MongoDB, Ubuntu and the Hyperledger project are some of the names that freely fall from any Open Source discussion.
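To give a flavour of what that familiar tooling looks like in practice, here is a minimal Apache Spark sketch (not from the original post) of the kind of aggregation job such a stack makes routine – the file path and column names are purely illustrative assumptions:

```python
# Minimal Apache Spark sketch: aggregate transaction volumes from a CSV feed.
# The file path and column names are illustrative assumptions, not real data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("transaction-volumes").getOrCreate()

# Read raw transactional data, then derive something meaningful from it.
transactions = spark.read.csv("transactions.csv", header=True, inferSchema=True)

daily_totals = (
    transactions
    .groupBy("trade_date")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("trade_count"))
    .orderBy("trade_date")
)

daily_totals.show()
spark.stop()
```

The same script runs unchanged wherever a Spark cluster is available – which is precisely the portability argument made for these Open Source stacks.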

The questions are: where do you run these workloads? How do you run them? And what form should they take?

Most major companies will have mainframe systems at the core of their IT, so the question is really whether to run new workloads there or on other platforms.

Any building architect will tell you that before tearing down the walls of an old house or doing any significant structural changes, you should always consult the original architectural plans. In the same way, any systems architect would look very closely at what a mainframe system is doing now before considering running workloads elsewhere.

However, it is imperative to understand the difference between mainframe and midrange server technology at a very high level:

• Mainframe systems are designed to scale vertically, not horizontally
• Input/output in a mainframe is designed to move processing away from the core processors, with very fast I/O built in at the hardware layer
• Centralised architecture is a key feature, allowing mainframe systems to manage huge workloads extremely efficiently – catering for 100% utilisation without any degradation of performance
• Resilience is built into every key component of a mainframe, with redundancy at the core
• At a transactional level, no other system comes anywhere near the volume of data a mainframe can process.

An argument against the mainframe could be to decouple software systems onto commodity hardware or cloud systems; but this tends to create server and cost sprawl, particularly if an important goal is to mirror the mainframe’s performance, security, transaction throughput capacity, reliability, maintainability and flexibility.

But as we move further into the world of IoT, with databases and Big Data systems ingesting vast amounts of social media and transactional data, how are we scoping the growth and security of these systems?
We are not, if we simply keep adding to existing IT infrastructures – we need to be able to scale access and throughput to manage, interrogate and optimise the hottest commodity we have: data!

Data is becoming a currency in its own right, but we need to secure this new currency in the way we secure traditional monetary systems today.
And perhaps the best way to leverage this valuable asset is via APIs that allow enterprises to take advantage of the mainframe investments already made – enter LinuxONE, an Open Source-ready mainframe that provides a familiar Open Source tooling stack on steroids!
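As a rough sketch of what “leverage via APIs” can look like from the consuming side, the snippet below calls a REST endpoint that fronts an existing mainframe transaction. The gateway URL, token and JSON fields are hypothetical assumptions for illustration, not a real product API:

```python
# Hypothetical sketch: consuming a mainframe-hosted service through a REST API.
# The endpoint URL, token and JSON fields are illustrative assumptions only.
import requests

API_BASE = "https://mainframe-gateway.example.com/api/v1"  # hypothetical gateway


def get_account_balance(account_id: str, token: str) -> float:
    """Call a (hypothetical) API that fronts an existing mainframe transaction."""
    response = requests.get(
        f"{API_BASE}/accounts/{account_id}/balance",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["balance"]


if __name__ == "__main__":
    print(get_account_balance("12345678", token="example-token"))
```

The point of the pattern is that the consuming application neither knows nor cares that the system of record behind the API is a mainframe – the existing investment is reused rather than re-platformed.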

The argument here is that a digital transformation is more than just empty words and data thrown onto cloud servers; it is a state of mind, and an architecture that should encompass current and future systems in support of overall business goals.

At the heart of this goal is the end-user consumer, something that every systems architect should be very mindful of; however, downtime and security are quite often understated when creating the initial framework for key infrastructure projects.

These key elements must be baked into every project and sit at the very core of future technology initiatives – something that the Open Source-ready LinuxONE infrastructure delivers extremely well.

You can contact Neil Cattermull:
– LinkedIn: linkedin.com/in/neilcattermull
– Twitter: @NeilCattermull
– email: Neil.Cattermull@gmail.com

Not just data… Meaningful Data that enables decisions

On the board of a company that I represent as a Non-Executive Director, I have been discussing in great detail the subject of meaningful data – its value, compared with mere data and information, in making informed decisions across the business. As the subject seems to be becoming a business imperative, I thought it a great opportunity for my next blog discussion.

It is very clear in today’s world that most organisations recognise that being a successful, data-driven company requires skilled developers and analysts. Fewer grasp how to use data to tell a meaningful story that resonates both intellectually and emotionally with an audience.

Joseph Rudyard Kipling was an English journalist, short-story writer, poet, and novelist who once wrote, “If history were taught in the form of stories, it would never be forgotten.” The same applies to data. Companies must understand that data will be remembered only if presented in the right way. And often a slide, spreadsheet or graph is not the right way; a story is.

Boards, executives and managers are being bombarded with dashboards brimming with analytics, yet they struggle with data-driven decision-making because they do not know the story behind the data.

Sometimes the right data is big. Sometimes the right data is small. But for innovators the key is figuring out which critical pieces of data drive competitive position. Those are the pieces of right data that you should seek out fervently. To get there, I would strongly suggest asking the following three questions as a process for drilling down to the right data.

  1. What decisions drive waste in your business?
  2. Which decisions could you automate to reduce waste?
  3. What data would you need to do so?

Information systems may differ wildly in form and application, but essentially they serve a common purpose: converting data into meaningful information, which in turn enables the organisation to build knowledge:

Data is unprocessed facts and figures without any added interpretation or analysis. “The price of crude oil is £50 per barrel.”

Information is data that has been interpreted so that it has meaning for the user. “The price of crude oil has risen from £30 to £50 per barrel” gives meaning to the data and so is said to be information to someone who tracks oil prices.

Knowledge is a combination of information, experience and insight that may benefit the individual or the organisation. “When crude oil prices go up by £10 per barrel, it’s likely that petrol prices will rise by 2p per litre” is knowledge.
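As a minimal illustration of that progression, here is a short sketch using the oil-price figures above. The threshold and the petrol-price rule come straight from the example; the variable names and print statements are mine:

```python
# Data -> information -> knowledge, using the crude-oil example above.
# Figures come from the text; the helper names are illustrative only.

# Data: unprocessed facts and figures.
old_price, new_price = 30.0, 50.0  # GBP per barrel

# Information: data interpreted so that it has meaning.
rise = new_price - old_price
print(f"The price of crude oil has risen from £{old_price:.0f} to £{new_price:.0f} "
      f"per barrel (a rise of £{rise:.0f}).")

# Knowledge: information combined with experience and insight.
# Rule from the text: each £10/barrel rise tends to add roughly 2p per litre to petrol.
expected_petrol_rise_pence = (rise / 10.0) * 2.0
print(f"Expected petrol price rise: about {expected_petrol_rise_pence:.0f}p per litre.")
```

Nothing in the code is clever; the value is entirely in the interpretation and the rule of thumb layered on top of the raw numbers.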

The boundaries between the three terms are not always clear. What is data to one person is information to someone else. To a commodities trader for example, slight changes in the sea of numbers on a computer screen convey messages which act as information that enables a trader to take action. To almost anyone else they would look like raw data. What matters are the concepts and your ability to use data to build meaningful information and knowledge.

The ability to gather meaningful data is as important as the insights the data can generate. Those insights, the end result of any data collection, are what people see and judge.
The hard truth here is that bad data leads to bad decisions. Thus, it is important to take the time necessary to build a proper data collection process.

Data is meaningful if we have some way to act upon it. Otherwise, we are mere spectators. This is one of the most problematic aspects of the current fetish of data visualisation, which appears to treat data as an unquestionable justification for itself, rather than as a proxy for things that we actually want to understand or probe.

You generally can’t put yourself into a visualisation, tell it a little about yourself, and nudge it towards a better understanding of the questions you want to ask of it (like you would any person you want to find out more about).

If we are satisfied with mere data, datasets or data visualisations as the end goal – rather than all the contextual complexity behind who, why and how it was collected, and what was excluded from the presentation – then we are contenting ourselves with just one dimension, not four.

Data doesn’t need to be numeric, digital or electronic; it’s anything that helps you to make an assessment, and in many senses if it’s non-digital it can integrate a whole host of other phenomena, providing a much deeper, if more complex, proxy.

A wonderful example of this was an air quality experiment led by Professor Barbara Maher of Lancaster University. In the test, four houses had 30 potted birch trees placed directly outside their doors, while four households, acting as control subjects, did not have any trees placed outside.

A major innovation in the experiment was that levels of particulate pollution were evaluated by collecting dust particles that settled on television screens, which had been wiped clean at the beginning of the experiment, and comparing the two sets of households to see which had amassed more particulates. The experiment showed – viscerally, visibly and physically – that planting trees reduced particulate pollution. It didn’t require a digital sensor sitting on a mantelpiece.

DIY data
One of the best ways to make data more meaningful is to make it yourself. Measure something – your body, your home, your neighbourhood – and it helps you to not only understand something about it, but more importantly it helps you to figure out the questions you want to ask and the hypotheses you want to assess. Measuring something yourself (the way your body temperature fluctuates; the cycles of noise in your neighbourhood) means you can better decide why and what you might do to affect or act upon it.

A city hackathon bringing dozens, if not hundreds, of software developers together for a short space of time to work for free on government-approved historical datasets is all well and good, but you have to ask how transformative it actually is to work on something without questioning why and how the data was collected, or which data has been excluded.

Collective collecting
When you join with others to measure something, you make meaning by having conversations about the data you are collecting. Sensemaking in this situation becomes a collective activity – you don’t even need to be using the same measuring equipment, you just need to be able to talk about what you’re doing with each other. “I’m measuring air quality,” you say. “Well I’m recording atmospheric humidity levels,” says your neighbour. Have a discussion and you’ll start to build up an intuition of how they correlate, or even better, look at ways of affecting them together, ideally for the better.
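As a small sketch of what that shared sensemaking might look like once the conversation turns to numbers, the snippet below compares two neighbours’ series with a simple Pearson correlation. The readings are invented for illustration; only the method matters:

```python
# Sketch: two neighbours compare their measurements to look for a relationship.
# The readings are invented; only the method (a Pearson correlation) matters.
from statistics import correlation  # available in Python 3.10+

air_quality_index = [42, 55, 61, 48, 70, 66, 53]  # my daily readings
humidity_percent = [60, 72, 78, 65, 85, 80, 70]   # my neighbour's readings

r = correlation(air_quality_index, humidity_percent)
print(f"Pearson correlation between the two series: {r:.2f}")
```

A single coefficient is not proof of anything, but it is exactly the kind of prompt that turns two separate measuring hobbies into a shared conversation about cause and effect.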

User experience
The most important aspect of making data more meaningful is to experience it, somehow, in situ. Even if you were not part of the process of collecting a dataset, being near to where and when it was captured means you are far more likely to be able to integrate all the unspoken, ambient, implicit, informal and unrecorded metadata that datasets and visualisations strip out with their numeric authority.

To stand in a space, a neighbourhood or a city and experience its windy mess while simultaneously being able to interrogate, prod and affect a dataset provides you with the kind of multivalence that is crucial to constructing any useful meaning. You are far more likely to be held accountable, and to hold others accountable, for making use of the data in any decision making process.

Most captivating storytellers grasp the importance of understanding the audience. They might tell the same story to a child and adult, but the intonation and delivery will be different. In the same way, a data-based story should be adjusted based on the listener. For example, when speaking to an executive, statistics are likely key to the conversation, but a business intelligence manager would likely find methods and techniques just as important to the story.

In a Harvard Business Review article titled “How to Tell a Story with Data,” Dell Executive Strategist Jim Stikeleather segments listeners into five main audiences: novice, generalist, management, expert and executive. The novice is new to a subject but doesn’t want oversimplification.
The generalist is aware of a topic but looks for an overview and the story’s major themes. The management seeks in-depth, actionable understanding of a story’s intricacies and interrelationships with access to detail. The expert wants more exploration and discovery and less storytelling. And the executive needs to know the significance and conclusions of weighted probabilities.

Discerning an audience’s level of understanding and objectives will help the storyteller to create a narrative. But how should we tell the story? The answer to this question is crucial because it will define whether the story will be heard or not.

As Stewart Butterfield once said:

“Hard numbers tell an important story; user stats and sales numbers will always be key metrics. But every day, your users are sharing a huge amount of qualitative data, too – and a lot of companies either don’t know how or forget to act on it.”