Data as the fuel of the future

Better use of data should reduce the number of vehicles on the road and increase the utilisation rate. Peter Rae
by Mark Eggleton

This content is produced by The Australian Financial Review in commercial partnership with KPMG.

Digital data has been called the oil, or even the soil, of the 21st century. Either analogy is apt: both illustrate the importance of data and how it will power or feed the world economy in the digital age. Data is the fuel of the future as well as the rich soil in which everything will grow.

Its reach already spreads far and wide, from the consumer data story, such as knowledge of our online search history, financial transactions and social media interactions, to the analysis of whole industries, small businesses and traffic patterns.

It has moved far beyond crunching numbers to determine your age, gender and income, to gaining real-time insights into how the world works and (hopefully) how it can be improved.

A good example is the humble automobile. There are millions of cars on the road, yet most sit idle more than 95 per cent of the time. Better use of data should reduce the number of vehicles on the roads and increase the utilisation rate of those that remain. Car-share programs and driverless vehicles will mean fewer cars are sold, but the data generated by each vehicle, each user and their travel patterns will be invaluable.

Yet while every technological soothsayer suggests all of this is just around the corner, there is still an extraordinary amount we do not know as the new economy takes shape.

NSW Chief Data Scientist and CEO of the NSW Data Analytics Centre Dr Ian Oppermann is a little more sanguine about where we are now.

"The way companies have been taking advantage of a new digital future is better targeted services, more individual services. Quite often we talk about know your customer or a market size of one.

"For government, we're helping agencies re-think the delivery of services. It's doing old things in new ways where you can actually look across barriers, across agency boundaries and silos."

Oppermann warns that organisations should not fall into the trap of making decisions based solely on data, as data is only a simplistic observation of the world.

"What we do with data analytics or with artificial intelligence is we try to recapture the information in that data and then make informed decisions based on the little pieces of information scattered throughout many different sources."

He cautions against blind faith in algorithms or in data "as a dangerous place to be".

"It's like following the GPS down a goat track even though when you look outside into the real world you realise you really should not be driving down that road. As long as we're aware of the risk and as long as we question the results then I think we stand ourselves in good stead to make better decisions.

"What we ultimately want is to help people use more data from a variety of sources to assist them make better decisions, but we also have to build trust."

Oppermann says trust has many aspects and building it is an evolving journey. He cites what we already do with banks as an example of how far along that journey we are when it comes to digital trust.

"The data which a bank holds is our salaries, our pay; it's something we can translate to cash, but realistically banks are data centres or trust centres. We are quite comfortable having our salary paid into a bank where it goes in as data, sits there as data, and we draw it out as data until we use an ATM. All the while it's just data until we get it to manifest as a polymer bank note.

"We trust it implicitly and explicitly because we have been trusting banks for hundreds of years. Most of the time we're pretty comfortable dealing with a bank, but if you take that same data and say, now this data isn't money, it is information about me or information about my preferences or other people, we don't actually have that same level of trust.

"Even if the governance processes, the security processes, the decision-making algorithms, are exactly the same we don't have that same level of trust because we are not used to the idea of a government or a web services company delivering services to us in a way that we've interacted with for hundreds of years."

For Oppermann, our data journey is similar to the journey from gold to paper, to bank notes and now to data. He says ultimately trust will build slowly because of reliable and expected performance.

Part of the trust problem exists around the fear of too much information being held by too few. The big data refineries such as Amazon, Alphabet, Facebook and Apple already have a huge first-mover advantage, as do many financial institutions. This has bred the fear that they are too large and in danger of becoming monopolistic in the same way Standard Oil was in the United States in the early 20th century. Many have asked whether they need to be broken up or heavily regulated.

Professor Sander Klous, partner in charge of KPMG's Data &amp; Analytics practice in the Netherlands, says governments are trying to play catch-up, but it is difficult because the old rules around ethics have been turned on their head.

"We know how to apply them to human activity but how do you apply ethics to an algorithm? It's something completely new," he says.

As to whether the big data refineries should be broken up, he indicates that data is the new element in antitrust considerations. It is a winner-takes-all ecosystem, in which it becomes impossible to outperform the largest players because of all the data they possess.

"It's a bit like the big banks where governments wanted to exercise some control over them because they became too big to fail. It's the same with Google or a Facebook, if either of them broke down tomorrow, you could claim they are probably too big to fail as well," Klous says.

Klous suggests the sheer size of the large data refineries will eventually see them broken up because their model is unsustainable.

"The data refineries are basically just really big pipes where raw material is processed and something smart comes out. So the whole idea that one party is controlling that pipeline is not a sustainable model.

"I think what you need in the end is multiple parties that are able to work together in a platform-like structure, and the data refinery has to become more complex because there is more than one party in control."

He draws an analogy with traffic lights, where one party is in control, as opposed to a roundabout, which is a simpler concept with more parties in control, as long as everyone abides by a simple rule.

"In a roundabout, there is a simple standard where right goes first and then you make your decision to enter and everything flows. The same applies in a data environment where there is a simple set of standards that need to be complied with and then as you add intelligence (or information) to the data refinery it informs decisions. You're not relying on one single party to make the right decisions."

He says data refineries will eventually evolve into this platform model, in which multiple parties work together to create value, and that domain-specific refineries will emerge in areas such as health and logistics, where the dominant players will cooperate.

As for the future, Klous says we are inevitably moving towards a smart society but there are things we need to get under control. Ideally, he would like to see some sort of control framework in place that allows individuals to be able to trust what the large data refineries are actually doing.

"We are sorting out how to deal properly with privacy and other ethical issues without losing benefits like more convenience or increased efficiency."

reports.afr.com