2017-01-11

Big data is no longer just a buzzword. In EMEA we’ve seen changes in how IT views big data. The emphasis is no longer just on understanding big data technologies: businesses want to learn about new projects and, most importantly, how organisations are actually benefiting from the technology in practice. MapR founder and executive chairman John Schroeder recently shared his predictions on the direction of big data in 2017. I’ve included them here with some added perspective on what they mean for EMEA. As the number of big data deployments continues to grow globally, the focus for businesses will move toward the value of data.

Welcome back, AI

Looking back to the 1960s, Ray Solomonoff introduced a universal Bayesian method for inductive inference and prediction, laying the foundations of a mathematical theory of artificial intelligence (AI). Fast forward to today and AI is re-entering mainstream discussions about cognitive computing, machine learning and machine intelligence. In EMEA we’ve seen this applied to financial and retail use cases, and expect to see more.
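For a sense of what Bayesian prediction means in miniature, here is a toy sketch in Python (a textbook beta-binomial update with made-up counts, not Solomonoff’s full theory):

```python
# Predicting the probability of an event after observing data,
# starting from a uniform Beta(1, 1) prior.
successes, failures = 7, 3          # made-up observations
alpha, beta = 1 + successes, 1 + failures

# The posterior mean is the Bayesian prediction for the next trial
# (Laplace's rule of succession).
next_trial_probability = alpha / (alpha + beta)
print(next_trial_probability)       # 8/12 = 0.666...
```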

There are a few reasons for the renewed interest in AI, boiling down to the “three Vs”: velocity, variety and volume. Any platform that can process the three Vs with modern and traditional processing models that scale horizontally can deliver 10-20 times the cost efficiency of traditional platforms.

Google has also demonstrated that simple algorithms executed frequently against large datasets often achieve better results than more sophisticated approaches applied to smaller sets. In 2017, we’re set to gain the best value from AI when it is applied to high-volume repetitive tasks, where consistency is more effective than human intuition.
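As a toy illustration of why sheer volume helps (made-up numbers; this is just the law of large numbers at work, not Google’s methodology), even the simplest possible estimator gets more accurate as the dataset grows:

```python
import random

random.seed(0)
true_rate = 0.61  # the underlying signal our 'simple algorithm' estimates

# The simplest model imaginable: estimate the rate by sample frequency.
for n in [100, 10_000, 1_000_000]:
    hits = sum(random.random() < true_rate for _ in range(n))
    estimate = hits / n
    print(f"n={n:>9,}  error={abs(estimate - true_rate):.4f}")  # error tends to shrink with n
```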

Governance or competitive advantage?

Businesses have access to a host of key information about customers and partners, which means the tug of war between data value and governance will not only rumble on in 2017 but take pole position. Leading businesses will manage their data between regulated and non-regulated use cases.

In regulated scenarios, it’s vital that data can be reported on and traced through any transformation process back to its original source. While this is mandatory for regulatory use cases, it is limiting for non-regulatory use cases such as customer 360, where a mix of higher-cardinality, real-time, structured and unstructured data yields more effective results.
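A minimal sketch of what that traceability can look like at the record level (hypothetical field names in Python, not any particular governance tool): each transformation appends a lineage entry pointing back to the original source:

```python
from datetime import datetime, timezone

def transform(record, step_name, fn):
    """Apply a transformation and append a lineage entry, so the record
    can be traced back through every step to its original source."""
    new_value = fn(record["value"])
    lineage = record["lineage"] + [{
        "step": step_name,
        "at": datetime.now(timezone.utc).isoformat(),
    }]
    return {"value": new_value, "source": record["source"], "lineage": lineage}

raw = {"value": " 1,200 GBP ", "source": "crm_export_2016-12.csv", "lineage": []}
cleaned = transform(raw, "strip_whitespace", str.strip)
amount = transform(cleaned, "parse_amount",
                   lambda v: float(v.split()[0].replace(",", "")))
print(amount["source"], [s["step"] for s in amount["lineage"]])
```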

A business-driven approach to data

The next year will see organisations move away from a “build it and they will come” attitude to data lakes. Instead, 2017 will be the year of the business-driven data approach. Today’s businesses need a convergence of historical analytics with immediate operational capabilities to address customers, process claims and interface with devices in real time at an individual level.

Merging analytics and operational applications shows up in several use cases: ecommerce sites now need to provide personal recommendations and price checks in real time; insurance firms need to determine which claims are fraudulent and which are valid by combining analytics with fast transactional processing systems; and media companies need to personalise content through set-top boxes.
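Taking the ecommerce case as a minimal sketch (Python with made-up data and hypothetical names, not a description of any specific stack), the point is that a recommendation blends scores precomputed by batch analytics with what the operational system sees in the live session:

```python
# Batch side: per-user item affinities precomputed overnight
# from historical analytics.
batch_scores = {"u1": {"tent": 0.9, "stove": 0.7, "boots": 0.4}}

def recommend(user_id, viewed_in_session, top_n=2):
    """Operational side: adjust precomputed scores with live session data."""
    scores = dict(batch_scores.get(user_id, {}))
    for item in viewed_in_session:
        if item in scores:
            scores[item] += 0.6  # hypothetical real-time boost
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("u1", viewed_in_session=["boots"]))  # ['boots', 'tent']
```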

To deliver these, organisations need access to an agile platform that provides both operational and analytical processing. As such, 2017 will be the year global companies move beyond simply asking questions and start architecting to drive long-term value for the business.

Competitive advantage through agility

As businesses begin to recognise the importance of understanding data in context and taking subsequent business action, the coming year will see processing and analytic models evolve to provide similar levels of agility.

This source of competitive advantage isn’t achievable with a large data lake alone. Agile processing models will continue to emerge, enabling the same instance of data to support batch analytics, interactive analytics, global messaging, database and file-based models. Not only that, a more agile approach to analytics arises when a single instance of data supports a broader set of tools.

Berlin-based HelloFresh is a great example: by processing data faster and adding even more data to the mix, the company can see how datasets relate to one another and recognise more advanced relationships. The end result is an agile development and application platform, able to support the broadest range of processing and analytic models.

The transformational value of blockchain

In 2017, we are set to see a rise in the use of blockchain, which changes the way data is stored and transactions are processed by providing a global distributed ledger. Because the blockchain runs on computers distributed worldwide, the chains can be viewed by anyone. Transactions are stored in timestamped blocks, each linked to the one before it, which means the data cannot be altered retroactively without rewriting every later block. That makes it incredibly difficult for hackers to tamper with the ledger, since copies of the blockchain are spread throughout the world and can be inspected by everyone.
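Here is a minimal sketch of that chaining idea in Python (illustrative helper names; not any particular blockchain’s implementation):

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    """Create a timestamped block linked to its predecessor."""
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }

# Build a tiny chain: each block commits to the hash of the one before it.
genesis = make_block(["genesis"], prev_hash="0" * 64)
block1 = make_block(["alice pays bob 5"], prev_hash=block_hash(genesis))
block2 = make_block(["bob pays carol 2"], prev_hash=block_hash(block1))

# Tampering with an earlier block changes its hash and breaks the link
# stored in every later block, which is why alterations are evident.
genesis["transactions"] = ["genesis", "forged payment"]
assert block1["prev_hash"] != block_hash(genesis)
```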

As an example of this in practice, customers would no longer need to worry about the impact of a leak at a central data centre, or wait for a SWIFT transaction to clear. In terms of business value, blockchain provides cost savings as well as competitive advantage. The financial services industry will be a key driver of change here, with broad implications for the way data is stored and transactions are processed within the industry.

Microservices and machine learning

In the coming year, there will be a substantial increase in activity around integrating microservices with machine learning. In the past, microservice deployments have focused primarily on lightweight services, and those that have incorporated machine learning have been limited to so-called ‘fast data’ integrations applied to narrow bands of streaming data.

In 2017, we will see a shift in development toward stateful applications that make the most of big data. When data is generated across multiple countries and advanced analytic applications run against it, microservices become even more critical. Microservices are simple processes that work together to enable the development of next-generation applications. These applications will incorporate machine learning to look back at historical data, build a better picture of the context around incoming streaming data, and in turn provide more accurate insights.
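As a minimal sketch of what “stateful” means here (plain Python with hypothetical names; not a MapR API), consider a small service that keeps per-customer history so each streaming event is scored in its historical context:

```python
from collections import defaultdict, deque

class FraudScorer:
    """Hypothetical stateful microservice: scores streaming transactions
    against per-customer history rather than each event in isolation."""

    def __init__(self, window=50):
        # Rolling window of recent amounts per customer (the service's state).
        self.history = defaultdict(lambda: deque(maxlen=window))

    def score(self, customer_id, amount):
        past = self.history[customer_id]
        baseline = sum(past) / len(past) if past else amount
        # Naive rule standing in for a trained model: flag amounts far
        # above the customer's historical average.
        suspicious = len(past) >= 5 and amount > 3 * baseline
        past.append(amount)
        return suspicious

scorer = FraudScorer()
for amount in [20, 25, 22, 19, 24, 21, 250]:
    print(amount, scorer.score("customer-42", amount))  # only 250 is flagged
```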

Martin Darling, Vice President, UK, Ireland and Middle East, MapR Technologies

