Thursday, January 14, 2016

Blueprint: What’s in Store for the Database in 2016?

by Roger Levy, VP of Product at MariaDB

In 2015, CIOs focused on DevOps practices and related technologies such as containers as a way to improve time to value. During 2016, greater attention will be focused on data analytics and data management as a way to improve the timeliness and quality of business decisions. How best to store and manage data is on the minds of most CIOs as they kick off the New Year. It’s exciting to see that databases, which underlie every app and enterprise on the planet, are now back in the spotlight. Here’s what organizations can anticipate in the year ahead.

Securing your data at multiple layers
2015 saw every type of organization, from global retailers to the Catholic Church, experience financial losses and reputation damage from data breaches. Security has long been a concern of CIOs, but the growing frequency of high-profile attacks and new regulations make data protection a critical 2016 priority for businesses, governments, and non-profit organizations.

Organizations can no longer rely on just a firewall to protect their data. Amidst a myriad of threats, a robust security regimen requires multiple levels of protection, including network access controls, firewalls, disk-level encryption, identity management, anti-phishing education, and so forth. Ultimately, hackers want access to the contents of an enterprise's database, so securing the database itself must be a core component of every organization’s IT strategy.


Prudent software development teams will use database technology with native encryption to protect data at rest in the database, and SSL/TLS encryption to protect data as it moves between applications. They will also control access to the database with stronger password validation and a variety of access authorization levels based on a user’s role. Of course, organizations can’t kick back and rely on software alone; they still have to hold themselves accountable via regular audits and testing.
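
As a concrete illustration, here is a minimal sketch of two of those practices, encrypted connections and role-based access, from application code. It assumes a MariaDB-compatible server with TLS configured and the PyMySQL driver installed; the host names, account names and certificate path are placeholders, not details from this article.

```python
# Minimal sketch: connect over TLS, then set up role-based access.
# Assumes a MariaDB-compatible server with TLS enabled and PyMySQL
# (pip install pymysql). All names and paths below are placeholders.
import pymysql

# Connect over TLS so data is encrypted in transit between the
# application and the database.
conn = pymysql.connect(
    host="db.example.internal",
    user="app_admin",
    password="change-me",  # in practice, load from a secrets store
    database="app",
    ssl={"ca": "/etc/ssl/certs/db-ca.pem"},  # verify the server cert
)

with conn.cursor() as cur:
    # Role-based access: a read-only role for reporting users instead
    # of blanket privileges for every account (MariaDB 10.0+ roles).
    cur.execute("CREATE ROLE IF NOT EXISTS reporting")
    cur.execute("GRANT SELECT ON app.* TO reporting")
    cur.execute("GRANT reporting TO 'analyst'@'%'")
conn.commit()
conn.close()
```

Encryption of data at rest is handled on the server side (in MariaDB, via encryption plugin settings in the server configuration), independently of application code like the above.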

Migrating to the cloud 
With the recent revenue announcements by public cloud providers such as Amazon AWS and Microsoft Azure, it is clear that adoption of public cloud services is becoming mainstream. But they may never fully replace on-premises data storage. While the cloud offers greater scalability and flexibility, better business continuity and disaster recovery, and capital cost savings, companies continue, for economic and security reasons, to operate a mix of public cloud, private cloud and traditional on-premises data management solutions.


Managing data across multiple environments also presents challenges. Navigating the myriad of data privacy regulations across the globe, integrating applications and data across private and public infrastructures, and managing latency are just a few of the hurdles organizations face in their migration to the cloud. Enter the hybrid cloud, where IT organizations combine the benefits of traditional data storage, private cloud and public cloud in a single architecture.

In 2016, we’ll likely see hybrid clouds surge in popularity as an alternative to a purely public or purely private cloud solution. Greater focus will be applied to developing solutions that make migration to hybrid cloud infrastructures more secure and efficient, and to specific use cases such as cloud bursting when demand spikes, or disaster recovery by replicating databases to the cloud as backups, as sketched below.
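
For the disaster recovery case, here is a minimal sketch of what pointing a cloud-hosted MariaDB replica at an on-premises primary might look like. It assumes GTID-based replication, a replication user already created on the primary, and the PyMySQL driver; every host name and credential is a placeholder.

```python
# Minimal sketch: configure a cloud-hosted MariaDB replica as a
# disaster-recovery copy of an on-premises primary. Assumes PyMySQL
# and a 'repl' user already granted REPLICATION SLAVE on the primary.
import pymysql

replica = pymysql.connect(host="replica.cloud.example.com",
                          user="admin", password="change-me")

with replica.cursor() as cur:
    # Point the replica at the primary using MariaDB's GTID tracking.
    cur.execute("""
        CHANGE MASTER TO
            MASTER_HOST = 'primary.onprem.example.com',
            MASTER_USER = 'repl',
            MASTER_PASSWORD = 'change-me',
            MASTER_USE_GTID = slave_pos
    """)
    cur.execute("START SLAVE")      # replica begins applying changes
    cur.execute("SHOW SLAVE STATUS")
    print(cur.fetchone())           # inspect replication health
```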

Multi-model databases 
The variety, velocity and volume of data are exploding. Every minute we send over 200 million emails and over 300 thousand tweets. By 2013, 90% of the world's data had been created in just the previous two years. But size is not everything. Not only have the volume and velocity of data increased; there is also an increasing variety of formats of data that organizations are collecting, storing and processing.

While different data models have different needs in terms of insert and read rates, query rates and data set size, companies are getting tired of the complexity of juggling different databases. 2016 will kick off an increased trend toward data platforms that offer “polyglot persistence” – the ability to handle multiple data models within a single database. The demand for multi-model databases is exploding as Structured Query Language (SQL) relational data from existing applications and connected devices must be processed alongside JavaScript Object Notation (JSON) documents, unstructured data, graph data, geospatial data and other forms of data generated in social media, customer interactions, and machine-to-machine communications.
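
A minimal sketch of the idea, using Python's built-in sqlite3 module and assuming a SQLite build with the JSON1 extension (shipped since late 2015); the schema and documents are invented for illustration. The point is one engine answering a query that mixes relational columns with document fields.

```python
# Minimal sketch of the multi-model idea: relational columns and JSON
# documents queried side by side in one engine. Assumes a SQLite build
# with the JSON1 extension; the table and data are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE events (
        id      INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL,   -- classic relational column
        payload TEXT NOT NULL       -- JSON document, schema-on-read
    )
""")
db.execute("INSERT INTO events (user_id, payload) VALUES (?, ?)",
           (42, '{"action": "click", "geo": {"lat": 59.3, "lon": 18.1}}'))

# One query mixes relational filtering with document extraction.
row = db.execute("""
    SELECT user_id,
           json_extract(payload, '$.action')  AS action,
           json_extract(payload, '$.geo.lat') AS lat
    FROM events
    WHERE user_id = 42
""").fetchone()
print(row)   # (42, 'click', 59.3)
```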

Growth in applying machine learning
With the rapid growth in the type and volume of data being created and collected comes the opportunity for enterprises to mine that data for valuable information and insights into their business and their customers. As IT recruiters know well, more and more companies are employing specialist “data scientists” to introduce and implement machine learning technologies. But the number of experts in this field simply isn’t growing fast enough, and this rarity makes hiring a data scientist cost-prohibitive for most companies. In fact, the US alone faces a shortage of 140,000 to 190,000 people with analytical expertise and 1.5 million managers and analysts with the skills to understand and make decisions based on the analysis of big data, according to McKinsey & Company. In response, organizations are turning to machine learning tools that enable all of their employees to derive insights without needing to rely on specialists. Just as crucial as collecting data is the need to understand what lies in a company’s database and how it can be turned into valuable insights.
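
As an illustration of how low the barrier has become, here is a minimal sketch using the open-source scikit-learn library; the churn features and training data are invented, and a real model would be trained on rows pulled from the company's database.

```python
# Minimal sketch: a classifier an application developer, rather than a
# data scientist, can train. Assumes scikit-learn is installed; the
# features and labels below are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Toy training set: (monthly_spend, support_tickets) -> churned?
X = [[20, 5], [15, 7], [90, 0], [85, 1], [30, 4], [95, 0]]
y = [1, 1, 0, 0, 1, 0]   # 1 = customer churned

model = LogisticRegression()
model.fit(X, y)

# Score a new customer straight from an application query.
print(model.predict_proba([[25, 6]])[0][1])  # estimated churn probability
```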

Recently, the major public cloud vendors have introduced a variety of machine learning services, including Microsoft's Azure ML Studio, the Google Prediction API, Amazon Machine Learning and IBM’s Watson Analytics. We can expect 2016 to be a year when additional solutions appear and mature, and are recognized as a critical, possibly required, piece of enterprise IT operations. The growth of machine learning will place new demands on the databases that store and manage the data “fuel” for such applications. In 2016, look for a focus on database capabilities that facilitate real-time analytical processing of large data sets.
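
What such analytical workloads look like at the query level can be sketched simply; the example below uses Python's built-in sqlite3 module and an invented metrics table to show the kind of time-bucketed aggregation that analytical databases must execute quickly over far larger data sets.

```python
# Minimal sketch of an analytical rollup: raw readings bucketed into
# per-hour averages, the kind of aggregation dashboards run constantly.
# Uses sqlite3 for portability; the schema and values are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE metrics (ts TEXT, value REAL)")
db.executemany("INSERT INTO metrics VALUES (?, ?)",
               [("2016-01-14 09:05", 1.0), ("2016-01-14 09:40", 3.0),
                ("2016-01-14 10:10", 2.0)])

# Roll raw readings up into hourly averages.
for hour, avg in db.execute("""
    SELECT strftime('%Y-%m-%d %H:00', ts) AS hour, AVG(value)
    FROM metrics
    GROUP BY hour
    ORDER BY hour
"""):
    print(hour, avg)
```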

What can IT personnel do?
With the recent rise of the Chief Data Officer, the widespread adoption of new database technologies, and the acute need for better IT security, the database is back in the spotlight. A CIO’s best bet for staying on top of these trends in 2016 is the same strategy as in years past: lay down clear policies for who can access data and what it gets used for, all the while keeping up with new technologies and new threats targeting the integrity of the company’s data.

About the Author

Roger Levy brings extensive international, engineering and business leadership experience to his role as VP, Products, at MariaDB. He has a proven track record of growing businesses, transforming organizations and fostering innovation in the areas of data networking, security, enterprise software, cloud computing and mobile communications solutions, resulting in on-time, high-quality and cost-effective products and services. Previous roles include VP and GM of HP Public Cloud at Hewlett-Packard, SVP of Products at Engine Yard, as well as founding R.P. Levy Consulting LLC.


