Business must stop treating the cloud as a data graveyard

The cloud can provide extensive storage cost efficiency and a powerful compute environment – depending on your specific data needs at any given time. By Krishna Subramanian, President & COO, Komprise.


Ninety per cent of the world's data was created in the last two years alone, and analyst firm IDC predicts that more data will be created over the coming three years than was created over the past thirty. In fact, over 59 zettabytes of data were created, captured, copied and consumed last year alone. Very few businesses planned for such an explosion in data (particularly the unstructured type), and rather than IT budgets doubling to match, they have largely stayed flat.

From a storage perspective, while many organisations are maximising the benefits of transforming to the cloud, many more use it purely as a cheap-and-deep storage locker where data goes to die. With 80 per cent of the world's data remaining unstructured, many also plan to keep up to ten copies of the data they create, in the hope of turning it into information they can monetise.

There is now an urgent need to fundamentally rethink how enterprise IT organisations approach data storage and backup. Just as with cloud application migrations, assuming you can simply lift and shift your growing volumes of unstructured data to a cheap storage locker will result in greater costs and end-user frustration. Alongside not treating the cloud as a data graveyard, it is imperative that enterprise IT organisations stop the endless cycle of buying more storage, backing up data before analysing it and treating all data the same.

With these flawed approaches, organisations are missing huge business opportunities to leverage the compute power of the cloud and to realise the full value and potential of their data. Many IT leaders need to ask themselves why they are keeping large amounts of data if they will never extract value from it.

The data cost crunch

There's an important OpEx/CapEx implication here. Most companies overspend their cloud budgets every year because they are not actively managing their data for cost efficiency. And because many don't have enough datacentre space to store all their data, there is a frequent temptation to just 'push it to the cloud' while the cloud looks cheap.

While it's true that the cloud provides economies of scale, companies are often paying by the hour, so every hour data sits there adds to an ongoing operational cost – much of it for data that is unlikely ever to be used and could safely be deleted. A vast array of data-crunching and data-analytics services is now available, so it's important to look at the cloud as a giant supercomputer that can be used on demand. Organisations that see it simply as a cheap storage locker are missing a huge business opportunity in terms of computing power and the ability to leverage their data.
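To make that concrete – assuming an AWS environment purely for illustration, with a table already catalogued over log files held in object storage (the database, table and bucket names below are hypothetical) – a few lines can run an on-demand SQL query against data in place, paying for the query itself rather than for always-on infrastructure:

```python
# Illustrative sketch: query data where it sits using AWS Athena's serverless
# SQL engine, rather than standing up dedicated compute.
# The database, table and bucket names are assumptions for this example.
import boto3

athena = boto3.client("athena", region_name="eu-west-1")

response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-results-bucket/"},
)
print("Query started:", response["QueryExecutionId"])
```

The same pattern – bring on-demand compute to the data, then release it – exists on every major cloud.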

By considering whether they are keeping data in the right place in the cloud, companies can turn an unpredictable, ever-growing OpEx bill into something closer to planned, CapEx-style spend. An example would be using premium cloud services when performing number-crunching tasks, then pushing the data back into a cheaper, deeper tier when it needs to be archived. This type of active management is crucially important in the cloud, because many organisations approach cloud services expecting low costs and are shocked when the eventual invoices, based on time and data quantities, arrive.
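As a minimal sketch of such tiering (assuming AWS S3 purely for illustration; the bucket name, prefix and thresholds are placeholders, and other clouds offer equivalents), a lifecycle rule can demote data automatically as it cools:

```python
# Illustrative sketch: an S3 lifecycle rule that moves objects to an archive
# tier after 30 days and to deep archive after 180, so data occupies premium
# storage only while it is actively being worked on.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-project-data",  # assumed bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "demote-cold-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "results/"},  # assumed prefix
                "Transitions": [
                    {"Days": 30, "StorageClass": "GLACIER"},
                    {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```

The thresholds are the policy decision; the point is that the demotion then happens without anyone having to remember to do it.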

We’re all data hoarders

Aside from extracting value, there is also an urgent need to manage data effectively from a wider backup perspective. By nature, we're all data hoarders. Few businesses nowadays can afford to lose data when a virus hits, so many keep around five to ten copies of each piece of data. Backing up everything in this way incurs huge costs and is highly inefficient, with backups taking longer than necessary and slowing down productivity.

The issue is becoming an increasing challenge for businesses, with IT departments typically having no clear idea of which data is important, how frequently it is used or what can be archived permanently. Without this information about the data they own, business IT departments often just keep buying more storage, with minimal visibility over how to prioritise the data they send to it. In the worst cases, disposable data ends up on expensive but fast flash arrays, where the important, business-critical data should sit.
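Even a rough survey helps here. As an illustrative sketch (assuming last-access times are trustworthy on the share being scanned, which not every mount guarantees, and using a hypothetical mount point), a short script can estimate how much data has gone untouched for a year – a reasonable first cut at what belongs on cheaper tiers:

```python
# Illustrative sketch: walk a file share and total up data not accessed for a
# year, based on last-access time (st_atime). Some mounts disable atime
# updates, so treat the result as an estimate.
import os
import time

CUTOFF = time.time() - 365 * 24 * 3600  # one year ago
cold_bytes = 0
total_bytes = 0

for root, _dirs, files in os.walk("/mnt/shared"):  # assumed mount point
    for name in files:
        path = os.path.join(root, name)
        try:
            st = os.stat(path)
        except OSError:
            continue  # skip files that vanish or deny access mid-scan
        total_bytes += st.st_size
        if st.st_atime < CUTOFF:
            cold_bytes += st.st_size

if total_bytes:
    print(f"{cold_bytes / total_bytes:.0%} of {total_bytes / 1e12:.2f} TB "
          "has not been touched in a year")
```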

There are several challenges facing organisations in their data management today: massive data growth within flat budgets, accessing the right data in the right place at the right time, and the costly, complex task of managing legacy data. But these challenges can be addressed with the right approach – and by examining some key considerations for maximising the benefits of cloud storage.

Real cloud storage value

The real value of the cloud lies in its elastic compute and analytics services. If you cannot tier your data and operate on it where it lives, its value will be diminished. The first step, therefore, is to gain insight into the data you have. Working with a service that supports all of the appropriate storage and cloud vendors, and that uses open rather than proprietary standards, will help at this point.

Services like these will help you re-purpose your data rather than just storing and protecting it, and they will also help you extract more value from your expensive flash storage investment. Organising data for file-style access was once expensive; newer object storage (the locker-type model) now bridges these worlds of data storage at much lower cost.

Avoid vendor software lock-in of your cloud data

Some businesses complain that their data is locked into a particular vendor and that they are then forced to remain with that vendor, even within the cloud. There is an assumption that going with a single vendor and using all of its native tools provides a 'better', more seamless experience and a single point of contact when things go wrong. But these businesses often end up locked in by assuming they are getting superior functionality from a single homogeneous stack.

The reality is that you don't have to take this route, because it is possible to work through standards across all of these vendors, using the best native functionality of each without lock-in. Here, you can sit within a heterogeneous environment. Many enterprises already plan to use multiple cloud vendors as part of their business strategy. This is wise, because it eases the process of moving their data across vendors while still being able to leverage each vendor's native capabilities.

Ultimately, limiting the future portability and use of your data can create expensive licensing costs alongside unnecessary limitations on evolving your cloud choices over time. If your data is kept accessible through a standard language, you don't have lock-in. When you look at anything that moves data, you should ask, 'How is it storing the data in the new location?' and 'Is it just storing pieces of the data?'

Paying attention to how data is moved and stored is important. To avoid being locked into a particular vendor, ask whether you can access your data without needing third-party software.
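One quick test – assuming the data sits in S3-compatible object storage, with the endpoint, bucket and key below purely placeholders – is to read an object back using nothing but the standard API and confirm it arrived whole:

```python
# Illustrative sketch: verify that migrated data is directly readable through
# the standard S3 API, with no vendor agent in the path, and that it arrived
# intact rather than as proprietary fragments.
import hashlib
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example-vendor.com",  # any S3-compatible store
)

obj = s3.get_object(Bucket="archive", Key="projects/2020/report.dat")
body = obj["Body"].read()

# Compare against a checksum recorded before the move (assumed to be known).
print("sha256:", hashlib.sha256(body).hexdigest())
```

If the same few lines work when pointed at a different vendor's endpoint, the data is not locked in.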

Reducing cloud costs

Around 30 per cent of IT and cloud spend nowadays goes on storage – and we estimate that it is possible to cut 70 per cent of these costs. With some businesses throwing more money at storage as they run out of space and backup processes take ever longer, many don't realise that such reductions are within reach.
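The arithmetic is worth spelling out with a hypothetical budget:

```python
# Worked example with an illustrative £10m IT budget: if 30 per cent goes on
# storage and 70 per cent of that can be cut, the saving is 21 per cent of
# the entire IT budget.
it_budget = 10_000_000
storage_spend = 0.30 * it_budget   # £3.0m on storage
saving = 0.70 * storage_spend      # £2.1m recoverable
print(f"Storage: £{storage_spend:,.0f}; potential saving: £{saving:,.0f} "
      f"({saving / it_budget:.0%} of total IT spend)")
```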

It's easier to manage data and protect the bottom line than many organisations think. Consider the functionality you are getting and the corresponding trade-offs, because nothing is perfect, and businesses frequently end up paying for far more functionality than they need in the short to medium term.

You should also consider how flexible and future-proof your data solution is, because your data will continue to grow at a similar pace. Future-proofing means preparing for the fact that you will be migrating that data at some point – and that is becoming increasingly essential.

Show your true cloud native colours

Cloud native is really about getting the most out of this new environment: extensive storage cost efficiency and powerful compute, matched to your specific data needs at any given time. Following the huge movement around cloud-native applications and development, storage is evolving with it.

If 30 per cent of IT budgets is being spent on storage under a lift-and-shift approach, there is huge value in preparing your data's 'home' and extracting value from it. The new breed of storage administrator will be thinking cleverly about where to put company data that is exploding everywhere. Soon, these data 'stewards' will be thinking more analytically about the value of their companies' data, and the role will become highly valued commercially.

For now, gaining control of your data positions you well to leverage the compute power of the cloud – with easy access to the data you need, in the right place at the right time, using only what's needed.
