How Organisations Can Take a Cost-Effective Approach to Data

By James Hall, UK Country Manager, Snowflake.


An uncertain economic outlook and weak growth figures in the UK over the past few months have forced a wave of cutbacks across several industries. As a result, organisations are looking for ways to reduce spend while remaining profitable. To stay competitive in tough economic times, however, forward-thinking business leaders must avoid cutting their investments in data technology and analysis.

Data has never been more important for organisations: the world is producing more data than ever, as much as 181 zettabytes by 2025, equal to the capacity of 45 trillion DVDs, according to IDC. The advantages of engaging with data are proven, from improved customer experiences to enhanced product innovation. Data is also a key differentiator when it comes to capitalising on AI: the more comprehensive and diverse the data set, the better the performance AI can deliver.

Effective cost management and planning, combined with the power of data insights, will give businesses the competitive advantage they are looking for in today’s business landscape.

Delivering transparency

Achieving transparency on existing costs is the first step towards becoming data efficient. For data admins – those responsible for processing data into a convenient data model – this means using their analytical skills to scrutinise existing workloads and identify which are actually delivering valuable insights. From there, they can decide whether to re-architect a workload, increase or decrease its usage, or retire those that are not delivering results. A full understanding of data lineage – where data comes from and what happens to it – is also a useful starting point for establishing cost controls and pinpointing costly errors.

Transparency on spend must also come from the SaaS vendor and platform a business selects. This enables businesses to understand what they are investing in each workload and weigh it against the return on investment. Understanding per-query costs can highlight the most expensive queries and allow admins or IT leaders to rewrite or refactor them. Increased visibility and control of spend gives businesses the best chance of maximising existing resources.

Looking to the future

Only when businesses get a handle on their data costs can they truly begin to predict future costs and implement measures to keep spending as efficient as possible. Many legacy data platforms are highly inflexible, with fixed pricing and long-term vendor lock-in, making it harder to implement changes when times are tough, or to scale back requirements during quieter periods of data analysis. Such tools often require complex, time-consuming capacity planning to keep data costs under control, which can ironically prove expensive in itself.

The costs of data processing, monitoring and control mechanisms cannot be an afterthought. Flexible scaling and consumption-based pricing models are a great way of avoiding unnecessary overprovisioning and paying for processing and storage that does not deliver for the business. A growing number of organisations are also choosing to set up budgets in advance, with spending limits, digital ‘guards’ against overspending, and daily alert notifications and warnings. This allows businesses to pinpoint where money is being spent, how much value it is delivering, and how it can be reined in. 

Modern data platforms built in the cloud provide an intuitive UI to examine usage and usage trends, with clear dashboards visualising which teams, customers and cost centres are responsible for the bulk of spending. Rather than waiting for spending to go over budget, companies can get ahead of the game and see when spending limits are projected to be exceeded. In the long run, this will help technical leaders and CFOs reduce operational costs through more efficient usage.
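The projection described above is straightforward in principle. As a minimal sketch – using hypothetical figures and no particular vendor's API – estimating when a spending limit will be breached from the average daily run rate might look like this:

```python
from datetime import date, timedelta

def projected_overspend_date(spent_so_far, daily_run_rate, budget, today):
    """Estimate the date on which cumulative spend will cross the budget,
    assuming the current average daily run rate continues."""
    if daily_run_rate <= 0:
        return None  # flat or zero run rate: no projected breach
    remaining = budget - spent_so_far
    days_left = max(0, int(remaining / daily_run_rate))
    return today + timedelta(days=days_left)

# Hypothetical example: £7,000 spent, £500/day run rate, £10,000 budget
breach = projected_overspend_date(7000, 500, 10000, date(2024, 6, 20))
print(breach)  # 2024-06-26
```

In practice a platform's dashboard would derive the run rate from recent usage history rather than a single figure, but the underlying arithmetic is the same.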

Tracking usage at a granular level — think account level, per user, or per task — will be a key differentiator. Larger companies, however, should also consider taking control at an organisational level. This can mean restricting which teams or individuals can create credit-consuming resources, such as warehouses. Such capabilities also offer in-depth control over the size and number of clusters, and over when clusters are spun up, helping to control costs now and in the future. Per-job cost attribution helps organisations manage department costs and maximise resources as they scale to more teams and jobs. Furthermore, auto-suspend and auto-resume capabilities can be enabled by default, turning platforms off when they are not required and so avoiding payment for idle capacity.
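The per-job cost attribution mentioned above amounts to rolling individual job costs up to the teams that own them. A minimal sketch, with hypothetical records rather than any real platform's billing data:

```python
from collections import defaultdict

# Hypothetical per-job cost records: (department, job name, cost in credits)
jobs = [
    ("marketing", "nightly_etl", 12.5),
    ("marketing", "dashboard_refresh", 3.0),
    ("finance", "month_end_close", 20.0),
]

def cost_by_department(records):
    """Roll per-job costs up to department level for chargeback reporting."""
    totals = defaultdict(float)
    for dept, _job, cost in records:
        totals[dept] += cost
    return dict(totals)

print(cost_by_department(jobs))  # {'marketing': 15.5, 'finance': 20.0}
```

Grouping by department here is arbitrary; the same roll-up works per user, per cost centre, or per customer, which is what makes granular attribution useful as organisations scale.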

Fit for the future

It has never been more important for organisations, regardless of sector, to get to grips with their data. Business leaders must focus their tech investments where technology can deliver results, which means moving away from costly, unpredictable legacy on-premises platforms to modern SaaS data platforms.

These platforms offer transparency and the ability to plan investments more effectively, enabling business leaders to focus on reducing costs. Taking control of tech investments in this way can be a key differentiator in tough economic times, allowing organisations to forge fearlessly into a data-driven, profitable future.
