HOW BUSINESSES GO ABOUT measuring this efficiency is an entirely different matter; too often organisations don’t have a full understanding of exactly how well their data centres currently use energy, let alone how any changes would affect that. Techniques such as predicting energy use and managing IT equipment more efficiently can have huge cost benefits, yet organisations are still falling at the first hurdle.
Predicting and measuring
A recent SAP report found that a lack of energy insight prevents organisations from answering critical questions about their data centre operations. For example, what components consume the most energy? How does consumption change on a daily, weekly, monthly, or yearly basis due to climate variations or varying IT loads? Businesses are increasingly metering and measuring their energy use in order to answer these questions.
However, even assuming that businesses use the metering technology available to them, they must also ensure that they are predicting data centre performance, rather than simply measuring - a distinction many still struggle to make.
Measurement and metering allow businesses to see how their data centre is performing, yet only prediction lets them see whether what they are measuring is right and the system is actually working as planned.
Once they understand this difference, businesses can measure data and predict trends, managing costs, planning investment and uncovering any inefficiencies within their data centre. If organisations can both monitor and predict their data centres’ energy use, then they can begin to address their primary concern - ensuring that the data centre is operating as it should be.
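The predict-then-measure idea can be illustrated with a minimal sketch. Everything here is a hypothetical example, not any particular vendor's method: a simple model produces the energy the facility *should* use, and a metered reading is checked against it.

```python
# Minimal sketch of "predict, then measure": compare a modelled
# expectation against a metered reading and flag deviations.
# All names, figures and the assumed PUE are illustrative.

def predicted_kwh(it_load_kw: float, hours: float, pue: float = 1.5) -> float:
    """Expected facility energy: IT load scaled by an assumed PUE."""
    return it_load_kw * hours * pue

def check_reading(metered_kwh: float, expected_kwh: float,
                  tolerance: float = 0.10) -> str:
    """Classify a metered value against the prediction."""
    deviation = (metered_kwh - expected_kwh) / expected_kwh
    if abs(deviation) <= tolerance:
        return "as planned"
    return "over prediction" if deviation > 0 else "under prediction"

expected = predicted_kwh(it_load_kw=200, hours=24)             # 7200 kWh
print(check_reading(metered_kwh=8600, expected_kwh=expected))  # over prediction
```

Measurement alone would only report the 8,600 kWh; it is the prediction that turns the number into a judgement that something is not operating as it should be.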
The big questions
Once a business has made sure it can predict energy use or, in other words, determine the energy it should use, as well as measure it, there is still the question of how to decipher the information available. Correctly monitored data centres now produce huge amounts of information; the challenge for businesses is knowing which information is relevant to their understanding of energy use.
Many become paralysed when trying to determine this. Either they try to measure everything and find themselves overwhelmed, or they ignore data that would give vital insights - meaning that any prediction and measurement will merely be a waste of resources.
Many data centres are designed with vast numbers of metering points, many of them unnecessary. Businesses must strike the right balance to avoid wasted effort: metering is only worthwhile if it actually yields vital information on the performance of the data centre.
Therefore, it is essential that organisations first work out exactly what the crucial factors are in energy performance, ensure they can predict what those factors should be, and then put metering in place to be able to monitor and validate those predictions.
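One way to work out which factors justify a meter is to rank them by how strongly they move total energy use. The factors and sensitivity figures below are made-up illustrations of the approach, not real calibration data.

```python
# Sketch: choose metering points by how sensitive predicted total energy
# is to each factor. Factors and sensitivities are illustrative only.

# Assumed sensitivity: fractional change in total energy per
# fractional change in each factor.
sensitivities = {
    "chiller_plant": 0.35,
    "it_load": 0.50,
    "ups_losses": 0.08,
    "lighting": 0.01,
    "office_hvac": 0.02,
}

def metering_plan(sensitivities: dict, threshold: float = 0.05) -> list:
    """Meter only the factors that materially move total energy,
    most influential first."""
    return sorted((f for f, s in sensitivities.items() if s >= threshold),
                  key=lambda f: -sensitivities[f])

print(metering_plan(sensitivities))  # ['it_load', 'chiller_plant', 'ups_losses']
```

Under these assumed figures, lighting and office HVAC would never repay the cost of installing and managing their meters, while the three retained points are enough to validate the model's predictions.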
Understanding the environment
In order to identify these crucial factors, businesses must understand the environment that creates them. Understanding the data centre environment can help businesses avoid spending resources on metering when it's not needed. A team at GE, for example, placed thousands of sensors to troubleshoot a manufacturing process before discovering that a single key sensor identified when the process was in its optimum state. Having a clear picture of exactly how a data centre fits together and which points of measurement are of most significance for predicting and monitoring energy use is a crucial first step. That is in addition to the significant savings made by not installing, and not having to manage, unnecessary meters.
Such knowledge also allows organisations to recognise exactly what a meter is telling them. For example, if a meter's reading goes up or down over time, was that expected? And at that scale? Again, the key is understanding the system and its sensitivities rather than gathering data for the sake of it. Having a model of the system and being able to predict how it should behave in given circumstances is essential. For instance, if an organisation understands its software workloads, it should be able to identify precisely whether a meter's change in reading is due to the predicted software load or another factor.
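That workload check can be sketched in a few lines. The linear power model and its coefficients are illustrative assumptions, not a real calibration: the point is that the model predicts how much of a meter's change the workload should explain, and anything left over warrants investigation.

```python
# Sketch: decide whether a meter's change is explained by the predicted
# software load or points to another factor. The linear model and all
# figures are illustrative assumptions.

def model_power_kw(load_fraction: float,
                   idle_kw: float = 40.0, peak_kw: float = 120.0) -> float:
    """Simple linear power model: idle draw plus load-proportional draw."""
    return idle_kw + (peak_kw - idle_kw) * load_fraction

def explain_change(old_load: float, new_load: float,
                   old_kw: float, new_kw: float,
                   tolerance_kw: float = 5.0) -> str:
    """Compare the metered change against the change the model predicts."""
    expected_delta = model_power_kw(new_load) - model_power_kw(old_load)
    actual_delta = new_kw - old_kw
    if abs(actual_delta - expected_delta) <= tolerance_kw:
        return "explained by workload"
    return "investigate: unexplained change"

# Load rose from 30% to 60%; the model expects a rise of 24 kW,
# but the meter shows 46 kW - something else is drawing power.
print(explain_change(0.30, 0.60, old_kw=64.0, new_kw=110.0))
```

Without the model, a 46 kW rise alongside a doubling of load looks unremarkable; with it, roughly half the rise is immediately flagged as unexplained.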
Beyond efficiency
The ability to predict and then measure data centre efficiency and performance can have benefits beyond reducing energy costs; for example, it can be easy to spot when a component is using far more or less energy than expected, and so may be at fault. However, cost should still be the primary focus. In a rapidly changing data centre market, metering and measurement tools must now be recognised as valuable tools for businesses; yet at the same time they are only part of the solution. To gain complete control of their data centres, businesses need to be smart about what data to collect and how to process it.
Success rests on four simple steps: decide what information is needed; accurately predict what that information should be; identify what to measure to get this information; and then forecast and monitor to ensure expectations are met. Anything else is simply window dressing.
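The four steps above can be sketched as a single monitoring cycle. The metric name and figures are hypothetical; the structure is what matters: decide what is needed, predict it, measure it, then check the two against each other.

```python
# The four steps as a monitoring cycle (all names and values illustrative).

def run_cycle(metric: str, predict, measure, tolerance: float) -> bool:
    """Step 1 chose the metric; steps 2-4 predict, measure and check it."""
    expected = predict(metric)   # step 2: predict what the value should be
    actual = measure(metric)     # step 3: measure the actual value
    return abs(actual - expected) / expected <= tolerance  # step 4: monitor

# Step 1: decide the information needed - say, monthly cooling energy.
predicted = {"cooling_kwh": 52_000}
metered = {"cooling_kwh": 61_000}
print(run_cycle("cooling_kwh", predicted.get, metered.get, tolerance=0.10))
# False: actual exceeds prediction by ~17%, so expectations are not met
```

Each pass through the cycle either confirms the data centre is operating as planned or pinpoints a metric worth investigating.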
Predicting and controlling TCO
ROMONET PORTAL 2.0 is the updated version of Romonet’s SaaS-based data center performance and lifecycle management software suite. Historically, data center owners have only been able to estimate IT costs, making it impossible to judge whether the data center is actually working as expected.
With Portal 2.0, users can predict, analyse and continuously improve the performance of their data center portfolio, and thereby minimize Total Cost of Ownership (TCO), with a level of granularity and precision that is impossible with traditional tools.
Portal 2.0 brings a new operational capability built on the same powerful predictive modelling technology for which Romonet is well known. It adds the ability to compare ‘expected vs actual’ performance down to individual sub-systems, enabling operational teams to quickly identify operational issues and fix them before they become service-impacting.
Romonet Portal 2.0 is based on Romonet’s proprietary and award-winning predictive modeling technology, allowing users to forecast, plan and measure business performance with greater insight, accuracy and agility than previously possible. Unlike many DCIM solutions, Romonet’s Portal 2.0 can be deployed and delivering value in days, without disruptive and costly hardware or software agents.
Romonet Portal 2.0 helps organizations in the following ways:
• Identify exactly where, and how many, meters organizations actually need in their data center infrastructure to gain the best possible insight into performance, thereby justifying an investment in the right level of metering.
• Reduce risk in capital investment by accurately modeling the outcome of each investment option.
• Optimize performance by swiftly identifying discrepancies between expected and actual performance of new infrastructure while still at the commissioning stage, allowing issues to be quickly identified and addressed before going live.
• Lower the Total Cost of Ownership by giving organizations insight into their IT costs and allowing them to see exactly where savings can be made.
• Increase predictability in IT strategy by allowing organizations to see precisely how their data center portfolio should be performing at any point in each site’s lifecycle.
• Provide a streamlined user interface, allowing organizations to quickly and intuitively identify and analyze discrepancies.
With Portal 2.0, businesses have an immediate view into how efficiently they are managing and utilising their data center infrastructure spend. As a result, it is much easier to reveal the true costs of IT services. For enterprises, this prevents the data center becoming a cost black hole, absorbing investment with no clear indication of how that investment benefits the business. For service providers, understanding total delivery costs means they can track margin per client or per service.