Research conducted by the Enterprise Strategy Group (ESG) and co-sponsored by Virtual Instruments, the leader in application-centric infrastructure performance management, along with several other technology leaders, has found that IT teams across Europe are putting their IT infrastructures – and their businesses – at risk by failing to profile application workloads and test data storage systems before purchase and deployment.
Understanding and assessing application workload profiles before acquiring storage solutions provides critical insight into what application end-users truly need. Without profiling production applications and analysing their I/O behaviour, IT teams lack accurate information about the storage performance required to keep those applications running optimally and to protect against slow-downs and outages, making it extremely difficult to evaluate and size storage systems properly. Yet, of the 412 IT professionals surveyed, only 41% said they will profile their workloads before buying their next storage system, and 56% trust either storage vendors or their VARs to advise them on the right solution. Depending solely on vendor recommendations, or on partners aligned with specific vendors, leaves IT teams vulnerable: at best, there's a significant likelihood of over-provisioning and wasted expenditure; at worst, a new storage solution may be unable to keep up with the organisation's business requirements, leading to lost revenue.
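To illustrate what workload profiling yields, the sketch below condenses a hypothetical I/O trace into the figures that typically drive storage sizing: read/write mix, average block size, and peak IOPS. The record format, field names, and trace values are illustrative assumptions, not drawn from the research.

```python
from collections import namedtuple

# Hypothetical I/O trace record: operation type, block size in KiB,
# and the one-second window the operation fell into.
IoEvent = namedtuple("IoEvent", ["op", "size_kib", "second"])

def profile_workload(events):
    """Summarise a trace into the figures storage sizing needs:
    read/write mix, average block size, and peak IOPS."""
    reads = sum(1 for e in events if e.op == "read")
    total = len(events)
    per_second = {}
    for e in events:
        per_second[e.second] = per_second.get(e.second, 0) + 1
    return {
        "read_pct": 100.0 * reads / total,
        "avg_block_kib": sum(e.size_kib for e in events) / total,
        "peak_iops": max(per_second.values()),
    }

# Illustrative trace: three reads and one large write across two seconds.
trace = [
    IoEvent("read", 4, 0),
    IoEvent("read", 4, 0),
    IoEvent("write", 64, 0),
    IoEvent("read", 8, 1),
]
profile = profile_workload(trace)
print(profile)  # read_pct 75.0, avg_block_kib 20.0, peak_iops 3
```

In practice such a profile would be gathered from production monitoring over days or weeks, so that the candidate storage system can be evaluated against real rather than assumed demand.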
A lack of insight into application workload behaviour is compounded by a failure to understand how storage solutions will perform until they're put to work. Only 29% of respondents said they would conduct on-premises load testing themselves before their next storage system purchase, while a further 11% would work with vendors or partners to do so. Without understanding how a solution is likely to behave under normal, dynamic, and anticipated peak load conditions, it's much harder to predict whether it is correctly configured for an organisation's unique demands – and much harder to guarantee overall performance.
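As a minimal illustration of what load testing measures, the sketch below times synchronous 4 KiB writes to a temporary file and reports median and 99th-percentile latency. A real evaluation would replay the profiled production workload against the candidate array under normal and peak conditions; the block size, iteration count, and target here are illustrative assumptions.

```python
import os
import tempfile
import time

def measure_write_latency(path, block=b"x" * 4096, iterations=200):
    """Time individual synchronous 4 KiB writes and return the sorted
    per-operation latencies in milliseconds."""
    latencies = []
    with open(path, "wb") as f:
        for _ in range(iterations):
            start = time.perf_counter()
            f.write(block)
            f.flush()
            os.fsync(f.fileno())  # force the write to stable storage
            latencies.append((time.perf_counter() - start) * 1000.0)
    return sorted(latencies)

def percentile(sorted_values, pct):
    """Nearest-rank percentile of a pre-sorted list."""
    idx = min(len(sorted_values) - 1, int(pct / 100.0 * len(sorted_values)))
    return sorted_values[idx]

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    target = tmp.name
lat = measure_write_latency(target)
print(f"p50={percentile(lat, 50):.2f} ms  p99={percentile(lat, 99):.2f} ms")
os.remove(target)
```

Reporting tail latency (p99) alongside the median matters because peak-load slow-downs show up in the tail long before they move the average.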
“Assuming that upgrading to flash-based storage will solve all data-related application performance issues is a myth,” stated Mark Peters, Practice Director & Senior Analyst at the Enterprise Strategy Group. “Application performance is heavily impacted by the I/O characteristics and patterns employed by the application itself and its interactions with other applications that might be sharing the same, invariably virtualised, infrastructure. How each vendor has designed its all-flash arrays to handle the plethora of different workloads varies greatly. Five-fold performance differences or more are not uncommon for the same workload.”
In spite of this, the research shows that storage performance and availability are top priorities. Seventy percent of respondents plan to establish service level agreements (SLAs) with their application owners for performance, availability, or both; 43% will establish SLAs specifically for performance. Without using both load testing and monitoring, SLA adherence is practically impossible.
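Checking adherence to a performance SLA from monitoring data can be as simple as the fraction of operations meeting a latency target. The sketch below uses hypothetical latency samples and a 5 ms target; both are illustrative assumptions, not figures from the study.

```python
def sla_adherence(latencies_ms, target_ms):
    """Percentage of monitored operations meeting the latency target."""
    within = sum(1 for v in latencies_ms if v <= target_ms)
    return 100.0 * within / len(latencies_ms)

# Illustrative monitoring samples (ms); two outliers breach the target.
samples = [2.1, 3.4, 1.8, 9.7, 2.5, 2.2, 12.3, 2.0, 2.9, 3.1]
print(f"{sla_adherence(samples, 5.0):.0f}% of operations within 5 ms")  # 80%
```

Load testing before purchase establishes what target is achievable; monitoring in production then verifies, continuously, that the target is being met.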
Ninety-four percent of respondents indicated that their organisation ensures performance and availability by using monitoring tools. Fifty-four percent prefer to use vendor-independent monitoring tools. This type of solution analyses performance across the entire infrastructure, making it much easier to spot performance issues and to understand what may be causing them. A slow-down in the storage solution could be triggered by an overloaded application elsewhere in the infrastructure, for example.
With the performance of applications and storage so closely linked, it should come as no surprise that the majority (74%) of respondents said their organisation’s application owners care about storage deployment decisions. Yet only 16% said their application owners are involved in the storage solution deployment process. This is a missed opportunity: collaboration between storage and application teams brings a much better understanding of the overall infrastructure, and of each component’s effect on its performance.
Chris James, EMEA marketing director, Virtual Instruments said: “The results of this research give both enterprise CIOs and vendors something to think about. For CIOs, the research showcases the gaps in knowledge that are putting their businesses at risk. But they also show opportunities for increasing insight into how their infrastructures are working, and how they can stop wasting capital and avoid over-provisioning. That level of understanding enables higher, more consistent performance, which brings benefits across the IT infrastructure and the wider business.
“For vendors, the results give a strong message about where they should be focusing their efforts. Availability should be a given; it’s performance that matters to customers, and the vendors that enable their customers to establish performance-based SLAs that leverage both load testing and monitoring will be setting the bar for the rest of the industry.”