2013 is the year that virtualisation will finally come of age.
Virtualisation first came into play just as the economy dropped. The recession meant that belts were tightened, forcing a lot of significant IT decisions to be put on hold. Instead, businesses sweated their technology assets for a few years.
Now, renewed business growth is enabling organisations to review their current IT infrastructure and upgrade technology in line with their growth strategy. There are proven business continuity benefits to hypervisor technology, which provides rapid deployment and recovery of virtual machines, and disaster recovery is also high on the list of motivations to virtualise.
Businesses are ready to invest, which puts the hypervisor at the forefront of IT decisions.
Many businesses are also looking at moving into the cloud, and again virtualisation and the hypervisor are key to this process. The consolidation of data centres and comms rooms into centralised public, private and hybrid clouds enables businesses to take advantage of subscription-based services. The cost benefits are now more clearly defined, and the ROI figures stack up well enough for CIOs and IT technical directors to invest, so we’ll see a lot more businesses looking to move into the cloud over the next 12 months.
Technology grows up
Virtualisation technology has now fully matured. New technologies allow many more virtual machines to be run from the same rackspace footprint, so businesses of any size can do a whole lot more with less and get the very best out of their hardware.
We’ve seen a decrease in the cost and complexity of deploying Virtual Desktop Infrastructure (VDI), and vendors are now providing single-box solutions for the SME market. Traditional hardware appliance-based applications such as unified communications, wireless controllers, voice recording and security devices have migrated to virtualised appliances, reducing rackspace footprint, power and cooling. We’re also witnessing the emergence of new data centre hardware infrastructure designed specifically for virtualised environments, moving away from traditional server infrastructure and design.
New orchestration tools geared towards the hypervisor, such as the Cisco Unified Computing System (UCS) series, enable easy management of multi-site, multi-vendor storage, switching and server infrastructure from a single management platform, offering the ability to deploy virtual services within minutes.
This is music to the ears of the many IT departments that suffered cuts during the recession and are still running lean. These tools allow a single department to provision new services in a matter of hours rather than weeks, freeing up IT teams to be proactive and do what the business needs, making the IT service seamless for end users.
These management tools also help to protect the virtual infrastructure investment by monitoring every aspect of its performance and guarding against “virtual sprawl”. Sprawl is brought on by the proliferation of VMs and over-provisioning, and can lead to security breaches, software and operating system licensing issues, unnecessary power consumption and more.
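As a concrete illustration of the kind of check such monitoring tools run, the sketch below flags likely-idle and over-provisioned virtual machines in an inventory. The inventory records and thresholds here are entirely hypothetical, not drawn from any particular management product or API:

```python
# Minimal sprawl-check sketch over a hypothetical VM inventory.
# All records and thresholds are illustrative, not from any real API.

# Each record: name, vCPUs allocated, average CPU utilisation (%), days idle
inventory = [
    {"name": "web-01",  "vcpus": 4,  "avg_cpu": 35.0, "idle_days": 1},
    {"name": "test-07", "vcpus": 8,  "avg_cpu": 1.2,  "idle_days": 90},
    {"name": "db-02",   "vcpus": 16, "avg_cpu": 2.5,  "idle_days": 3},
]

def flag_sprawl(vms, idle_cpu=5.0, idle_days=30,
                overprov_cpu=10.0, overprov_vcpus=8):
    """Flag likely-idle VMs and over-provisioned VMs (illustrative thresholds)."""
    idle = [v["name"] for v in vms
            if v["avg_cpu"] < idle_cpu and v["idle_days"] >= idle_days]
    overprovisioned = [v["name"] for v in vms
                       if v["vcpus"] >= overprov_vcpus and v["avg_cpu"] < overprov_cpu]
    return idle, overprovisioned

idle, over = flag_sprawl(inventory)
print("Possibly abandoned:", idle)
print("Over-provisioned:  ", over)
```

In practice a management platform would pull this data from the hypervisor’s own monitoring interface rather than a hand-built list, but the logic — compare allocation against actual utilisation — is the same.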
Choosing a hypervisor
The hypervisor is one of the most important decisions an IT decision maker will make in moving to virtualisation, as it affects every other aspect of the virtualised environment. VMware was the key player in the hypervisor market for a long time, but Microsoft’s Hyper-V is now maturing and making inroads with a range of features and attractive pricing, while open-source technologies such as Xen add further choice.
With this newly competitive market offering more choice and more features than ever before, it’s important that IT managers look closely into what’s best for their business, rather than just going by a product review or recommendation. Doing a full audit of the business’s technology first will mean they go into the process with their eyes wide open and can check compatibility and effectiveness against their hardware, applications and virtual machines.
A Type 1, or bare-metal, hypervisor runs directly on the hardware and offers the highest performance and advanced resource controls. But while Type 1 hypervisors can be easy to install, they are often complicated to configure, whereas Type 2, or hosted, hypervisors run on top of a conventional operating system and tend to be easier to maintain. This shows the importance of taking a hypervisor’s ongoing management and surrounding ecosystem into account. What seems cost-effective at first may turn out to be a poor choice once factors such as training, expertise and third-party vendors are taken into account.
It’s also vital to look to the future and ensure that a hypervisor can scale up as the business grows: how will this affect costs? Many hypervisors are billed as “free”, but as requirements increase, the cost of platform management and advanced features will grow with them.
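To make that point concrete, here is a back-of-the-envelope sketch of how a “free” hypervisor’s real cost can climb with scale. Every figure is invented for illustration; actual pricing varies widely by vendor, edition and licensing model:

```python
# Illustrative cost model for a "free" hypervisor: the licence itself costs
# nothing, but management tooling and advanced features are often priced
# per host. All figures below are made up for illustration only.

def annual_cost(hosts, mgmt_per_host=500, advanced_per_host=300,
                base_platform=2000):
    """Rough yearly cost: a fixed management platform fee plus per-host charges."""
    return base_platform + hosts * (mgmt_per_host + advanced_per_host)

# The "free" hypervisor is cheap at two hosts but not at fifty.
for hosts in (2, 10, 50):
    print(hosts, "hosts:", annual_cost(hosts))
```

The shape of the curve matters more than the invented numbers: per-host management and feature charges mean the bill grows linearly with the estate, even when the hypervisor licence itself costs nothing.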
When one isn’t enough
2013 may also see a growing trend towards multi-hypervisor environments. In the US, IDC estimates that around 15% of enterprises currently deploy multiple hypervisors, but it expects that number to double in the next one to two years as companies start to experiment with new virtualisation technologies.
Here, multi-hypervisor environments remain largely the domain of cloud providers offering multiple virtualisation environments to multiple users. However, some organisations may wish to use different hypervisors for different services or application workloads; for example, some Microsoft services may perform better on Hyper-V.
In summary
For any IT manager looking to move their business to virtualisation or the cloud, hypervisors hold the key. In an ever-changing landscape, it’s well worth taking the time to explore all of the options before making such a critical investment.