Perimeter-based intelligence, the next step in data centre design

By Frank Denneman, technology evangelist at PernixData.


Workload consolidation was one of the initial drivers of virtualisation. For many organisations, it provided the ability to utilise idle resources and run multiple workloads on a smaller physical server footprint. Soon thereafter, organisations discovered many other benefits of virtualisation, such as rapid provisioning and workload mobility: provisioning started taking days instead of months, while mobility made it possible to maintain the underlying infrastructure without impacting business workloads.


Fast forward to today, and “virtualise-first” is a core policy in many IT organisations. Virtualisation has become the primary platform onto which workloads are deployed. Usually, this platform comprises components from different vendors, each with its own language, management structures and algorithms for optimising performance. We expect this collection of disparate platforms to work seamlessly and provide deterministic levels of performance.


Software is essential to aggregating these disparate systems into a cohesive solution. This is what is commonly referred to as the Software-Defined Data Centre (SDDC), a model that allows IT organisations to align resources with application demand, provisioning resources when required. However, it is a challenge to integrate all these components into a single element, manageable from a single point, with universal controls that issue the proper commands to each separate component.


Instead of relying on a universal language to stitch everything together, another trend is emerging in the industry: resources are being commoditised, with intelligence moving to the perimeter of the architecture. For example, the network virtualisation platform VMware NSX uses commodity network components to provide bandwidth and connectivity, while leveraging the hypervisor to provide controls and policy-based services at the virtual infrastructure level.


Moving intelligence to where it matters can also be seen in the storage stack, for example in what VMware is doing with VSAN™ and PernixData is doing with FVP™. VSAN provides an end-to-end solution, replacing the entire storage infrastructure with a policy-based architecture for storage performance and capacity in the hypervisor. PernixData FVP software aligns more closely with NSX by decoupling performance from capacity: FVP leverages the storage array to provide capacity and data services, while utilising server resources to deliver storage performance and intelligence where it matters, close to the application.


What these solutions have in common is that they are tightly integrated into the hypervisor kernel. Because the hypervisor kernel is rich with information, including a collection of tightly knit resource schedulers, it is the ideal place to introduce policy-based management engines.


Once an I/O leaves the hypervisor and begins its journey towards the storage system, it is stripped of essential application-level information, making it very difficult to guarantee the correct amount of resources to the application. The age-old solution is a catch-all design (giant disk pools) that is oversized so it can absorb whatever demand arrives. This model does not allow for an application-centric approach in which resources can be aligned at application-level granularity.


This is not the case when leveraging resources that are under the control of the hypervisor. Because the hypervisor maps each I/O to the virtual machine belonging to the application, policies can be applied to ensure a specific level of Quality of Service. Since the hypervisor manages the resource itself, it is in the best position to control both resource availability and resource demand in a way that satisfies the application's requirements. All of this happens within the same hypervisor, providing a single construct for automation in a single language, and with it a sound model for automating at application-level granularity.
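The idea can be sketched in a few lines of code. This is a minimal, illustrative model only (it does not reflect any vendor's actual API, and the class and method names are invented for this example): because the hypervisor knows which virtual machine issued each I/O, it can enforce a per-VM policy, such as an IOPS budget per scheduling window, at the point where the I/O is admitted.

```python
from collections import defaultdict

class IOQoSEngine:
    """Hypothetical sketch of per-VM I/O Quality of Service enforcement.

    The hypervisor attributes every I/O to its source VM, so a policy
    engine can meter demand at application granularity."""

    def __init__(self):
        self.limits = {}              # VM name -> IOPS allowed per window
        self.used = defaultdict(int)  # VM name -> I/Os admitted this window

    def set_policy(self, vm, iops_limit):
        """Attach a QoS policy to a virtual machine."""
        self.limits[vm] = iops_limit

    def admit(self, vm):
        """Return True if this I/O fits within the VM's policy budget."""
        limit = self.limits.get(vm)
        if limit is not None and self.used[vm] >= limit:
            return False              # over budget: queue or throttle the I/O
        self.used[vm] += 1
        return True

    def new_window(self):
        """Reset all budgets at the start of each scheduling window."""
        self.used.clear()

engine = IOQoSEngine()
engine.set_policy("db-vm", 2)         # hypothetical VM with a 2-IOPS budget
results = [engine.admit("db-vm") for _ in range(3)]
print(results)                        # third I/O exceeds the budget
```

A storage array sitting below the hypervisor cannot make this decision, because by the time the I/O arrives the VM identity has been stripped away; inside the hypervisor the mapping is free.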

Moving the intelligence up the stack is the natural next step. Combine the power of software with the hypervisor by creating an intelligence platform that lets you control your environment in a way that benefits and contributes to your business goals.
 
