Three Things to Know When You’re Making Cloud Choices

By Mike Hicks, Principal Solutions Analyst, Cisco ThousandEyes.


This year, Gartner forecasts 20% growth in end-user spending on public cloud services. Continued growth is not surprising given how ubiquitous cloud services have become for enterprises today, and the role that public cloud networks play in enterprises’ day-to-day infrastructure. But while cloud adoption has reached a stage of maturity in some areas, such as migration pathways and pipelines, it’s not uniform across all cloud domains.

While cloud cost containment dominated cloud migration strategies of yesteryear, 2023 is shaping up to be the year when a performance-based lens is applied to cloud-based architecture and configuration decisions. That performance focus is par for the course with any evolution of new technology adoption, but perhaps even more so for enterprises operating amid today’s broader challenges of inflation, global uncertainty, and skills shortages. Remaining competitive and able to deliver excellent customer and employee experiences in today’s environment requires a level of ROI thinking that goes beyond cost.

Today, selecting a cloud is often based on its suitability for running certain types of workloads, or on commercial considerations such as introducing competition or meeting sovereignty requirements. The focus remains on the cloud provider and its most obvious capabilities.

However, for a complete cloud experience, consideration must be given to both the core cloud architecture and the infrastructure that underpins these cloud environments.

First, there’s no steady state in the cloud

Based on three years’ worth of data examining public cloud network performance and connectivity architectures, one thing becomes clear: there’s no steady state in the cloud.

The modular, API-centric (application programming interface) architecture of today’s applications, as well as the widespread adoption of business SaaS (software as a service), has created an extensive web of interdependence, with the cloud at the heart of it.

The sheer number of changes these applications are exposed to might come as a surprise. Any update to the APIs, services, or third-party code libraries that these application components rely on can - and routinely does - break them. Given the dynamic nature of the cloud, point-in-time performance snapshots risk not reflecting current conditions, and for that reason they’re insufficient as a monitoring tool.

In other words, there is no steady state in the cloud. To overcome this, teams need greater visibility into the cloud to keep up with its dynamic nature, and to continue to realise benefits rather than incur additional costs.
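To make that concrete, here is a minimal sketch of what continuous visibility means in contrast to a snapshot: a probe that measures a service endpoint at a regular interval, so drift in cloud performance shows up as a trend rather than being missed between one-off checks. The endpoint URL and interval are illustrative placeholders, not a prescribed tool.

    # A minimal continuous-probe sketch (illustrative endpoint and interval).
    # A one-off snapshot misses drift; a periodic measurement exposes it.
    import time
    import urllib.request

    ENDPOINT = "https://status.example.com/health"  # hypothetical endpoint
    INTERVAL_SECONDS = 60

    def probe(url: str) -> float:
        """Return the time taken to fetch the URL, in milliseconds."""
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as response:
            response.read()
        return (time.perf_counter() - start) * 1000

    while True:
        timestamp = time.strftime("%Y-%m-%dT%H:%M:%S")
        try:
            print(f"{timestamp} {probe(ENDPOINT):.1f} ms")
        except OSError as err:  # timeouts and connection errors land here
            print(f"{timestamp} probe failed: {err}")
        time.sleep(INTERVAL_SECONDS)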

Second, latency is inevitable, so you should plan for it

In more controlled environments, latency is often tackled by operations teams. The traditional response to latency is to shorten the distance and the number of hops required to move traffic. In theory, a shorter route means less physical distance to cover and better overall performance.
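The physics behind that intuition is easy to check. Light in optical fibre travels at roughly two-thirds of its speed in a vacuum, about 200,000 km per second, so path length alone puts a hard floor under round-trip time. A back-of-envelope sketch, with illustrative distances:

    # Propagation delay floor: light in fibre covers roughly 200 km per
    # millisecond, so no routing cleverness can beat the distance itself.
    FIBRE_KM_PER_MS = 200.0  # ~two-thirds of the speed of light in a vacuum

    def min_rtt_ms(path_km: float) -> float:
        """Theoretical minimum round-trip time over a fibre path."""
        return 2 * path_km / FIBRE_KM_PER_MS

    for route_km in (100, 1_000, 10_000):  # illustrative path lengths
        print(f"{route_km:>6} km path -> at least {min_rtt_ms(route_km):.0f} ms RTT")

A 10,000 km subsea path therefore carries a floor of around 100 ms of round-trip time before any queuing, processing, or last-mile delay is added.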

However, when it comes to cloud-based operations, it gets a bit more complicated. As we already know, change is the only constant in the cloud. Public cloud connectivity architectures are constantly evolving, with alterations made at the provider's discretion. Cloud providers' decisions, such as how they advertise service endpoints or use shared infrastructure as their backbone, all have the potential to add trip time to data traffic, and the enterprise has very little control over these actions.

The way to address latency is in the application architecture itself, and in setting (or resetting) user expectations.

The end goal isn’t to minimise latency to the shortest time possible, but to make performance quick enough to satisfy the application’s and users’ requirements. Low latency may be achievable by hosting workloads in a single cloud region close to corporate users. However, this option may not be preferable if the local region costs more for compute, or would create resiliency issues due to a single-country dependency.

By comparison, a higher latency might be acceptable if it balances cost and convenience, and if the application is architected in such a way that it can account for and work with that additional latency without degrading the experience.
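As one hedged illustration of what "architected to work with latency" can mean in practice: bound each call to a distant region with an explicit latency budget, and fall back to the last known value when the budget is blown, so the user sees a fast, slightly stale answer rather than a slow one. The fetch function, budget, and cache below are illustrative placeholders.

    # Sketch: tolerate a high-latency dependency with a latency budget and a
    # last-known-value fallback. fetch_remote() stands in for a call to a
    # distant cloud region; the 100 ms budget is an arbitrary example.
    import concurrent.futures
    import time

    _cache: dict[str, str] = {"greeting": "cached greeting"}
    _pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

    def fetch_remote(key: str) -> str:
        """Stand-in for a request to a far-away region."""
        time.sleep(0.3)  # simulated 300 ms round trip
        return f"fresh {key}"

    def get_within_budget(key: str, budget_seconds: float = 0.1) -> str:
        """Serve fresh data if it arrives inside the budget, else stale."""
        future = _pool.submit(fetch_remote, key)
        try:
            value = future.result(timeout=budget_seconds)
            _cache[key] = value  # refresh the fallback for next time
            return value
        except concurrent.futures.TimeoutError:
            return _cache.get(key, "default")

    print(get_within_budget("greeting"))  # 300 ms > 100 ms: serves the cache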

Third, it all comes down to the cable

Enterprises often think about their data being in the cloud and the immediate potential bottlenecks at ingress and egress, but in reality, capacity bottlenecks can occur far away. The global cabling systems that carry the world’s traffic are both a core asset and a critical concern for today’s digital and cloud-first businesses.

Big tech companies such as Google, Amazon, and Microsoft have all made significant investments in infrastructure projects, including subsea cable systems. However, even with such investments, they still rely on shared physical links to transfer traffic between different parts of their networks.

The contracted or leased capacity on these cables varies between cloud providers. The cables themselves are also used differently depending on where they land. For instance, in some regions they may cross fairly shallow waters where they are more susceptible to damage.

Network performance can also shift over time and vary between regions. Depending on the destination, some cloud providers route traffic across the public internet and only bring it onto their own network close to its destination, while others endeavour to bring traffic onto their networks as close as possible to its origin.

Ultimately, cloud-based workloads support an enormous scope of digital services today. Yet, as these three examples show, businesses have everything to gain when IT teams are equipped with the visibility they need to better understand the specifics and the impact of different cloud provider network behaviours and anomalies. Performance optimisation is possible at every stage, as long as you know where to look.
