The Evolution of Solid-State Drives in the Public Cloud

More public cloud providers are turning to solid-state drives (SSDs) for cloud storage workloads. In addition to high performance, SSDs offer a smaller physical footprint, which helps data centers keep costs down and ultimately benefits cloud customers. By Eli Hsu, Project Manager at Phison Electronics Corporation.


The use of NAND-flash-based solid-state drives (SSDs) for cloud storage is evolving from a premium service, reserved for applications with special storage performance requirements, into a commonplace option. As hybrid cloud and infrastructure-as-a-service (IaaS) grow in popularity, IT managers are selecting high-performing SSDs for a wider range of cloud storage workloads.

The rise of the hybrid cloud

A few years ago, some technology analysts were predicting that public cloud platforms would make on-premises IT a thing of the past. That prediction has turned out to be premature, if not flat-out wrong. Instead, the hybrid cloud model has caught on by offering IT managers greater flexibility as well as economic advantages.

A hybrid cloud infrastructure is one that mixes on-premises deployments of hardware and software with integrated public cloud infrastructure. Use cases and implementation styles vary widely, but in general, hybrid clouds provide value by balancing the demand for rapid scale and agility against the need for control and protection of core, critical assets and intellectual property (IP). That value is translating into growth: research from Cisco reveals that 82% of IT leaders are adopting hybrid clouds, up from 60% in 2021.

SSDs factor significantly into the hybrid cloud story because they offer IT managers new storage options that are high-performing but economical. Previously, the workloads migrated to the cloud might have had low storage performance expectations. Applications that needed higher storage performance stayed in the data center. This is no longer the rule. It’s possible to move almost any workload to the cloud without a storage performance penalty or the need to pay a premium price for a cloud SSD.

Increasing use of SSDs in public clouds

The use of SSDs in the cloud is not new. In fact, it's been almost a decade since Amazon Web Services (AWS) first introduced an SSD option for its Elastic Block Store (EBS). At the time, though, it was a premium option: its relatively high cost meant it was reserved for primary-tier storage workloads that needed high storage performance and could justify the expense. Microsoft Azure and Google Cloud Platform (GCP) also introduced SSD services around that time.

Over time, as SSDs became less expensive in terms of dollars per gigabyte and more diverse, cloud platforms have been able to introduce more affordable SSD service options. SSDs benefit cloud providers partly through their high density: they can pack far more data into a single rack unit (1U) of data center space than comparably sized hard disk drives (HDDs). The cloud service provider can save space and pack more revenue-generating services into the same data center footprint.
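
As a rough illustration of that density advantage, the back-of-the-envelope calculation below compares capacity per rack unit using hypothetical drive capacities and slot counts. None of the figures come from a specific vendor; they are assumptions chosen only to make the arithmetic concrete.

    # Illustrative capacity-per-rack-unit comparison; all figures are
    # hypothetical assumptions, not vendor specifications.
    ssd_slots_per_1u = 24      # assumed 2.5" NVMe bays in a 1U server
    ssd_capacity_tb = 15.36    # assumed capacity of one enterprise SSD

    hdd_slots_per_1u = 12      # assumed 3.5" bays in a 1U server
    hdd_capacity_tb = 20.0     # assumed capacity of one nearline HDD

    ssd_total = ssd_slots_per_1u * ssd_capacity_tb
    hdd_total = hdd_slots_per_1u * hdd_capacity_tb

    print(f"SSD capacity per 1U: {ssd_total:.1f} TB")          # 368.6 TB
    print(f"HDD capacity per 1U: {hdd_total:.1f} TB")          # 240.0 TB
    print(f"Density advantage: {ssd_total / hdd_total:.1f}x")  # 1.5x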

In terms of workloads, IaaS customers now expect cloud storage performance that meets or exceeds what’s available to them on-premises. They are now using the cloud for workloads that need fast storage, such as data analytics and database applications.

In a turnaround from earlier approaches, cloud customers are looking to the cloud for better storage performance than they can get on-premises. Maturing IaaS users are now opting to deploy primary business-critical workloads in the cloud because they know they can get high-performing storage to go with cloud-hosted applications. They’re getting the accessibility, availability and service level agreements (SLAs) they demand.

At the same time, even some lower-tier workloads such as backup and data archiving are moving to cloud SSDs. Part of this shift in storage architecture is an increasing reliance on the cloud provider’s ability to abstract storage management tasks. System owners can focus on the workload, not the storage.

Keeping the tiered storage model in focus

Using SSDs in the cloud is (or should be) part of an overall tiered storage model. Not every workload requires the fastest storage. Rather, it makes sense to assign each workload to a storage tier that matches its expected performance and SLA. The highest-performing SSDs can handle critical business systems, while data archives may be well served by the lowest-performing, cheapest HDD services. For tiered storage to work, the IaaS implementation must be flexible enough to let customers tailor storage deployment on a server-by-server basis.
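
To make that assignment concrete, the short Python sketch below maps a workload's performance and SLA requirements to a storage tier. The thresholds, tier labels and the choose_tier helper are hypothetical illustrations, not part of any cloud provider's API.

    # Minimal tier-selection sketch; thresholds and tier names are
    # hypothetical and would be tuned per provider and per workload.
    from dataclasses import dataclass

    @dataclass
    class WorkloadProfile:
        name: str
        required_iops: int       # sustained IOPS the application expects
        max_latency_ms: float    # latency target from the workload's SLA
        access_frequency: str    # "hot", "warm" or "cold"

    def choose_tier(w: WorkloadProfile) -> str:
        """Return a storage tier label for the given workload profile."""
        if w.access_frequency == "cold":
            return "archive-hdd"        # cost is the only real driver
        if w.required_iops > 10_000 or w.max_latency_ms < 2:
            return "premium-ssd"        # business-critical, latency-sensitive
        if w.access_frequency == "hot":
            return "standard-ssd"
        return "throughput-hdd"         # warm, throughput-oriented data

    db = WorkloadProfile("oltp-database", 25_000, 1.0, "hot")
    backup = WorkloadProfile("nightly-backup", 200, 50.0, "cold")
    print(db.name, "->", choose_tier(db))          # premium-ssd
    print(backup.name, "->", choose_tier(backup))  # archive-hdd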

Best practices

Implementing a tiered storage model in the cloud, at least for the time being, will likely involve selective use of SSDs. Traditionally, common practice recommended a mix of storage that was roughly 10% “hot” (frequently accessed), 30% “warm” (accessed regularly but not frequently) and 60% “cold” (very rarely accessed). SSDs would factor into the 40% of storage that is hot or warm. Data in these top two tiers requires fast access to compute, along with redundancy and high availability.
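
As a simple worked example, the calculation below applies that traditional 10/30/60 split to a hypothetical 500 TB estate to show how much capacity would land on SSD-backed tiers. The total capacity is an assumption chosen purely for illustration.

    # Illustrative capacity split under the traditional 10/30/60 model.
    # The 500 TB total is a hypothetical figure, not a recommendation.
    total_capacity_tb = 500
    tier_mix = {"hot": 0.10, "warm": 0.30, "cold": 0.60}

    for tier, share in tier_mix.items():
        print(f"{tier:>4}: {total_capacity_tb * share:.0f} TB")
    #  hot: 50 TB
    # warm: 150 TB
    # cold: 300 TB

    ssd_backed_tb = total_capacity_tb * (tier_mix["hot"] + tier_mix["warm"])
    print(f"SSD-backed (hot + warm): {ssd_backed_tb:.0f} TB")  # 200 TB, i.e. 40%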

Today, however, public cloud deployment is much more dynamic. Recommending specific percentages of hot, warm or cold storage is outdated; each enterprise chooses the right mix based on its own business and application usage.

While the cost of storage is a factor when sorting out hot versus warm storage, the more important factors are performance, availability and accessibility. For cold storage, though, it’s all about cost. Data in cold storage may never be accessed. Expectations of performance are low, as are SLAs (if there are any). Cold storage customers are looking for cloud storage services that deliver the lowest possible cost. SSDs do show up in some cold storage implementations, but they are usually used for caching.
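
To illustrate the caching role mentioned above, here is a minimal sketch of a read-through cache that keeps recently accessed objects on a fast, notionally SSD-backed layer in front of slow cold storage. The SsdReadCache class and the fetch_from_cold_storage helper are hypothetical stand-ins, not a real storage service.

    # Minimal read-through LRU cache sketch; the SSD tier is simulated
    # with an in-memory OrderedDict and the cold-storage read is a stub.
    from collections import OrderedDict

    def fetch_from_cold_storage(key):
        """Stand-in for a slow, cheap cold-storage read (hypothetical)."""
        return f"object-{key}".encode()

    class SsdReadCache:
        def __init__(self, capacity=1024):
            self.capacity = capacity
            self._cache = OrderedDict()

        def get(self, key):
            if key in self._cache:
                self._cache.move_to_end(key)     # mark as recently used
                return self._cache[key]          # hit: served from the fast tier
            data = fetch_from_cold_storage(key)  # miss: fall back to the slow path
            self._cache[key] = data
            if len(self._cache) > self.capacity:
                self._cache.popitem(last=False)  # evict the least recently used item
            return data

    cache = SsdReadCache(capacity=2)
    cache.get("a"); cache.get("b"); cache.get("a")
    cache.get("c")             # evicts "b", the least recently used entry
    print(list(cache._cache))  # ['a', 'c']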

Conclusion

SSDs are no longer just for special, premium cloud workloads. They are becoming increasingly common, especially as the price for SSD cloud storage comes down. The drives are a good fit for storage assigned to the “warm” and “hot” tiers of the tiered storage model in the cloud. In addition to high performance, SSDs enable a smaller physical footprint, which helps keep data center costs down—ultimately benefiting cloud customers in the form of lower prices.

Eli Hsu’s Bio

Eli Hsu is a Project Manager at Phison Electronics Corporation with a focus on enterprise SSD solutions. Prior to Phison, Eli was a Production Planning Engineer at Powertech Technology Inc., where he specialized in storage product production line investment planning. Eli holds a BS degree in Industrial Management from the National Taiwan University of Science and Technology in Taiwan.
