We all know Moore’s Law, which states that the number of transistors on a chip doubles roughly every 18 months. Far less recognised is Metcalfe’s Law, which suggests that the value of a network is proportional to the square of the number of connected devices. On this basis, two devices can make one connection, five can make 10, and the number of possible connections keeps growing as roughly half the square of the number of devices. Mathematically, this can result in some daunting numbers in terms of bandwidth and capacity requirements.
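To see how quickly those numbers become daunting, here is a minimal sketch (plain Python; the function name is my own) of the pairwise-connection count behind Metcalfe’s Law:

```python
# Metcalfe's Law in miniature: among n connected devices there are
# n * (n - 1) / 2 possible pairwise connections, which grows roughly as n^2.

def possible_connections(n: int) -> int:
    """Number of distinct device-to-device links among n devices."""
    return n * (n - 1) // 2

for n in (2, 5, 100, 1_000_000):
    print(f"{n:>9} devices -> {possible_connections(n):,} possible connections")
# 2 devices make 1 connection, 5 make 10, and a million devices
# make roughly 500 billion possible connections.
```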
These two laws demonstrate a simple truth that all those involved in data (be it in management, storage or otherwise) must keep in mind: the demands placed on existing and future systems are only going one way, and that’s up.
The internet is getting exponentially bigger, but it still fits in the palm of your hand. This commoditisation and commercialisation of data (the Internet of Things, or “IoT”) is placing greater and more varied demands on data-related infrastructure than were envisioned even five years ago. Yet thanks to RFID (Radio Frequency Identification), the logistical challenges we face can be turned from a risk into an opportunity.
Consider your current weekly shopping, traced from the farm to the factory to your fridge. What would have seemed fantastic not long ago is now almost mundane. Tomorrow, areas like surveillance, security, healthcare, transport, food safety and document management will follow suit. It is predicted that this surge will lead to between 25 and 30 billion IoT devices by 2020. That’s only six years from now. Tomorrow is today: no longer just a phrase, but our reality.
When talking about bandwidth growth, it’s important to place it in a real-world context. The next generation of superfast computers, also called High Performance Computers (HPC), will approach or exceed the 1 ExaFlop mark (1 ExaFlop = 10^18 floating-point operations per second). This is slated to occur in 2016. Looking back, we see that the computing power of a given year’s top HPC turns up in the average laptop about 12 years later, and in your mobile phone roughly three years after that.
Staggeringly, this means that just 17 years from now we will carry around a mobile phone with the computing power of one ExaFlop. A noteworthy (possibly shocking) thought: that is roughly the same as another well-known data processor, the human brain. It’s clear that the need for bandwidth will continue increasing to truly mind-boggling values.
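The timeline arithmetic above is easy to check. Here is a minimal sketch, assuming the piece is written in 2014 (my inference from the “six years to 2020” remark) and using the 12-year and three-year lags quoted above:

```python
# Back-of-the-envelope check of the exaflop timeline (years taken from the
# text; the 2014 "now" is an assumption based on the 2020 figure six years out).
HPC_EXAFLOP_YEAR = 2016   # predicted year HPC reaches 1 ExaFlop (10**18 FLOPS)
LAPTOP_LAG_YEARS = 12     # HPC performance reaches the average laptop ~12 years on
PHONE_LAG_YEARS = 3       # and the mobile phone ~3 years after that
NOW = 2014

phone_exaflop_year = HPC_EXAFLOP_YEAR + LAPTOP_LAG_YEARS + PHONE_LAG_YEARS
print(phone_exaflop_year)        # 2031
print(phone_exaflop_year - NOW)  # 17 years from now, as stated
```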
This has profound implications for data centres, both now and in the future. Innovation can no longer be a ‘USP’ (unique selling point) for clients or customers; it’s a survival requirement. Current data centre signals are (mainly) based on 10Gb/s. Even when we talk about the migration from 10Gb/s to 40Gb/s and later 100Gb/s, the fundamental lane rate remains the same: in practice, 100Gb/s is implemented as 10x10Gb/s parallel lanes rather than a single stand-alone 100Gb/s stream.
Yet thanks to recent advances in technology, the same result can be achieved with 4x25Gb/s engines, saving time and money on installation, maintenance and use. For instance, TE Connectivity’s new Coolbit engine is well worth checking out in this regard.
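The saving is straightforward to see in numbers. Here is a quick illustrative sketch (my own arithmetic, not vendor figures) of how many parallel lanes each lane rate needs to reach a 100Gb/s aggregate:

```python
# Why lane rate matters: the same 100Gb/s aggregate can be built from ten
# 10Gb/s lanes or four 25Gb/s lanes; fewer lanes mean fewer fibres and
# channels to install, test and maintain. Illustrative arithmetic only.

def lanes_needed(total_gbps: int, lane_gbps: int) -> int:
    """How many parallel lanes a given lane rate needs for the target bandwidth."""
    return -(-total_gbps // lane_gbps)  # ceiling division

for lane in (10, 25):
    print(f"100Gb/s over {lane}Gb/s lanes: {lanes_needed(100, lane)} lanes")
# 10Gb/s lanes -> 10 lanes; 25Gb/s lanes -> 4 lanes
```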
In my next blog, I will discuss the migration from 10 to 40 and 100Gb/s as well as the next steps toward 400Gb/s.