Everyone has heard of the “insatiable demand” for data today. Most of us are aware of the added load on data centers brought on by smartphones and apps, live video streaming, the Internet of Things (IoT), and other on-demand content. But is the industry prepared for the hefty intra-data center traffic that our everyday, seemingly simple tasks generate?
Put in Perspective
Facebook recently divulged that a simple 1 kB request to its data center resulted in 930 kB of data traffic within the data center: a staggering increase of nearly three orders of magnitude.
A similar scenario applies to the intra-data center traffic generated by a typical search on Amazon, the world’s largest retail search engine. A consumer typing a few words into the Amazon search box enters approximately 1 kB of data. Amazon’s search algorithm fans that request out to be matched, and generating the required result takes over 900 kB of internal traffic. The ratio is remarkable, particularly when you consider that each revised search exchanges another 1 kB for more than 900 kB. For the average consumer, this simple action likely happens many times before an order is placed!
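This amplification is simple to quantify. A quick back-of-the-envelope check, using only the figures quoted in this post, confirms the roughly three-orders-of-magnitude ratio:

```python
import math

# Figures quoted above: a ~1 kB request produces ~930 kB of internal traffic.
request_kb = 1
internal_traffic_kb = 930

amplification = internal_traffic_kb / request_kb
orders_of_magnitude = math.log10(amplification)

print(f"amplification: {amplification:.0f}x")              # 930x
print(f"orders of magnitude: {orders_of_magnitude:.1f}")   # 3.0
```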
The strain these small transactions place on the data center in terms of connectivity, data management and energy use is significant. Amazon is estimated to receive over 1.1 million search queries per second, which works out to over 95 billion searches every day, easily moving over 86 trillion kB of data. Considering all the online retail options available today, we are looking at an unfathomable amount of data being transmitted from these queries alone. And bearing in mind that these are trivial examples, consider the magnitude of intra-data center strain caused by data-intensive, real-time applications such as financial trading and weather tracking.
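The aggregate figures above follow directly from the post’s own estimates of roughly 1.1 million queries per second and roughly 900 kB of internal traffic per query:

```python
# Reproducing the aggregate arithmetic, using the estimates quoted in the text.
queries_per_second = 1.1e6          # ~1.1 million queries/second
seconds_per_day = 24 * 60 * 60      # 86,400
traffic_per_query_kb = 900          # ~900 kB of intra-data center traffic each

queries_per_day = queries_per_second * seconds_per_day       # ~9.5e10 (~95 billion)
daily_traffic_kb = queries_per_day * traffic_per_query_kb    # ~8.6e13 (~86 trillion kB)

print(f"{queries_per_day:.2e} queries/day")
print(f"{daily_traffic_kb:.2e} kB/day")
```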
The Race to Increased Computing Power
It is common knowledge that to service this “insatiable” demand, supercomputers, datacenters and the like have to step it up. Current petascale supercomputers, capable of performing in excess of one quadrillion operations per second, are often not enough.
As a result, technology companies are racing to develop the next generation of supercomputers, known as “exascale” machines, capable of at least 1,000,000,000,000,000,000 floating-point operations per second, otherwise known as one exaFLOPS. That is a million trillion calculations per second, a thousand-fold increase over the capability of a petascale machine.
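The scale jump is easy to sanity-check: exascale is a thousand petascale machines’ worth of compute.

```python
# One petaFLOPS = 10^15 floating-point operations per second (one quadrillion).
# One exaFLOPS  = 10^18 floating-point operations per second (one quintillion).
PETAFLOPS = 1e15
EXAFLOPS = 1e18

print(f"exascale / petascale = {EXAFLOPS / PETAFLOPS:.0f}x")  # 1000x
```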
The Race to Reduced Cost, Size and Power Consumption
The focus on increased consumer data usage often overshadows the fact that data communications remaining within the data center are several orders of magnitude greater than the traffic to and from it. Supporting massive data capacity creates a commensurate strain on cost, size and power consumption within the data center. Current data center communication over copper wire is limited by the fact that power needs are proportional to both the data rate over the wire and the reach, or the length the data has to travel. In other words, as bit rates go up and the physical size of data centers continues to grow, so does power consumption. To support the expected increase in intra-data center traffic, low-power photonic communication technology is required. Photonic technology uses less power and offers inherently longer reach, so it dissipates less heat in the data center and, in turn, less power is needed for cooling.
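The scaling argument above can be illustrated with a toy model in which copper interconnect power grows with both bit rate and reach, while optical link power is largely reach-independent. The coefficients here are hypothetical, chosen purely for illustration; real link budgets are far more involved:

```python
# Toy model only: coefficients are hypothetical, not vendor specifications.
def copper_power_mw(gbps, meters, mw_per_gbps_m=0.5):
    # Copper: power scales with both bit rate and reach.
    return gbps * meters * mw_per_gbps_m

def optical_power_mw(gbps, meters, mw_per_gbps=2.0):
    # Optical: reach barely affects power once light is launched into fiber.
    return gbps * mw_per_gbps

# Doubling reach doubles the copper link's power but leaves optical unchanged.
print(copper_power_mw(25, 3), copper_power_mw(25, 6))    # 37.5 75.0
print(optical_power_mw(25, 3), optical_power_mw(25, 6))  # 50.0 50.0
```

The point of the sketch is the shape of the curves, not the numbers: as data centers grow physically and bit rates climb, the copper term keeps multiplying while the optical term does not.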
Peter Kogge, an IEEE Fellow and professor of computer science and engineering at the University of Notre Dame, likens scaling up today’s existing supercomputer architecture to creating a machine requiring the equivalent of a gigawatt-scale nuclear power plant².
Companies like MACOM are pushing to utilize new technologies and develop more integrated solutions capable of supporting the growing power and performance requirements. One solution is to shift away from copper interconnects to low-power optical connectivity, which supports denser circuitry while dissipating less heat at lower cost. Our recently announced L-PIC™, which integrates lasers with a photonic integrated circuit, is a solution targeted primarily at data centers. Learn more about the L-PIC™ here.
Developing New Technologies
The escalating demand for data and the resulting load, power and connectivity challenges are broadly recognized across the industry, with leading companies taking various approaches to solving these problems. Just to meet the explosive growth of intra-data center traffic driven by everyday consumers of Internet content providers like Amazon, Microsoft, Google and Facebook, we must advance our capabilities and develop more power-efficient, compact, cost-optimized, high-speed interconnect solutions. MACOM has chosen to break through copper connectivity limits with technology drivers like optics and photonics, and we see great potential for optical interconnect solutions as the demand for data continues to increase with the advances of technology… and with the introduction of, and ensuing obsession with, apps like Pokémon GO.
¹National Research Council (U.S.) (2008). The Potential Impact of High-End Capability Computing on Four Illustrative Fields of Science and Engineering. The National Academies. p. 11. ISBN 978-0-309-12485-0.
²Kogge, Peter (2011). “Next-Generation Supercomputers.” IEEE Spectrum. http://spectrum.ieee.org/computing/hardware/nextgeneration-supercomputers
All financial guidance projections referenced in this post were made as of the publication date or another historical date noted herein, and any references to such projections herein are not intended to reaffirm them as of any later date. MACOM undertakes no obligation to update any forward-looking statement or projection at any future date. This post may include information and projections derived from third-party sources concerning addressable market size and growth rates and similar general economic or industry data. MACOM has not independently verified any information and projections from third party sources incorporated herein. This post may also contain market statistics and industry data that are subject to uncertainty and are not necessarily reflective of market conditions. Although MACOM believes that these statistics and data are reasonable, they have been derived from third party sources and have not been independently verified by MACOM.