OFC 2018 Spotlight: Fully Analog Optical Connectivity for High-Performance Computing (HPC)   Mar. 13, 2018

The amount of computing horsepower at our disposal today is growing at a staggering rate, driven by huge gains in processing performance, power optimization and architectural innovation. This progress is unfolding at every level of the technology market, but nowhere more so than in the High-Performance Computing (HPC) ecosystem. And where previously we associated HPC primarily with ultra-specialized supercomputers, HPC capabilities have extended outward into cloud data centers and beyond, servicing compute-intensive applications that require massive computational power and throughput to crunch huge data sets and run sophisticated modeling and simulation workloads.


Among the many domains where HPC is employed today, natural science applications are some of the most common, spanning life sciences like biology and medicine and physical sciences including physics, astronomy, chemistry and earth science. Atmospheric research and weather modeling are well represented among HPC applications, as are geological and oil and gas exploration.

Aerospace applications are prime candidates for HPC too, enabling government and commercial entities alike to assess air flow physics in a controlled environment, with implications for next-generation aircraft and spacecraft designs, and fuel efficiency characteristics.

HPC is also commonly employed in finance applications, where rapid market fluctuations must be analyzed and acted upon at extremely high speeds. HPC can enable automated trading systems to buy and sell in near real time to maximize investor value, while also enabling large-scale correlation analysis.


In all of these HPC environments, optical connectivity plays a crucial role in the high-speed delivery of data across processing, memory and storage resources, eliminating short-reach network bottlenecks that could slow overall system performance. Fully analog optical modules and active optical cables (AOCs) have emerged as the solutions of choice for these environments. Though more difficult to implement than mainstream digital signal processor (DSP) based solutions, fully analog optical connectivity provides a host of benefits that are ideally suited for the unique demands of HPC applications. Among these benefits:

Extremely Low Latency – Fully analog optical interconnects can provide 1000X lower latency than DSP-based solutions. This attribute gets to the heart of the HPC value proposition, enabling workloads to run at the fastest possible speeds rather than being throttled by interconnect delay. This is beneficial on multiple levels. Modeling simulations can be dramatically accelerated, delivering results in days or weeks where previously they might have taken months or even years. This in turn gives users the data they need in much shorter timeframes, helping them speed the pace of their research initiatives.

As a consequence of this workload acceleration, HPC system capacity can also be reallocated and redeployed at a faster rate, ensuring that HPC host providers can achieve the maximum return on their investment in HPC infrastructure – investments that can easily reach into the multimillion- and even multibillion-dollar range.

Low Power Consumption – In the absence of a DSP, fully analog optical interconnects may consume much less power and dissipate considerably less heat than mainstream commercial solutions. Here again this is beneficial on multiple levels. By reducing power consumption at the device level, total operating costs can be reduced. And increased thermal efficiency enables a reduction in cooling hardware and costs, at both the system level and facility level. The reduction of cooling components within the device also ultimately enables smaller module form factors, maximizing port and bandwidth density while conserving valuable real estate in and around the HPC cluster.

Lower Overall Cost – As evidenced above, the streamlined architecture of fully analog optical interconnects can yield significant cost savings compared to mainstream commercial solutions. Reductions in component count and power consumption ultimately enable more cost-effective module and system designs, while delivering a lower total cost of ownership for the end customer.


MACOM’s leadership in fully analog optical components for modules and AOCs targeting HPC applications extends from 100G to 200G and 400G connectivity. To read more about our recently announced 400G chipset for high-performance, short-reach optical interconnects, click here. OFC 2018 attendees are invited to make an appointment to preview a demo of this new solution at booth #2613. We look forward to seeing you there!

What Happens Now That 5G Standards Are Set?   Feb. 26, 2018

It’s a very exciting time in the evolution of 5G. In December 2017, the 3rd Generation Partnership Project (3GPP) officially announced the new standards for 5G New Radio (NR), effectively setting the stage for full-scale and cost-effective development of 5G networks. The approved standards include support for Non-Standalone 5G, enabling an operator with an existing 4G/LTE footprint to take advantage of the performance benefits of 5G, in either new or existing spectrum, to boost capacity and user throughput.


Following this vital milestone in the realization of 5G, the industry is hitting the ground running. Although the full deployment and promised 10x to 1000x capacity value-add of 5G may be further down the road, the effort and innovation required to bridge the gap between existing 4G speeds and 5G’s full potential have already begun.

Similar to previous network technologies, the evolution of 5G will see many flavors throughout its life cycle. Early deployments often use straightforward hardware partitioning, useful for demonstrating the technology but not necessarily hitting the performance points set by the International Telecommunications Union (ITU), which is responsible for defining what constitutes a new network generation, or “G”. For example, the 3GPP standards for 4G LTE were ratified in 2009, and within a year the first networks were rolling out—Telia deployed their 4G LTE network in Stockholm and Oslo by the end of that year. Although this initial deployment was considered an incremental improvement over 3G, it set into motion profound changes for the transition to 4G LTE.

In a fast-growing industry characterized by continuous evolution rather than revolution, the ITU is committed to connecting the world and protecting people’s right to communicate. The ITU has set step-by-step objectives for every network generation to date in an effort to keep definitions and deployment targets aligned. The key targets set by the ITU for 5G include a bandwidth minimum of 100 MHz, a peak downlink of 20 Gbit/s, latency of 4 milliseconds (ms) for extreme broadband and 1 ms for ultra-low-latency use cases, and an average downlink of 100 Mbit/s and uplink of 50 Mbit/s. Naturally, these targets are not expected to be immediately and universally implemented in every initial deployment, but are considered goals for the step-by-step evolution and maturation of 5G.

The image below (Figure 1) highlights the total capacity promised by 5G throughout its evolution and maturity. Assuming demand doubles every two years—an assumption based on past experience—the capacity enhancement offered by millimeter wave (mmW) spectrum will not be required until the capacity offered by sub-6 GHz spectrum is fully utilized. Although higher-frequency spectrum may be deployed earlier to address particular locations, such deployments will be the exception rather than the rule as the evolution of 5G naturally progresses. With the world on the cusp of the evolution of 5G, it is truly an exciting time for the industry.

[Figure 1: Total capacity promised by 5G throughout its evolution and maturity]



As expected, carriers are already deep into 5G deployment plans. AT&T has announced plans to deploy mobile 5G to customers in a dozen cities by the end of 2018. One could speculate that, to achieve this, they will have to use existing or interim hardware solutions to bridge the gap until standards-compliant chipsets become available.

Verizon, largely recognized for blazing the trail with mmWave 5G and established as a forerunner through the Verizon 5G Technology Forum, has partnered with Samsung to develop “fixed 5G” microcell units, home routers and chip-sized mobile modems to enable 5G service for its customers. At CES 2018, Verizon’s CEO announced that the carrier plans to beat AT&T to 5G deployment.

Sprint announced last year its plans to deploy a 5G solution in the 2.5 GHz range by late 2019. T-Mobile also announced its 600 MHz spectrum acquisition last year and is expected to use the entire band to enable a complete indoor/outdoor 5G network. 600 MHz radio waves travel twice as far and offer four times better performance around buildings and obstacles, a key propagation advantage.


As the race toward 5G begins in full force, MACOM is positioned to take a leading role in enabling the updated infrastructure. As parallel advancements in RF and optical technologies begin to intersect and integrate, MACOM is ready with the semiconductor components, technologies and cost-effective solutions necessary to realize the evolution of 5G (click to learn more).


Over the next few years, many flavors of 5G will begin to deploy around the world. These initial deployments may only bring incremental improvements, but over time the full capacity of 5G will be reached and deployed, bringing the promised long-term benefits with it. At full maturity, a 5G network promises customers near-zero latency, improved data speeds, lower energy consumption and increased capacity. While this ideal remains a few years off, we can all agree the wait and the journey will be worth it.

GaN Transcendent: Driving the Scale, Supply Security and Surge Capacity for Mainstream RF Applications   Feb. 06, 2018


The market landscape for RF semiconductor technology has experienced significant changes in recent years.

For decades, laterally diffused metal oxide semiconductor (LDMOS) technology has dominated the RF semiconductor market in commercial volume applications. Today, the balance has shifted, and Gallium Nitride on Silicon (GaN-on-Si) has emerged as the successor of choice to legacy LDMOS technology.

GaN-on-Si’s performance advantages over LDMOS are firmly established – it delivers over 70% power efficiency and upward of 4X to 6X more power per unit area, with scalability to high frequencies. In parallel, comprehensive testing data has affirmed GaN-on-Si’s conformance with stringent reliability requirements, matching and even exceeding the RF performance and reliability of the more expensive Gallium Nitride on Silicon Carbide (GaN-on-SiC) alternative technology.

GaN-on-Si’s ascension to the forefront of the RF semiconductor industry comes at a pivotal moment in the evolution of commercial wireless infrastructure. Its proven performance leadership over LDMOS technology is driving its adoption within the newest generation of 4G LTE basestations, and positioning it as the likely de facto enabling technology for 5G wireless infrastructure going forward, with seismic market implications that could extend far beyond mobile phone connectivity, encompassing transportation, industrial and entertainment applications, among many others.

Looking further ahead, GaN-on-Si-based RF technologies have the potential to supplant antiquated magnetron and spark plug technologies to unlock the full value and promise of commercial solid-state RF energy applications, spanning cooking, lighting, automotive ignition and beyond, where huge gains in energy/fuel efficiency and heating and lighting precision are believed to be close on the horizon.


Given the unprecedented pace and scale of the impending 5G infrastructure build-out in particular, there’s been increased attention on the cost structures, manufacturing and surge capacities, and supply chain flexibility and security inherent to GaN-on-Si relative to LDMOS and GaN-on-SiC. GaN-on-Si stands alone as the superior semiconductor technology for next-generation wireless infrastructure, offering the potential for GaN performance at LDMOS cost structures, with the commercial manufacturing scalability to support massive demand.

The joint announcement from MACOM and STMicroelectronics of plans to bring GaN-on-Si technology to mainstream RF markets and applications marks a pivotal turning point in the GaN supply chain ecosystem, combining MACOM’s RF semiconductor technology prowess with ST’s scale and operational excellence in silicon wafer manufacturing. While expanding MACOM’s source of supply, this agreement is also expected to lead to the increased scale, capacity and cost structure optimizations necessary for accelerating mass-market adoption of GaN-on-Si technology.

For wireless network infrastructure, this collaboration is expected to allow GaN-on-Si technology to be cost-effectively deployed and scaled for 4G LTE basestations as well as massive MIMO 5G antennas, where the sheer density of antenna configurations puts a premium value on power and thermal performance, particularly at higher frequencies. And when properly exploited, GaN-on-Si’s power efficiency advantages can make a profound impact on wireless network operators’ basestation operating expenses. MACOM estimates that the utility bill savings from switching only the new macro basestations deployed in a single year to MACOM GaN-on-Si can exceed $100M, modeled at an average energy rate of $0.10/kWh.
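As a rough sketch of how an estimate of this shape is built (the station count and per-station savings below are illustrative assumptions, not MACOM’s actual model inputs):

```python
# Sketch of a fleet-level utility savings model. The energy rate matches
# the $0.10/kWh figure cited in the post; all other inputs are
# hypothetical placeholders.
ENERGY_RATE_USD_PER_KWH = 0.10
HOURS_PER_YEAR = 8760

def annual_utility_savings(new_stations, watts_saved_per_station):
    """Yearly utility savings from switching new macro basestations to GaN-on-Si."""
    kwh_saved = new_stations * watts_saved_per_station * HOURS_PER_YEAR / 1000
    return kwh_saved * ENERGY_RATE_USD_PER_KWH

# Hypothetical example: 500,000 new macro stations, each drawing 250 W less
# on average thanks to higher PA efficiency.
print(f"${annual_utility_savings(500_000, 250):,.0f} per year")  # $109,500,000 per year
```

Even with conservative per-station assumptions, continuous 24/7 operation makes the fleet-level savings add up quickly, which is why the estimate lands in the $100M-plus range.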


The evolution of GaN-on-Si from early research and development to commercial-scale adoption may prove to be the largest technology disruption to impact the RF semiconductor industry in a generation. Via our agreement with ST, MACOM GaN-on-Si technology is uniquely positioned to meet the performance, cost structure, manufacturing capacity, and supply chain flexibility requirements of 4G LTE and 5G wireless basestation infrastructure going forward, with untold promise for solid-state RF energy applications. Offering the prospect of RF solutions at price/performance metrics that would be otherwise unachievable with competing LDMOS and GaN-on-SiC technologies, GaN-on-Si’s potential has only just begun to be realized.

For more information about MACOM’s GaN-on-Si technology leadership, visit

A 2018 Look Ahead for Optical Networking Trends in the Hyperscale Data Center   Jan. 24, 2018

In our previous blog post, we forecasted some 2018 trends in the RF semiconductor domain, addressing the impact of digitization on legacy RF functionality, the demand for multi-mission capability in defense and civil applications, and the industry’s continued dependence on diodes to enable key system functionality.

Here we’ll turn our attention from RF to Light, and offer some predictions for the optical networking domain in 2018, with a special focus on hyperscale data centers, where the pace of innovation is hitting breakthrough strides.


The adoption of 100G modules is well underway among hyperscale data centers today, with millions of 100G links deployed as demand for higher speed optical interconnects continues to surge. As 100G technology reaches maturity in 2018, optical suppliers will likely seek competitive advantage by intensifying their efforts to reduce module costs and increase system density, which in turn requires a careful balancing of power and thermal profiles.

Single lambda PAM-4 modulation is critical to these efforts, enabling the delivery of 100G throughput over a single fiber. This can reduce the number of lasers from four to one in an optical transceiver module, and the associated cost and density benefits are significant. For data center operators, single lambda PAM-4 is widely expected to become the de facto standard for 100G connectivity going forward, with deployment commencing later this year.
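The lane math behind the four-to-one laser reduction can be sketched in a few lines (round numbers; real 100G single lambda links run at roughly 53 GBd to cover FEC overhead):

```python
# Why single lambda PAM-4 needs only one laser: PAM-4 carries 2 bits per
# symbol, so one ~50 GBd optical lane replaces four 25 Gb/s NRZ lanes.
def lanes_needed(total_gbps, baud_g, bits_per_symbol):
    per_lane_gbps = baud_g * bits_per_symbol
    return -(-total_gbps // per_lane_gbps)  # ceiling division

print(lanes_needed(100, 25, 1))  # 4 lanes of 25G NRZ -> four lasers
print(lanes_needed(100, 50, 2))  # 1 lane of 50 GBd PAM-4 -> one laser
```

Since each optical lane carries its own laser and supporting optics, cutting the lane count from four to one is where the cost and density savings come from.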

With hyperscale data center operators driving the adoption of leading-edge 100G technologies, the enterprise data center ecosystem is poised to take advantage of established, current-generation 100G optical technologies. Fortune 500 companies hosting their own IT infrastructure are expected to fuel a new wave of 100G adoption beginning in 2018, as they seek to benefit from the cost efficiencies and robust supply chain ushered in via the mainstreaming of 100G technology in the hyperscale data center. 


Even as the market for 100G modules continues to soar in 2018, the optical industry will take a major step toward enabling 400G connectivity in hyperscale data centers. Prototype 400G modules are projected to hit the market within the next six to 12 months, and industry watchers anticipate that we could see initial 400G module uptake by the end of the year.

The transition to 400G modules promises more than just a 4X data throughput improvement, however. The power efficiency gains are equally noteworthy. A 400G module is expected to consume only about 2.5X the power of a 100G module, not the 4X one might assume. A 400G module can therefore move the same number of bits from point A to point B far more efficiently than four 100G modules, in terms of both power and faceplate space, driving power-per-bit and cost-per-bit downward.
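The per-bit arithmetic behind that claim is simple, using the 2.5X rule of thumb from above:

```python
# Relative power-per-bit of a 400G module versus a 100G module, assuming
# 400G draws ~2.5X the power while carrying 4X the bits.
def power_per_bit_ratio(throughput_multiple=4.0, power_multiple=2.5):
    """Power-per-bit of the faster module relative to the slower one."""
    return power_multiple / throughput_multiple

print(power_per_bit_ratio())  # 0.625, i.e. ~37.5% less power per bit
```

In other words, each bit carried over a 400G link costs roughly five-eighths of the power it would over a 100G link, before counting the space saved by consolidating four ports into one.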


Much of the emerging generation of 400G switch silicon from a variety of vendors is also noteworthy in that it leverages 50Gb/s electrical channels – a key capability recently standardized by the IEEE and OIF. The evolution from 25Gb/s to 50Gb/s electrical signaling enables twice the data throughput over a copper PCB trace, which ultimately allows for higher density modules on the pathway from 100G to 400G, complementing the density gains achieved with the single lambda optical architecture.

This development opens the door for continued innovation in smaller form factor pluggable modules like microQSFP that further optimize 100Gb/s port density. Supporting one, two and four channels per module in the same faceplate density as a single-channel SFP, a microQSFP module equipped with 50Gb/s electrical IO could deliver 100G throughput using just two host side traces (versus four today), while providing a 33% increase in density compared to QSFP28 (48 microQSFP ports vs. 36 QSFP28 ports per 1u faceplate).
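The 33% figure falls straight out of the port counts quoted above:

```python
# Faceplate density gain of microQSFP over QSFP28, per the 1u port
# counts cited in the post.
MICRO_QSFP_PORTS_PER_1U = 48
QSFP28_PORTS_PER_1U = 36

gain = (MICRO_QSFP_PORTS_PER_1U / QSFP28_PORTS_PER_1U) - 1
print(f"{gain:.0%} more ports per 1u faceplate")  # 33% more ports
```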

In the year ahead, we can also expect to see advancements in optical component integration that will further increase the density of 100G and 400G modules. Whereas today system designers are grappling with discrete PHYs, TIAs, laser drivers, CDRs and more on their evaluation and prototype boards, one can envision how some of these functions could be consolidated further, potentially reducing design complexity, cost, interoperability testing and overall time to market.

We invite you to join us throughout the year ahead to assess the latest breakthroughs and trends in optical technologies. There’ll be much to talk about as we head into OFC 2018 and onward to the CIOE and ECOC tradeshows – we’re looking forward to an exciting year!
