Blog

A Look Ahead at the Trends and Technologies Shaping the Optical Networking Market   Feb. 26, 2019

As we enter 2019 – as with any new year – it’s a good opportunity to look ahead to what the future holds for the networking and communication domains, and the key technologies that underpin them. In this blog post, we’ll share some of our predictions for the optical networking market in particular, taking into account the myriad trends affecting infrastructure build-outs in the coming year and beyond.

WHAT’S DRIVING THE GROWTH?

With the number of subscribers, devices per subscriber and data per subscriber all increasing across our wireless and wireline networks, the data tsunami will only continue to gather speed. As cloud computing usage continues to skyrocket, so too will the capital expenditures of the leading cloud datacenter providers, reflecting the massive scope of their ongoing datacenter infrastructure build-outs – and the healthy ROI they continue to achieve as a result.

Industry analysts counted 430 hyperscale datacenter facilities at the end of 2018, representing an 11% increase in facilities compared to 2017. Looking ahead to 2019, experts are tracking 132 new datacenter facilities that are at various stages of planning or building.

Accordingly, the first half of 2018 saw a nearly 40% ($22B) jump in hyperscale cloud datacenter capex over the same period in 2017, and industry experts anticipate that the global cloud computing market will exceed $200B in 2019 – a 20% increase over 2018, fueled in large part by increasing enterprise adoption of cloud services. As such, cloud datacenter operators may anticipate revenue gains on par with the 45% to 75% year-over-year gains reported by some operators in 2018.

We’ll also be hearing a lot more about edge computing in 2019, driven in large part by 5G’s anticipated enablement of applications requiring low latency. The advent of autonomous vehicles in particular has intensified the urgent need for near real-time network responsiveness to ensure the highest levels of passenger safety. Mission critical civil infrastructure could similarly benefit from dramatically reduced latency, as could a host of IoT, Industry 4.0 and smart city applications requiring precisely orchestrated real-time communication.

The industry’s prominent focus on edge computing reflects the growing realization that for applications like these, it’s imperative to bring datacenter-caliber processing capabilities and real-time automated decision making closer to where the data traffic originates. This in turn will change the way datacenters are interconnected by shifting where and how data enters and exits the cloud, and will drive increased deployments of longer-distance links, suggesting steady growth for metro and long-haul infrastructure into 2019 and beyond. As a result, global deployments of PON and fiber backhaul infrastructure are expected to remain healthy in 2019, as indicated by industry forecasts predicting coherent port count growth.

OPTICAL MODULE PREDICTIONS

In 2019, volume scale optical module deployments in cloud datacenters will continue to drive the cost structures and supply chain required to propagate the fastest and most cost-effective optical links to enterprise environments and outward throughout the network.

To that end, cloud datacenters are still in the early stages of a long upgrade cycle to 100G and higher bandwidth. Industry customer forecasts project 2019 and 2020 to be strong growth years for CWDM4 modules in particular, with the potential for 100G unit demand in 2019 to more than double 2018 demand, reaching estimated volumes of over 10 million units. Some experts predict that 100G CWDM4 modules will remain dominant in the market through 2022.

While CWDM4 is expected to represent a vast majority of unit volumes over the next few years, 2019 is also expected to see meaningful adoption of single-lambda PAM-4. Industry watchers anticipate that single-lambda 100G modules will be deployed by several vendors, with volume shipments ramping up by the end of the year. The recent Ethernet Alliance Higher Speed Networking Plugfest hosted at the University of New Hampshire InterOperability Laboratory (UNH-IOL) featured PAM-4-based electrical and optical signaling technologies leveraging compatible offerings from a host of leading vendors, highlighting the gathering industry momentum for single-lambda 100G connectivity.

Meanwhile, the market arrival of switch silicon solutions utilizing 50G electrical IO channels is enabling 2X the data throughput over copper PCB traces compared to legacy chipsets with 25G electrical signaling. This capability is significant in that it opens the door to more streamlined 100G module architectures (2 x 50G), and it’s regarded as a key enabler for accelerating the adoption of higher bandwidth modules at 200G (4 x 50G) and onward to 400G.

To this end, the growing demand for fully analog 200G optical modules holds key implications for the anticipated timing of mainstream 400G optical module adoption. Widely considered to be still in its infancy, 400G technology promises huge bandwidth gains in the future, but its cost curve must come down dramatically for it to achieve volume uptake in the cloud datacenter. In the meantime, fully analog 200G modules – with their many advantages over DSP-based offerings, including significantly lower latency, power consumption and cost – are coming to market now, and are seen by many as a viable, volume-scalable stepping-stone to 400G. 2019 will therefore be a pivotal year for seeing how 200G takes hold in the cloud datacenter, and intensifying industry collaboration on 200G standards and interoperability could help position this technology for sustained mainstream adoption while 400G continues to mature.

As always, MACOM and our industry peers will be tracking these and other developments closely throughout the year, particularly as we approach OFC 2019 and onward to the CIOE and ECOC events in the Fall. We can safely anticipate a host of eye-opening revelations throughout the year – revelations that will no doubt help shape our market perspective as we ring in the next new year.


Designing with Diodes: Protecting Sensitive Components   Jan. 22, 2019

Sensitive low noise amplifiers (LNAs) in radar or radio receivers cannot tolerate large input signals without sustaining damage. What’s the solution? Receiver-protector limiter (RPL) circuits, the “heart” of which typically comprises PIN diodes, can be utilized to protect sensitive components from large input signals without adversely affecting small-signal operation.

RPL circuits do not require external control signals. These circuits comprise at least one PIN diode connected in shunt with the signal path, along with one or more passive components, such as RF choke inductors and DC-blocking capacitors. A simple (but possibly complete) RPL circuit is shown below.

[Figure: a simple RPL circuit schematic]

When there is no RF input signal or when only a small RF input signal is present, the impedance of the limiter PIN diode is at its maximum value, typically a few hundred ohms or greater in magnitude. Consequently, the diode produces a very small impedance mismatch and correspondingly low insertion loss.

When a large input signal is present, the RF voltage forces charge carriers into the PIN diode’s I layer, holes from its P layer and electrons from its N layer. The population of free charge carriers introduced into the I layer lowers its RF resistance, which produces an impedance mismatch as seen from the RPL circuit’s RF ports.

This mismatch causes energy from the input signal to be reflected to its source. The reflected signal, in concert with the incident signal, produces a standing wave with a voltage minimum at the PIN diode since it temporarily presents the lowest impedance along the transmission line. There is a current maximum collocated with every voltage minimum along the transmission line. This current flows through the PIN diode, enhancing the population of free charge carriers in the diode’s I layer, which results in lower series resistance, a greater impedance mismatch and a “deeper” voltage minimum. Eventually the diode’s resistance will reach its minimum value, which is determined by the design of the PIN diode and the magnitude of the RF signal. Increases in the RF signal amplitude force the diode into heavier conduction, thus further reducing the diode’s resistance until the diode is saturated and produces its lowest possible resistance. This results in an output power vs. input power curve as shown below.

[Figure: limiter output power vs. input power curve]
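To make the shape of that curve concrete, here is a small, purely illustrative Python sketch of an idealized limiter transfer characteristic – the threshold, small-signal loss and leakage slope are assumed round numbers, not measurements of any particular diode.

```python
import numpy as np

def limiter_pout_dbm(pin_dbm, threshold_dbm=15.0, small_signal_il_db=0.5,
                     leakage_slope_db_per_db=0.1):
    """Idealized limiter: transparent (minus insertion loss) below threshold,
    nearly flat 'leakage' output above it. All parameters are assumed."""
    linear_region = pin_dbm - small_signal_il_db
    limited_region = (threshold_dbm - small_signal_il_db
                      + leakage_slope_db_per_db * (pin_dbm - threshold_dbm))
    return np.minimum(linear_region, limited_region)

pin = np.linspace(-10.0, 40.0, 11)
for p_in, p_out in zip(pin, limiter_pout_dbm(pin)):
    print(f"Pin = {p_in:5.1f} dBm -> Pout = {p_out:5.1f} dBm")
```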

After the large RF signal is no longer present, the diode’s resistance remains low (and its insertion loss remains large) if the population of free charge carriers in the I layer is large. Upon cessation of the large RF signal, the population of free charge carriers will decrease by two mechanisms: conduction out of the I layer and recombination within the I layer. The magnitude of the conduction is determined primarily by the DC resistance in the current path external to the diode.

The rate of recombination is determined by several factors, including the free-charge-carrier density in the I layer, the concentration of dopant atoms and other charge-trapping sites in the I layer, etc. Because of the diode parameters required for higher power handling, the greater the RF signal a PIN diode can safely handle, the longer its recovery time to low insertion loss will be.

The properties of the I layer of the PIN diode determine how this circuit performs. The I layer’s thickness (sometimes referred to as its width) determines the input power at which the diode goes into limiting – the thicker the I layer, the higher the input-referred 1 dB compression level (also known as the threshold level). The thickness of the I layer, the area of the diode’s junction and the material of which the diode is made determine the resistance of the diode as well as its capacitance. These parameters also determine the diode’s thermal resistance.

The simplest implementation of a PIN RPL circuit comprises a PIN diode, an RF choke inductor and a pair of DC blocking capacitors. The RF choke inductor is critical to the performance of the RPL circuit; its primary function is to complete the DC current path for the PIN diode. When a large signal forces charge carriers into the diode’s I layer, a DC current is established in the diode. If a complete path for this DC current is not provided, the diode’s resistance cannot be reduced, and no limiting can occur. This current flows in the same direction as a rectified current would, but it is not produced by rectification.

Implementation of the choke inductor in the RPL circuit can be challenging, since inductors are the least ideal of the components in the RPL circuit. Inductors all have series and parallel resonances due to their inductance and their parasitic inter-winding capacitance. Care must be taken to ensure that series resonances do not occur within the operating frequency band. Additionally, the choke’s DC resistance must be minimized in order to reduce the recovery time of the RPL circuit.

Note: the DC blocking capacitors are optional. They are only necessary if there are DC voltages or currents present on the input or output transmission lines which might bias the PIN diode.

A Practical Example

Assume the maximum input power which an LNA can tolerate is 15 dBm. This power level sets the requirement for the I layer thickness of the PIN diode in the RPL circuit, which in this case is approximately 2 microns. A designer can then determine the acceptable capacitance of the PIN diode from the frequency of the RF signal and the maximum acceptable small-signal insertion loss. If they assume the RPL operates in X Band and the maximum acceptable insertion loss is 0.5 dB, then the maximum capacitance of the diode can be calculated.

The insertion loss (IL) in decibels of a shunt capacitance is given by:

$$\mathrm{IL_{dB}} = 10\,\log_{10}\!\left[\,1 + \left(\pi f C Z_0\right)^{2}\right]$$

We can solve that equation for C:

$$C = \frac{\sqrt{10^{\mathrm{IL_{dB}}/10} - 1}}{\pi f Z_0}$$

For f = 12 GHz, IL = 0.5 dB and Z0 = 50 Ω, C = 0.185 pF.
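For readers who want to plug in their own frequency and loss budget, a minimal Python sketch of this calculation is shown below; the helper name is ours, and the inputs simply restate the worked example above.

```python
import math

def max_shunt_capacitance(il_db, freq_hz, z0_ohms=50.0):
    """Largest shunt capacitance whose small-signal insertion loss
    stays at or below il_db on a line of impedance z0_ohms."""
    return math.sqrt(10 ** (il_db / 10.0) - 1.0) / (math.pi * freq_hz * z0_ohms)

c_max = max_shunt_capacitance(il_db=0.5, freq_hz=12e9)
print(f"Maximum diode capacitance: {c_max * 1e12:.3f} pF")  # ~0.185 pF
```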

Along with the I layer thickness, this value of capacitance will determine the area of the diode’s junction.

The combination of a thin I layer and small junction area creates a diode with relatively high thermal resistance, one that cannot dissipate very much power without its junction temperature exceeding the maximum rated value of 175 °C. Typically, a 2 micron diode with 0.185 pF capacitance can safely handle a large CW input signal of around 30 to 33 dBm. A larger signal can potentially damage or immediately destroy this diode due to the Joule heating produced by the current flowing through the diode’s electrical resistance.
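For context, converting those dBm figures to watts (a quick back-of-the-envelope check, nothing more) shows the difference in scale between the LNA’s damage threshold and the limiter diode’s power handling:

```python
def dbm_to_watts(power_dbm):
    """Convert a power level in dBm to watts."""
    return 10 ** ((power_dbm - 30) / 10.0)

for level in (15, 30, 33):
    print(f"{level} dBm = {dbm_to_watts(level):.2f} W")
# 15 dBm ~ 0.03 W (LNA limit); 30-33 dBm ~ 1-2 W (diode handling)
```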

PIN diode RPL circuits reliably protect sensitive components like low noise amplifiers in radar or radio receivers from large incident signals. For RPL applications which require very low flat leakage output power but high input power handling, additional diode stages and other circuit enhancements can be added at the input side of the RPL circuit.

Members of MACOM’s applications engineering team are ready to help you select the optimal diodes and circuit topologies for your RPL application. For more information on MACOM’s solutions, visit: https://www.macom.com/diodes


What’s the Role of 200G Optical Connectivity on the Pathway to 400G?   Dec. 05, 2018

The ever-burgeoning bandwidth demands on Cloud Data Center infrastructure are intensifying the pressure on optical module providers to enable faster connectivity solutions at the required volume scales and cost structures. This is fueling tremendous uptake for 100G CWDM4 (4 x 25G) modules and accelerating the ramp to 100G single lambda (PAM-4) modules on the pathway to mainstream adoption of 400G (4 x 100G).

Technology vendors from across the optical networking industry are working hard to drive this progress, leveraging interoperability plugfests among other opportunities to ensure seamless compatibility among a growing ecosystem of components, modules, and switch systems. This activity reflects the urgent need for faster Data Center links, and also underscores the extreme effort and design precision required to achieve coherence among the heterogeneous products coming to market.

With 100G in widescale deployment today and the promise of mainstream 400G deployment seemingly ubiquitous, Cloud Data Centers are eager to take advantage of any and every opportunity to bridge the throughput gap and keep pace with the data deluge. 200G (4 x 50G) optical modules answer this immediate need head on.

ANALOG ADVANTAGES

200G modules provide several key benefits, chief among them the flexibility to leverage a fully analog architecture, the merits of which we assessed in an earlier blog post focused on optical modules for high performance computing (HPC) applications. Though somewhat more difficult to implement than mainstream digital signal processor (DSP) based solutions, fully analog optical interconnects can provide 1,000X lower latency than DSP-based solutions – a crucial attribute for enabling system and network performance at the fastest possible speeds. And while DSPs will remain essential for designing 100G single lambda and 400G modules, DSPs aren’t necessary for 200G module enablement today.

In the absence of DSPs, fully analog 200G optical modules consume much less power and dissipate considerably less heat. Leveraging existing optical components, it’s now possible to enable module-level total power consumption under 22 milliwatts per gigabit. This translates to a 200G optical module for 2km applications with total power consumption under 4 watts. A DSP-based module would likely clock in at 2 to 3 watts higher, which doesn’t sound like very much until you aggregate the resulting power consumption penalty across a Data Center hosting thousands of optical modules. In this context, a 2 to 3 watt power savings per module is hugely advantageous for optimizing OPEX and cooling efficiency.
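As a rough illustration of why that per-module difference matters at scale, the back-of-the-envelope calculation below multiplies the 2 to 3 watt penalty across a hypothetical fleet of modules; the fleet size is an assumed figure for illustration, not an operator’s number.

```python
# Assumed illustration only: fleet size and per-module penalty are examples.
modules_deployed = 10_000            # hypothetical number of modules
dsp_penalty_per_module_w = 2.5       # midpoint of the 2-3 W range above

analog_module_w = 200 * 0.020        # ~20 mW per gigabit -> ~4 W per 200G module
fleet_savings_kw = modules_deployed * dsp_penalty_per_module_w / 1000.0

print(f"Fully analog 200G module: ~{analog_module_w:.1f} W")
print(f"Fleet-level savings vs. DSP-based modules: ~{fleet_savings_kw:.0f} kW")
```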

Low latency and power consumption are important attributes, but not the only performance metrics that matter. Signal integrity is another critical performance criterion given the cascading consequences of transmitting bit errors into the data stream. This poses a particularly daunting challenge as data throughput speeds increase from 100G to 200G and beyond.

The ability to maintain optimal signal integrity performance at 200G in the absence of a DSP is due, in large part, to continued advancements in clock data recovery (CDR) devices and the underlying signal conditioning technology. The newest generation of analog CDRs deployed in fully analog 200G modules has demonstrated pre-forward error correction (pre-FEC) bit error rates (BER) better than 1E-8, on par with DSP-based 200G modules.

HIGH VALUE, HIGH VOLUME

None of the aforementioned advantages of a fully analog 200G optical module would be worthwhile if its cost structure weren’t competitive with mainstream commercial solutions. But here again, the fully analog 200G module architecture wins against DSP-based 200G modules.

At the device level, the streamlined design of a fully analog 200G module reduces overall component count and sidesteps the costs of DSP development and implementation. At the broader market level, while 100G technology is already mature and component integration is well established, 200G end-to-end interoperable chipsets have only recently hit the market. With the past as our guide, 200G modules are expected in the short term to mirror the cost structures of 100G modules when they first entered the market a few years ago, and to follow a similar downward cost curve as component integration is further standardized and volume shipments accelerate. In due course, 200G modules are expected to achieve a cost structure comparable to that of today’s 100G modules.

As an intermediate step between 100G and 400G, 200G optical connectivity is a compelling solution for Cloud Data Centers challenged to implement faster optical links at scalable volumes and costs. DSPs will undoubtedly play a pivotal role on the path to 400G, and in the interim, the fully analog 200G module architecture lights the path to faster, cost effective connectivity beyond 100G.

MACOM is committed to leading the evolution of Cloud Data Center interconnects from 100G to 200G and 400G, and at ECOC 2018 we demonstrated a complete, fully analog 200G chipset and TOSA/ROSA subassembly solution that affords optical module providers seamless component interoperability to reduce design complexity and costs. To learn more about MACOM’s optical connectivity solutions for Cloud Data Center infrastructure, visit https://www.macom.com/data-center


The Health and Economical Benefits of Solid-State Cooking   Oct. 23, 2018

[Featured image courtesy of the RF Energy Alliance]

The ability to generate and amplify RF signals is nothing new – but solid-state RF energy has enormous potential beyond data transmission applications. As companies like MACOM and collaborative organizations such as the RF Energy Alliance (RFEA) continue to pioneer and develop this technology, enabling greater efficiency and control than previously possible with conventional technologies, the full potential of this technology for mass-market applications is beginning to take form.

Microwave cooking is one application that is already being radically transformed with solid-state RF energy, enabling healthier eating and broad economical benefits. Solid-state RF energy transistors generate hyper-accurate, controlled energy fields that are extremely responsive to the controller, resulting in optimal and precise use and distribution of RF energy. This offers benefits unavailable via alternative solutions, including lower-voltage drive, high efficiency, semiconductor-type reliability, a smaller form factor and a solid-state electronics footprint. Perhaps the most compelling benefit is the power-agility and hyper-precision enabled by this technology, yielding even energy distribution, unprecedented process control range and fast adaptation to changing load conditions, not to mention a lifespan of more than 10 years.

Enabling Healthier Eating

Precise temperature control is essential for maintaining proper nutrients of food during the heating/cooking process. Microwave ovens leveraging solid-state power amplifiers enable precision and control of directed energy, which helps preserve the nutritional integrity of food, and prevent cold spots that negatively impact the dining experience.

Since today’s magnetron-based microwave ovens aren’t equipped to adapt to energy being absorbed by or reflected from the food as it cooks, they rely on open-loop, average heating assisted by the rotating turntable at the base of the cavity. This imprecise delivery of energy often results in over-cooking and hot spots that can lower the food’s nutritional value.

By using multiple solid-state power amplifiers and antennas with closed-loop feedback to adjust for precise energy absorption, the energy can be directed with greater precision to exactly where it’s needed and in a controlled way that ensures optimal temperature control. Rather than relying on moisture sensors that measure humidity in the cooking cavity – an indirect mode of measurement that’s sometimes implemented in modern magnetron-based microwave ovens – solid-state microwave ovens measure the properties of the food itself while it cooks, and adapt accordingly. This promotes the retention of the nutrients, moisture and flavors of the food.
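To illustrate the closed-loop idea in the abstract, here is a conceptual Python sketch – not an actual appliance API – in which power is steered toward the antenna paths whose forward energy is being absorbed rather than reflected; all function names, readings and limits are hypothetical.

```python
def absorbed_fraction(forward_w, reflected_w):
    """Fraction of forward power absorbed by the food on one antenna path."""
    return 1.0 - reflected_w / forward_w if forward_w > 0 else 0.0

def control_step(path_readings):
    """One feedback iteration: return a drive setting (0.1-1.0) per path,
    raising drive on well-matched paths and backing off on poorly matched ones."""
    settings = []
    for forward_w, reflected_w in path_readings:
        eta = absorbed_fraction(forward_w, reflected_w)
        settings.append(round(min(1.0, max(0.1, eta)), 2))
    return settings

# Hypothetical forward/reflected power readings (watts) from three paths:
readings = [(100.0, 10.0), (100.0, 55.0), (100.0, 25.0)]
print(control_step(readings))  # -> [0.9, 0.45, 0.75]
```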

Economical Impact

The adoption of solid-state microwave heating is expected to commence in the industrial and commercial cooking market, where the value that these systems provide will be well worth the modest increase in cost. Customers stand to gain significant advantages in system reliability, food processing speed and throughput.

With regard to system reliability, solid-state RF transistors can provide 10X longer lifespans than typical magnetrons – a major benefit in 24/7 production environments where frequent magnetron failures can slow production and require numerous, expensive service calls. Eliminating the rotating platters common to magnetron-based microwave ovens further increases system reliability by reducing the number of mechanical moving parts, which are a common point of failure.

Food processing speed and throughput are increased due to solid-state microwave ovens’ ability to heat and cook food much faster than magnetron-based systems, owing to the rapid energy transfer enabled by solid state RF power adapting to the changing food dielectric. Solid-state RF technology is particularly valuable for food defrosting processes, enabling food to be defrosted much faster and more evenly than it can today, without drying or damaging the food.

With continued innovation in solid-state GaN-based RF technology and cost structure improvements, this technology is expected to eventually migrate to consumer kitchens, and in so doing has the potential to change perceptions of the modern microwave oven. Its value will evolve from that of a simple heating device, to a device that’s capable of cost-effectively cooking healthier, multi-course meals with unprecedented efficiency.

Proven Technology

This revolutionary cooking technology is already being successfully demonstrated. At IMS 2016, MACOM demonstrated this with our 300 W RF transistor in a solid-state oven baking muffins. The following year, at IMS 2017, MACOM announced our RF Energy Toolkit, aimed at accelerating customers’ time to market by making it easier to fine-tune RF energy output levels to maximize efficiency and performance.

Earlier this year, at IMS 2018, MACOM demonstrated the controllability of GaN-on-Si-based solid-state RF energy by successfully cooking the traditional Japanese Onsen Tamago. This dish is traditionally slow cooked using rope nets in the water of onsen hot springs in Japan at 70 °C for 30-40 minutes, enabling the egg yolk and egg white to solidify at different temperatures. The result is a dish of unique texture, with both a creamy outer layer and firm inner yolk. With the controllability enabled by solid-state RF energy, MACOM cooked this traditional dish in only 6-8 minutes, achieving the same desired consistency accomplished in the onsen hot springs.

Looking Forward

As with any emerging technology, the speed of RF energy technology’s commercial adoption hinges in part on collaborative industry efforts to establish common standards. Organizations like the RF Energy Alliance, composed of industry leaders spanning semiconductor vendors, commercial appliance OEMs and more, aim to help standardize RF energy system components, modules and application interfaces. In turn, this standardization will help to reduce system costs, minimize design complexity, ease application integration and facilitate rapid market adoption (learn more about MACOM’s RF Energy Toolkit).

Thanks to continued advances such as these, the RF industry is closer than ever to enabling a more advanced, smarter kitchen for commercial restaurants and consumers around the world.

