Designing with Diodes: Protecting Sensitive Components   Apr. 29, 2019

Sensitive low noise amplifiers (LNAs) in radar or radio receivers cannot tolerate large input signals without sustaining damage. What’s the solution? Receiver-protector limiter (RPL) circuits, the “heart” of which typically comprises PIN diodes, can be utilized to protect sensitive components from large input signals without adversely affecting small-signal operation.

RPL circuits do not require external control signals. These circuits comprise at least one PIN diode connected in shunt with the signal path, along with one or more passive components, such as RF choke inductors and DC-blocking capacitors. A simple (but possibly complete) RPL circuit is shown below.

[Diagram: a simple RPL circuit, with a shunt PIN diode, RF choke inductor and DC blocking capacitors]

When there is no RF input signal or when only a small RF input signal is present, the impedance of the limiter PIN diode is at its maximum value, the magnitude of which is typically in the few-hundreds of ohms or greater. Consequently, the diode produces a very small impedance mismatch and correspondingly low insertion loss.

When a large input signal is present, the RF voltage forces charge carriers into the PIN diode’s I layer, holes from its P layer and electrons from its N layer. The population of free charge carriers introduced into the I layer lowers its RF resistance, which produces an impedance mismatch as seen from the RPL circuit’s RF ports.

This mismatch causes energy from the input signal to be reflected to its source. The reflected signal, in concert with the incident signal, produces a standing wave with a voltage minimum at the PIN diode since it temporarily presents the lowest impedance along the transmission line. There is a current maximum collocated with every voltage minimum along the transmission line. This current flows through the PIN diode, enhancing the population of free charge carriers in the diode’s I layer, which results in lower series resistance, a greater impedance mismatch and a “deeper” voltage minimum. Eventually the diode’s resistance will reach its minimum value, which is determined by the design of the PIN diode and the magnitude of the RF signal. Increases in the RF signal amplitude force the diode into heavier conduction, thus further reducing the diode’s resistance until the diode is saturated and produces its lowest possible resistance. This results in an output power vs. input power curve as shown below.

[Graph: output power vs. input power for a PIN diode limiter]
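The transfer curve described above can be approximated with a simple idealized model. This is only a sketch: the 15 dBm threshold, 0.5 dB small-signal loss and flat-leakage level below are illustrative assumptions, not measured diode data.

```python
def limiter_output_dbm(pin_dbm, threshold_dbm=15.0,
                       small_signal_il_db=0.5, flat_leakage_dbm=15.0):
    """Idealized RPL transfer curve: linear below the threshold
    (1 dB compression) level, clamped to a flat leakage level above it."""
    if pin_dbm <= threshold_dbm:
        return pin_dbm - small_signal_il_db   # only small-signal insertion loss
    return flat_leakage_dbm                   # diode saturated: output clamped

# A 0 dBm signal passes with ~0.5 dB loss; a +40 dBm pulse is clamped to ~15 dBm.
```

A real diode compresses gradually around the threshold rather than with a hard knee, but the piecewise model captures the two regimes shown in the graph.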

After the large RF signal is no longer present, the diode’s resistance remains low (and its insertion loss remains large) if the population of free charge carriers in the I layer is large. Upon cessation of the large RF signal, the population of free charge carriers will decrease by two mechanisms: conduction out of the I layer and recombination within the I layer. The magnitude of the conduction is determined primarily by the DC resistance in the current path external to the diode.

The rate of recombination is determined by several factors, including the free-charge-carrier density in the I layer, the concentration of dopant atoms and other charge-trapping sites in the I layer, etc. Due to the required parameters of the diodes, the greater the RF signal which a PIN diode can safely handle, the longer its recovery time to low insertion loss will be.
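As a first-order sketch, the post-pulse carrier population (and hence the recovery transient) can be modeled as an exponential decay with a single effective carrier lifetime that lumps recombination and conduction together; the lifetime value used below is purely illustrative.

```python
import math

def carrier_population(n0, t_s, tau_s):
    """Free-carrier population in the I layer, t seconds after the large
    RF signal ends, assuming one effective lifetime tau."""
    return n0 * math.exp(-t_s / tau_s)

def time_to_fraction(tau_s, fraction):
    """Time for the population to decay to the given fraction of its
    initial value: t = -tau * ln(fraction)."""
    return -tau_s * math.log(fraction)

# With an assumed 100 ns effective lifetime, decaying to 1% of the
# initial population takes tau * ln(100), roughly 460 ns.
```

This makes the trade-off concrete: a thicker I layer (higher power handling) generally implies a longer effective lifetime, and recovery time scales directly with it.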

The properties of the I layer of the PIN diode determine how this circuit performs. The I layer’s thickness (sometimes referred to as its width) determines the input power at which the diode goes into limiting – the thicker the I layer, the higher the input-referred 1 dB compression level (also known as the threshold level). The thickness of the I layer, the area of the diode’s junction and the material of which the diode is made determine the resistance of the diode as well as its capacitance. These parameters also determine the diode’s thermal resistance.

The simplest implementation of a PIN RPL circuit comprises a PIN diode, an RF choke inductor and a pair of DC blocking capacitors. The RF choke inductor is critical to the performance of the RPL circuit; its primary function is to complete the DC current path for the PIN diode. When a large signal forces charge carriers into the diode’s I layer, a DC current is established in the diode. If a complete path for this DC current is not provided, the diode’s resistance cannot be reduced, and no limiting can occur. This current flows in the same direction as a rectified current would, but it is not produced by rectification.

Implementation of the choke inductor in the RPL circuit can be challenging, since inductors are the least ideal of the components in the RPL circuit. Inductors all have series and parallel resonances due to their inductance and their parasitic inter-winding capacitance. Care must be taken to ensure that series resonances do not occur within the operating frequency band. Additionally, the choke’s DC resistance must be minimized in order to reduce the recovery time of the RPL circuit.

Note: the DC blocking capacitors are optional. They are only necessary if there are DC voltages or currents present on the input or output transmission lines which might bias the PIN diode.

A Practical Example

Assume the maximum input power which an LNA can tolerate is 15 dBm. This power level sets the requirement for the I layer thickness of the PIN diode in the RPL circuit, which in this case is approximately 2 microns. A designer can then determine the acceptable capacitance of the PIN diode from the frequency of the RF signal and the maximum acceptable small-signal insertion loss. If they assume the RPL operates in X Band and the maximum acceptable insertion loss is 0.5 dB, then the maximum capacitance of the diode can be calculated.

The insertion loss (IL) in decibels of a shunt capacitance is given by:

IL = 10 · log10(1 + (π · f · C · Z0)²)

We can solve that equation for C:

C = √(10^(IL/10) − 1) / (π · f · Z0)

For f = 12 GHz, IL = 0.5 dB and Z0 = 50 Ω, C = 0.185 pF.
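The calculation is easy to check numerically. The sketch below implements the shunt-capacitance insertion loss formula and its inverse; the 50 Ω default simply mirrors the worked example.

```python
import math

def shunt_cap_insertion_loss_db(c_farads, freq_hz, z0=50.0):
    """Insertion loss in dB of a shunt capacitance C on a line of impedance Z0."""
    return 10.0 * math.log10(1.0 + (math.pi * freq_hz * c_farads * z0) ** 2)

def max_shunt_cap(il_db, freq_hz, z0=50.0):
    """Solve the insertion loss equation for C (in farads)."""
    return math.sqrt(10.0 ** (il_db / 10.0) - 1.0) / (math.pi * freq_hz * z0)

c = max_shunt_cap(0.5, 12e9)   # ~1.85e-13 F, i.e. ~0.185 pF
```

Round-tripping the result through `shunt_cap_insertion_loss_db` returns the original 0.5 dB, confirming the algebra.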

Along with the I layer thickness, this value of capacitance will determine the area of the diode’s junction.

The combination of thin I layer and small junction area creates a diode which has relatively high thermal resistance, which cannot dissipate very much power without forcing the junction temperature to exceed its maximum rated value of 175 °C. Typically, a 2 micron diode with 0.185 pF capacitance can safely handle a large CW input signal of around 30 to 33 dBm. A larger signal can potentially damage or immediately destroy this diode due to the Joule heating produced by the current flowing through the diode’s electrical resistance.
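The thermal constraint can be illustrated with the standard junction-temperature relation Tj = Tcase + Pdiss × Rθ. The thermal resistance and case temperature below are assumed example values chosen to make the arithmetic clean, not data for a specific diode.

```python
def junction_temp_c(p_diss_w, theta_jc_c_per_w, case_temp_c=85.0):
    """Junction temperature from dissipated power and junction-to-case
    thermal resistance."""
    return case_temp_c + p_diss_w * theta_jc_c_per_w

def max_dissipation_w(theta_jc_c_per_w, case_temp_c=85.0, tj_max_c=175.0):
    """Largest dissipated power that keeps Tj at or below its rating."""
    return (tj_max_c - case_temp_c) / theta_jc_c_per_w

# With an assumed 60 C/W thermal resistance and an 85 C case temperature,
# the diode can dissipate at most (175 - 85) / 60 = 1.5 W.
```

Because a small, thin-I-layer diode has a high Rθ, even modest dissipated power pushes Tj toward the 175 °C limit, which is what caps the safe CW input power.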

PIN diode RPL circuits reliably protect sensitive components like low noise amplifiers in radar or radio receivers from large incident signals. For RPL applications which require very low flat leakage output power but high input power handling, additional diode stages and other circuit enhancements can be added at the input side of the RPL circuit.

Members of MACOM’s applications engineering team are ready to help you select the optimal diodes and circuit topologies for your RPL application. For more information on MACOM’s solutions, visit: https://www.macom.com/diodes


A Look Ahead at the Trends Shaping the RF Semiconductor Industry   Mar. 26, 2019

2019 has begun amidst a wave of rousing activity. A strong global economy is driving new activity and demand across the RF industry, resulting in renewed innovation and revitalized programs. The promise of 5G lurks right around the corner, and the industry has its hands full with standards continuing to evolve, new applications being realized and everyone considering what their role will be in this emerging network. In an industry renowned for its long technology lifecycles yet relentless innovation, what can one realistically expect to see happen in the coming year?

A Robust Economy

Against the backdrop of geo-political factors and trade tensions, today’s global economy remains very healthy, and is driving a broad-based increase in overall market demand. For example, the proliferation of Internet access on the move and in rural areas is driving an increased need for devices operating at Ku- and Ka-bands, and VSAT and SATCOM are seeing significant growth as a result. The emergence of millimeterwave bands in 5G telecom is driving the need for more test and measurement instrumentation, and the accompanying increase in wireless equipment manufacturing is boosting demand for RF power devices. Simply put, the overall health of the economy is driving end product demand, which is driving the demand for manufacturing equipment, which in turn drives the demand for semiconductor components.

5G Networks Driving Demand

The exciting rollout of 5G networks and the move to Massive MIMO antenna configurations is expected to create up to a 10x increase in the demand for RF products to support basestations, driving the proliferation of transmit components, switches, LNAs and other RF components across the industry. The FCC recently announced the conclusion of America’s first 5G spectrum auction for 28 GHz, followed by the 24 GHz auction in March, and is expected to auction three additional spectrum bands in 2019. As networking OEMs roll out equipment in these bands, we are already seeing the first large-scale deployment of millimeterwave commercial products outside of past VSAT and point-to-point communication deployments. This is dramatically improving the performance and cost structures of millimeterwave components.

Aerospace & Defense

Following increases to the Defense budget, 2018 saw the start of many new A&D programs, in addition to programs for refurbishing existing equipment deployed in the field today. With the increased demand for legacy products for these modernization efforts, along with the demand for new commercial off-the-shelf (COTS) solutions as well as custom designs, the defense market is expected to remain fairly robust over the next few years.

Looking Forward

This flurry of activity will likely continue to increase as we move through 2019, reinforced by growing demand and a thriving economy. While it is difficult to predict which activities will come to fruition and which will continue to build out over the coming years, one thing is certain: 2019 will be a year of progress, and one for the RF industry to remember.


A Look Ahead at the Trends and Technologies Shaping the Optical Networking Market   Feb. 26, 2019

As we enter into 2019 – as with any new year – it’s a good opportunity to look ahead to what the future holds in store for the networking and communication domains, and the key technologies that underpin them. In this blog post, we’ll share some of our predictions for the optical networking market in particular, taking into account the myriad trends affecting infrastructure build-outs in the coming year and beyond.

WHAT’S DRIVING THE GROWTH?

With the increasing number of subscribers, devices per subscriber and data per subscriber pervading our wireless and wireline networks, the data tsunami will only continue to gather speed. As cloud computing usage continues to skyrocket, so too will the capital expenditures for the leading cloud datacenter providers, reflecting the massive scope of their ongoing datacenter infrastructure build-outs – and the healthy ROI they continue to achieve as a result.

Industry analysts counted 430 hyperscale datacenter facilities at the end of 2018, representing an 11% increase in facilities compared to 2017. Looking ahead to 2019, experts are tracking 132 new datacenter facilities that are at various stages of planning or building.

Accordingly, 2018 saw a nearly 40% ($22B) jump in hyperscale cloud datacenter capex over the previous period in 2017 (H1 2018 vs H1 2017), and industry experts anticipate that the global cloud computing market will exceed $200B in 2019 – a 20% increase over 2018, fueled in large part by increasing enterprise adoption for cloud services. As such, cloud datacenter operators may anticipate similarly large revenue gains on par with the 45% to 75% year over year gains reported by some operators in 2018 relative to 2017.

We’ll also be hearing a lot more about edge computing in 2019, driven in large part by 5G’s anticipated enablement for applications requiring low latency. The advent of autonomous vehicles in particular has intensified the urgent need for near real-time network responsiveness to ensure the highest levels of passenger safety. Mission critical civil infrastructure could similarly benefit from dramatically reduced latency, as could a host of IoT, Industry 4.0 and smart city applications requiring precisely orchestrated real-time communication.

The industry’s prominent focus on edge computing reflects the growing realization that for applications like these, it’s imperative to bring datacenter-caliber processing capabilities and real-time automated decision making closer to where the data traffic originates. This in turn will evolve the way datacenters are interlinked together by impacting where and how data enters and exits the cloud, and will drive increased deployments of longer distance links, suggesting a steady growth for metro and long haul infrastructure into 2019 and beyond. As a result, global deployments of PON and fiber backhaul infrastructure are expected to be healthy into 2019, as indicated by industry forecasts predicting coherent port count growth.

OPTICAL MODULE PREDICTIONS

In 2019, volume scale optical module deployments in cloud datacenters will continue to drive the cost structures and supply chain required to propagate the fastest and most cost-effective optical links to enterprise environments and outward throughout the network.

To that end, cloud datacenters are still in the early stages of a long upgrade cycle to 100G and higher bandwidth. Industry customer forecasts project 2019 and 2020 to be strong growth years for CWDM4 modules in particular, with 100G unit demand in 2019 potentially more than doubling 2018 demand and reaching estimated volumes of over 10 million units. Some experts predict that 100G CWDM4 modules will maintain market dominance through 2022.

While CWDM4 is expected to represent a vast majority of unit volumes over the next few years, 2019 is also expected to see meaningful adoption of single-lambda PAM-4. Industry watchers anticipate that single-lambda 100G modules will be deployed by several vendors, with volume shipments ramping up by the end of the year. The recent Ethernet Alliance Higher Speed Networking Plugfest hosted at the University of New Hampshire InterOperability Laboratory (UNH-IOL) featured PAM-4-based electrical and optical signaling technologies leveraging compatible offerings from a host of leading vendors, highlighting the gathering industry momentum for single-lambda 100G connectivity.

Meanwhile, the market arrival of switch silicon solutions utilizing 50G electrical IO channels is enabling 2X the data throughput over copper PCB traces compared to legacy chipsets with 25G electrical signaling. This capability is significant in that it opens the door to more streamlined 100G module architectures (2 x 50G), and it’s regarded as a key enabler for accelerating the adoption of higher bandwidth modules at 200G (4 x 50G) and onward to 400G.

To this end, the growing demand for fully analog 200G optical modules holds key implications for the anticipated timing for the eventual mainstream adoption of 400G optical modules. Widely considered to be still in its infancy, 400G technology promises huge bandwidth gains in the future, but the cost curve must come down dramatically for it to achieve volume uptake in the cloud datacenter. In the meantime, fully analog 200G modules – with their many advantages over DSP-based offerings, including significantly lower latency, power consumption and cost – are coming to market now, and are seen by many as a viable, volume-scalable stepping-stone to 400G. So 2019 will be a pivotal year to see how 200G takes hold in the cloud datacenter, and intensifying industry collaboration on 200G standards and interoperability could help position this technology for sustained mainstream adoption while 400G continues to mature.

As always, MACOM and our industry peers will be tracking these and other developments closely throughout the year, particularly as we approach OFC 2019 and onward to the CIOE and ECOC events in the Fall. We can safely anticipate a host of eye-opening revelations throughout the year – revelations that will no doubt help shape our market perspective as we ring in the next new year.


What’s the Role of 200G Optical Connectivity on the Pathway to 400G?   Dec. 05, 2018

The ever-burgeoning bandwidth demands on Cloud Data Center infrastructure are intensifying the pressure on optical module providers to enable faster connectivity solutions at the required volume scales and cost structures. This is fueling tremendous uptake for 100G CWDM4 (4 x 25G) modules and accelerating the ramp to 100G single lambda (PAM-4) modules on the pathway to mainstream adoption of 400G (4 x 100G).

Technology vendors from across the optical networking industry are working hard to drive this progress, leveraging interoperability plugfests among other opportunities to ensure seamless compatibility among a growing ecosystem of components, modules, and switch systems. This activity reflects the urgent need for faster Data Center links, and also underscores the extreme effort and design precision required to achieve coherence among the heterogeneous products coming to market.

With 100G in widescale deployment today and mainstream 400G deployment widely anticipated, Cloud Data Centers are eager to take advantage of every opportunity to bridge the throughput gap and keep pace with the data deluge. 200G (4 x 50G) optical modules answer this immediate need head on.

ANALOG ADVANTAGES

200G modules provide several key benefits, chief among them the flexibility to leverage a fully analog architecture, the merits of which we assessed in an earlier blog post focused on optical modules for high performance computing (HPC) applications. Though somewhat more difficult to implement than mainstream digital signal processor (DSP) based solutions, fully analog optical interconnects can provide 1,000X lower latency than DSP-based solutions – a crucial attribute for enabling system and network performance at the fastest possible speeds. And while DSPs will remain essential for designing 100G single lambda and 400G modules, DSPs aren’t necessary for 200G module enablement today.

In the absence of DSPs, fully analog 200G optical modules consume much less power and dissipate considerably less heat. Leveraging existing optical components, it’s now possible to achieve module-level total power consumption under 22 milliwatts per gigabit. This translates to a 200G optical module for 2 km applications with power consumption under 4 watts. A DSP-based module would likely clock in at 2 to 3 watts higher, which doesn’t sound like much until you aggregate the resulting power consumption penalty across a Data Center hosting thousands of optical modules. In this context, a 2 to 3 watt power savings per module is hugely advantageous for optimizing OPEX and cooling efficiency.
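The fleet-level impact of that per-module saving is straightforward to quantify; the module count below is an illustrative assumption, not a figure from the post.

```python
def fleet_power_savings_kw(per_module_savings_w, module_count):
    """Aggregate power saved by choosing analog over DSP-based modules."""
    return per_module_savings_w * module_count / 1000.0

# A 2.5 W per-module saving across an assumed 10,000 modules recovers 25 kW,
# before counting the matching reduction in cooling load.
```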

Low latency and power consumption are important attributes, but not the only performance metrics that matter. Signal integrity is another critical performance criterion given the cascading consequences of transmitting bit errors into the data stream. This poses a particularly daunting challenge as data throughput speeds increase from 100G to 200G and beyond.

The ability to maintain optimal signal integrity performance at 200G in the absence of a DSP is due, in large part, to continued advancements in clock data recovery (CDR) devices and the underlying signal conditioning technology. The newest generation of analog CDRs deployed in fully analog 200G modules has demonstrated a pre-forward error correction (pre-FEC) bit error rate (BER) better than 1E-8, on par with DSP-based 200G modules.

HIGH VALUE, HIGH VOLUME

None of the aforementioned advantages of a fully analog 200G optical module would matter if its cost structure weren’t approaching parity with mainstream commercial solutions. But here again, the fully analog 200G module architecture wins against DSP-based 200G modules.

At the device level, the streamlined design of a fully analog 200G module reduces overall component count and sidesteps the costs of DSP development and implementation. At the broader market level, while 100G technology is already mature and component integration is well established, 200G end-to-end interoperable chipsets have only recently hit the market. If the past is a guide, 200G modules will in the short term carry cost structures similar to those of 100G modules when they first entered the market a few years ago, and will follow a similar downward cost curve as component integration is further standardized and volume shipments accelerate. In due course, 200G modules are expected to achieve a cost structure comparable to that of today’s 100G modules.

As an intermediate step between 100G and 400G, 200G optical connectivity is a compelling solution for Cloud Data Centers challenged to implement faster optical links at scalable volumes and costs. DSPs will undoubtedly play a pivotal role on the path to 400G, and in the interim, the fully analog 200G module architecture lights the path to faster, cost effective connectivity beyond 100G.

MACOM is committed to leading the evolution of Cloud Data Center interconnects from 100G to 200G and 400G, and at ECOC 2018 we demonstrated a complete, fully analog 200G chipset and TOSA/ROSA subassembly solution that affords optical module providers seamless component interoperability to reduce design complexity and costs. To learn more about MACOM’s optical connectivity solutions for Cloud Data Center infrastructure, visit https://www.macom.com/data-center


