Why Isn’t 56 Gbps Impossible?

How fast can you force data through a pair of wires? It is a trick question, of course. The answer depends on the wires, the material and geometry around them, the distance, and your choice of transceiver technology. Worst-case, your answer may be tens of megabits per second (Mbps). In even modest data networking applications the answer may be a hundred times that speed.

But look inside the most demanding data centers and supercomputers. There, the incredible pressures of growing CPU core density, hardware accelerator speed, and memory bandwidth are forging new serial communications technology, much as the inconceivable temperatures and pressures within stars fuse nuclei into new, heavier elements that are eventually blasted out into the universe at large (Figure 1).

Figure 1. Like novae, data centers are forging new technology and ejecting it into the broader design community.

Let us zoom in closer to the short PCB traces that connect networking adapters to optical modules in the data center’s internal backplane networks. The unrelenting pressure for speed is pushing these optical fiber connections to 400 Gbps per fiber. But that means designers must somehow get that 400 Gbps in and out of the tiny, board-edge optical transceiver modules, across some distance on an ordinary PCB, and into or out of a networking adapter chip. Space and power constraints preclude a large number of signal pairs, pushing the industry to a configuration of eight 50 Gbps connections, built on 56 Gbps (56G)-over-copper transceiver technology.

This is great news for many other kinds of systems. The massive scale of the data-center networking market means 56G transceivers will be available for many other demanding links—connections to proliferating and increasingly high-resolution camera, lidar, and radar modules, for instance, or connections to multi-Gsample/s data converters.

But how are transceiver designers able to reach these speeds? It wasn’t that long ago that 10G seemed like an almost unattainable goal. What are the challenges, and what technologies have answered them?

The Horrible Channel

Serial communication depends upon putting a pulse into one end of a channel and extracting a reasonable facsimile of it from the other end. As the pulse frequency increases, three main enemies militate against this reasonable-sounding goal: attenuation, inter-symbol interference (ISI), and reflections. There are lots of other related problems, especially involving crosstalk, but these three tend to be dominant, and will help guide us through an exploration of transceiver architecture.

Attenuation—usually called insertion loss by communications folks—is just the ratio in dB of the signal energy you get at the receiver input to the energy the transmitter put into the channel. Of course these are AC circuits, so the attenuation, and the phase shift, are frequency-dependent. You are putting a very fast signal into a complicated, distributed RLC network, after all. So insertion loss has to be stated for a particular frequency component of the signal.

And here we need to take a second for terminology if you are working in a different field from communications. These links use pulse amplitude modulation. Until recently, this meant non-return-to-zero (NRZ) pulse coding, where a pulse during an interval represented a 1 and the absence of a pulse was a 0. The NRZ part just means that you don’t drive the signal back to 0 between pulses. If you have two consecutive 1s, you send one long pulse, not two short ones. That in turn means the maximum fundamental frequency you can send is that of the waveform representing alternating 1s and 0s: that is, a pulse every other pulse interval. For that signal, the fundamental frequency is one half the pulse rate. In a terribly unfortunate choice of terminology, communications engineers call this the Nyquist frequency, causing instant confusion with the Nyquist sampling frequency, which is quite different. For our purposes here, Nyquist frequency means one half the pulse rate of the signal: nothing to do with any sampling rate.

But as we were saying, the attenuation is frequency dependent. And it is usually stated for the signal’s Nyquist frequency, not for any of the other harmonics in the waveform. This can be a dispiritingly large number, depending on the physical channel. For example, an 802.3-compliant backplane will attenuate a signal with a 14 GHz Nyquist frequency by over 33 dB. Double the Nyquist frequency to 28 GHz—what you would need for a 56G NRZ connection—and you lose 62 dB. Given the power and thermal limitations inside server racks, the problems with crosstalk, and any realistic view of what can be accomplished in a receiver amplifier, you can see that higher Nyquist frequencies quickly become intractable.
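To put those numbers in perspective, here is a minimal Python sketch, using only the line rates and backplane loss figures quoted above, that computes the Nyquist frequency for a given modulation and converts a dB insertion loss into the fraction of the launched amplitude that actually reaches the receiver.

```python
def nyquist_ghz(bits_per_second, bits_per_symbol):
    """Nyquist frequency in the communications sense: half the symbol rate."""
    symbol_rate = bits_per_second / bits_per_symbol
    return symbol_rate / 2 / 1e9

def surviving_amplitude(loss_db):
    """Fraction of the launched voltage amplitude left after loss_db of insertion loss."""
    return 10 ** (-loss_db / 20)

print(nyquist_ghz(56e9, 1))      # 56G NRZ:  28.0 GHz
print(nyquist_ghz(56e9, 2))      # 56G PAM4: 14.0 GHz

print(surviving_amplitude(33))   # backplane at 14 GHz: ~0.022 (about 2% of the amplitude)
print(surviving_amplitude(62))   # backplane at 28 GHz: ~0.0008
```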

But, you say, we aren’t talking about backplanes, we are just going a few centimeters across a PCB. That improves the situation considerably. At 15 cm, what the industry calls very short reach, careful layout can keep the attenuation down to 10 dB for 14 GHz. That at least makes it feasible to get a signal between a chip and an optical module at these speeds.

ISI

Since this connection is, as we said, a messy, distributed RLC network, the frequency response will change a lot in both amplitude and phase as you pass the poles and zeros of the network. That means that if you put in a nice, clean string of pulses, the harmonics in your signal are going to get attenuated and shifted by varying amounts and smeared all over the time domain: you will get out something looking entirely different from what you put in.

This is a big problem for the receiver, which has to find the center of the pulse interval and sample the pulse amplitude there. But it gets worse. At these frequencies, the smearing can extend beyond the boundaries of a single pulse interval, so that your harmless-looking pulse ends up interfering with the pulses that come before and after it—maybe for several intervals. That is ISI. At its worst, ISI can turn a pulse train into a near-DC mess, making it difficult to even guess where the pulses are. And of course it gets worse with increasing Nyquist frequency.
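To see that smearing mechanically, here is a toy Python sketch. The channel is reduced to a short impulse response whose tap values are invented for illustration, and the received waveform is just the transmitted pulse train convolved with it; energy from each pulse leaks into the neighboring intervals, which is the ISI described above.

```python
import numpy as np

# Hypothetical channel impulse response: most of the energy lands in the main
# ("cursor") tap, but some arrives one interval early and some trails behind.
channel = np.array([0.10, 0.60, 0.25, 0.12, 0.05])

# A simple pulse train, one sample per pulse interval
tx = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float)

# What the receiver sees: each pulse smeared across several intervals
rx = np.convolve(tx, channel)
print(np.round(rx, 2))
```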

Reflections are another matter. At these frequencies, we can’t just model the interconnect as a lumped impedance: it is a transmission line. Every change in the local impedance—from a via, a bend or necking in the PCB trace, a stub that’s not properly terminated, even a non-uniformity in materials—will split your waveform into a transmitted portion that goes on toward the receiver, and a reflected portion that goes back toward the transmitter. These reflections rattle around in the channel, getting split and re-reflected off of every discontinuity, and eventually end up, attenuated and significantly delayed—often into subsequent pulse intervals—at the receiver. Even if you are careful in layout, reflections can add significantly to the noise through which the receiver must find your pulses. The good news is, they are measurable and repeatable, so the receiver can learn to compensate for them.

What you send, then, is not at all what the receiver will get. The art of transceiver design is to create a transmitter that pre-compensates for some of these issues, and a receiver that compensates for the rest, allowing it to pick out your 1s and 0s with as little power consumption and delay as possible.

Fighting Back

This battle against reality is being fought with well-tried weapons. There is no really new concept in 56G transceiver technology that has not been used in other contexts—from disk read/write channels to deep-space communications—or at lower frequencies in networking. In fact, with one major exception, the functional block diagrams for 28G and 56G transceivers are nearly identical.

But that exception is key. We’ve noted that all three of the challenges we identified—attenuation, ISI, and reflection—get worse rapidly with increasing frequency. So the first line of attack is to minimize the Nyquist frequency and keep as much energy as possible out of the harmonics.

To that end, designers working at 56G for distances over a few centimeters have dropped the NRZ coding scheme in favor of four-level pulse amplitude modulation (PAM4). In this way, they can transmit two bits per pulse—per symbol, communications engineers would say—and achieve 56G speed with the same 14 GHz Nyquist frequency used in 28G NRZ. That by itself allows many of the techniques already proven for 28G to be used again, with little or no modification. Note that this has a profound impact on the waveform. Instead of a string of pulses of varying width but only two possible amplitudes, the waveform now looks more like a stairstep function, in which the next transition may be up or down, and from one to three steps in height (Figure 2).

Figure 2. The idealized PAM4 waveform is far more complex than the NRZ waveform.
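For readers who think in code, here is a minimal sketch of that two-bits-per-symbol mapping. The Gray-coded bit assignment and the normalized -3/-1/+1/+3 levels are common industry conventions, not something stated above, so treat them as assumptions.

```python
# Common (but here assumed) Gray-coded PAM4 mapping: adjacent levels differ
# by only one bit, so a single slicer error corrupts only one bit.
GRAY_PAM4 = {
    (0, 0): -3,
    (0, 1): -1,
    (1, 1): +1,
    (1, 0): +3,
}

def pam4_encode(bits):
    """Pack a bit stream, two bits at a time, into PAM4 symbol levels."""
    assert len(bits) % 2 == 0
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(pam4_encode([0, 0, 1, 1, 1, 0, 0, 1]))   # [-3, 1, 3, -1]
```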

 

This is a big win, but it is offset somewhat by the fact that the receiver now has to discriminate between four possible amplitudes in each interval. If you think in terms of eye diagrams, the receiver eye diagram now has three small, irregularly shaped eyes instead of one nice big eye. In other words, you have given up what turns out to be about 11 dB of signal-to-noise ratio (SNR) just by switching to PAM4. You now have very little budget left to absorb the accumulated effects of attenuation, ISI, and reflections, so you must be very good at compensating for them. That job calls for teamwork between the receiver and the transmitter.
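A quick back-of-the-envelope check on that figure, assuming the same peak-to-peak launch amplitude for both schemes: each PAM4 eye is one third the height of the single NRZ eye, which by itself costs 20 × log10(3), or roughly 9.5 dB, of vertical eye opening. The remaining dB or two of the quoted penalty presumably come from practical effects such as transmitter linearity limits and the reduced amplitude of many PAM4 transitions.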

The Transmitter

The transmitter has the easier job: we can shrink it down to just a few major blocks (Figure 3). Data comes into the transmitter in parallel form, and goes straight into a forward error correction (FEC) encoder. The reality is that even with all the other functions we will discuss, the bit error rate (BER) for 56G PAM4 at ranges beyond a few centimeters is unacceptable without FEC. Typically, Reed-Solomon techniques are used.

Figure 3. A simplified transmitter block diagram shows the relatively simple structure.
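As a concrete illustration of what Reed-Solomon FEC costs, the code most often associated with PAM4 Ethernet links is RS(544, 514) over 10-bit symbols, the so-called KP4 code. The article does not name a specific code, so the Python sketch below is a representative example rather than a description of any particular transceiver.

```python
# Representative Reed-Solomon parameters: RS(544, 514) over 10-bit symbols,
# the "KP4" code commonly used for PAM4 Ethernet links. Illustrative only.
n, k, symbol_bits = 544, 514, 10

overhead = (n - k) / k                  # extra symbols sent per payload symbol
correctable = (n - k) // 2              # symbol errors correctable per codeword
line_rate = 56e9                        # raw 56G PAM4 line rate, bits/s
payload_rate = line_rate * k / n        # throughput left for actual data

print(f"Codeword size:       {n * symbol_bits} bits")                 # 5440
print(f"FEC overhead:        {overhead:.1%}")                          # ~5.8%
print(f"Correctable errors:  {correctable} symbols per codeword")      # 15
print(f"Payload throughput:  {payload_rate / 1e9:.1f} Gbps")           # ~52.9
```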

 

 

 

Once FEC encoding has been added, the data stream is further encoded to ensure an adequate number of transitions exist for the receiver’s clock-recovery circuits to work, and to establish proper framing. The data stream is then serialized into a stream of bit pairs and delivered to a de-emphasis block. This block is typically a three-tap finite impulse response (FIR) filter using taps at pre, center, and post positions. Its job is to pre-distort the waveform to counter the distortions that will be introduced by the channel. Instead of putting a nice clean PAM4 waveform into the interconnect and getting out a mess, you put a carefully prepared mess in and get out something like a clean PAM4 waveform. The primary effect is to reduce ISI.
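A minimal Python sketch of that three-tap FIR follows. The tap weights are invented for illustration; in a real transmitter they are set during link training to match the channel.

```python
import numpy as np

# Invented pre-cursor, main-cursor, and post-cursor tap weights
taps = np.array([-0.10, 0.80, -0.15])

def de_emphasize(symbols):
    """Pre-distort a stream of PAM4 symbols with a 3-tap FIR filter."""
    # mode="same" keeps one output sample per input symbol, centered on the main tap
    return np.convolve(symbols, taps, mode="same")

symbols = np.array([-3, -3, 1, 3, 3, -1, -3, 1], dtype=float)
print(np.round(de_emphasize(symbols), 2))
```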

From there the signal goes to a digital-to-analog converter (DAC) and line driver. Given everything that has to happen downstream from here, these functions must do their best to maintain linearity and high SNR, while of course minimizing power dissipation.

The Receiver

Much of the technology at 56G is in the receiver. Although there are many variations in receiver architecture, the basic structure is fairly common (Figure 4). The signal from the interconnect enters an analog filter, grandly known as a continuous-time linear equalizer (CTLE), and then a programmable-gain amplifier (PGA). There is still debate in the industry over which block goes first, so you may see a design with the amplifier before the CTLE. The conditioned signal then goes through analog-to-digital conversion and a second stage of signal conditioning in a feed-forward equalizer (FFE). Again, there is debate: the FFE may be analog and ahead of the analog-to-digital converter (ADC), or digital and after the ADC. And there are still some all-analog designs in which the ADC is not present at all. From there the signal goes to a decision-feedback equalizer (DFE) and slicer. The resulting data stream moves on to clock-data recovery (CDR), serial-to-parallel conversion, and finally into an FEC back-end.

Figure 4. The receiver, simplified here, carries most of the technology burden for the transceiver.

We should look inside some of these blocks a bit. The CTLE and PGA are there primarily to condition the incoming signal for use by the subsequent blocks, eliminating spurious frequencies and centering the signal in the dynamic range of the following functions. The CTLE is a bandpass filter centered on the Nyquist frequency. The PGA in particular must be very careful about linearity and noise, even though it may be called on to have significant gain—remember those 10 dB and above insertion losses.
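One common way to realize a CTLE is an analog stage with a single zero below the Nyquist frequency and two poles at or above it, so the response peaks near 14 GHz and then rolls off. The SciPy sketch below evaluates such a hypothetical transfer function; the pole and zero placements are invented for illustration, not taken from any shipping receiver.

```python
import numpy as np
from scipy import signal

# A generic single-zero, two-pole CTLE that peaks near the 14 GHz Nyquist
# frequency. Pole/zero placements are invented for illustration.
wz  = 2 * np.pi * 2e9     # zero well below Nyquist -> boosts high frequencies
wp1 = 2 * np.pi * 14e9    # first pole near the Nyquist frequency
wp2 = 2 * np.pi * 28e9    # second pole rolls off out-of-band noise

k   = wp1 * wp2 / wz                    # normalizes the DC gain to 1 (0 dB)
num = [k, k * wz]                       # k * (s + wz)
den = np.polymul([1, wp1], [1, wp2])    # (s + wp1) * (s + wp2)

# Evaluate the analog response near DC and at the Nyquist frequency
eval_hz = np.array([1e8, 14e9])
_, h = signal.freqs(num, den, worN=2 * np.pi * eval_hz)
gain_db = 20 * np.log10(np.abs(h))
print(f"Boost at 14 GHz relative to low frequencies: {gain_db[1] - gain_db[0]:.1f} dB")
```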

The ADC, if one is used, is one of the more critical blocks in the receiver. Some designs omit it entirely, using analog DFE and slicer circuits. But increasingly designers are choosing to convert to digital early, and then do further filtering, slicing, clock recovery, and DFE with fast digital signal processing (DSP) hardware. Unfortunately, this requires digitization of a complex analog waveform with the possibility of transitions at the 14 GHz Nyquist edge rates. You need a high sampling rate—at least pulse-rate and maybe twice pulse-rate—with very low jitter to avoid making the eye diagram even worse. And since subsequent blocks are trying to discriminate between four signal levels in a rather noisy signal and to sense how near the center of the sample interval you are sampling, you need a fair amount of resolution—perhaps six to eight meaningful bits. Little wonder that vendors are reluctant to discuss their ADC technology.
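A little arithmetic shows why those requirements are painful. The sketch below uses the classic ideal-quantizer figure, SNR = 6.02 × N + 1.76 dB, to show what six to eight bits buy, plus the sampling rates implied by pulse-rate and 2x sampling at 28 Gbaud; effective resolution in a real converter will be lower.

```python
# Ideal quantization SNR for the resolutions mentioned in the text
for bits in (6, 7, 8):
    print(f"{bits} bits -> ideal quantization SNR ~ {6.02 * bits + 1.76:.1f} dB")

# Sampling rates for a 56 Gbps PAM4 signal (28 Gbaud)
symbol_rate = 28e9
for oversampling in (1, 2):
    print(f"{oversampling}x sampling -> {oversampling * symbol_rate / 1e9:.0f} GS/s")
```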

The next three blocks, the CDR, DFE, and slicer, work together in nested feedback loops to convert the digitized waveform into a string of data bit-pairs. It is the slicer’s job to discriminate between the four possible data values in each sample: 00, 01, 10, 11. What the slicer decides is then fed back into the DFE, a multi-tap FIR filter with perhaps six taps. Multiplying the slicer’s decision about the signal level by the coefficient for each tap and then adding it into the data stream, the DFE attempts to back out the impact of reflections and any residual ISI. This of course requires that the tap coefficients be preset to undo the exact impact of reflections and ISI from the channel—more on that in a bit. If all is well, the impact of the DFE will be to remove most of the distortions introduced by the channel, and hand off to the slicer a waveform that looks very much like what the transmitter started with before de-emphasis.
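Here is a minimal Python sketch of that slicer-plus-DFE loop. The six feedback tap values are invented for illustration; in practice they come from the training process described later.

```python
import numpy as np

PAM4_LEVELS = np.array([-3.0, -1.0, 1.0, 3.0])

def slicer(sample):
    """Pick the nearest of the four PAM4 levels."""
    return PAM4_LEVELS[np.argmin(np.abs(PAM4_LEVELS - sample))]

def dfe(samples, feedback_taps):
    """Subtract the weighted effect of past decisions, then slice each sample."""
    decisions = []
    history = [0.0] * len(feedback_taps)   # most recent decision first
    for x in samples:
        correction = sum(c * d for c, d in zip(feedback_taps, history))
        decision = slicer(x - correction)
        decisions.append(decision)
        history = [decision] + history[:-1]
    return decisions

taps = [0.20, 0.10, 0.05, 0.03, 0.02, 0.01]     # invented coefficients
rx_samples = [2.8, 1.4, -0.6, -2.7, 0.9, 3.1]   # invented noisy samples
print(dfe(rx_samples, taps))
```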

At the same time, the signal from the DFE output is going into the CDR circuit, which is attempting to center the ADC’s sample clock in the pulse interval so that it is digitizing the waveform at the best possible time. This is done by any of a number of techniques, including Mueller-Muller, minimum squared error, and others, all of which rely on either pulse-rate sampling or 2X oversampling, in order to keep frequencies halfway manageable. Unfortunately, this is one block in which the tried-and-true techniques from NRZ are not applicable.
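For reference, the classic baud-rate Mueller-Muller timing error detector fits in a couple of lines. This is the textbook NRZ-era form, shown only as a sketch; PAM4 CDRs adapt it to multi-level decisions and wrap it in loop filtering and a phase interpolator.

```python
def mueller_muller_ted(y_curr, y_prev, d_curr, d_prev):
    """Baud-rate timing error estimate from two samples and their decisions.

    y_*: equalized samples; d_*: the slicer's decisions for those samples.
    The sign of the result nudges the sampling phase earlier or later.
    """
    return y_curr * d_prev - y_prev * d_curr
```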

As a final observation, many of the blocks we have mentioned, including the transmitter de-emphasis and everything in the receiver except perhaps the slicer, must be adjusted based on the observed performance of the channel. This is generally done at power-up, or any time the BER becomes excessive, through an iterative training process. The process may be controlled by a state machine, but given its growing complexity it may these days be administered by a microcontroller.
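The article does not name the adaptation algorithm, but the textbook choice for tuning equalizer taps is least-mean-squares (LMS). A single, hedged update step might look like the sketch below, where the error is the difference between the sample the slicer saw and the decision it made.

```python
def lms_update(taps, error, inputs, mu=1e-3):
    """One LMS adaptation step for a set of equalizer tap coefficients.

    taps:   current coefficients
    error:  slicer input minus slicer decision for the latest sample
    inputs: the samples (or past decisions) feeding each tap
    mu:     small step size controlling how fast the taps adapt
    """
    return [c - mu * error * x for c, x in zip(taps, inputs)]
```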

What’s Next?

That is a very generic overview of what goes on inside 56G PAM4 transceivers. There are many variations, and we have only scratched the surface of the details. But given the inexorable pressure for speed in data centers, we have to ask what is next.

There are no obvious paths from 56G to 100 or 112 Gbps. Doubling the Nyquist frequency seriously worsens all three of the challenges we have discussed. It could also increase crosstalk, which existing techniques do not handle well, to the point that crosstalk becomes a fourth major challenge. The other alternative, moving from PAM4 to PAM8, presents huge difficulties for nearly every block we have discussed, making those receiver eyes even tinier, more closely spaced, and smaller relative to the noise. Stronger FEC might help, but at the cost of reduced coding efficiency and increased power.

Still, hope springs eternal. Presentations at the 2018 IEEE Optical Interconnects Conference suggested the industry would achieve 100 Gbps over copper between chips and optical modules by 2020, still using PAM4. But that, in the view of conference presenters, may be the end of the line.

Beyond that point, the experts see use of fly-over coaxial cables for interconnect, connected directly into IC packages. After that come optical fibers connected directly to modules that integrate digital electronics with optical transceivers. And eventually, all eyes are on silicon photonics and the hope of terminating optical fibers directly on digital dice. At that point, the PCB becomes mainly a medium for ground planes, heat and power distribution, and mechanical support. But there is much work across many technologies between here and there.


Categories: Transceiver Technology | Author: Ron Wilson
