5G is moving forward. From a vague notion of the next big thing in wireless, to a loosely defined set of goals that all but invited overpromising, to an increasingly solid set of use cases and technical standards proposals, 5G is rapidly converging on realizable objectives and implementable standards. In the process it is becoming something very different from today’s cellular network.
What Is It For?
Early on, much of the discussion about 5G was in terms of quantitative leaps: Gbps of bandwidth, huge areal density of connections in crowded urban markets, remarkable energy efficiency. Everything LTE is, but more so. Such statements threatened to draw the 5G effort into an impossible situation: systems architects across many application areas could simply presume the existence of an arbitrarily fast, infinitely available, and reliable network, and count on 5G to bail them out.
Want to break into the fixed broadband access market without any fiber or copper? Don’t worry, 5G will be as fast as fiber to the home. Want to do mobile augmented reality without a 5kg headset? Relax: 5G will let you put all the heavy computing in the cloud via a seamless, always-available, high-bandwidth connection. Want a self-driving, connected car without a supercomputer in the trunk? Just drop in a 5G modem, and let the cloud do the work (Figure 1). Want to connect the sensors and actuators of your Internet of Things (IoT) system directly to the Internet? Well, you get the idea.
The problem wasn’t that any one of these goals was unrealistic, given enough time and resources, but that they all seemed to pull in different directions. It became clear that to make progress, standard developers would have to constrain expectations to some reasonable set.
Three Use Cases
The International Telecommunication Union’s International Mobile Telecommunications (IMT) 2020 vision statement narrows the cacophony of competing wish lists to three representative use cases: Enhanced Mobile Broadband, Massive Machine-Type Communications, and Ultra-Reliable Low Latency Communications. Together the three define the vertices of a triangular space that will include practical responses to many of the expectations for 5G (Figure 2).
Enhanced Mobile Broadband is perhaps closest of the three to what most people will think of as the next-generation cell phone. This scenario envisions static users in well-served locations getting up to 20 Gbps data rates, and mobile users getting enough practical bandwidth—from 80 to 200 Mbps depending on location—to support 3D or ultra-high-definition video, and to enable close interaction between mobile users and cloud apps for scenarios like gaming or augmented reality.
Massive Machine-Type Communications offers a quite different scenario. Here, the clients are not servers or humans, but IoT devices in smart cities, factories, buildings, or homes. The emphasis is not so much on raw data rate. The scenario presumes that machines will either have relatively little to say—a few sensor readings per second, perhaps—or they will locally preprocess their data to drastically reduce its bandwidth—as in a smart surveillance camera. The critical resource here is not data rate, but density of connections—up to a million connected devices per km²—and energy efficiency—as much as a hundred times the efficiency of the 4G network.
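The density target alone tells you how many clients a single base station must juggle. A back-of-the-envelope sketch in Python (the cell radii are assumptions for illustration, not standardized deployment figures):

```python
from math import pi

DENSITY_PER_KM2 = 1_000_000  # IMT-2020 connection-density target

def devices_per_cell(cell_radius_m):
    """Devices one cell must serve at the IMT-2020 density target,
    assuming circular, non-overlapping coverage (a simplification)."""
    area_km2 = pi * (cell_radius_m / 1000) ** 2
    return DENSITY_PER_KM2 * area_km2

# Assumed radii: small cell, microcell, macrocell
for r in (50, 200, 1000):
    print(f"radius {r:>4} m -> {devices_per_cell(r):>9,.0f} devices")
```

Even a modest 200 m microcell ends up responsible for over a hundred thousand devices, which is why connection density, not data rate, is the binding constraint in this scenario.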
The third scenario, Ultra-Reliable Low Latency Communications, adds an entirely new and, for anyone used to losing calls as they drive down the street, a rather implausible use case: industrial automation, mission-critical connections, and, ultimately, self-driving cars. These applications not only require relatively high data rates—think of a suddenly confused car dumping streams from all its cameras and lidar to the cloud for analysis—but also two attributes wireless networks rarely even hint at: millisecond-level latencies and functional-safety-level reliability.
Finding a Way
Taken individually, even the most demanding requirements implied by these three scenarios seem achievable. You can almost always increase the delivered data rate by just increasing the bandwidth of the channel, for instance. Failing that, you can apply techniques already tested with today’s LTE-Advanced network, such as multiple transmit and receive antennas (MIMO) and carrier aggregation. There is room, with the expanded transistor counts of advanced semiconductor processes, to further improve spectral efficiency. You can use massive MIMO antenna arrays on the tower to do beamforming, simultaneously creating private radio beams for individual clients. You can overlay a microcell with an array of smaller cells in dense areas. All these measures can increase delivered data rates.
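The levers listed above—more channel bandwidth, more MIMO spatial streams, better spectral efficiency—can all be sized up against the Shannon capacity bound. A minimal sketch, with assumed (not standardized) channel numbers:

```python
from math import log2

def shannon_rate_bps(bandwidth_hz, snr_db, mimo_streams=1):
    """Upper bound on data rate for an ideal link (Shannon capacity),
    multiplied by the number of independent MIMO spatial streams."""
    snr_linear = 10 ** (snr_db / 10)
    return mimo_streams * bandwidth_hz * log2(1 + snr_linear)

# Illustrative, assumed numbers: a 20 MHz LTE-like channel versus
# a 100 MHz channel carrying 4 MIMO spatial streams, both at 20 dB SNR.
base = shannon_rate_bps(20e6, snr_db=20)                    # ~133 Mbps
wide = shannon_rate_bps(100e6, snr_db=20, mimo_streams=4)   # ~2.66 Gbps
print(f"{base / 1e6:.0f} Mbps -> {wide / 1e9:.2f} Gbps")
```

The arithmetic shows why the standard pulls every lever at once: bandwidth and stream count multiply the rate directly, while SNR improvements only help logarithmically.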
To increase the number of served clients in a given area, smaller cells and beamforming help. You can also multiplex a high-bandwidth channel, sharing the bandwidth out among many clients. Changes in network protocols can improve energy efficiency and latency.
But the problems multiply when you try to do all these things at once.
For instance, if you just make each channel wider in the existing 4G bands below 2 GHz, each connection can have more bandwidth. But you get fewer connections. You can instead move into new, higher frequencies like the 24.25-29.5 GHz or 37-43.5 GHz bands, where there is room for channels wider than 500 MHz. But as you move from 3 GHz up into millimeter-wave territory, propagation becomes an issue. At 28 GHz, much of the real world attenuates or blocks the carrier.
Rain alone can add tens of dB/km attenuation at these frequencies, and lovely spring foliage can utterly devour a signal. Even construction materials such as drywall and glass can eat up tens of dB. So while millimeter waves can offer lots of bandwidth, the offer may only be good on short, unobstructed, line-of-sight connections on a dry day.
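The frequency penalty is easy to quantify with the free-space path-loss formula; in the sketch below, the rain and foliage figures are assumed illustrative values, consistent with the tens-of-dB losses mentioned above:

```python
from math import log10, pi

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * log10(distance_m) + 20 * log10(freq_hz) + 20 * log10(4 * pi / C)

d = 200.0  # meters; an assumed small-cell link distance
loss_2ghz = fspl_db(d, 2e9)
loss_28ghz = fspl_db(d, 28e9)

# Assumed excess losses at 28 GHz (real values vary widely with conditions):
rain_db = 10 * (d / 1000)   # ~10 dB/km in heavy rain
foliage_db = 15             # one tree canopy in the path

print(f"free-space penalty at 28 GHz: {loss_28ghz - loss_2ghz:.1f} dB")
print(f"total extra loss: {loss_28ghz - loss_2ghz + rain_db + foliage_db:.1f} dB")
```

The free-space term alone costs 20·log10(28/2) ≈ 23 dB relative to 2 GHz; add rain and a single tree and the 28 GHz link starts some 40 dB behind before the first bit is sent.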
Of course you don’t have to have just one high-bandwidth channel to deliver great data rates. You can patch together narrow and poor channels—lots of them. You can dynamically aggregate multiple channels from a single link, or you can aggregate multiple links through different antennas and different beam paths, through MIMO. With beamforming and tracking you can even hope to keep these links intact while your client moves through her dense urban environment.
Unfortunately, these techniques militate against the needs of those millions of cheap, low-power IoT devices. MIMO in the millimeter-wave bands is unlikely to be cheap, especially if the receiver is trying to extract high-speed data from compromised channels. And the complexity of all that aggregation and link management can run up not only processing overhead, but also energy consumption and latency.
Nor does the complexity of the situation make one sanguine about the needs of an autonomous car tooling along in 100 km/h traffic on a leafy parkway. The combined needs of high burst data rates, millisecond latency, and five-nines reliability look like bad news for the passengers. So what to do?
A Layered Answer
The proposed solution to these challenges is to employ three separate layers of frequency bands, each to meet a specific technical need. These layers do not match up one-to-one with the scenarios, though. Rather, by using the layers in combination the 5G network can meet, or at least approach, the IMT-2020 goals for each of the three scenarios.
The three layers form a sort of wedding cake (Figure 3). At the bottom, providing the broadest coverage, will be the existing LTE bands below 2 GHz. Existing cellular infrastructure already provides coverage to most of Earth’s population in these bands. And these frequencies are good at penetrating buildings, foliage, and weather. Accordingly, 5G will use these low-frequency bands for coverage of more remote areas where immediate deployment of new base stations is not economically feasible, for fall-back connections in difficult situations, for uplinks when more sophisticated 5G connections are providing the downlink, and for undemanding machine-type links. Crucially, the sub-2 GHz layer will backstop higher bands for the high-reliability, low-latency connections. Your self-driving taxi won’t find radio silence under trees or behind buildings: it will just have to hop to a new channel and endure temporarily lower data rates.
The middle layer in the cake will be the medium-frequency bands between 2 and 6 GHz: the C band. These will be the workhorse frequencies for 5G, employing all the formidable technologies at its disposal to maintain high data-rate connections. This is where massive MIMO, beamforming and tracking, and big leaps in spectral efficiency should come into play. The goal is to deliver usable 100 Mbps rates to mobile users under all but the worst conditions by dynamically aggregating whatever resources are available to make a connection. These bands will also carry much of the higher-speed machine-type and high-reliability/low-latency traffic.
But even the combined efforts of the first two layers will not be enough to meet the extreme data rates of the most demanding clients—those who really want that 20 Gbps. For them there is a crowning layer on the cake: bands above 6 GHz, including those difficult millimeter-wave bands. Here, where it is hoped governments will allocate contiguous 800 MHz blocks for 5G use, 5G can substitute for optical fiber, deliver real-time UHD video to mobile users, and undertake similar feats when geography, antenna placement, and channel quality cooperate.
It is more accurate, then, to think of 5G in its IMT-2020 incarnation not as a single wireless network, but as three layers of wireless networks, intimately linked through virtual base stations and control layers in metro data centers to deliver the appearance of one seamless network to the user.
Defining a Radio
Looking across this landscape of diverse requirements and frequencies, developers were able to see one thing clearly: the need for a new radio design to replace the too-limited radio definition of 4G. They creatively named this effort New Radio (NR).
NR must span this entire landscape of needs. It must provide a single architecture that can scale across all three layers of frequencies, from 700 MHz to 40 GHz and, eventually, beyond. It must support massive MIMO arrays and perform dynamic beamforming and tracking. It must squeeze the best data rate out of each available channel, and allow sharing of the available data rate across multiple users and message types in order to achieve those vast numbers of devices per km². And it must be implementable at a range of price points with semiconductor processes already available in 2020. To achieve this, the NR architects made changes to modulation, error correction, frame definition, and protocol in the air interface.
They started by choosing a dense orthogonal frequency-division multiplexing (OFDM) scheme that is scalable across frequency bands and can be multiplexed among many users. The scheme generates simple waveforms (you don’t want the RF front end to be any more of a rocket-science project than it has to be) that are useful for multi-user access and amenable to efficient implementation in processes like 10 or 7 nm, with their large transistor budgets.
To this the architects added multi-edge low-density parity-check (ME-LDPC) channel coding, which improves coding efficiency, is easy to implement in parallel circuitry, and permits short transmission time intervals (TTIs)—the latter being vital to keeping latencies low.
Latencies require some further work, though. NR redefines the data frame structure to pack scheduling data and acknowledgements into the initial data frame, reducing the extended turnaround times that stretched out latency in LTE. These decisions are vital both to reducing latency for machine-to-machine communications and to rapid training of transmitters for beamforming, so there is some hope of tracking fast-moving mobile clients. While especially valuable in time-division duplexing, the new frame structure will also support frequency-division duplexing.
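The scalability across bands and the short TTIs come from the same mechanism: NR’s published OFDM numerology (3GPP TS 38.211), in which subcarrier spacing doubles with each numerology index while the slot, the basic scheduling unit, shrinks in step. A quick sketch:

```python
# 5G NR scalable numerology (3GPP TS 38.211): subcarrier spacing is
# 15 kHz * 2**mu, and the 14-symbol slot duration is 1 ms / 2**mu.
# Wider spacing in the high bands thus buys shorter scheduling intervals.

def nr_numerology(mu):
    subcarrier_khz = 15 * 2**mu   # 15, 30, 60, 120 kHz for mu = 0..3
    slot_ms = 1.0 / 2**mu         # slot (scheduling unit) duration
    return subcarrier_khz, slot_ms

for mu in range(4):
    scs, slot = nr_numerology(mu)
    print(f"mu={mu}: {scs:>3} kHz spacing, {slot:.3f} ms slot")
```

At mu=3, the 0.125 ms slot is what makes millisecond-scale air-interface latency plausible, where LTE’s fixed 1 ms TTI could not get there.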
The New Radio is an aggressive leap beyond the LTE radio, not really introducing new science, but pulling together many existing ideas and fitting them into the capabilities of 2020 process technology. But the challenges of building the NR may pale in the glare of a different kind of challenge: bureaucracy.
The three layers of the IMT-2020 cake aren’t isolated: they need each other. While it is possible to phase in limited urban-market 5G service with just some spectrum in the middle layer, or to provide point-to-point fixed broadband links in just slivers of the top layer, to get what most people are expecting from 5G you need all three layers.
That means individual countries will need to reallocate big, contiguous blocks of spectrum away from incumbent users, from 20 MHz chunks in the 700 MHz band to great 800 MHz swathes above 6 GHz. And this will need to be done in a globally coordinated way so that a single NR design can operate across national borders.
So far progress looks good. Huawei estimates, based on public data, that over half of the desired spectrum in the mid-level C band is either available or under regulatory consideration. Big chunks are still listed as future potential in the US, South Korea, and Europe. In the high-frequency layer matters are less certain.
What seems certain is that as 2020 approaches, NR implementations solidify, and local service providers finalize decisions about which frequencies they will need and which they can afford, the value of uncommitted blocks of spectrum—in economic and in political terms—will skyrocket. It may be the results of the ensuing 11th-hour negotiations that will determine the public perception of 5G. Will it be only an enabling platform for dramatic global media events like the Olympics, or just another obscure tool for giant high-tech companies and their wealthiest customers, or will it be a society-altering change in the way humans and their things relate to each other? Technology can make the latter answer possible, but only time, circumstances, and a certain amount of good will can affirm it.