Increasingly sophisticated SoCs, integrating many system components onto a single die, have in general simplified the system designer’s job. But these chips have made the power-delivery subsystem more complex. What used to be a straightforward task of routing Vcc from a supply connector to the ICs has become the design of an active network as complex as any other piece of the system.
This complexity grows from the increasingly challenging power requirements of the SoC. Fortunately, designers have some options for dealing with the task at the board level, and SoC developers are trying to help by absorbing elements of the power network into their chips. But ultimately, power designers still have some difficult decisions to make. And they may have to do some analog circuit simulation before they are done.
Integration has its costs. SoC designers have pulled many diverse circuits onto their dies, and each kind of circuit brings its own voltage, noise, sequencing, and transient-response requirements. Migration to smaller geometries has made possible not only integration, but lower supply voltages. The trend has also driven up peak operating currents, slashed noise margins, and introduced the unbounded complexity of dynamic power management.
The most obvious result of these complications is a sharp increase in the number of external voltages SoCs require. A high-end FPGA, for example, can have 15 externally-driven power rails. Where are they all going?
One answer is different voltage requirements. Core logic supply voltages have moved inexorably lower, stalling out at least temporarily in the neighborhood of 1V, at least until the advent of FinFET processes. But other kinds of circuits have dropped out of that march along the way. I/O cells may be tied to a particular supply voltage by an industry standard. SRAM cells may require a voltage slightly higher than logic-level for reliable full-speed operation, and may use a significantly lower voltage for standby. Precision analog circuits may want a higher voltage to reduce jitter or improve noise margins. These diverse requirements alone multiply the supply rails.
But the number of voltages has not been the only issue. Some SoC circuits—notably low-noise amplifiers, phase-locked loops (PLLs), and physical interfaces—have extraordinarily strict supply-noise limits. These requirements prevent the circuits from sharing a supply rail with noise sources such as digital logic or high-current I/O cells, even if the voltages are the same. So add low-noise rails to the list.
Another demand for additional supply rails comes from, rather ironically, power management. As digital designers employ increasingly aggressive dynamic power-reduction techniques—such as fine-grained clock gating and on-the-fly power gating or voltage scaling—the circuits that use these techniques can make extraordinary demands on the transient response of their supply rails. Loads can change by orders of magnitude in microseconds, or less. Voltages may have to change in response to commands from the SoC. It can make sense to separate these loads from more constant—or noise-sensitive—ones.
Sequencing may also require separation of supply rails. In many SoCs, there is a required sequence for powering up—and in some cases powering down—supply rails. These timing requirements can force separation of rails for circuits that otherwise could share a supply.
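The sequencing constraints just described amount to a dependency graph: each rail may come up only after the rails it depends on. As a minimal sketch, the required power-up order can be derived by a topological sort; the rail names and dependencies below are purely illustrative, not from any specific SoC datasheet.

```python
# Hypothetical sketch: deriving a power-up order from rail dependency
# constraints via topological sort (Python 3.9+ standard library).
from graphlib import TopologicalSorter

# rail -> set of rails that must already be up
constraints = {
    "VCC_CORE": set(),                    # core logic comes up first
    "VCC_AUX":  {"VCC_CORE"},             # auxiliary rail after core
    "VCC_IO":   {"VCC_CORE", "VCC_AUX"},  # I/O rail last
}

order = list(TopologicalSorter(constraints).static_order())
print(order)  # a valid power-up sequence, core first
```

A real sequencer would also enforce delays and monitor power-good signals between steps, but the ordering question itself is exactly this graph problem.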
A system designer faced with a dozen different power rails on an SoC has an interesting problem. The solution will generally be some sort of distributed power network, according to Ashraf Lotfi, fellow and chief technologist in Altera® Corporation's Power Business Unit.
“Typically you will see a bulk regulator on the board, stepping the system 12 V or 24 V down and distributing it to individual point-of-load regulators. Then you will have—often—a point-of-load supply for each rail, in order to meet the varying requirements.”
Due to the proliferation of rails, each new design requires analysis to minimize the number of regulators. Fifteen regulators on one board is not ideal. So the designer has to confront some key questions. Are there rails whose voltage, noise, and sequencing requirements allow them to share a regulator in this particular implementation? If not, are there cases where running one rail at a slightly different voltage—even at the cost of slightly higher power or slightly lower performance—could make sharing of regulators possible? Could an external sequencing switch help?
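The first of those questions can be framed mechanically: rails that agree on voltage, noise class, and sequencing slot are candidates to share one regulator. The sketch below groups an illustrative (entirely hypothetical) rail list on those three attributes; it also shows why the second question matters, since the SRAM rail here misses sharing with the core rail only by 50 mV.

```python
# Hypothetical sketch of the rail-consolidation analysis: rails that match
# on (voltage, noise class, power-up slot) can share a regulator.
from collections import defaultdict

rails = [  # (name, volts, noise class, sequence slot) - illustrative values
    ("VCC_CORE",  0.90, "noisy", 1),
    ("VCC_SRAM",  0.95, "noisy", 1),   # 50 mV away from sharing with core
    ("VCC_PLL",   1.80, "quiet", 2),   # quiet rail cannot join a noisy one
    ("VCC_AUX",   1.80, "noisy", 2),
    ("VCC_IO_A",  3.30, "noisy", 3),
    ("VCC_IO_B",  3.30, "noisy", 3),   # only these two rails can share
]

groups = defaultdict(list)
for name, volts, noise, slot in rails:
    groups[(volts, noise, slot)].append(name)

for key, members in sorted(groups.items()):
    print(key, members)
print(f"{len(rails)} rails -> {len(groups)} regulators")
```

Six rails collapse to five regulators; relaxing VCC_SRAM to 0.90 V, if the circuits tolerated it, would save a sixth.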
After minimizing the number of regulators, designers should turn their attention to optimizing regulator efficiency and footprint, Lotfi says. An obvious start is to use high-efficiency switching regulators, rather than linear regulators, whenever noise and transient-response requirements permit. Lotfi argues that recent high-frequency switcher modules have considerably expanded the range in which such substitutions are possible.
Designers can also do a lot to reduce the board area required by each regulator. Modularization can bring the controller, voltage reference, drivers, power FETs, and inductor into one hybrid package. In some designs, the feedback compensation is also in the package. In principle, this integration reduces the designer's freedom to optimize the regulator's transfer function to the needs of a particular rail. But in practice, Lotfi maintains, requiring the power designer to provide the feedback passives consumes more design time and board space than the added flexibility is worth. Vendors can preset a transfer function that is optimal for the regulator's internal components and for typical requirements. Further, by keeping the critical components within a module, regulator vendors can use higher switching frequencies, improve overall efficiency, and, Lotfi claims, suppress switching noise so effectively that the modules can equal the noise figures of linear regulators.
Whether the choice is a discrete regulator or a miniaturized module, linear or switching, it rests with the system design team to verify that the choice of regulator, external components, and layout actually meets the supply requirements of the SoC. As the problem has grown to include more dynamic behavior and noise-immunity questions, this verification has become less about back-of-napkin calculations from data sheets and more about simulation. Lotfi says that the most sophisticated design teams will do a behavioral simulation of their entire power grid. This requires not only the skill to run the simulation, but also access to accurate models of the components actually on the board—data not always available to smaller organizations. The simpler alternative—if it is available—is to use detailed reference designs from the SoC vendors.
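To make the kind of question such a simulation answers concrete, here is a minimal behavioral sketch, with assumed parameter values, of a single rail: when the load current steps faster than the regulator can slew, the decoupling capacitance must supply the difference, and the rail droops. All numbers are illustrative, not from any real regulator's datasheet.

```python
# Minimal behavioral sketch (assumed values) of transient droop on one rail:
# forward-Euler integration of the decoupling capacitor during a load step.
C = 100e-6        # bulk decoupling capacitance, farads (assumed)
v = 1.0           # nominal rail voltage, volts
i_reg = 1.0       # regulator output current, amps
slew = 1.0e6      # regulator current slew limit, A/s (assumed)
i_load = 1.0      # load current, amps
dt = 1e-7         # 100 ns time step
vmin = v

for step in range(10_000):    # simulate 1 ms
    if step == 100:           # load steps 1 A -> 5 A at t = 10 us
        i_load = 5.0
    if i_reg < i_load:        # regulator ramps toward the load, slew-limited
        i_reg = min(i_load, i_reg + slew * dt)
    v += (i_reg - i_load) * dt / C   # capacitor supplies the deficit
    vmin = min(vmin, v)

print(f"worst-case droop: {(1.0 - vmin) * 1000:.1f} mV")  # prints 78.0 mV
```

The sketch tracks only the worst-case droop; a real power-grid simulation would also model the control loop recharging the capacitor, parasitic trace inductance, and the interaction between rails.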
But even with the best information and tools, some supply problems simply can’t be solved from outside the SoC. Sometimes, the chip designers have to take on the responsibility for powering the circuits they have created.
On-die voltage regulation has a long history, dating back to the use of charge pumps to provide programming voltages for embedded EEPROM on inexpensive microcontrollers. In many cases the motivation was bill-of-materials cost or convenience: microcontroller applications, for example, can be quite intolerant of the cost of a second voltage regulator on the board.
Convenience continues to be an important motivation, even with far more complex chips. Altera IC design manager Weichi Ding points out that advanced FPGAs may use on-die regulation to provide voltages for configuration RAM or back-bias circuits. These uses are not so much to satisfy technical requirements as to keep the number of external rails from climbing even higher than it already is.
Similarly, a number of circuits on Altera's Stratix® V FPGAs require separate regulators because their noise sensitivity makes sharing of regulators with other circuits impractical. Examples include PLLs and Physical Media Attach circuits (PMAs), the latter being the I/O blocks that connect directly to multi-gigahertz serial I/O pins. All these circuits have on-die regulators on the Stratix V FPGA chips, also in order to reduce the number of pins dedicated to external voltage rails.
Dynamic voltage-frequency scaling (DVFS) can also create a need for on-chip regulation if you carry it far enough. In early DVFS implementations, software would predict the level of performance required of a block in the next few tens of milliseconds, and command the hardware to pause operation and shift voltage and frequency in anticipation of the new load. For instance, a handset entering standby mode might shut down its graphics engine entirely, and throttle the CPU back to a very slow clock and minimum operating voltage. This process was ponderous enough to be handled easily with an external regulator programmed to produce several output voltages. But because of the considerable delays and energy expenditure in shifting, the system could only adapt to fairly long-term and predictable changes in activity.
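The coarse, software-driven DVFS described above reduces to a governor picking a (frequency, voltage) operating point from a predicted load. A minimal sketch, with an entirely hypothetical operating-point table:

```python
# Illustrative sketch of coarse software-driven DVFS: pick the lowest
# (MHz, volts) operating point that covers the predicted utilization.
# Table values are hypothetical, not from any real part.
OPP = [  # (utilization floor, MHz, volts), ascending
    (0.00,  200, 0.70),   # standby: slow clock, minimum operating voltage
    (0.30,  600, 0.90),
    (0.70, 1200, 1.10),   # full performance
]

def select_opp(predicted_util):
    """Return the (MHz, volts) point for the predicted utilization."""
    mhz, volts = OPP[0][1], OPP[0][2]
    for floor, f, v in OPP:       # table is ascending; keep the last match
        if predicted_util >= floor:
            mhz, volts = f, v
    return mhz, volts

print(select_opp(0.05))   # standby-grade load -> lowest point
print(select_opp(0.85))   # heavy load -> full performance
```

The millisecond-scale sluggishness the article describes lives outside this lookup: each transition also pauses the block, reprograms the external regulator, and waits for the rail to settle before the new clock is applied.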
At the Design Automation Conference in June, Intel Principal Engineer Tanay Karnik described what happens when you make DVFS much more temporally fine-grained. Intel, watching power consumption on its processors climb toward 100W—and far surpass that for server CPUs—has deployed very fine-grained DVFS on individual processing units on the die. Abandoning operating-system-driven frequency selection at the millisecond level, Intel designers created circuits that examined input buffers and selected voltage and frequency on the fly, based on the next few lines of code. That means potentially changing frequencies and voltages in tens of nanoseconds rather than milliseconds. The much faster DVFS means that the chip can much more closely match energy consumption to the processing needs of individual blocks. But it also puts huge demands on regulators—demands that external regulators simply can’t meet.
To achieve this level of dynamic response, Karnik said, Intel chips such as Haswell use programmable on-die linear regulators. These blocks, implemented in the processor’s native digital CMOS, step the 2.4 V primary voltage down to a selectable output in the range of 0.6-1.8 V, in 12.5 mV steps. The regulators can change voltages at rates up to 100 MHz, and they achieve a rather astonishing slew rate of 100 A/ns in order to track the enormous load changes the power- and clock-gated digital blocks can produce. Such performance, needless to say, would be nearly impossible if there were a centimeter or two of circuit-board trace and a lead frame in the regulator’s control loop.
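A quick arithmetic check on the figures quoted above: a 0.6–1.8 V range in 12.5 mV steps gives 97 selectable output levels. The linear code-to-voltage mapping below is an assumption for illustration only; the actual register encoding of Intel's regulators is not given here.

```python
# Arithmetic on the quoted regulator range: 0.6-1.8 V in 12.5 mV steps.
# The linear code->volts mapping is assumed, purely for illustration.
V_MIN, V_MAX, STEP = 0.6, 1.8, 0.0125

def code_to_volts(code):
    """Map a step code to its output voltage under the assumed encoding."""
    return V_MIN + code * STEP

n_codes = round((V_MAX - V_MIN) / STEP) + 1
print(n_codes)                          # 97 selectable levels
print(f"{code_to_volts(32):.4f} V")     # code 32 -> 1.0000 V
```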
Undertaking such design is not for the faint of heart, Karnik warned. The implementation Intel chose uses on-die inductors, so Intel had to introduce magnetic material into its back-end-of-line process flow. For the design team, the regulator network is a formidable modeling challenge, with multiple domains and millions of simulation elements. The design must be verified—and tested in manufacturing—over the entire range of voltages, and must maintain efficiency over the full load range.
“Internal regulators take significant die area, planning, and debug effort,” Karnik said. “But they are the way to go.” Not only do they allow nearly instantaneous voltage changes and rapid response to changing loads, but they also eliminate seven external chips.
If Intel continues to indicate the direction of advanced SoCs from other vendors, we can expect to see point-of-load regulation becoming more demanding, and to see the regulators themselves gradually moving onto the SoCs, in some cases taking their inductors with them. Certainly, supplying power to SoCs will continue to grow as a design challenge in its own right.