Power Play—Successfully Designing Power Delivery Systems

Kella Knack
|  Created: May 13, 2019  |  Updated: July 29, 2020

Across the industry, the design element that continues to cause the most problems is the power delivery system (PDS). As a consulting firm, we have found that for the last several years the majority of the problems we have been called in to solve have centered on the PDS. As noted in my previous blog on guard traces and their ineffectiveness, every time we have been called upon to solve an EMI problem, we have ended up fixing the PDS.

This article discusses the evolution of PDS design challenges, how they arose, and the methods used to mitigate them. Part two of this article addresses PDS design for ultra-low-power implementations.

A Little Bit of History and a Lot of Problems

First, some grounding: every PDS has both inductance (Lp) and resistance (Rp) in series with the path the power flows through. At low currents, the resistance doesn't bother you; at low frequencies, the inductance doesn't. But as soon as you start going up in frequency, the inductance becomes a major source of degradation.
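
As a minimal sketch of why that is, the few lines of Python below compute the magnitude of a series R-L power path, |Z| = sqrt(Rp^2 + (2*pi*f*Lp)^2), using assumed values for Rp and Lp rather than anything measured:

import math

Rp = 0.005   # assumed 5 milliohms of series resistance
Lp = 2e-9    # assumed 2 nH of series inductance

for f in (1e3, 1e6, 100e6, 1e9):                       # 1 kHz to 1 GHz
    z = math.sqrt(Rp**2 + (2 * math.pi * f * Lp)**2)   # |Z| of the series R-L path
    print(f"{f/1e6:10.3f} MHz   |Z| = {z*1e3:10.3f} milliohms")

At 1 kHz the assumed 2 nH contributes essentially nothing; by 100 MHz it swamps the 5 milliohms of resistance by more than two orders of magnitude.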

So, let’s step back a little to see when things such as inductance and resistance became issues of concern. Most importantly, we can see where the “black magic” of PDS design came into play.

When we started using two-sided boards, the rise times of the signals on them were sometimes dozens of nanoseconds. The frequency content was really low and, as a result, the traces on the boards did not cause a problem. All of the power was routed with traces, not planes. You could put a capacitor between +5V and ground at each IC, and that was good enough. Then, ECL (emitter-coupled logic) came on the scene.

ECL technology could never work without planes on the boards because the rise times were under a nanosecond. This is where the use of traces for power distribution became unacceptable. Because of the fast rise times, you simply could not hook up power with traces. There was too much inductance. So, from the very beginning of ECL technology there was always a power plane pair. Engineers using ECL technology in their designs understood this and used power planes.

Now, let’s move to TTL (transistor-transistor logic). With basic TTL, you could route the power with traces. Then came the advent of ASTTL (Advanced Schottky TTL). It was as fast as ECL, but the engineers designing products with it tried to use the same methods they had used with slow TTL, and the circuits were not stable. The result was fearsome EMI (electromagnetic interference). People tried using 0.1 and 0.01 microfarad capacitors, and it didn’t work. That’s when all the black magic rules started appearing.

There were virtually no EMI engineers in the public domain, and because the engineers using TTL didn’t understand things like transmission lines, power delivery, and the routing of fast signals, they got themselves into real trouble. A product would fail on an EMI test range, the design engineer would ask the person doing the test what should be done about it, and the test engineer (who didn’t understand PDS design either) would cite something that someone else had done to make the problem better. That remedy would get put down on a list and passed around. These lists were the genesis of the “black magic” rules.

The Role of Capacitors in PDS Design and Beyond

One of the results of the “black magic” rules was that when it came time to design a PDS, people had a tendency to put lots of capacitors on their boards in an attempt to fix PDS problems without really understanding how those capacitors did or did not function. This is where the notion of decoupling capacitors came from. In reality, these capacitors don’t really decouple anything. They instead provide a local source of charge to support a switching event.

They are better referred to as “coulomb buckets,” as they store coulombs of charge used to support switching events. (A coulomb is the SI unit of electric charge: the charge transported by a constant current of one ampere in one second.) If the capacitors used are not the right size or don’t function at the frequencies involved, the result is ripple on Vdd. (Vdd is the terminal of the power supply that connects to the drain end of a MOS or CMOS transistor or IC; it is commonly the most positive rail of the power supply.) A thorough treatment of this topic appears in Lee Ritchey’s Altium resource article, “Win at Power Delivery System Design,” posted January 17, 2019.
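
As a rough illustration of that relationship, here is a small Python sketch, with assumed numbers rather than figures from the article, of the charge one switching event pulls from a local coulomb bucket and the Vdd ripple it produces, using dV = dQ/C:

I_transient = 2.0    # amps drawn during the switching edge (assumed)
t_edge = 1e-9        # 1 ns edge (assumed)
dQ = I_transient * t_edge            # coulombs pulled from the local capacitance

for C in (10e-9, 100e-9):            # 10 nF and 100 nF of usable local capacitance
    print(f"C = {C*1e9:5.0f} nF -> Vdd ripple = {dQ / C * 1e3:5.0f} mV")

With only 10 nF effective at the frequencies involved, the same 2 nC event produces ten times the ripple that 100 nF would.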

In 1995, the EMC faculty of the University of Missouri, Rolla (now the Missouri University of Science and Technology) demonstrated that the classic use of capacitors to bypass the PDS was not effective in resolving EMI issues (see Reference 2). (High impedance means high ripple and potential EMI.) Figure 1 shows three impedance vs. frequency curves resulting from that study.

Figure 1. PDS Impedance vs Frequency from UMR Paper

Chart courtesy of IEEE and Todd Hubing

In essence, 0.1 and 0.01 microfarad capacitors improved the performance of the power system (reduced its impedance) out to about 70 MHz. From 70 MHz on, they made the impedance worse, and eventually they did not do anything at all. Specifically, it is the parasitic inductance in series with the capacitors that renders them useless above a certain frequency. What this study showed was that the only way to control impedance at high frequencies (above 100 MHz) is to use plane capacitance. The amount of capacitance needed can be ascertained by determining the number of transmission lines that need to be driven at one time. Switching a transmission line transfers charge from the power supply to the line. From this, it can be determined how big the charge storage device (or coulomb bucket) has to be.
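
The roll-off that study measured falls directly out of the series model of a real capacitor. The sketch below, using assumed ESL and ESR values rather than anything from the paper, computes |Z| = |ESR + j(2*pi*f*ESL - 1/(2*pi*f*C))| for 0.1 and 0.01 microfarad parts; above the series-resonant frequency the parasitic inductance takes over and the impedance climbs again:

import math

def z_cap(f, C, ESL=1e-9, ESR=0.01):    # 1 nH and 10 milliohms are assumed parasitics
    x = 2 * math.pi * f * ESL - 1 / (2 * math.pi * f * C)
    return math.sqrt(ESR**2 + x**2)

for C in (0.1e-6, 0.01e-6):
    f_res = 1 / (2 * math.pi * math.sqrt(1e-9 * C))    # series resonance with 1 nH
    print(f"C = {C*1e6:.2f} uF, series resonance near {f_res/1e6:.0f} MHz")
    for f in (1e6, 70e6, 500e6):
        print(f"  {f/1e6:6.0f} MHz   |Z| = {z_cap(f, C):7.3f} ohms")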

So, to have stable operation and no EMI, there has to be a really robust PDS. In this instance, robust means supplying current at very high frequencies (above 100 MHz), and that can only be achieved with plane capacitance. An obvious way to create plane capacitance in a PCB is to add two plane layers for that purpose. These plane layers can be a specialty material such as ZBC from Sanmina or other materials developed for this purpose. Alternatively, when a PCB has several planes, they can be arranged so that pairs of planes face each other across a piece of laminate or prepreg. Either method achieves the desired result. The advantage of placing existing plane layers next to each other is that no premium is added to the PCB cost for the specialty material.
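
For a sense of scale, here is a parallel-plate estimate of buried plane capacitance, C = e0*er*A/d, sketched in Python with assumed board numbers (a 10-inch by 10-inch plane pair across 3 mils of FR-4; none of these figures come from the article):

E0 = 8.854e-12                               # F/m, permittivity of free space

def plane_cap_nF(area_sq_in, er=4.0, spacing_mil=3.0):
    area_m2 = area_sq_in * 6.4516e-4         # square inches to square meters
    d_m = spacing_mil * 25.4e-6              # mils to meters
    return E0 * er * area_m2 / d_m * 1e9     # farads to nanofarads

print(f"{plane_cap_nF(100):.0f} nF")         # about 30 nF for 100 sq. in. across 3 mils

Thin laminates pay off directly: halving the spacing between the planes doubles the capacitance, which is the appeal of materials such as ZBC.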

But, what do you do when you have small boards, such as the four-layer boards in an Xbox, where there is not enough board real estate to have plane capacitance? The only solution is to move the high-quality capacitance onto the IC package and the die itself. For decades, companies such as AMD and Intel have been building capacitance into their parts because the majority of their parts went onto four-layer boards. But, up until seven or eight years ago, the FPGA vendors did not build capacitance into their parts. This led to a number of boards being built that failed. In some instances, companies had to completely respin their products, which resulted in missed product windows and high NRE costs. In other cases, companies that could not afford a respin, or that missed their market window altogether, simply went out of business. Fortunately, now, all of the major FPGA vendors “get it” and have capacitance built into their parts.

The next big challenge has been all the ICs that have a billion transistors and require 100 amps at 0.9 volts. The task is to deliver very high currents at very low voltages, where there is not a lot of tolerance for voltage drop. This brought the need to address resistance back into the picture. It has been addressed by reducing the number of signal layers in a stackup and replacing them with extra planes. For example, in a 22-layer board built for a high-speed network monitoring product, the middle four layers became one-ounce Vdd and ground planes placed in parallel.
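
A quick DC-drop estimate shows why the extra planes matter. The sketch below uses the sheet resistance of one-ounce copper, roughly half a milliohm per square; the current and the number of squares are assumed values, not taken from the design described above:

RHO_CU = 1.68e-8                              # ohm-meters, resistivity of copper

def plane_drop_mV(current_a, squares, oz=1.0):
    t_m = oz * 35e-6                          # 1-oz copper is about 35 um thick
    r_sheet = RHO_CU / t_m                    # ohms per square (~0.48 mohm/sq at 1 oz)
    return current_a * r_sheet * squares * 1e3

print(f"{plane_drop_mV(100, 4):.0f} mV from a single 1-oz plane")          # ~192 mV
print(f"{plane_drop_mV(100, 4) / 4:.0f} mV with four planes in parallel")  # ~48 mV

Against a 0.9-volt rail, nearly 200 mV of drop is untenable, which is why signal layers gave way to paralleled heavy planes.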

But, now, even changing the stackup is not enough. Beyond looking at the terminal voltage at the output of the power supply, there is a need to compensate for the voltage drop that occurs after that point. For these very high currents, there are two sense lines that are part of a feedback loop: one goes to the ground terminal and the other goes to the Vdd terminal. These sense lines are not connected at the output of the power supply. Instead, they are carried all the way through a couple of dedicated pins to the Vdd and ground inside the IC package, so that the voltage is sensed at the silicon rather than at the board. Whoever designs the IC has to provide a single pin that is isolated from the Vdd and ground planes in the package. One vendor refers to these as “peepholes” because they allow you to go in and measure what happens at the die.
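
Conceptually, the feedback loop simply adds whatever the delivery path drops back onto the regulator's output. Here is a toy Python sketch of that idea, with hypothetical numbers and no resemblance to any vendor's actual control loop:

V_TARGET = 0.9     # volts wanted at the die
R_PATH = 0.002     # ohms between regulator output and die (assumed)

def regulator_output(load_current_a, remote_sense=True):
    drop = load_current_a * R_PATH
    # With remote sense, the loop raises the output until the die itself sees V_TARGET.
    return V_TARGET + drop if remote_sense else V_TARGET

for i in (10, 50, 100):
    print(f"{i:3d} A: regulator drives {regulator_output(i):.3f} V; "
          f"die sees {V_TARGET:.3f} V (sensed) vs {V_TARGET - i * R_PATH:.3f} V (unsensed)")

Without the sense lines, the same 0.2-volt drop at 100 amps would come straight out of the 0.9 volts the die needs.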

Today, it’s not uncommon to have two or three billion transistors on a chip. The goal now is not about cramming more transistors onto a chip; it’s about making them faster. This is FinFET technology, in which a MOSFET is built on a substrate with the gate placed on two, three, or four sides of the channel, or wrapped around it, forming a double-gate or multi-gate structure. In this architecture, you have very narrow gates of 14 nm, and the channel stands up on two sides. This is how chip manufacturers achieve these very tiny dimensions. It is also the reason the supply voltage has had to drop: the oxide layers are so thin that they cannot tolerate high voltages without breakdown between the source and the drain. This is the technology driving 56 Gbps: lots of transistors in very small dimensions with very thin oxide layers. It’s all about performance and making things switch faster.

Summary

Over the course of PDS design history, we went from not having to care about either resistance or inductance, to having to care about resistance, then later inductance, and now resistance again. The on-package sense-pin approach addresses resistance, and the chip manufacturers have been providing it for the past seven or eight years; they know they have to own the PDS issues inside the package. Still, as with most aspects of our technology, it’s a good idea to ask IC suppliers to provide a demo board that shows the IC working the way you need it to work before committing the silicon to your board.

References

1. Ritchey, Lee W. and Zasio, John J., “Right the First Time: A Practical Handbook on High-Speed PCB and System Design,” Volumes 1 and 2.
2. Hubing, Todd, et al., “PWB Power Structures: Theory and Design,” University of Missouri, Rolla, November 1999.

Would you like to find out more about how Altium can help you with your next PCB design? Talk to an expert at Altium.

About Author

Kella Knack is Vice President of Marketing for Speeding Edge, a company engaged in training, consulting and publishing on high-speed design topics such as signal integrity analysis, PCB design, and EMI control. Previously, she served as a marketing consultant for a broad spectrum of high-tech companies ranging from start-ups to multibillion-dollar corporations. She also served as editor for various electronic trade publications covering the PCB, networking and EDA market sectors.
