Let’s say you’ve just received a populated switch or GPU PCB from your favorite PCB fabrication and assembly site. You sit down at your computer and start up your GUI or console to begin transmitting data. Trouble begins—the protocol to bring up the link isn’t working. What do you do? Checking the voltages and jumper positions is easy enough, but what about the signal integrity of your channel? Assuming that you did all of the required simulations before fabrication, you probably want some way to validate the design performs as predicted. You can only do that if you have added the right structures to the PCB!
In this article, I describe several structures to place on your board for SI design verification and how to use them.
The 2x-Thru de-embedding method is the most practical technique for removing fixture effects from validation measurements on PCB test coupons. The concept is straightforward: a dedicated coupon consisting of two identical fixture launches connected back-to-back (with no intervening DUT) is measured, and the resulting S-parameters are mathematically split at the midpoint to extract the fixture contribution on each side. This fixture model is then removed from the measurement of the actual validation structure, leaving only the interconnect response.
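The split-at-the-midpoint idea can be sketched at a single frequency with ideal data. This is a toy illustration, not the full IEEE P370 algorithm: it assumes a perfectly symmetric, matched, reciprocal fixture, in which case the fixture model is simply the matrix square root of the 2x-Thru's cascade (T) matrix. The function names and the example electrical lengths are mine, chosen for illustration.

```python
import numpy as np

def s_to_t(S):
    """Convert a 2-port S-matrix to cascade (T) parameters."""
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    return np.array([[-(S11 * S22 - S12 * S21) / S21, S11 / S21],
                     [-S22 / S21, 1 / S21]])

def t_to_s(T):
    """Convert cascade (T) parameters back to a 2-port S-matrix."""
    T11, T12, T21, T22 = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    return np.array([[T12 / T22, (T11 * T22 - T12 * T21) / T22],
                     [1 / T22, -T21 / T22]])

def sqrtm_2x2(T):
    """Principal matrix square root via eigendecomposition.
    Only valid while the fixture phase stays within one branch (+/- pi);
    real 2x-Thru tools handle branch unwrapping and impedance correction."""
    w, V = np.linalg.eig(T)
    return V @ np.diag(np.sqrt(w.astype(complex))) @ np.linalg.inv(V)

def line(theta):
    """Ideal matched, lossless line of electrical length theta (radians),
    standing in for measured S-parameters at one frequency."""
    return np.array([[0, np.exp(-1j * theta)],
                     [np.exp(-1j * theta), 0]])

# "Measured" data: 0.3 rad fixture halves around a 0.5 rad DUT
T_2x = s_to_t(line(0.6))                       # 2x-Thru coupon (two halves)
T_meas = s_to_t(line(0.3)) @ s_to_t(line(0.5)) @ s_to_t(line(0.3))

T_fix = sqrtm_2x2(T_2x)                        # split the 2x-Thru at midpoint
T_dut = np.linalg.inv(T_fix) @ T_meas @ np.linalg.inv(T_fix)
S_dut = t_to_s(T_dut)
print(np.angle(S_dut[1, 0]))                   # -0.5 rad: the DUT's own phase
```

In practice the same operation runs at every frequency point of the sweep, and production implementations add the error-handling this sketch omits.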
Designing a 2x-Thru structure requires careful attention to symmetry. The two halves of the structure must be geometrically identical, including trace length from the connector launch to the midpoint, via transitions, and any impedance transitions at the connector footprint. Asymmetry introduces errors in the split operation that propagate directly into the de-embedded result. In practice, this means the 2x-Thru coupon must be fabricated on the same panel, in the same stackup, and ideally adjacent to the DUT coupon to minimize process variation between the two.
The length of the 2x-Thru structure also matters. It must be long enough that the fixture-to-fixture reflections are separable in the time domain from the midpoint discontinuity. A structure that is too short causes time-domain overlap between the launch reflection and the midpoint, which corrupts the fixture extraction. A general guideline is that each fixture half should be long enough to allow reflections from the launch to decay before reaching the midpoint, which typically means each half should be at least 15 to 20 mm for connectors with moderate reflection profiles, though this depends on the connector type and rise time of interest.
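The separability requirement can be checked with a quick delay calculation. The rule sketched below, that the round-trip delay of a fixture half should exceed the system rise time, is a simplified form of the guideline above; the Dk, length, and rise-time values are illustrative assumptions, not recommendations.

```python
C0 = 2.998e8          # speed of light, m/s

def one_way_delay_ps(length_mm, eps_eff):
    """Propagation delay of a fixture half, in picoseconds."""
    v = C0 / eps_eff ** 0.5
    return length_mm * 1e-3 / v * 1e12

# Simplified check: the launch reflection travels to the midpoint and back,
# so the round-trip delay of one half should exceed the rise time of interest.
half_mm = 18          # assumed fixture half length
tr_ps = 35            # assumed 10-90% rise time
round_trip = 2 * one_way_delay_ps(half_mm, eps_eff=3.6)
print(round_trip, round_trip > tr_ps)
```

For these assumed values the round trip is roughly 228 ps, comfortably longer than the 35 ps edge, so the launch and midpoint responses are separable in time.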
A validation panel should include, at minimum, three structure types: the 2x-Thru reference, the DUT coupon (the actual interconnect topology being validated), and an impedance reference line for TDR verification. Without the 2x-Thru, you cannot de-embed. Without the impedance reference, you cannot confirm that the fabricated stackup matches the design intent before proceeding to S-parameter extraction.
The de-embedded S-parameters from the DUT coupon become the ground truth against which your simulation model is correlated. This correlation step is where signal integrity validation actually happens. If the de-embedded measurement and the simulation diverge beyond acceptable limits, the root cause is isolated to the interconnect model (material properties, copper roughness model, via model) rather than being obscured by fixture uncertainty. This is the practical value of designing validation structures with de-embedding in mind from the start.
When the 2x-Thru method is applied correctly, the residual fixture error is typically below 1 dB of insertion loss variation and a few ohms of impedance deviation across the bandwidth of interest. This level of accuracy is sufficient to validate loss models, identify copper roughness effects, and confirm that differential via transitions meet impedance targets. Without de-embedding, fixture losses of 0.5 to 2 dB per connector launch at 10 to 20 GHz would be attributed to the interconnect, leading to overly pessimistic loss budgets or incorrect material property extraction.
Designing validation structures without referencing a measurement standard leaves the results open to interpretation. IEEE P370 provides a framework for characterizing high-speed interconnects up to 50 GHz, and it directly governs how test coupons and fixtures should be designed, measured, and post-processed. Any validation structure intended for channels operating above a few Gbps should be designed with IEEE P370 compliance in mind, because the standard defines quality metrics for S-parameter data that determine whether your measurement is trustworthy.
IEEE P370 specifies requirements for fixture design, calibration methodology, and data quality checks including causality, passivity, and reciprocity. When a validation coupon does not conform to these requirements, the extracted S-parameters may appear reasonable in magnitude but carry phase errors or non-causal artifacts that corrupt time-domain simulations. The standard also defines specific test fixture architectures, including the 2x-Thru structure, that enable de-embedding of launch fixtures from the device under test. Designing your validation panel without these structures means you cannot cleanly separate fixture effects from the interconnect performance you are trying to validate.
Key requirements from IEEE P370 that affect validation structure design:

- Fixture design rules, including a 2x-Thru structure matching the DUT launches, so that fixture effects can be de-embedded from the device under test.
- A defined calibration methodology for the instrument before fixture extraction.
- Data quality checks on the resulting S-parameters: causality, passivity, and reciprocity.
If your validation coupons are not designed to support IEEE P370-compliant extraction, you are left comparing simulation to raw measurements that include connector and launch parasitics. This conflates fixture performance with interconnect performance and makes root-cause analysis of failures nearly impossible.
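Two of these data-quality checks, reciprocity and passivity, reduce to simple linear-algebra tests on the S-parameter sweep. The sketch below is a minimal numpy version with binary pass/fail and arbitrary tolerances; IEEE P370 itself defines graded quality metrics, and the function names here are my own.

```python
import numpy as np

def is_reciprocal(S, tol=1e-6):
    """S has shape (nfreq, 2, 2). Reciprocity: S12 == S21 at every frequency."""
    return bool(np.all(np.abs(S[:, 0, 1] - S[:, 1, 0]) < tol))

def is_passive(S, tol=1e-6):
    """Passive if the largest singular value of S is <= 1 at every frequency
    (the network never delivers more power than it receives)."""
    sv = np.linalg.svd(S, compute_uv=False)   # shape (nfreq, 2)
    return bool(np.all(sv <= 1 + tol))

# A lossy, reciprocal delay line passes both checks
f = np.linspace(1e9, 50e9, 101)
s21 = 0.9 * np.exp(-1j * 2 * np.pi * f * 100e-12)   # 100 ps delay, 10% loss
S = np.zeros((f.size, 2, 2), dtype=complex)
S[:, 0, 1] = S[:, 1, 0] = s21
print(is_reciprocal(S), is_passive(S))   # True True
```

Scaling the same data above unity gain would fail the passivity check, which is exactly the kind of non-physical artifact that corrupts time-domain simulations.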
Validation structures that will be measured multiple times, across multiple boards or fabrication lots, need to account for connector mounting repeatability. Soldered connectors introduce variation in the launch impedance profile from board to board. Press-fit or compression-mount connectors reduce this variation but impose mechanical constraints on the coupon footprint. The 2x-Thru method partially compensates for launch variation because the fixture is re-measured on each board, but only if a fresh 2x-Thru measurement is taken for each board under test.
The connector footprint should maintain the target impedance through the transition from the connector pin to the trace. This may require tuning anti-pad sizes, back-drilling any signal via on the connector pin, placing stitching vias, and adding tapers or cutouts at the signal launch. All of these features affect the quality of the fixture extraction. Ignoring them can produce a high-reflection fixture that is harder to de-embed cleanly, because the time-domain energy from the launch takes longer to decay and may overlap with the DUT response. Verifying the connector transition design often requires simulation.
You always want to check if your brand new printed circuit board does what it was intended to do. In terms of SI, you want to check if your traces have the correct loss and impedance. Both can be checked with one structure: the 2x-thru.
The 2x-thru is a transmission line of length 2x, where x is usually the length of a thru transmission line on your PCB, hence the name. Pick x so that the structure is easy to place on the board; you don't want it taking up much space. On PCBs designed to test a single component, x is the length of the trace from your coaxial connector to the device under test. If your PCB is a functional device, you may want x to be the same length as your longest transmission line. That makes sense if you're testing a short trace from an I/O port to a SERDES on an HCA. However, for designs with longer traces, like an Ethernet switch, you may want x to be half the length of the trace from an I/O port to your SERDES.
You need to have an idea of what to expect from the PCB before you check it. This is, in fact, Eric Bogatin's rule of thumb #9 and, in my opinion, one of his most impactful rules for your day-to-day life as an engineer. To get an idea of how much loss to expect, you can model your traces with a closed-form equation or with a 3D EM solver; personally, I find closed-form equations work best for loss. Impedance is a little harder to anticipate, since the PCB fabrication site's process influences the results. A common practice is to assign your high-speed digital traces a target impedance and a tolerance. Most board houses can hold +/- 10%, and some can go as low as 5%; tighter tolerances come at a premium cost.
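As an example of the closed-form approach, the sketch below adds the standard first-order dielectric and smooth-conductor loss terms for a 50-ohm trace. The material values (Dk, loss tangent, trace width) are illustrative assumptions, and real boards will show more loss because copper roughness is not modeled here.

```python
import math

MU0 = 4e-7 * math.pi   # permeability of free space, H/m
C0 = 2.998e8           # speed of light, m/s

def dielectric_loss_db_per_m(f_hz, eps_eff, tan_d):
    """First-order dielectric loss: scales linearly with frequency."""
    return 8.686 * math.pi * f_hz * math.sqrt(eps_eff) * tan_d / C0

def conductor_loss_db_per_m(f_hz, z0, width_m, sigma=5.8e7):
    """Smooth-copper skin-effect loss: scales with the square root of
    frequency. sigma defaults to the conductivity of copper (S/m)."""
    rs = math.sqrt(math.pi * f_hz * MU0 / sigma)   # surface resistance, ohm/sq
    return 8.686 * rs / (z0 * width_m)

# Assumed values: Dk_eff 3.6, tan(delta) 0.02, 150 um (~6 mil) wide 50-ohm trace
f = 10e9
total = (dielectric_loss_db_per_m(f, 3.6, 0.02)
         + conductor_loss_db_per_m(f, 50, 150e-6))
print(total * 0.0254)   # ~1.6 dB per inch at 10 GHz for these assumptions
```

An estimate like this sets the expectation that Bogatin's rule calls for; the measured 2x-thru can then be compared against it.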
Checking loss and impedance is fairly straightforward. To check loss, use a VNA or a TD-VNA; both measure loss accurately, provided you have a valid calibration before you measure. To check impedance, take the reflection S-parameters from the same measurement and convert them to the time domain. Impedance can also be measured directly with a TDR, but I personally recommend using a VNA (figure 1) or TD-VNA (figures 2 and 3). In the resulting profile, impedance rises with time; this is a physical phenomenon caused by the frequency-dependent impedance of the transmission line. Since the profile is not a flat line, what point do you pick? Many SI engineers pick the point in the middle; don't do that. The best point is the first stable point on the transmission line (figure 4), which yields the most accurate answer. If the impedance is outside the range you specified on the print or drill drawing layer, you can ask for a new board or a refund. Doing so can strain your relationship with the fabrication site, so I recommend being careful before making that request.
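The frequency-to-time conversion behind an impedance profile can be sketched in a few lines: inverse-transform the reflection data to an impulse response, integrate it to a step response, and map the reflection coefficient to impedance. This is a minimal sketch under strong assumptions (uniform frequency grid starting at DC, no windowing, no renormalization); instrument software does considerably more.

```python
import numpy as np

def s11_to_z_profile(s11, z0=50.0):
    """Step-response impedance profile from a one-port reflection sweep.
    Assumes a uniform grid that includes DC; no window is applied."""
    h = np.fft.irfft(s11)        # impulse response of the reflection
    rho = np.cumsum(h)           # step response = integral of the impulse
    return z0 * (1 + rho) / (1 - rho)

# Synthetic check: a 55-ohm line terminated in 55 ohms reflects only at its
# front face, so S11 is the constant (55-50)/(55+50) at every frequency.
gamma = (55 - 50) / (55 + 50)
z = s11_to_z_profile(np.full(201, gamma, dtype=complex))
print(z[0], z[-1])               # 55.0 at every point of the profile
```

On real data the profile rises with time as the article describes, which is why picking the first stable point matters.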
Figure 1. ZNB20 VNA from Rohde & Schwarz® showing a setup to verify the high-speed differential 2x-thru on a PCB with probes. Image courtesy of Rohde & Schwarz®
Figure 2. A TD-VNA from MultiLane Inc., part number ML-4035-TDR. Image courtesy of MultiLane Inc.
Figure 3. TD-VNA from Teledyne LeCroy measuring an edge-card channel. Image courtesy of Teledyne LeCroy.
Figure 4. Impedance profile from simulated S-parameters of a 10 in transmission line.
Once you have the S-parameters of your 2x-thru, divide the loss by the length and you have a very reasonable estimate of loss per meter. You can then scale that result to all the trace lengths on your board. In most cases, you'll find the board has more loss than expected. This is, of course, not what you want to hear. The source of this extra loss is almost always surface roughness. Many SI engineers claim surface roughness is a predictable phenomenon, and to some extent, they are right. There are algorithms that adequately predict its ill effects, but they are all really a fudge factor tacked onto conductor losses. From what I can tell, the snowball algorithm is the closest thing to a first-principles explanation of surface roughness. Even that doesn't predict all of its effects; namely, it doesn't predict the phase discrepancy, which means it's incomplete. My point is that you can't fully predict the effects of surface roughness unless the roughness simply isn't there, and that is why we as SI engineers often mispredict IL performance.
The Beatty standard (figure 5) is the next structure to consider placing on your board. The Beatty standard is a stepped-impedance series resonator created by tripling the width of your 2x-thru trace for a certain length. Its resonant behavior is determined by the following equation.
Figure 5. A micro-strip Beatty standard.
F_n = n · c / (4 · l · √εr)
Where F is the frequency in GHz, c is the speed of light, l is the length of the Beatty standard, and εr is the relative dielectric constant of the material (with c in mm/ns and l in mm, F comes out in GHz). If you are using a microstrip structure, use the effective dielectric constant for εr.
Peaks are observed when n is odd, and nulls are observed when n is even. For example, see the plot below that uses the following inputs:
Figure 6. Example Beatty standard modeled with QUCS.
You need to de-embed the coaxial connector and a portion of the trace to observe these peaks and nulls. De-embedding is the process of mathematically removing the electrical effects of a test fixture from a measurement. To de-embed the fixture from the Beatty standard measurement, use a process called 2x-thru de-embedding, in which a 2x-thru is used to create fixture models that are then removed from the measurement using matrix algebra. I show an example of de-embedding a fixture from a Beatty standard measurement on the SI Journal, where you can also find more information about de-embedding in general.
De-embedding can be quite complicated. However, free tools are available. If you use MATLAB or Octave, you can use the de-embedding functions on GitLab.
It is important that several peaks and nulls fall within your observable frequency band, so design the length of your Beatty standard around the VNA you have. Here is a table of good lengths to use based on a VNA's maximum frequency.
Table 1. Design criteria for a Beatty standard based on VNA maximum frequency.
| VNA Maximum Frequency | Length of a microstrip Beatty standard | Length of a stripline Beatty standard |
|---|---|---|
| 26.5 GHz | 33 mm | 28 mm |
| 50 GHz | 17 mm | 15 mm |
| 110 GHz | 7.8 mm | 6.5 mm |
These values ensure roughly 10 peaks and 10 nulls are available across the entire bandwidth of the VNA.
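The same resonance relation can be inverted to size a Beatty standard for a given VNA: solve F_n = n·c/(4·l·√εr) for l with n set to the number of features you want below the instrument's maximum frequency. The effective dielectric constant below is an assumed value, which is why the result lands near, rather than exactly on, the table entry.

```python
C0 = 2.998e8   # speed of light, m/s

def beatty_length_mm(f_max_hz, eps_eff, n_features=20):
    """Length placing n_features alternating peaks and nulls below f_max,
    from F_n = n * c / (4 * l * sqrt(eps_eff)). 20 features = 10 peaks
    plus 10 nulls."""
    return n_features * C0 / (4 * f_max_hz * eps_eff ** 0.5) * 1e3

# Assumed eps_eff of 3.3 for a microstrip on a 26.5 GHz VNA
print(round(beatty_length_mm(26.5e9, 3.3), 1))   # ~31 mm, near the 33 mm entry
```

Re-running this with your own stackup's Dk gives lengths tailored to your material rather than the generic table values.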
Now, what to do with the Beatty standard? First, you can validate your PCB Dk. The Beatty standard's length is known, so you can model the structure with your expected PCB materials. If the delay of the Beatty is incorrect, adjust the Dk to correct the discrepancy; this will also make the peaks and nulls line up in the frequency domain. Once that is complete, you can determine the trace width by adjusting it in your model until the measured and modeled impedances match. In some designs, knowing the width of the traces helps you understand where extra crosstalk is coming from. I learned of this process from an article by Heidi Barnes and Jose Moreira. There is another interesting read on Beatty standards by Chun-Ting Wang Lee; it was first presented at DesignCon and is now available on the SI Journal.
Now, you have determined with fairly high confidence how your design is performing electrically and how the artwork compares to what is fabricated. Seems like you have enough information now, right? Well, you can still take it one step further.
The last structure to consider is a 1x- or 3x-thru. By de-embedding the 1x-thru from the 2x-thru, or the 2x-thru from the 3x-thru, you are left with an isolated transmission line of length 1x. This yields higher accuracy for loss scaling than the 2x-thru alone and gives access to more algorithms for dielectric material property extraction. If you are interested in experimenting with SI-related measurement algorithms, you can also use the two lines of different lengths to write your own TRL code.
If you are not concerned about the PCB material or creating de-embedding code, you can also add a suspicious feature of the PCB as a DUT. For example, via transitions can be very problematic, especially if you hired a contract manufacturer to do your design for you. To test a suspicious via transition and perhaps your contract manufacturer’s SI mettle, you can add a via with some surrounding trace as your DUT. In that scenario, you have 1x of trace length, your suspicious trace and via structure, and another 1x of trace. By de-embedding the trace from the measurement, you isolate the via to understand if it will present SI problems in your link.
Validating loss, impedance, and discontinuities is only useful if the results stay clearly tied to the design decisions that followed. Altium Develop helps engineers keep SI validation context aligned with the evolving layout, reducing version ambiguity, simplifying review, and making release readiness easier to judge. Use it solo, bring others in when needed, and move from measurement to release with fewer manual checks. Get started with Altium Develop today →