Field Programmable Gate Arrays, or FPGAs, have become ubiquitous in high-speed, real-time digital systems. They can be used for time-critical applications, digital signal processing, or even crypto mining. Their efficiency in both speed and power makes them perfect for reusable high-speed applications. The speed at which FPGAs operate continues to increase at a dizzying pace, but their adoption into Continuous Integration (CI) pipelines has not kept the same pace. In this article we will review the concept of CI pipelines, their application to FPGAs, and look at examples of how to set this up.
If you haven’t noticed by now, I practically eat, sleep, and breathe Continuous Integration. Whether it’s CI for PCB design or CI for Embedded Systems, I’m always looking for ways to continuously improve and automate builds for any type of system. Some recent feedback I’ve gotten from folks is that there hasn’t been much progress made with FPGAs and CI systems. The real principle behind CI for FPGAs follows the same logic as all other CI systems: create a repeatable build environment that can do all the heavy lifting for us. In an FPGA-based CI system we would typically see the following three stages:
Figure 1: Stages of an FPGA CI Pipeline
Each stage is important in its own right, and each requires its own setup. Let’s look at each stage to understand what they represent and how to implement them.
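Before digging into each stage, the overall shape of such a pipeline can be sketched as a GitLab CI configuration. This is only a skeleton, assuming GitLab CI as the platform; the image names, runner tags, and make targets are placeholders, not part of the examples discussed below:

```yaml
# Sketch of a .gitlab-ci.yml with the three FPGA stages.
# Image names, tags, and make targets are hypothetical placeholders.
stages:
  - simulate
  - build
  - test

simulate:
  stage: simulate
  image: my-registry/iverilog:latest      # containerized simulator (assumed image)
  script:
    - make sim                            # run the self-checking simulation

build:
  stage: build
  image: my-registry/vivado-thin:latest   # thin image; vendor tools mounted at runtime (assumed)
  script:
    - make bitstream                      # synthesize, place & route, write bitstream

test:
  stage: test
  tags: [hardware]                        # runner physically attached to the target board (assumed)
  script:
    - make hw-test                        # automated on-hardware checks
```

Each job runs inside its own container, which is what makes the pipeline repeatable and portable across runners.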
Simulation is an integral part of FPGA design. Building an FPGA image to load onto a target can take a long time. Rather than write code, build, and test it out on hardware, simulation gives us the ability to rapidly test our code, or Register-Transfer Level (RTL) design, within an environment that simulates the behavior of an FPGA. Generally this is done at the user level, but it is becoming increasingly popular to integrate FPGA simulation into CI pipelines. This means that someone would push their code to their repository and a pipeline would kick off to run the (self-checking) simulation somewhere in the cloud. To truly do this “somewhere in the cloud,” one needs to create an environment that can be encapsulated, or containerized, into a self-sufficient environment. We do this using something called Docker containers. These act almost like virtual machines that can be run anywhere. This particular Docker container, as an example, creates a containerized environment that enables a user to run Icarus Verilog within any Linux system. We then take that container and use it to create our FPGA simulation pipeline. In this example you can see a simple “Hello World” pipeline happening in the cloud using Icarus Verilog. Note that this can be done with any FPGA simulation tool.
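A container like the one described above can be built from a very small Dockerfile. This is a minimal sketch, assuming an Ubuntu base image and the stock `iverilog` package from the distribution repositories, not the exact Dockerfile used in the example:

```dockerfile
# Minimal sketch of an Icarus Verilog container (Ubuntu base assumed)
FROM ubuntu:20.04
RUN apt-get update && \
    apt-get install -y --no-install-recommends iverilog && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /work
```

Inside this container, a simulation job would compile and run a self-checking testbench with something like `iverilog -o sim tb.v dut.v && vvp sim` (file names are placeholders), failing the pipeline if the testbench reports an error.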
Figure 2: Pipeline run of FPGA simulation using GitLab CI
A second, also very important, stage within the FPGA pipeline is the build stage. We want to be able to synthesize, place and route, and generate a bitstream for our FPGA design. This is also typically done by users within the tool that is provided by the vendor (e.g. Xilinx, Intel, Microchip, etc.). Rather than run this build locally, we’d like to have the build occur elsewhere. This can, however, be a bit tricky since the FPGA tools are usually very large. One approach that many users take is to have a dedicated “build machine” that runs all the build pipelines. This approach isn’t bad, but it doesn’t scale and can become a single point of failure. Other folks have attempted to containerize FPGA tools, but those images can exceed 100 GB, which essentially renders them unusable for cloud applications. A middle ground that I have found to work well is the network installation method. As an example, I’ve created a container that runs Vivado 2019.1, but the tool itself is not installed on the image (thus the image size is less than 300 MB). What I’ve done is installed Vivado onto a network drive (in this case EFS in AWS) and then mounted it within my Docker container. Since I’m running my pipeline in AWS, the latency between the EFS and the EC2 instance (Kubernetes Node) is negligible.
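A build job using this network installation method might look like the following sketch. It assumes the EFS share is already mounted into the container at `/opt/Xilinx` (for example via a Kubernetes persistent volume); the image name, mount path, and script path are all illustrative:

```yaml
# Hypothetical build job: Vivado lives on a network drive, not in the image.
build_bitstream:
  stage: build
  image: my-registry/vivado-runner:2019.1   # <300 MB image without the tools (assumed name)
  script:
    # The network-mounted Vivado install is put on PATH, then run headlessly
    - export PATH=/opt/Xilinx/Vivado/2019.1/bin:$PATH
    - vivado -mode batch -source scripts/build.tcl
  artifacts:
    paths:
      - "*.bit"                             # keep the generated bitstream
```

Because the tools are mounted rather than baked into the image, the container itself stays small enough to pull quickly on any cloud runner.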
In this example I am using an Arty A7 device from Digilent to create a digital filter. I’m using an automated build script to generate the bitstream file for my device on every push to my repository. As you can see in the output, I successfully call Vivado even though it doesn’t exist within the Docker container (i.e. it’s mounted as an external drive).
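For reference, an automated Vivado build script of this kind is often written in Tcl using the non-project flow. This is only a sketch of that idea, not the script from the example; the top module name, file paths, and constraints file are placeholders (the part number is the Artix-7 device found on the Arty A7-35T):

```tcl
# Sketch of a non-project-mode Vivado build script (e.g. scripts/build.tcl).
# File names and the top-level module are hypothetical.
read_verilog [glob src/*.v]
read_xdc constraints/arty_a7.xdc
synth_design -top filter_top -part xc7a35ticsg324-1L
opt_design
place_design
route_design
write_bitstream -force filter_top.bit
```

Running this through `vivado -mode batch -source scripts/build.tcl` produces the bitstream with no GUI interaction, which is exactly what a CI runner needs.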
Figure 3: Pipeline run of FPGA build using GitLab CI
The testing phase is really going to depend on the individual and the project. The objective of testing within a CI pipeline is to automate as much as possible. Just as I automated my DSP example for Arduinos with my Analog Discovery 2, I could also do the same here. Covering an automated test solution for FPGAs would be a bit out of scope for this article. The main principle here is to ensure that it is repeatable and runs within an encapsulated, or containerized, environment. It’s important to remember that testing is an important piece of the CI pipeline and should be implemented at whatever level the user can handle.
In this article we covered the concept of CI pipelines for FPGAs. We reviewed the three critical stages that make up FPGA pipelines: simulation, build, and test. We looked at examples of simulation and build pipelines and discussed the importance of testing. After reviewing this article and the examples, the reader should understand the basic makeup of what it takes to create an FPGA-based CI pipeline.
When you're ready to build your custom FPGA board to support your embedded system use the complete set of PCB design and layout features in Altium Designer®. Once you’ve completed your PCB and you’re ready to share your designs with collaborators or your manufacturer, you can share your completed designs through the Altium 365™ platform. Everything you need to design and produce advanced electronics can be found in one software package.
We have only scratched the surface of what is possible to do with Altium Designer on Altium 365. Start your free trial of Altium Designer + Altium 365 today.