Yocto vs. Ubuntu: Which OS is Best For Embedded AI?

Zachariah Peterson
|  Created: January 4, 2020  |  Updated: March 6, 2021

When I was in high school, everyone was laser-focused on the Red Hat family of Linux distributions, and let’s not forget Turbolinux. Almost 20 years later, the list of Linux distros has only continued to grow; there are now over 600 available. Debian-based distros like Ubuntu have become popular in server and data center environments, and they support a huge range of software.

In the embedded world, a full Ubuntu installation can consume most of your board’s onboard storage before you even start installing additional software or collecting data. If you’re not in the business of building your own Linux distro, there are other options available for your board. When it comes to data-intensive AI applications, you need an OS that contains only the bare essentials required to run your system. This is where Yocto comes in; the goal of the Yocto Project is to provide optimized software for highly specific embedded applications. Here’s what you need to know about Yocto vs. Ubuntu and the benefits of each.

Yocto vs. Ubuntu: What’s the Difference?

First, there is something important you should know about Yocto: it is not a Linux distro. In fact, the Yocto Project website’s motto states, “It’s not an embedded Linux distribution, it creates a custom one for you.” Contrast this with Ubuntu, which is a full Linux distribution for general-purpose computing. Since Ubuntu and other Debian-based distributions are geared toward general-purpose computing and programming, they are a great choice for a development environment, where code changes frequently and prototypes need quick iteration.

Yocto allows you to trim the OS down to the bare essentials required to run your system. Yocto is not a full distribution; it is more properly called a meta-distribution. Think of it as a collection of libraries, dependencies, configuration values, and classes that are pieced together to create a custom Linux runtime image tailored to your specific needs. Yocto is heavily modular, and you’ll need to use an SDK to build out a Yocto distribution that is specific to your embedded system.

Ubuntu vs. Yocto at a glance:

Applications
- Ubuntu: General purpose
- Yocto: Embedded, customized

Size
- Ubuntu: 7-8 GB
- Yocto: <2 GB

Uses
- Ubuntu: Fast prototyping and proof of concept during development
- Yocto: Production-grade OS for embedded systems

Configuration
- Ubuntu: Easy; comes prepackaged for deployment
- Yocto: Hard; you get the best results when everything is customized

Adding packages
- Ubuntu: Easy; just run apt-get install [package_name] from the console
- Yocto: Hard; requires a complete image rebuild and reinstall

There is a danger during development that Ubuntu (or another Debian-based distro) gets deployed on the new system when it is not necessary, simply because it was used during development. The logic goes something like this: “we already know it works on Debian, so why risk rebuilding on Yocto?” This is understandable, but data-intensive applications in edge computing and AI can seriously benefit from a customized, trimmed-down Yocto image.

Yocto is powerful and allows you to include only the portions of the software you need for your system, but it comes with a learning curve. Yocto uses “meta-layers” to create the overall configuration of your custom Yocto OS, and specific hardware platforms require specific meta-layers. If you’re not a hardcore programmer, you’ll likely find yourself at a loss for building the meta-layers required for your hardware platform. Fortunately, the open source community has created some tools for deploying custom Yocto distributions. Also, some hardware vendors will provide an SDK that allows you to easily deploy a custom Yocto distribution for your embedded system.
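If you do end up assembling the build configuration yourself, here is a minimal sketch of what it looks like. A Yocto build is steered by a handful of configuration files: conf/bblayers.conf registers the meta-layers your image draws from, and conf/local.conf selects the target machine and any extra packages. The layer paths, machine name, and package names below are placeholder assumptions; use whatever your vendor’s BSP or SDK documentation specifies.

    # conf/bblayers.conf (sketch): register the meta-layers for the build.
    # The meta-tegra path is a placeholder for your board's BSP layer.
    BBLAYERS ?= " \
      /home/build/poky/meta \
      /home/build/poky/meta-poky \
      /home/build/meta-openembedded/meta-oe \
      /home/build/meta-tegra \
      "

    # conf/local.conf (sketch): pick the target machine and add only what you need.
    # Machine and package names are illustrative assumptions.
    MACHINE = "jetson-nano-devkit"
    IMAGE_INSTALL:append = " python3 tensorflow-lite"

From there, building the image is a single bitbake invocation (for example, bitbake core-image-minimal), and every new package means another rebuild, which is exactly why the comparison above lists adding packages under Yocto as “hard.”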

Yocto for Embedded AI Systems

The trimmed-down nature of Yocto makes it ideal for embedded AI applications built on a COM (computer-on-module). To get the best performance out of your board, you’ll need to squeeze every bit of processing power from it. For simple classification or prediction tasks on text snippets or numerical data that don’t run in real time, a single Raspberry Pi or BeagleBone board should have enough power, as long as your code is optimized. For more intense tasks, such as processing live video and audio data across multiple streams, you’ll need something much more powerful.
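To give a sense of scale, here is a minimal sketch of that kind of lightweight numerical classification; scikit-learn and the four-feature input are illustrative assumptions, not part of any particular board’s stack.

    # Minimal sketch of a lightweight classifier on numerical sensor data.
    # scikit-learn and the four-feature layout are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 4))  # e.g., temperature, humidity, vibration, current
    y_train = (X_train[:, 0] + X_train[:, 2] > 0).astype(int)  # stand-in labels

    model = LogisticRegression().fit(X_train, y_train)

    # A single prediction is a handful of multiply-adds -- trivial work for a
    # Raspberry Pi or BeagleBone between sensor samples.
    new_reading = np.array([[0.3, -1.1, 0.8, 0.05]])
    print(model.predict(new_reading))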

NVIDIA’s Jetson modules are an excellent choice to bring into an embedded system for AI applications. You can also run full-scale TensorFlow models on these modules as long as your OS is CUDA-enabled, which lets you run powerful AI models on a compact NVIDIA GPU. You’ll need to use NVIDIA’s JetPack SDK if you’re going to work with these boards. Using Yocto in this environment eliminates many of the unnecessary portions of a standard Linux distribution and saves roughly 6 GB of onboard storage compared to Ubuntu.
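As a quick sanity check that an image really is CUDA-enabled, a GPU-enabled TensorFlow build should report the Jetson’s GPU and run inference on it. The sketch below assumes that build; MobileNetV2 and the random input frame are illustrative stand-ins for whatever model and camera pipeline you actually deploy.

    # Sketch: confirm TensorFlow sees the Jetson GPU, then run one inference.
    # MobileNetV2 and the random frame are illustrative placeholders.
    import numpy as np
    import tensorflow as tf

    # On a CUDA-enabled build this should list one GPU device on a Jetson module.
    print("GPUs visible to TensorFlow:", tf.config.list_physical_devices("GPU"))

    model = tf.keras.applications.MobileNetV2(weights=None)   # weights=None avoids a download
    frame = np.random.rand(1, 224, 224, 3).astype("float32")  # stand-in for a camera frame

    # When a GPU is visible, TensorFlow places this computation on it automatically.
    predictions = model(frame)
    print("Output shape:", predictions.shape)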

Even if you’re not an SBC designer, you can easily build a new board for your embedded AI application around a standard COM or an NVIDIA module using the modular design tools in the Upverter Modular workspace. To get started, just open the software and search for the modules you want to bring into your board. You can drag and drop the module connectors you need into the board area, and you can also add a variety of sensors, power regulators, and wireless comms modules. As you add new modules, Geppetto will flag any additional modules required to complete your design.

This image processing board will run Yocto and TensorFlow AI models on an NVIDIA Jetson Nano COM.

The Verdict

When it comes to Yocto vs. Ubuntu, the verdict should be pretty clear. If you’re busy developing code, building a proof-of-concept, and testing early prototypes, then use Ubuntu. If you’ve fully tested your code and you’re ready to test a functional prototype on an embedded board, you’ll see some serious benefits in terms of speed and memory if you use Yocto.

If you need to reduce your development time for AI applications with NVIDIA Jetson modules, the Gumstix Yocto image includes TensorFlow with CUDA enabled, so you can quickly run powerful TensorFlow models on NVIDIA GPUs. Data-intensive embedded AI tasks like image classification and speech recognition across multiple video and audio streams demand all the processing power and onboard storage you can get, and you can free up roughly 6 GB of onboard storage when you use Yocto vs. Ubuntu.

The powerful modular design tools in Upverter Modular give you access to a broad range of industry-standard COMs and popular modules, allowing you to create production-grade hardware for nearly any embedded AI application. You can also easily build and deploy the Gumstix Yocto image on your NVIDIA Nano or TX2 board. If your system needs additional functionality, you can include wireless connectivity (WiFi/LoRaWAN/Bluetooth), an array of sensors, high resolution cameras, and a number of standard COMs.

About Author

Zachariah Peterson has an extensive technical background in academia and industry. He currently provides research, design, and marketing services to companies in the electronics industry. Prior to working in the PCB industry, he taught at Portland State University and conducted research on random laser theory, materials, and stability. His background in scientific research spans topics in nanoparticle lasers, electronic and optoelectronic semiconductor devices, environmental sensors, and stochastics. His work has been published in over a dozen peer-reviewed journals and conference proceedings, and he has written 2500+ technical articles on PCB design for a number of companies. He is a member of IEEE Photonics Society, IEEE Electronics Packaging Society, American Physical Society, and the Printed Circuit Engineering Association (PCEA). He previously served as a voting member on the INCITS Quantum Computing Technical Advisory Committee working on technical standards for quantum electronics, and he currently serves on the IEEE P3186 Working Group focused on Port Interface Representing Photonic Signals Using SPICE-class Circuit Simulators.
