AI Vision with the Kria KV260 Vision AI Starter Kit

Ari Mahpour | Created: November 18, 2024

In Getting Started with the Kria KV260 Vision AI Starter Kit, we unboxed and played around with the Kria KV260 Vision AI Starter Kit from AMD Xilinx. This board provides an FPGA and an ARM processor powerful enough to run a full Ubuntu distribution. In this article, we're going to build and run the SmartCam application using a Raspberry Pi camera. The application detects faces in real time and displays the results on a monitor connected to the board.

Why I've Written This Tutorial

This tutorial follows the original tutorial put together by the folks at AMD Xilinx, so you'll notice that much of it is very similar (if not identical) to theirs. My initial reaction to the original was to feel overwhelmed. I have a fairly decent background in FPGA design, but walking through their tutorials, blow by blow, can sometimes be challenging and a bit daunting. I was looking for something more straightforward and simpler to follow. After poring over other people's rewritten tutorials, I wasn't terribly happy with what I found; hence, I decided to write my own.

If you're looking for all the gory details, I highly recommend that you review the original tutorial. There are some steps that aren't super clear, but I attempt to get through (or even bypass) them in this tutorial. Most importantly, at the time of writing this article, the sample SmartCam application did not seem to work with the latest firmware. In my forked repository, I have created automated scripts (and even the final flash files needed) to get your demo up and running without a hitch. With this tutorial in hand, I hope you will be able to jump into AI on hardware targets as quickly as possible and experience that "woah" moment that I got after successfully bringing up the demo.

Hardware Prerequisites

You will, of course, need the Kria KV260 Vision AI Starter Kit from AMD Xilinx. You will need to get your board set up, which you can do by following my previous tutorial, Getting Started with the Kria KV260 Vision AI Starter Kit. You will also need a Raspberry Pi Camera Module V2. The V2 part is super important: I have attempted to run the demo with the cheaper V1 version of the camera and with other knock-off cameras, and I can attest that they don't work with this demo. You'll need to plug the camera's ribbon cable into the J9 port on the board. Lastly, you'll need an HDMI-capable monitor (or TV) to hook up to your KV260 kit (and an HDMI cable, of course).

Software Prerequisites

Before getting started with the demo, you need to have both Docker and the SmartCam example installed on your Kria KV260 device (even though we won't be using the SmartCam application itself). You will also need Xilinx Vitis (the full installation), version 2022.1, on your development machine. Note that the Vitis version number is very important, as this tutorial is built specifically for that release.

You can install Docker using the Convenience Script method. To do that, open up a terminal and run the following lines:

curl -fsSL https://get.docker.com -o get-docker.sh

sudo sh ./get-docker.sh

sudo usermod -aG docker $USER
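
To confirm Docker is working, you can run the standard hello-world container. Keep in mind that the usermod group change only takes effect after you log out and back in, so for now you may need to prefix the command with sudo:

docker run hello-world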

To install the SmartCam application package, run the following command in the terminal:

sudo apt install -y xlnx-firmware-kv260-smartcam
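
If you want to verify that the firmware package landed where the tools expect it, you can list the new firmware directory and check that xmutil now reports the application (kv260-smartcam should appear in the list):

ls /lib/firmware/xilinx/kv260-smartcam/

sudo xmutil listapps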

You'll also need to clone my fork of the original repository:

git clone --recursive --branch rpi-camera-demo-clean https://github.com/amahpour/kria-vitis-platforms.git
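
Once the clone finishes, move into the repository and confirm you're on the demo branch (the second command should print rpi-camera-demo-clean):

cd kria-vitis-platforms

git branch --show-current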

Running the Build

To make things as simple as possible, I've written a script that runs through the build process automatically. You won't need to follow the screenshots in the original tutorial to update the block designs or change any code. This tutorial attempts to run the build without having to jump into the Vitis user interface. To run the build script, cd into your cloned repository and run the following commands:

cd kv260

./build_demo.sh
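
Because the build runs for quite a while (more on that below), it can be handy to capture the output to a log file in case you need to debug a failure. One way to do that:

./build_demo.sh 2>&1 | tee build.log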

Note that this script has been written with Linux in mind. If you're running on Windows, I highly recommend that you set up WSL 2 with Ubuntu and install Xilinx Vitis there (versus Windows).

If you get an error complaining that Vivado cannot be found, you probably forgot to source the Xilinx settings. Just run this command:

source /tools/Xilinx/Vivado/2022.1/settings64.sh
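
After sourcing the settings file, you can confirm that the tools are on your PATH and that you're picking up the right release (the version reported should be 2022.1):

which vivado

vivado -version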

Running the build_demo.sh script will completely bypass the whole tutorial because I have included the updated block design, pin constraint file, and project configurations in the kv260/extras/ folder of the repository. If you want to walk through the tutorial step by step, I highly recommend taking a look at the original.

If everything ran correctly, you should end up with a bitstream file located in

kv260/overlays/examples/smartcam/binary_container_1/link/int/kv260-raspi-dpu.bit.bin.

The build process can take one to two hours. Not everyone has time for that, so I have included the bitstream file in the kv260/extras/ folder as a backup.
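
Either way, you can confirm the file exists before copying anything over to the board:

ls -lh kv260/overlays/examples/smartcam/binary_container_1/link/int/kv260-raspi-dpu.bit.bin

ls -lh kv260/extras/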

Running the Demo

At this point, we're ready to copy our files to the KV260 board and run the demo. You can transfer files via a USB flash drive or via the SCP command (secure copy). You'll need to transfer the following files over:

  • kv260/extras/kv260-raspi-dpu.bit.bin (or the generated one called out above)
  • kv260/extras/kv260-raspi-dpu.dtbo
  • kv260/extras/kv260-raspi-dpu.xclbin
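
If you go the scp route, the copy looks something like this (a sketch only: substitute your own username on the board and its IP address):

scp kv260/extras/kv260-raspi-dpu.bit.bin kv260/extras/kv260-raspi-dpu.dtbo kv260/extras/kv260-raspi-dpu.xclbin ubuntu@192.168.1.50:~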

Once back on your KV260 device, we need to move these files to the library section where firmware is typically loaded:

sudo mkdir /lib/firmware/xilinx/kv260-raspi-dpu

sudo mv kv260-raspi-dpu.bit.bin /lib/firmware/xilinx/kv260-raspi-dpu/

sudo mv kv260-raspi-dpu.dtbo /lib/firmware/xilinx/kv260-raspi-dpu/

sudo mv kv260-raspi-dpu.xclbin /lib/firmware/xilinx/kv260-raspi-dpu/

sudo cp /lib/firmware/xilinx/kv260-smartcam/shell.json /lib/firmware/xilinx/kv260-raspi-dpu/
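
At this point, a quick listing should show all four files (the three you copied over plus shell.json) in the new firmware directory:

ls /lib/firmware/xilinx/kv260-raspi-dpu/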

Now, we're ready to launch the application. Note that this will turn off your monitor, so you should be running this over SSH or via the USB serial interface (i.e., USB port and using PuTTY or TeraTerm):

sudo xmutil listapps

sudo xmutil unloadapp

sudo xmutil loadapp kv260-raspi-dpu

sudo xmutil desktop_disable

docker run \
  --env="DISPLAY" \
  -h "xlnx-docker" \
  --env="XDG_SESSION_TYPE" \
  --net=host \
  --privileged \
  --volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
  -v /tmp:/tmp \
  -v /dev:/dev \
  -v /sys:/sys \
  -v /etc/vart.conf:/etc/vart.conf \
  -v /lib/firmware/xilinx:/lib/firmware/xilinx \
  -v /run:/run \
  -it xilinx/smartcam:latest bash

In the Docker container, we'll need to make one slight modification to a file. That will require us to install vim first:

apt-get update -y && apt-get install -y vim

vim /opt/xilinx/kv260-smartcam/share/vvas/facedetect/preprocess.json
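
If you'd rather not edit the file interactively, a single sed command inside the container can make the same change. This is just a sketch: it assumes the file references the existing bitstream as a quoted path ending in .xclbin, and swaps that path for the new one:

sed -i 's|/[^"]*\.xclbin|/lib/firmware/xilinx/kv260-raspi-dpu/kv260-raspi-dpu.xclbin|' /opt/xilinx/kv260-smartcam/share/vvas/facedetect/preprocess.json

If you use the sed command, you can skip the vim steps below; otherwise, continue in the editor.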

Once in vim, hit "i" (for "insert") to start editing the file. Look for the line that points to an "xclbin" file and update it with this path:

/lib/firmware/xilinx/kv260-raspi-dpu/kv260-raspi-dpu.xclbin

Hit the Escape key, type ":wq" (to save and quit), then hit Enter. After that, we can run the application with the following (very long) command:

gst-launch-1.0 mediasrcbin name=videosrc media-device=/dev/media0 v4l2src0::io-mode=mmap v4l2src0::stride-align=256 ! \
  video/x-raw, width=1920, height=1080, format=NV12, framerate=30/1 ! \
  tee name=t ! queue ! \
  vvas_xmultisrc kconfig="/opt/xilinx/kv260-smartcam/share/vvas/facedetect/preprocess.json" ! queue ! \
  vvas_xfilter kernels-config="/opt/xilinx/kv260-smartcam/share/vvas/facedetect/aiinference.json" ! ima.sink_master \
  vvas_xmetaaffixer name=ima ima.src_master ! fakesink \
  t. ! queue max-size-buffers=1 leaky=2 ! ima.sink_slave_0 ima.src_slave_0 ! queue ! \
  vvas_xfilter kernels-config="/opt/xilinx/kv260-smartcam/share/vvas/facedetect/drawresult.json" ! queue ! \
  kmssink driver-name=xlnx plane-id=39 sync=false fullscreen-overlay=true

If everything worked correctly, your monitor (or TV) should turn back on with the feed from your Raspberry Pi camera, and a blue box will be drawn around any faces detected in the video feed in real time.
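
When you're done with the demo, exit the Docker container (type exit), and you can unload the accelerator and hand the display back to the Ubuntu desktop:

sudo xmutil unloadapp

sudo xmutil desktop_enable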

Conclusion

In this article, we reviewed the SmartCam tutorial with the Raspberry Pi camera and walked through the shortcuts needed to "just get it working." At this point, you should have your own SmartCam up and running on the Kria KV260, detecting faces in real time. My goal was to simplify the process so you can focus on the fun of seeing the AI in action rather than fumbling through the original tutorial. Hopefully, this guide has made it quicker and clearer to reach that "it works" moment. Now it's your turn to get creative and explore what else you can do with this powerful starter kit.

Note: All the code for this project can be found in this repository: https://github.com/amahpour/kria-vitis-platforms

About Author

Ari is an engineer with broad experience in designing, manufacturing, testing, and integrating electrical, mechanical, and software systems. He is passionate about bringing design, verification, and test engineers together to work as a cohesive unit.
