

Adam Taylor’s MicroZed Chronicles Part 140: Embedded Vision, HLS, and OpenCV on the Zynq-7000 SoC



By Adam Taylor


Over the last several blogs, we have looked at how we build a Linux system using both the RAM Disk and file system approaches. The most recent blog culminated with the ZedBoard functioning as a single board computer.


Over the coming weeks, I want to explore embedded vision. For these vision applications, we are going to be using the following:


  • ZedBoard running as a single board computer
  • Avnet Embedded Vision Kit
  • OpenCV
  • High-Level Synthesis and how we can use it in image-processing applications






Zynq SBC running simple OpenCV demo



An increasing number of embedded applications use vision, ranging from simple security and monitoring systems to robotics, driver awareness, and medical imaging. We must also remember that embedded vision can cover a wider section of the electromagnetic spectrum, from ultraviolet, which is used in scientific imaging, to infrared, commonly used for night vision, security and safety, and thermography.


I think it’s sensible to dedicate a number of blogs to embedded vision topics to see the different approaches, the challenges, and how we can overcome the challenges.


No matter which region of the spectrum we are working in, in a typical embedded vision system we will want to:


  • Configure the imaging device to output images in the correct format, frame rate, etc.
  • Process the received raw data. Processing examples include color filter interpolation if a Bayer filter is used, color-space conversions and corrections, image enhancement (e.g. noise filtering), edge enhancement, etc. Depending upon the sensor used, processing can be straightforward or pretty complicated.
  • Implement the image-processing algorithms required for our application. Typically, this can require a number of stages and is computationally intensive.


The beauty of the Zynq-7000 SoC is that we can perform image processing operations within the PS (processor system) or the PL (programmable logic) and indeed we can use tools like we have looked at previously (such as SDSoC) to accelerate PS performance. Or we can use HLS (High Level Synthesis) to generate RTL modules used in the image-processing algorithm.


When we develop image-processing applications, we normally use HLLs (high-level languages) and libraries of algorithms to save time. One such collection of image-processing algorithms is OpenCV, which provides a number of C++ functions for real-time, computer-vision applications.


We can use OpenCV on Microsoft Windows or Linux machines. This means we can develop our algorithm on a development machine and then cross compile it to run on the Zynq SoC’s PS under Linux. Even more exciting, Xilinx Vivado HLS supports OpenCV. We can create AXI Streaming IP modules and drop them into the Zynq SoC’s PL within the image-processing chain. Now that is cool.
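To give a flavor of what such a PL module looks like, here is a minimal sketch in the style Xilinx uses in its application notes, assuming Vivado HLS's hls_video.h library. The function name, image dimensions, and the dilate operation are purely illustrative, and this only builds under Vivado HLS:

```cpp
// Sketch of an OpenCV-style AXI Streaming IP module for Vivado HLS.
// Requires the hls_video.h library shipped with Vivado HLS.
#include <hls_video.h>

#define MAX_HEIGHT 1080
#define MAX_WIDTH  1920

typedef hls::stream<ap_axiu<24,1,1,1> > AXI_STREAM;
typedef hls::Mat<MAX_HEIGHT, MAX_WIDTH, HLS_8UC3> RGB_IMAGE;

void image_filter(AXI_STREAM &video_in, AXI_STREAM &video_out,
                  int rows, int cols) {
#pragma HLS INTERFACE axis port=video_in
#pragma HLS INTERFACE axis port=video_out
#pragma HLS DATAFLOW
    RGB_IMAGE img_in(rows, cols);
    RGB_IMAGE img_out(rows, cols);
    hls::AXIvideo2Mat(video_in, img_in);    // AXI Stream -> hls::Mat
    hls::Dilate(img_in, img_out);           // example OpenCV-like operation
    hls::Mat2AXIvideo(img_out, video_out);  // hls::Mat -> AXI Stream
}
```

Synthesized, this becomes an RTL block with AXI Stream interfaces that can be dropped into the image-processing chain in the PL.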


First, we’ll look at how we can use Vivado HLS and OpenCV on the ZedBoard SBC that we created last week. To use OpenCV on the Zynq SoC, we need to install both the include files and the libraries it uses. We can then develop the OpenCV code.


The first step for the Zynq SBC is to open a terminal window and download OpenCV. There are a number of ways to do this, including building it from scratch using the source; however, I opted for the simplest method and used the package manager. In my defense, I am time-limited when I write these blogs, and if you are under a similar time crunch, this approach will suit you too.


I used this command to load OpenCV on the Zynq SBC:



sudo apt-get install libopencv-dev



Once the OpenCV files were installed, I was ready to write my first OpenCV application, which opens and displays a specific file. You can find this on my GitHub page. (See below for a link.)


When it comes to compiling the code, we can use GCC from the command line:



g++ `pkg-config --cflags opencv` <filename.cpp> `pkg-config --libs opencv` -o <output name>



When I ran the resulting program, the image shown above appeared.


So far, we have assumed that we want to develop the embedded-vision application directly on the Zynq SBC. However, Xilinx SDK also comes with the OpenCV libraries, so we can instead develop our application using SDK on a workstation host and then upload the executable to our SBC or to another Zynq implementation. To do this, we need to make SDK aware of the include directory and the library locations. The screenshots below show these settings for my installation of Xilinx SDK:





Setting Include Directory in SDK






Setting Libraries in Xilinx SDK



Now we have pipe-cleaned the process. Next week, we’ll look a little more at different image-processing applications using this SBC and the many different ways we can implement them.




The code is available on GitHub as always.


If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.




  • First Year E-Book here
  • First Year Hardback here







  • Second Year E-Book here
  • Second Year Hardback here






