

Adam Taylor’s MicroZed Chronicles Part 148: Cracking HLS Part 5



By Adam Taylor


With the core written, we want to be able to test it, initially using a C simulation to prove that it does what we desire and then again using co-simulation against the synthesised HDL to verify that the HDL functions as required. We can use the same test bench for both the C and HDL testing using the HLS environment. Within the test bench, we need to perform the following steps:


  • Load the image we wish to work with into a cv::Mat
  • Convert the image from RGB to YUV
  • Convert the cv::Mat to an IplImage
  • Convert the IplImage into an AXI Stream
  • Call the synthesizable function
  • Convert the AXI Stream back to an IplImage
  • Convert the YUV image back to RGB
  • Write the image to file


This approach allows us to take in an image file, apply a Gaussian filter to it, and then save the result to a file for later examination. We convert from RGB to YUV because it is common to process images in this color space, and the example we will eventually build from this preparatory work using the EVK also employs it, so the input image must be in the same color space.


The color space we will be using is YUV 4:2:2, which means that each pixel can be represented in 2 bytes. If we were to use the OpenCV cvtColor function, it would return a YUV 4:4:4 image. We therefore need a specific routine to subsample the image, converting it from 4:4:4 to 4:2:2.
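To make the 4:4:4 to 4:2:2 conversion concrete, here is a minimal sketch of such a subsampler (an illustration, not the routine from the blog's repository). For each horizontal pair of pixels it keeps both luma (Y) samples but shares one U and one V sample, averaging the pair's chroma, giving 2 bytes per pixel on average.

```cpp
#include <cstdint>
#include <vector>

// Illustrative YUV 4:4:4 -> 4:2:2 subsampler.
// Input:  interleaved Y,U,V bytes per pixel (3 bytes/pixel).
// Output: Y0,U,Y1,V per horizontal pixel pair (4 bytes per 2 pixels).
// Chroma of each pair is averaged; width must be even.
std::vector<uint8_t> yuv444to422(const std::vector<uint8_t>& src,
                                 int width, int height) {
    std::vector<uint8_t> dst;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; x += 2) {
            const uint8_t* p0 = &src[(y * width + x) * 3];
            const uint8_t* p1 = p0 + 3;
            dst.push_back(p0[0]);                          // Y0
            dst.push_back((p0[1] + p1[1] + 1) / 2);        // shared U
            dst.push_back(p1[0]);                          // Y1
            dst.push_back((p0[2] + p1[2] + 1) / 2);        // shared V
        }
    }
    return dst;
}
```

Note that the output is half the chroma bandwidth of the input: a width x height 4:4:4 frame of 3 x width x height bytes becomes 2 x width x height bytes in 4:2:2.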






Once we have generated both the function and the test bench file, the next step is to use C simulation to ensure that the function performs as desired. If it does not, we can quickly and easily modify the test bench or the source code and re-simulate until we obtain the desired results.


Once assured that we’ve properly defined the desired function at the C level, we then synthesise the function using Vivado HLS to produce HDL. As part of the synthesis process, Vivado HLS estimates the resource utilization. For this example, where we are targeting the Zynq Z-7020 SoC, the estimate showed that the following resources would be required:






Resource Estimation



We now wish to perform co-simulation with the synthesized output. This step allows us to use the C test bench to stimulate the generated HDL using an HDL simulation tool. The images below show the original input image and the resultant output image.






Original Image






Co-Simulation Gaussian Blur 3x3



Once the co-simulation is complete, we can examine the resultant output image and, if we wish, the simulation waveforms. There is also a status report on the co-simulation, which reports not only the pass/fail status but also the function latency. We can compare this co-simulation latency against the expected latency in the synthesis report. However, the expected latency in the synthesis report is based on handling the maximum row and column sizes, while the latency in the co-simulation results is based on the actual image size passed to the function.
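A rough back-of-the-envelope model shows why the two latency figures differ. Assuming (purely for illustration) a streaming pipeline that accepts one pixel per clock plus a small fixed overhead, the synthesis report works from the maximum frame size while co-simulation sees the frame actually streamed:

```cpp
#include <cstdint>

// Illustrative only: a streaming filter at one pixel per clock has a
// latency of roughly rows * cols cycles plus a fixed pipeline overhead.
// The synthesis report assumes the maximum supported frame size;
// co-simulation reflects the frame actually passed in.
uint64_t approxLatency(int rows, int cols, int overhead) {
    return static_cast<uint64_t>(rows) * cols + overhead;
}
```

For example, with a maximum frame of 1920x1080 the report would quote on the order of 2,073,600 cycles, while co-simulating a 640x480 test image would show closer to 307,200.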






Once we are happy with the co-simulation and know that the function works as intended, the final step is to export the module into our Vivado IP library and insert the new IP module into our image-processing chain. We can achieve this very easily by using the Export RTL option and completing the configuration options as desired, as shown below:







This will package the IP. You will find the packaged IP core, along with a Zip archive of it, within your project's solution directory.


We can then open our Vivado design and import the IP core we just created from the IP Catalog. However, to do this we first need to create an IP Repository within our project, using the project settings dialog on the IP tab:





Creation of the IP repository with this example highlighted



After we create the IP Repository, it will not contain any IP cores. We need to add cores to it using the IP Catalog. Within the IP Catalog, you should see the repository we just created. Right-click on the repository and then select the Add IP to Repository option.






This will open a dialog box so that we can select the IP Core we wish to add to the repository. We can select either the component.xml or the zipped archive. When this is complete, you will see the IP core located within the Repository, ready for use in a block diagram.





Having shown how we can quickly and easily get image-processing functions up and running, in the next blog post I will start to look at how we can get image data into and out of our system.


Incidentally, I am attending the Embedded Systems Conference in Minneapolis next week and giving several talks, including one on High-Level Synthesis. If you are attending, please come by and say hello.



Code is available on GitHub as always.


If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.




  • First Year E-Book here
  • First Year Hardback here







  • Second Year E-Book here
  • Second Year Hardback here








All of Adam Taylor’s MicroZed Chronicles are cataloged here.