
Adam Taylor’s MicroZed Chronicles, Part 147: Cracking HLS Part 4

by Xilinx Employee ‎09-13-2016 07:47 PM - edited ‎09-14-2016 05:02 AM (19,179 Views)

 

By Adam Taylor

 

 

Following on from our examination of how we can use the HLS video libraries, our next step is to understand how we store an image and the subtle differences between OpenCV and the HLS video library.

 

 


 

Different types of edge detection (Original, Laplacian of Gaussian, Canny and Sobel)

 

 

The most basic of OpenCV elements is the cv::Mat class, which defines the image size in X and Y, the pixel information (e.g. the number of bits per pixel), whether the pixel data is signed or unsigned, and how many channels make up a pixel. This class forms the basis for how we store and manipulate images when we use OpenCV.
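
As a point of reference, a minimal OpenCV snippet using cv::Mat might look like the following (standard OpenCV API; the file names are just placeholders):

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("input.jpg");        // 8-bit, 3-channel BGR by default
    if (img.empty()) return 1;

    // Random access: read and modify the pixel at row 10, column 20
    cv::Vec3b pix = img.at<cv::Vec3b>(10, 20);
    pix[0] = 255;                                  // saturate the blue channel
    img.at<cv::Vec3b>(10, 20) = pix;

    cv::imwrite("output.jpg", img);
    return 0;
}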

 

Within the HLS video library there is a similar construct: the hls::Mat. The library also provides a number of functions that convert the hls::Mat class to and from HLS streams, which are the standard interface we use when creating image-processing pipelines. One major difference between the cv::Mat and hls::Mat classes is that hls::Mat is defined as a stream of pixels, whereas cv::Mat is defined as a block of memory. This difference means that we do not have random access to pixels when using hls::Mat.
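
To illustrate the streaming access model, here is a minimal sketch (it assumes the hls::Mat stream operators, hls::Scalar and the HLS_8UC3 type provided by hls_video.h; the invert function itself is purely illustrative and is not part of this design):

#include "hls_video.h"

typedef hls::Mat<1080, 1920, HLS_8UC3> RGB_PIXELS;

// Pixels arrive in raster order and each one is read exactly once;
// there is no equivalent of cv::Mat::at() for random access.
void invert(RGB_PIXELS& src, RGB_PIXELS& dst) {
    for (int row = 0; row < src.rows; row++) {
        for (int col = 0; col < src.cols; col++) {
#pragma HLS PIPELINE II=1
            hls::Scalar<3, unsigned char> pix;
            src >> pix;                        // consume the next pixel from the stream
            for (int ch = 0; ch < 3; ch++)
                pix.val[ch] = 255 - pix.val[ch];
            dst << pix;                        // write it to the output stream
        }
    }
}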

 

A simple example that demonstrates how we can use these libraries is a Gaussian blur of an image. The filter will use AXI Streaming interfaces to input and output the image data stream.

 

Gaussian blurring is typically applied to an image to reduce noise before running edge-detection algorithms such as Sobel or Canny in many embedded-vision applications.

 

The first step is to create the HLS structures we need within a header file so that both the module to be synthesised and the test bench can use them. These type definitions are:

 

  1. HLS streaming interface: this makes the conversion to and from AXI Streams within the test bench easier.

 

typedef hls::stream<ap_axiu<16,1,1,1> >               AXI_STREAM;

 

  2. hls::Mat types: if we are using both RGB and YUV images, we need to define a different type for each.

 

typedef hls::Mat<MAX_HEIGHT, MAX_WIDTH, HLS_8UC2>     YUV_IMAGE;

 

typedef hls::Mat<MAX_HEIGHT, MAX_WIDTH, HLS_8UC3>     RGB_IMAGE;
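
Gathered into one header, the definitions might look something like this (the MAX_WIDTH and MAX_HEIGHT values and the function prototype are my illustrative assumptions; see the GitHub repository for the actual code):

// gaussian_blur.h -- shared by the synthesisable module and the test bench
#ifndef GAUSSIAN_BLUR_H
#define GAUSSIAN_BLUR_H

#include "hls_video.h"
#include "ap_axi_sdata.h"

#define MAX_WIDTH  1920
#define MAX_HEIGHT 1080

// 16-bit AXI4-Stream with 1-bit user/ID/dest side-band signals
typedef hls::stream<ap_axiu<16,1,1,1> >              AXI_STREAM;

typedef hls::Mat<MAX_HEIGHT, MAX_WIDTH, HLS_8UC2>     YUV_IMAGE;
typedef hls::Mat<MAX_HEIGHT, MAX_WIDTH, HLS_8UC3>     RGB_IMAGE;

void gaussian_blur(AXI_STREAM& video_in, AXI_STREAM& video_out,
                   int rows, int cols);

#endif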

 

 

With the basics defined, we are then in a position to generate the module we wish to synthesise and the test bench to check that it is functioning.

Starting with the module we wish to synthesise, the video input and output will use the previously defined AXI_STREAM type. The image size in rows and columns will be supplied over an AXI-Lite interface; we can also use the same interface to provide the ability to enable or disable the filter.

 

Implementing the function we want is very simple: we convert the input video from an AXI Stream into an hls::Mat, apply our filter, and then convert the output hls::Mat back to an AXI Stream.

 

 


 

HLS Function to perform the Gaussian Blur
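
Since the screenshot is not reproduced here, the following is a minimal sketch of what such a function could look like (the interface pragmas, the use of the YUV image type for the 16-bit stream, and the 3x3 kernel size are my assumptions; the actual code on GitHub may differ):

#include "gaussian_blur.h"

void gaussian_blur(AXI_STREAM& video_in, AXI_STREAM& video_out,
                   int rows, int cols) {
#pragma HLS INTERFACE axis      port=video_in
#pragma HLS INTERFACE axis      port=video_out
#pragma HLS INTERFACE s_axilite port=rows
#pragma HLS INTERFACE s_axilite port=cols
#pragma HLS INTERFACE s_axilite port=return
#pragma HLS DATAFLOW

    YUV_IMAGE img_in(rows, cols);
    YUV_IMAGE img_out(rows, cols);

    hls::AXIvideo2Mat(video_in, img_in);      // AXI4-Stream -> hls::Mat
    hls::GaussianBlur<3,3>(img_in, img_out);  // apply a 3x3 Gaussian blur
    hls::Mat2AXIvideo(img_out, video_out);    // hls::Mat -> AXI4-Stream
}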

 

 

Having written the code we wish to synthesise and implement in the Zynq SoC, the next thing we need to do is create a test bench so that we can check the functionality using both C simulation and co-simulation before we include the core within our Vivado design.

 

We will look at this next week, and we’ll also see how we can combine OpenCV and the HLS Libraries in our test bench.

 

The code is available on GitHub as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E Book here
  • First Year Hardback here.

 

 


 

 

 

  • Second Year E Book here
  • Second Year Hardback here

 

 

 


 

 

 

All of Adam Taylor’s MicroZed Chronicles are cataloged here.


Comments
by Adventurer
on ‎09-14-2016 06:26 AM

Is there any reason for the typedef other than making the types shorter to write?

 

Personally I think the usage of typedef in tutorials and examples is counterproductive; I'd much rather see the examples made by Xilinx using the complete type directly (hls::stream<ap_axiu<16,1,1,1> > &video_in) than the shorter but less clear AXI_STREAM &video_in.  Sure, the latter is shorter and probably a good idea to use in actual code, but it fails to illustrate what is actually being created.  Additionally, it forces the user to check two files, the .cpp and the .h, and before seeing the .h it may give the false impression that AXI_STREAM and YUV_IMAGE are actual "standard" types rather than custom types that need to be created (especially since the use of all caps makes them look like some sort of macro).

 

In any case, I think the YUV_IMAGE and RGB_IMAGE and other image/pixel types should be defined directly in the .cpp (if at all), since they are not needed outside of the function.

by Observer taylo_ap
on ‎09-14-2016 12:00 PM

 

Great question. The reason I defined them within their own header file is that I will need to use them in the test bench as well, which will test the function. This gives me a single definition used by both files and avoids me making a mistake in one or the other of the modules that use them.

 

My thought process was to use the header file with the typedefs as I would a package in VHDL (as a VHDL guy, I am not sure what the Verilog equivalent is). This enables common constants to be defined once, so we can be sure that they are the same throughout.

 

I take your point, however, about the clarity and the risk of mistaking them for predefined types; I will make sure I consider this in future blogs.

 

You are right that the pixel definition is only used in the source code for this one example; however, it comes from a larger example I am working on (I only spend a few hours each week on this, so the examples evolve), and as such these types will be used in several source files.

 

Thanks for taking the time to read the blogs and engaging

 

Best

 

Ad

by Adventurer
on ‎09-15-2016 07:18 AM

In that case it might make sense.  I wasn't talking about your article specifically, but about Xilinx documentation in general, which also does this, so I was curious whether there was a reason for it.  For example, in XAPP1167 they also use these types in the top.cpp files, which is misleading because you don't have the typedefs in the same file.

 

In your article this is not a problem because these definitions are just a couple of paragraphs above, but for the aforementioned XAPP examples it causes the information of interest to be spread across multiple files.  This problem could be partially solved with a simple comment like

 

#include "top.h" // AXISTREAM, YUV_IMAGE, RGB_IMAGE

which explains to the reader that these types are declared in that header file.

 
