
Adam Taylor’s MicroZed Chronicles Part 143: Getting Down with Embedded Vision Algorithms

by Xilinx Employee, 08-15-2016

 

By Adam Taylor

 

Over the last two blogs (parts 1 and 2 are here), we demonstrated how quickly and easily we can get an image-tracking algorithm up and running on the Zynq-7000 SoC. We also introduced some embedded-vision techniques, including thresholding and morphological operations.

 

However, embedded vision covers a very wide area, so I want to take some time in this blog to look at some other embedded-vision applications and how we can implement the relevant algorithms. Embedded-vision algorithms break down into the following high-level categories:

 

 

(Figure: Embedded Vision Categories)

 

 

These processing techniques split further into tasks that process and extract information from the image and tasks that use the results of these operations for analysis and decision making.

 

One of the most commonly used embedded-vision techniques is applying filters to the image. Because the information within an image resides in the spatial domain rather than the frequency domain, image-processing filters are typically implemented as convolution filters. That is, we convolve the image with a 2D filter kernel to obtain the desired response.

 

As with 1D convolution, we must consider the filter's impulse response, which is called the point-spread function (PSF) in image-processing applications. To control the filter's behavior, we define a custom PSF for each function, just as we would define different impulse responses for signal-processing filters (e.g. high-pass or low-pass).
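
To make this concrete, here is a minimal 2D-convolution sketch in Python/NumPy (my illustration; the series' own implementations run on the Zynq, and later posts use Vivado HLS). The averaging PSF here is just an example low-pass kernel:

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2D convolution: slide the (flipped) kernel, i.e. the PSF,
    across every pixel and sum the weighted neighborhood."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    # Replicate the border so the output has the same size as the input.
    padded = np.pad(image.astype(np.float64), ((ph, ph), (pw, pw)), mode="edge")
    flipped = kernel[::-1, ::-1]  # convolution flips the kernel
    out = np.empty(image.shape, dtype=np.float64)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * flipped)
    return out

# Example: a 3x3 averaging (low-pass) PSF smooths random test data.
psf = np.full((3, 3), 1.0 / 9.0)
img = np.random.randint(0, 256, (64, 64))
smoothed = convolve2d(img, psf)
```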

 

Most commonly, we implement the filter kernel as a small 2D matrix, which we apply to each pixel in the image. Within the implementation, we need to buffer lines of the image so that the kernel's full neighborhood is available as it slides across the image.
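
In a streaming implementation, such as an FPGA pipeline, pixels arrive in raster order, so we hold the most recent lines in line buffers to assemble each neighborhood without storing the whole frame. A rough Python sketch of the idea (the function name and structure are my own):

```python
import numpy as np

def sliding_3x3(image):
    """Stream the image in raster order, keeping two line buffers so a
    3x3 window is available for each interior pixel, as a hardware
    filter core would, instead of storing the whole frame."""
    rows, cols = image.shape
    buf = np.zeros((2, cols), dtype=image.dtype)  # two most recent lines
    for r in range(rows):
        line = image[r, :]  # the line of pixels 'arriving' now
        if r >= 2:
            window_rows = np.vstack([buf, line[np.newaxis, :]])
            for c in range(1, cols - 1):
                # 3x3 neighborhood centered on the previous line
                yield r - 1, c, window_rows[:, c - 1:c + 2]
        buf = np.vstack([buf[1:], line[np.newaxis, :]])  # shift buffers

# Usage: apply a kernel to each window as it becomes available.
img = np.arange(36).reshape(6, 6)
for r, c, window in sliding_3x3(img):
    response = np.sum(window)  # replace with kernel multiply-accumulate
```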

 

 

(Figure: Example image and filter kernel applied on a pixel-by-pixel basis to the Lena image)

 

 

Within these filter kernels, we can define PSFs for the following (example kernels are sketched after this list):

 

  • Noise Reduction
  • Edge Enhancement
  • Edge Detection
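
For illustration, here are common textbook 3x3 kernels for each of these tasks (the specific coefficients are my example choices, not taken from the original post); each can be fed to the convolve2d sketch above:

```python
import numpy as np

# Noise reduction: a 3x3 Gaussian-style smoothing kernel (sums to 1).
noise_reduction = np.array([[1, 2, 1],
                            [2, 4, 2],
                            [1, 2, 1]], dtype=np.float64) / 16.0

# Edge enhancement: identity plus a Laplacian term, which sharpens
# transitions while leaving flat regions untouched (sums to 1).
edge_enhancement = np.array([[ 0, -1,  0],
                             [-1,  5, -1],
                             [ 0, -1,  0]], dtype=np.float64)

# Edge detection: a Laplacian kernel, which responds only where the
# intensity changes (sums to 0, so flat regions map to zero).
edge_detection = np.array([[0,  1, 0],
                           [1, -4, 1],
                           [0,  1, 0]], dtype=np.float64)
```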

 

These imaging filters are linear filters. There is another class, non-linear filters, which includes order-statistic filters such as the median filter.

 

One of the major differences between linear and non-linear filters is their edge-preservation capability: linear smoothing filters can produce blurred edges, while non-linear filters tend to preserve edges.
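
To see why the median filter is non-linear, here is a minimal NumPy sketch (my illustration, not the series' implementation): the output pixel is an order statistic of the neighborhood rather than a weighted sum, so impulse noise is removed while sharp transitions survive.

```python
import numpy as np

def median_filter_3x3(image):
    """3x3 median filter: each output pixel is the median of its
    neighborhood, an order statistic rather than a weighted sum."""
    padded = np.pad(image, 1, mode="edge")
    out = np.empty_like(image)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = np.median(padded[r:r + 3, c:c + 3])
    return out
```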

 

 

(Figure: Original Lena image (left) and edge-enhanced image (right))

 

 

Depending upon the image, we may need to adjust the contrast to extract the most information from the image. In a low-contrast image, the pixel values are grouped close together, which makes it hard to distinguish features. Contrast enhancement widens the distribution of pixel values to make subsequent image-processing algorithms easier to implement.

 

The contrast of an image can be assessed from its histogram, which shows the distribution of pixel values: a low-contrast image shows a tight grouping of values, while a high-contrast image shows a wide spread. Commonly used contrast-enhancement algorithms include contrast stretching, which can use linear or non-linear mappings, and histogram equalization.
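
Here are minimal NumPy sketches of both techniques (my illustrations, assuming an 8-bit grayscale image that is not a single flat value):

```python
import numpy as np

def contrast_stretch(image):
    """Linear contrast stretch: map the input's [min, max] onto [0, 255].
    Assumes the image is not completely flat."""
    lo, hi = image.min(), image.max()
    scaled = (image.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return scaled.astype(np.uint8)

def histogram_equalize(image):
    """Histogram equalization: remap pixel values through the normalized
    cumulative distribution so the output histogram is roughly flat.
    Assumes an 8-bit grayscale image."""
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[image].astype(np.uint8)
```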

 

 

(Figure: Histogram showing the contrast of the Lena image)

 

 

Many embedded-vision applications require the detection of edges. While edges are easy for the human eye to detect, they can require significant processing in the embedded-vision world. First, we must consider the three general types of edges found in images (illustrated as 1D profiles after this list):

 

  • Step – Change in intensity over one or a small number of pixels
  • Ramp – Change in intensity over a number of pixels, e.g. a gradual increase in pixel value
  • Roof – A brief change in intensity before returning to the original intensity
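
A quick way to visualize these is as 1D intensity profiles; the values below are invented purely for illustration:

```python
import numpy as np

# Invented 1D intensity profiles illustrating the three edge types:
step = np.array([10, 10, 10, 10, 200, 200, 200, 200])  # abrupt transition
ramp = np.array([10, 37, 64, 91, 118, 145, 172, 200])  # gradual transition
roof = np.array([10, 60, 110, 160, 110, 60, 10, 10])   # rises, then returns
```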

 

Typically, the resultant images from edge detection are binary in nature (i.e. white or black). There are of course a number of different algorithms for detecting edges in our images. Three of the most common, demonstrated in the sketch after this list, are:

 

  • Sobel – Uses two 3x3 kernels (one for edges in the X direction, another for edges in the Y direction) from which the gradient magnitude and angle can be determined. This is probably one of the simplest edge-detection approaches because it is gradient based.
  • Canny – A multi-stage process that uses Gaussian filtering to remove noise. Stages include edge-detection operators like the Sobel operator, non-maximum suppression, thresholding, and hysteresis.
  • LoG – Laplacian of Gaussian, applies a Laplacian filter to the results of a Gaussian filter.
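
As an illustration only, here is how these three detectors can be exercised with OpenCV's Python bindings (my choice of tool for the sketch; the series itself implements its vision pipeline on the Zynq, later with Vivado HLS). The file name and threshold values are hypothetical:

```python
import cv2
import numpy as np

# "input.png" is a hypothetical 8-bit grayscale test image.
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Sobel: two 3x3 kernels give the X and Y gradients, from which the
# gradient magnitude and angle are derived.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.sqrt(gx ** 2 + gy ** 2)
angle = np.arctan2(gy, gx)

# Canny: Gaussian smoothing, gradients, non-maximum suppression, and
# hysteresis thresholding happen internally; 100/200 are example
# hysteresis thresholds.
edges_canny = cv2.Canny(img, 100, 200)

# LoG: Gaussian filter first, then a Laplacian of the smoothed image.
smoothed = cv2.GaussianBlur(img, (5, 5), 1.4)
edges_log = cv2.Laplacian(smoothed, cv2.CV_64F)
```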

 

 

(Figure: Edge-detection algorithms, clockwise: original, Laplacian of Gaussian, Canny, and Sobel)

 

 

Both the Canny and LoG operations are often called advanced edge-detection algorithms because they calculate the following (see the 1D sketch after this list):

 

  • First derivative – Used to determine the presence of an edge
  • Second derivative – Used to determine the direction of the edge (e.g. a black-to-white transition)
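
To make the distinction concrete, here is a small NumPy sketch (the profile values are invented): the first derivative of an edge profile peaks where the edge lies, while the sign of the second derivative indicates the direction of the transition.

```python
import numpy as np

# An invented dark-to-light edge profile.
profile = np.array([10, 10, 12, 30, 90, 170, 198, 200, 200], dtype=np.float64)

first = np.gradient(profile)   # peaks at the edge: locates WHERE it is
second = np.gradient(first)    # swings positive then negative across a
                               # dark-to-light edge, so its sign and zero
                               # crossing give the transition's direction
```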

 

Having introduced some common embedded-vision algorithms, we are now going to start looking at how we can use Vivado HLS (high-level synthesis) to implement these algorithms within our embedded-vision application.

 

 

Code is available on GitHub as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E-Book here
  • First Year Hardback here

 

 

(Image: MicroZed Chronicles hardcopy)

 

 

 

  • Second Year E-Book here
  • Second Year Hardback here

 

 

 

(Image: MicroZed Chronicles, Second Year)

 

 

 

All of Adam Taylor’s MicroZed Chronicles are cataloged here.
