Adam Taylor’s MicroZed Chronicles Part 141: OpenCV EV Object Tracking Part One

by Xilinx Employee on 08-01-2016 09:07 AM

 

By Adam Taylor

 

Having now gotten the ZedBoard running OpenCV, I want to spend some time developing some embedded vision applications and demonstrating how they function before moving on to look at how we can accelerate them within the programmable logic (PL) in the Zynq-7000 SoC.

 

After getting the ZedBoard functioning as a single-board computer and installing OpenCV, my next step is to demonstrate how we can develop an algorithm that tracks objects within a frame and draws a box around the objects being tracked.

 

 

Image1.jpg – Initial object-detecting algorithm

 

 

 

To do this I need a camera, so I connected a webcam to the SBC. The webcam I used is supported by UVC, the USB Video Class driver. We can check that the kernel we are using supports this class by running the following commands with the webcam connected:

 

lsusb – This lists all of the connected USB devices; the results on my system are shown below.

 

 

Image2.jpg – lsusb output listing the connected USB devices

 

 

 

The next command is:

 

usb-devices – This lists the drivers bound to each of the connected USB devices; again, the results are shown below.

 

 

Image3.jpg – usb-devices output showing the drivers for the connected devices

 

 

I installed the following programs on the ZedBoard SBC to ease development:

 

  • OpenSSH server – to enable me to transfer files between the ZedBoard SBC and my laptop
  • GUVCView – to enable me to test that the webcam is working correctly prior to development

 

I have also installed Python support for OpenCV so that we can develop applications in Python if we want (using the python-opencv, python-dev, and python-numpy packages). As an interpreted language, Python runs more slowly than a compiled C executable; however, there are cases where we might want to use Python instead of C.
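
As a quick sanity check that the Python bindings and the webcam work together, something along these lines can be run on the ZedBoard SBC. This is a minimal sketch, assuming the webcam enumerates as video device 0 and that we simply want to grab and save one frame for inspection:

import cv2

cap = cv2.VideoCapture(0)                 # open the first video device (index 0 assumed)
if not cap.isOpened():
    raise SystemExit("Could not open the webcam - check the device node exists")

ret, frame = cap.read()                   # grab a single frame
if ret:
    print("Captured a frame of size: {}".format(frame.shape))
    cv2.imwrite("test_frame.jpg", frame)  # save it so we can copy it back over SSH
else:
    print("The webcam opened but returned no frame")

cap.release()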

 

Coming back to the task at hand, there are a number of different methods we can use for object detection:

 

  • Blob Detection
  • Background Subtraction
  • Histogram of Oriented Gradients (HOG) with an appropriate Support Vector Machine (SVM) classifier
  • Cascade Classifier using a Haar or Local Binary Pattern classifier

 

Because we are developing an embedded vision application, I am going to use a background-subtraction detection method for this first example, because it allows me to introduce a number of concepts that we will reuse across the applications to follow. I promise to come back to the other algorithms in subsequent blogs, because they suit a number of different applications.

 

The algorithm we will use is very simple. The first image taken from the web camera will be used as the reference image; we will assume that this image contains only the background. We will then calculate the difference between this background and each newly captured image to detect new objects in the frame.
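
A minimal sketch of that idea in Python and OpenCV might look like the following. The device index, the variable names, and the use of a single later frame are illustrative assumptions, not the exact code we will walk through next time:

import cv2

cap = cv2.VideoCapture(0)                      # webcam, device index 0 assumed

ret, reference = cap.read()                    # first frame: assumed to be background only
background = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)   # grey scale, as discussed below

ret, frame = cap.read()                        # a later frame to compare
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# pixel-wise absolute difference between the background reference and the
# current frame; large values mark areas where something new has appeared
difference = cv2.absdiff(background, gray)

cap.release()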

 

While the world we see and capture via the camera is imaged in full color, a number of the image-processing techniques we will use to implement this algorithm use grey-scale or even binary black-and-white images (0 or 1), which reduces the processing required and makes image processing more efficient for embedded applications.
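
To illustrate the reduction, reading back the frame saved by the earlier check (the file name is the hypothetical one used above) shows the drop from three colour channels to one, and then to a binary image; the threshold value of 128 is purely illustrative:

import cv2

frame = cv2.imread("test_frame.jpg")             # the colour frame saved earlier
print("Colour frame: {}".format(frame.shape))    # e.g. (480, 640, 3): three channels per pixel

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
print("Grey scale:   {}".format(gray.shape))     # e.g. (480, 640): a single channel

# thresholding the grey-scale image then gives a binary (0 or 255) image
ret, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)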

 

The main concepts we need to understand when generating a background-subtraction system are listed below; a short sketch pulling them together follows the list.

 

  • Color-Space Conversion – Converting from color to grey scale reduces processing requirements.

 

  • Thresholding – This is a commonly used image-segmentation technique that we can use to create binary images. Image segmentation covers a number of techniques that partition an image into multiple segments, often called superpixels, which allows easier analysis of each segment's contents. In our application, we will use thresholding to segment the background from the foreground, producing a binary image.

 

  • Morphological Operations – A group of image-processing techniques used on binary images to help determine structure. There are four basic morphological operations:
  1. Erosion – Every non-background pixel that touches a background pixel is converted into a background pixel. This makes an object smaller and may even fracture it into a number of parts.
  2. Dilation – Every background pixel that touches a non-background pixel is converted into a non-background pixel. This has the opposite effect of erosion and makes the object larger.
  3. Opening – Opening an image performs an erosion operation followed by a dilation operation and is used to remove small foreground elements such as noise.
  4. Closing – Closing an image performs a dilation followed by an erosion. We use this morphological operation when we want to remove small elements of the background, such as small holes within an object.

 

Within the background detection algorithm, we will use dilation on the result of the threshold operation.

 

  • Structure detection – Once we have the binary image created by the thresholding and morphological operations, we can look for structural elements such as contours to identify the difference between the foreground and the background.

 

  • Identifying the object – This operation uses the results of structure detection to draw a colored box around the detected objects. This box is applied to the original color frame so that we can see the object the algorithm has detected.
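
Pulling these concepts together, a rough Python/OpenCV sketch of the detection chain (difference against the reference, threshold, dilate, find contours, and box anything large enough) might look like this. The threshold, kernel size, and minimum-area values are placeholder figures for illustration rather than tuned values, and the capture steps repeat the earlier sketch so the example stands on its own:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                         # webcam, device index 0 assumed
ret, reference = cap.read()                       # first frame: background reference
background = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)

ret, frame = cap.read()                           # current frame to analyse
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
difference = cv2.absdiff(background, gray)        # change against the reference

# Thresholding: segment the changed pixels into a binary mask
ret, mask = cv2.threshold(difference, 25, 255, cv2.THRESH_BINARY)

# Morphological dilation: grow the foreground regions to close small gaps
kernel = np.ones((5, 5), np.uint8)
mask = cv2.dilate(mask, kernel, iterations=2)

# Structure detection: extract contours from the binary mask
# ([-2] picks the contour list whichever OpenCV 2.x/3.x return signature is in use)
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

# Identifying the object: box anything larger than a small noise threshold
for c in contours:
    if cv2.contourArea(c) > 500:
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detections.jpg", frame)              # save the annotated colour frame
cap.release()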

 

Image4.jpg – Output image and thresholding output post dilation, with no change to the background reference

 

 

 

Image5.jpg – Output image and thresholded image post dilation, with the camera moved slightly to register a large change against the reference background

 

 

 

Having introduced these concepts, I will explain in detail how we implement the algorithm using OpenCV in both C++ and Python in the next blog.

 

If you want to follow along but do not have the time to generate your own SBC build, the Xilinx University Programme provides an SBC design that supports a web camera. It’s available here.

 

 

 

The code is available on GitHub as always.

 

If you want e-book or hardback versions of previous MicroZed Chronicles blogs, you can get them below.

 

 

 

  • First Year E-Book here
  • First Year Hardback here

 

 

 MicroZed Chronicles hardcopy.jpg

 

 

 

  • Second Year E-Book here
  • Second Year Hardback here

 

 

MicroZed Chronicles Second Year.jpg 

 

 

 

 
