Over the years we have looked at many embedded vision systems, as they form a cornerstone of many exciting applications, e.g. vision-guided robotics, autonomous vehicles, etc.
However, as I was creating a recent embedded vision project targeting the Genesys ZU platform, I noticed several of the key building blocks have been deprecated and replaced with more capable IP blocks after Vivado 2019.1.
Of course, as these blocks are more capable, they require a little more configuration. So I thought a deep-dive blog would be of interest.
Overall imaging system
The complete list of blocks which have been replaced can be found here, but the two main blocks I used in the demo are:
Sensor Demosaic: This replaces the color filter array IP and performs debayer operations to convert raw pixels to RGB.
Gamma LUT: This replaces the gamma correction IP and performs gamma correction.
These IP cores are crucial when we are working with video processing systems; without them, we would have to create our own implementations, increasing development time.
One big advantage of the new IP cores over the ones they replaced is that they are bundled with Vivado, so no separate licenses are required.
Let's start with the simpler of the two blocks, the Sensor Demosaic. This block converts the RAW pixel values received from the sensor into pixels which contain RGB elements. The size of the output pixel depends on the input width; for example, a 10-bit RAW pixel results in a 30-bit RGB pixel on the output.
Configuration of the block in Vivado is quite straightforward: all we need to define is the maximum image size, the maximum input pixel width, and the interpolation method.
The rest of the configuration is achieved over the AXI Lite interface at run time. This enables us to change the image size on the fly, which is very useful when high frame rates are required from the sensor, since reducing the image size increases the achievable frame rate. It is also possible to change the debayer pattern on the fly should the sensor change as development progresses.
Setting up the Sensor Demosaic in SDK or Vitis is pretty straightforward as well. The BSP provides a sensor demosaic driver, xv_demosaic — within this driver are all of the functions needed to get up and running.
Using this driver, we can configure the demosaic IP block to debayer the image and stream out RGB pixels as shown in the code below.
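A minimal sketch of that configuration is below. It assumes the standard HLS-generated functions of the xv_demosaic driver; the device ID macro and the image dimensions are placeholders for your own design, and the Bayer phase value must match your sensor's color filter array.

```c
/* Sketch: run-time configuration of the Sensor Demosaic IP via the
 * BSP's xv_demosaic driver. Device ID macro and image size are
 * placeholders for your own design. */
#include "xv_demosaic.h"
#include "xparameters.h"
#include "xstatus.h"

XV_demosaic DemosaicInst;

int demosaic_init(void)
{
    int status;

    /* Look up the configuration generated for this design and
       initialise the driver instance against it */
    status = XV_demosaic_Initialize(&DemosaicInst,
                                    XPAR_V_DEMOSAIC_0_DEVICE_ID);
    if (status != XST_SUCCESS)
        return status;

    /* Active image size must match the incoming AXI4-Stream video */
    XV_demosaic_Set_HwReg_width(&DemosaicInst, 1280);
    XV_demosaic_Set_HwReg_height(&DemosaicInst, 720);

    /* Bayer phase of the sensor's colour filter array
       (0 here as an example -- check your sensor's datasheet) */
    XV_demosaic_Set_HwReg_bayer_phase(&DemosaicInst, 0);

    /* Process every frame automatically and start the core */
    XV_demosaic_EnableAutoRestart(&DemosaicInst);
    XV_demosaic_Start(&DemosaicInst);

    return XST_SUCCESS;
}
```

Because the width, height, and Bayer phase registers are written at run time, the same bitstream can follow the sensor through resolution or pattern changes during development.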
Once we have converted the RAW pixels to an RGB pixel value, the next stage in many processing pipelines is to correct the image for gamma.
Gamma correction is performed to adapt the linear RGB pixel values to match the non-linear characteristics of displays.
The Gamma LUT IP block allows us to correct for gamma on the fly, using software to update the lookup tables within the IP core. Again, configuration within Vivado is very simple, requiring only the image size, pixel size, and number of pixels per clock.
Within the software environment, we can then use the xv_gamma_lut drivers provided as part of the BSP to configure the IP core.
When it comes to driving this IP core at run time, we also need to configure the image height and width, along with the video format (RGB, YUV, etc).
However, we also need to configure the gamma correction tables; without them configured, the output video will be blank.
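The basic run-time configuration can be sketched as below, again assuming the HLS-generated functions of the BSP's xv_gamma_lut driver; the device ID macro, image size, and the video format encoding are assumptions that depend on your design.

```c
/* Sketch: run-time configuration of the Gamma LUT IP via the BSP's
 * xv_gamma_lut driver. Device ID macro, image size, and the video
 * format value are placeholders for your own design. */
#include "xv_gamma_lut.h"
#include "xparameters.h"
#include "xstatus.h"

XV_gamma_lut GammaInst;

int gamma_lut_init(void)
{
    int status;

    status = XV_gamma_lut_Initialize(&GammaInst,
                                     XPAR_V_GAMMA_LUT_0_DEVICE_ID);
    if (status != XST_SUCCESS)
        return status;

    /* Image size must match the incoming AXI4-Stream video */
    XV_gamma_lut_Set_HwReg_width(&GammaInst, 1280);
    XV_gamma_lut_Set_HwReg_height(&GammaInst, 720);

    /* Video format register: the value selecting RGB depends on the
       core configuration -- 0 is used here as an assumed example */
    XV_gamma_lut_Set_HwReg_video_format(&GammaInst, 0);

    /* The gamma tables must still be written before the core is
       started, otherwise the output video is blank */
    return XST_SUCCESS;
}
```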
There are several algorithms that can be used for gamma correction; my recent example used the following approach.
This populates an array with the gamma correction factors — the size of the array depends on the size of the pixel. An 8-bit pixel requires 256 entries while a 10-bit pixel requires 1,024 entries and so on.
The Gamma LUT IP contains three memory regions to store the lookup tables. Each region maps to one color channel:
LUT0 = Red Channel
LUT1 = Green Channel
LUT2 = Blue Channel
This enables the use of different correction factors for each color channel if desired.
Within the BSP driver, there are several functions which can be used to configure the Gamma LUT. Functions are also provided to download the LUT values into the core's three memory regions.
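A sketch of that download step is below, loading the same table into all three channel LUTs. It assumes the word-access helpers the HLS flow generates for each LUT memory; the exact function names and the packing of 16-bit entries into 32-bit words may differ with your driver version, so check the generated xv_gamma_lut.h.

```c
/* Sketch: download one gamma table into all three channel LUTs
 * (LUT0 = red, LUT1 = green, LUT2 = blue), then start the core.
 * The Write_HwReg_gamma_lut_N_Words helpers and the 16-bit-entry
 * packing are assumptions based on the HLS-generated driver. */
#include <stdint.h>
#include "xv_gamma_lut.h"

extern XV_gamma_lut GammaInst;   /* initialised elsewhere */

void gamma_lut_load(uint16_t *table, int entries)
{
    /* Two 16-bit LUT entries per 32-bit word */
    int words = entries / 2;

    XV_gamma_lut_Write_HwReg_gamma_lut_0_Words(&GammaInst, 0,
                                               (u32 *)table, words);
    XV_gamma_lut_Write_HwReg_gamma_lut_1_Words(&GammaInst, 0,
                                               (u32 *)table, words);
    XV_gamma_lut_Write_HwReg_gamma_lut_2_Words(&GammaInst, 0,
                                               (u32 *)table, words);

    /* With the tables loaded, the core can be started */
    XV_gamma_lut_EnableAutoRestart(&GammaInst);
    XV_gamma_lut_Start(&GammaInst);
}
```

Loading a different table into each region is how per-channel correction factors, mentioned above, would be applied.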