
Video Series 30 – Understanding Interlaced Video


What is the difference between Interlaced and Progressive video?

 

For those following my video series, you might remember Video Beginner Series 16: Understanding Video Timing with the VTC IP where I talked about progressive and interlaced video.

I mentioned there that I would explain the difference in a later Video Series entry, so here it is!

So what is progressive video and what is interlaced video?

Progressive (or non-interlaced) video is what you would expect by default: for each frame, you send the pixel values for every line.

If you want to save bandwidth, you can use interlaced video. In interlaced video, you only transfer half of the lines at a time, for example the odd-numbered lines of one frame followed by the even-numbered lines of the next frame.

1.png

 

2.png

Note that in interlaced video, each “frame” (with only half of the lines) is called a field.

Sending half of the lines of each frame to create interlaced fields is called the Scan Line Decimation technique. However, this technique can create flickering if there is a sharp vertical transition in color or intensity.

A better approach, called Vertical Filtering, is to use multiple progressive frames to create an interlaced field.

For example, to create the first line of a field, we can use the mean of the first lines of two consecutive frames.
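The two techniques can be sketched in a few lines of C. This is a minimal illustration on 8-bit grayscale frames; the function names are illustrative, not part of any Xilinx API:

```c
#include <stdint.h>

/* Scan-line decimation: line y of a field is simply line (2*y + parity)
 * of a single progressive frame (parity selects the even or odd field). */
void field_line_decimate(const uint8_t *frame, int width,
                         int y, int parity, uint8_t *out)
{
    const uint8_t *src = frame + (2 * y + parity) * width;
    for (int x = 0; x < width; x++)
        out[x] = src[x];
}

/* Vertical filtering: average the same line of two consecutive
 * progressive frames to reduce interlace flicker. */
void field_line_vfilter(const uint8_t *frame_n, const uint8_t *frame_n1,
                        int width, int y, int parity, uint8_t *out)
{
    int offset = (2 * y + parity) * width;
    for (int x = 0; x < width; x++)
        out[x] = (uint8_t)(((int)frame_n[offset + x] +
                            (int)frame_n1[offset + x] + 1) / 2);
}
```

With decimation each field samples only one frame, so a sharp vertical edge alternates position between fields and flickers; the averaging in `field_line_vfilter` smooths that transition at the cost of some vertical detail.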


How to transport interlaced content on AXI4-Stream:

 

On the AXI4-Stream interface, transmitting interlaced content is similar to transmitting progressive content: the tuser signal is asserted for the first pixel of a field and the tlast signal is asserted for the last pixel of each line, exactly as for a progressive frame.

The only difference is the fid signal, which is not required for progressive video (in progressive mode it should be tied to 0). The fid signal indicates whether an odd or an even field is currently being transmitted, so it toggles with every field.
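As a toy illustration of this behavior (plain C, with illustrative names that are not part of any Xilinx API), a source would drive fid like this:

```c
#include <stdbool.h>

/* Toy model of a video source driving the AXI4-Stream fid signal. */
typedef struct {
    bool interlaced;
    int  fid;        /* 0 = even field, 1 = odd field */
} video_src_t;

/* Called once per field (once per frame for a progressive source). */
int next_fid(video_src_t *s)
{
    if (!s->interlaced)
        return 0;    /* fid tied to 0 for progressive video */
    s->fid ^= 1;     /* toggles for every field when interlaced */
    return s->fid;
}
```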

3.jpg

 


From Interlaced to Progressive content

 

For some applications, you will need to convert interlaced video data to progressive video. This operation is called deinterlacing.

In Xilinx devices, you can use the Video Processing Subsystem to convert interlaced video to progressive video.


Example of Deinterlacing using the Xilinx VPSS IP

 

The attached example shows the Video Processing Subsystem configured in Deinterlacer only mode.

The design is based on the one in  Video Series 28: Using the VPSS IP in Color Space Converter mode.

In Vivado 2019.1, the Xilinx Test Pattern Generator can be used to generate interlaced content. This is what I am using as the interlaced source.

Hardware changes compared to the Video Series 28 design

 

In this example, I have only made a few changes to configure the VPSS as a deinterlacer only (it was configured as a color space converter in the Video Series 28 example):

  • I changed the VPSS configuration to Deinterlacer only

4.jpg

 

  • In the Deinterlacer tab, I kept the option Enable Motion Adaptive Deinterlacing. Because of this option, the VPSS will use a frame buffer and will therefore have an AXI4 memory-mapped interface to access memory

5.jpg

 

  • I connected the AXI4 Memory Mapped interface to the PS DDR

    6.jpg

     

  • And finally, I connected the fid signal from the TPG to the VPSS as the stream is now interlaced

    7.jpg

     

Software changes compared to the Video Series 28 application

 

Only a few changes were required in the application compared to the Video Series 28 application:

  • As we are no longer doing color conversion in the VPSS, the color space is fixed to YUV 4:2:2 for both the input and the output
XVidC_ColorFormat colorFmtIn = XVIDC_CSF_YCRCB_422;
  • When configuring the TPG, the height is half of the frame height, because each field only carries half of the lines
app_hdmi_conf_tpg(&tpg_inst, Height/2, Width, colorFmtIn, XTPG_BKGND_COLOR_BARS);
  • The TPG is configured to output interlaced
app_hdmi_conf_tpg_interlaced(&tpg_inst, 1);
  • And the input stream is set as interlaced
StreamIn.IsInterlaced   = 1;
  • We need to define the base address used by the DMA engine inside the VPSS. Note that this needs to be done before initializing the VPSS (XVprocSs_CfgInitialize)
XVprocSs_SetFrameBufBaseaddr(&VprocInst,0x10600000);
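Put together, the changes above amount to a short configuration sequence. The sketch below simply combines the snippets listed above; the variables (tpg_inst, VprocInst, StreamIn, Height, Width) and the app_hdmi_conf_tpg* helpers come from the Video Series 28 application, and the final comment only indicates ordering:

```c
/* Sketch: deinterlacing-related setup, combining the changes above.
 * Variables and helpers come from the Video Series 28 application. */
XVidC_ColorFormat colorFmtIn = XVIDC_CSF_YCRCB_422;  /* YUV 4:2:2 in and out */

/* A field carries half the lines of a frame, hence Height/2. */
app_hdmi_conf_tpg(&tpg_inst, Height/2, Width, colorFmtIn, XTPG_BKGND_COLOR_BARS);
app_hdmi_conf_tpg_interlaced(&tpg_inst, 1);  /* TPG outputs interlaced fields */

StreamIn.IsInterlaced = 1;                   /* input stream is interlaced */

/* Frame buffer base address for the VPSS DMA engine; this must be set
 * BEFORE XVprocSs_CfgInitialize is called. */
XVprocSs_SetFrameBufBaseaddr(&VprocInst, 0x10600000);
/* ...then initialize the VPSS as in the Video Series 28 application
 * with XVprocSs_CfgInitialize(...). */
```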

Generate the Design

  1. Download the tutorial files and unzip the folder

  2. Open Vivado 2019.1

  3. In the Tcl console, cd into the unzipped directory (cd XVES_0030/hw)

  4. In the Tcl console, source the Tcl script (source ./create_proj.tcl)

  5. Generate the BD output products, then run Synthesis and Implementation and generate a bitstream

  6. Export the Hardware including the bitstream to XVES_0030/sw/sdk_export

  7. Start the Xilinx Software Command Line Tools (XSCT) 2019.1 either from the Windows menu or Command line:
  • From the Windows menu, select the following:

Start > All Programs > Xilinx Design Tools > Xilinx Software Command Line Tool 2019.1

  • From the command line:

Use the command xsct (the environment variables for SDK 2019.1 need to be set)

  8. In xsct, cd to XVES_0030/sw

  9. Use the command source create_SW_proj.tcl

  10. Open SDK and select XVES_0030/sw/sdk_workspace as the workspace
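The scripted parts of the flow above can be sketched from a shell as follows. This is a sketch only, assuming the Vivado and XSCT 2019.1 environments are set up; synthesis, implementation, and the hardware export still happen in the Vivado GUI:

```shell
# Sketch: scripted parts of the build flow (Vivado/XSCT 2019.1 on PATH).
cd XVES_0030/hw
vivado -source ./create_proj.tcl    # open Vivado and create the project
# ...generate BD output products, run synthesis/implementation,
#    generate the bitstream, export hardware to sw/sdk_export (GUI)...

cd ../sw
xsct                                # Xilinx Software Command Line Tool
# inside xsct:
#   source create_SW_proj.tcl
```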


Liked this Video Series?

  • You can give Kudos using the Kudos button
  • Share it on social media using the Share button

  • Feel free to comment on this topic or to create a new topic on the forums to ask questions

Want more from the Video Series?