I mentioned there that I would explain the difference in a later Video Series entry, so here you are!
So what is progressive video and what is interlaced video?
Progressive video (or non-interlaced video) is what you would expect by default for video. For each frame, you are sending all of the pixel values for every line.
If you want to save some bandwidth, you can use interlaced video. In interlaced video, you only transfer half of the lines, for example the odd-numbered lines of a frame followed by the even-numbered lines of the next frame.
Note that in interlaced video, each “frame” (with only half of the lines) is called a field.
Sending half of the lines of each frame to create interlaced fields is called the Scan Line Decimation technique. However, this technique can cause flickering when there is a sharp vertical transition in color or intensity.
A better approach, called Vertical Filtering, is to use multiple progressive frames to create an interlaced field.
For example, to create the first line of a field, we can use the mean value of the first lines of two consecutive frames.
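The two techniques above can be sketched in a few lines of Python. Frames are modeled as simple lists of line values (one value per line), and the frame contents are made-up numbers purely for illustration:

```python
# Two ways to build an interlaced field from progressive frames.
# Each frame is a list of per-line values (a real frame would hold pixels).

def decimate(frame, parity):
    """Scan Line Decimation: keep only the even (parity=0) or odd (parity=1)
    lines of a single frame."""
    return frame[parity::2]

def vertical_filter(frame_a, frame_b, parity):
    """Vertical Filtering (as described above): average the corresponding
    lines of two consecutive frames to build the field."""
    return [(a + b) / 2
            for a, b in zip(frame_a[parity::2], frame_b[parity::2])]

frame1 = [10, 20, 30, 40]  # lines 0..3 of frame 1 (illustrative values)
frame2 = [14, 24, 34, 44]  # lines 0..3 of frame 2

print(decimate(frame1, 0))                 # even field from frame 1: [10, 30]
print(vertical_filter(frame1, frame2, 0))  # filtered even field: [12.0, 32.0]
```

Because the filtered field blends two frames, a sharp transition present in only one of them is softened, which is what reduces the flicker.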
How to transport interlaced content on AXI4-Stream:
On the AXI4-Stream interface, transmitting interlaced content is similar to transmitting progressive content. The tuser signal is asserted for the first pixel of a field (start of field) and the tlast signal is asserted for the last pixel of each line (end of line).
The only difference is the fid signal. This signal is not required for progressive video (it can be tied to 0 in that case). The fid signal indicates whether an odd or even field is currently being transmitted, so it toggles on every field.
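The sideband signaling can be modeled with a small generator. This is only a toy model of the protocol behavior described above (pixel data is omitted), not driver or RTL code; the field dimensions are arbitrary:

```python
# Toy model of AXI4-Stream video sideband signals for interlaced content.
# tuser marks the first pixel of a field, tlast marks the last pixel of
# each line, and fid toggles between consecutive fields.

def stream_field(lines, pixels_per_line, fid):
    """Yield (tuser, tlast, fid) for every pixel of one field."""
    for line in range(lines):
        for px in range(pixels_per_line):
            tuser = (line == 0 and px == 0)       # start of field
            tlast = (px == pixels_per_line - 1)   # end of line
            yield tuser, tlast, fid

# Two consecutive fields of a tiny 2-line x 3-pixel field.
beats = []
fid = 0
for _ in range(2):
    beats += list(stream_field(lines=2, pixels_per_line=3, fid=fid))
    fid ^= 1  # fid toggles on every field

print(beats[0])  # first beat of first field: (True, False, 0)
print(beats[6])  # first beat of second field: (True, False, 1)
```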
From Interlaced to Progressive content
For some applications, you will need to convert interlaced video data to progressive video. This operation is called deinterlacing.
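The simplest form of deinterlacing, sometimes called "weave", just interleaves the lines of an odd and an even field back into one progressive frame. This sketch only illustrates the idea; real deinterlacers such as the VPSS in its motion-adaptive mode are considerably more sophisticated:

```python
# Minimal "weave" deinterlacing sketch: merge an even field and an odd
# field (lists of per-line values) back into one progressive frame.

def weave(even_field, odd_field):
    frame = []
    for even_line, odd_line in zip(even_field, odd_field):
        frame.append(even_line)  # even line (0, 2, 4, ...)
        frame.append(odd_line)   # odd line  (1, 3, 5, ...)
    return frame

print(weave([10, 30], [20, 40]))  # -> [10, 20, 30, 40]
```

Weaving works well for static content but produces combing artifacts when there is motion between the two fields, which is why motion-adaptive deinterlacers exist.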
In Xilinx devices, you can use the Video Processing Subsystem to convert interlaced video to progressive video.
Example of Deinterlacing using the Xilinx VPSS IP
The attached example shows the Video Processing Subsystem configured as Deinterlacer only.
In Vivado 2019.1, the Xilinx Test Pattern Generator can be used to generate interlaced content. This is what I am using as the interlaced source.
Hardware changes compared to the Video Series 28 design
In this example, I have only made a few changes to configure the VPSS as a deinterlacer only (it was configured as a color space converter in the Video Series 28 example):
I changed the VPSS configuration to Deinterlacer only
In the Deinterlacer tab, I kept the option Enable Motion Adaptive Deinterlacing. Because of this option, the VPSS will use a frame buffer and thus have an AXI4 memory-mapped interface to access memory
I connected the AXI4 Memory Mapped interface to the PS DDR
And finally, I connected the fid signal from the TPG to the VPSS as the stream is now interlaced
Software changes compared to the Video Series 28 application
Only a few changes were required in the application compared to the Video Series 28 application:
As we are no longer doing color conversion in the VPSS, the color space is fixed to YUV422 for both the input and the output