08-12-2016 04:36 AM - edited 08-12-2016 04:39 AM
I want to interface the signals of a PAL decoder to the Video-in-to-AXI4S-out IP core in Vivado. As far as I understand, the decoder's datasheet shown here says it outputs YCbCr 4:2:2 over an 8-bit bus (8 bits Cb, 8 bits Y, 8 bits Cr, 8 bits Y) at 27 MHz; there are only 8 physical data pins on the IC. But when I connect the data pins to the Video-in-to-AXI4S-out IP core with the configuration shown below, the core expects 16 bits of input data.
How can I interface these data bits according to the Decoder's output signal?
On page 2 of the datasheet, it says: "The output formats can be 8-bit 4:2:2 or 8-bit ITU-R BT.656 with embedded synchronization." I don't have an issue using the ITU-R BT.656 standard either, provided it is supported by Video-in-to-AXI4S-out.
08-12-2016 05:12 AM
08-12-2016 05:36 AM - edited 08-12-2016 05:42 AM
Yes, it says: if video data widths are not an integer multiple of 8, data must be padded with zeros on the MSB to form an N*8-bit wide vector before connecting to m_axis_video_tdata.
But how will it help me interface with that particular PAL Decoder's YUV 4:2:2 output?
08-12-2016 07:17 AM
The key is to look at Figure 3-6 in the datasheet you posted. It shows that the luma and chroma components are interleaved onto the 8-bit parallel data interface with a 2x pixel clock. The Video In to AXIS core won't handle this for you, so you will need some logic in front of it to de-interleave the stream.
If it were me, I'd put a little block out front that de-serializes the samples into parallel registers and latches them into a final output register every other clock cycle, then use the strobe for that output register as a clock enable for the Video2AXIS core.
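To make the byte pairing concrete, here is a rough behavioral model in Python of what that de-interleaving block does (the HDL itself would just be a register and a toggle bit). It assumes the Cb-Y-Cr-Y byte order from Figure 3-6 and that the chroma byte lands in the upper half of the 16-bit word; check UG934 for the exact byte-lane ordering the Video In to AXIS core expects.

```python
def deinterleave_422(byte_stream):
    """Pair each chroma byte (Cb or Cr) with the luma byte that
    follows it, producing one 16-bit {C, Y} word every two clocks.
    Assumes the stream starts on a chroma byte (Cb Y Cr Y ...)."""
    words = []
    for i in range(0, len(byte_stream) - 1, 2):
        chroma = byte_stream[i]       # Cb on even pairs, Cr on odd pairs
        luma = byte_stream[i + 1]
        words.append((chroma << 8) | luma)  # {C[7:0], Y[7:0]} as 16-bit tdata
    return words

# Two pixels' worth of 4:2:2 data: Cb0 Y0 Cr0 Y1
print([hex(w) for w in deinterleave_422([0x80, 0x10, 0x80, 0x20])])
```

In hardware, the "every other clock" strobe that loads the output register is exactly the clock enable you'd feed to the downstream core, so it sees one 16-bit word per 13.5 MHz pixel.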
08-12-2016 12:04 PM
If you don't have the AVID signal wired to the FPGA, you can still recover the active video area by decoding the ITU-R BT.656 timing codes. The last time I did that, I was using Foundation 4.1i tools in schematic capture mode. However, the logic isn't too complex.
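For reference, BT.656 marks line boundaries with 4-byte timing reference codes (FF 00 00 XY), where the XY byte carries the field (F), vertical blanking (V), and SAV/EAV (H) flags. A minimal Python sketch of the detection logic (in an FPGA this would be a small shift register and comparator):

```python
def decode_trs(stream):
    """Scan an ITU-R BT.656 byte stream for timing reference codes
    (FF 00 00 XY) and report the F/V/H flags of each one found."""
    events = []
    for i in range(len(stream) - 3):
        if stream[i] == 0xFF and stream[i + 1] == 0x00 and stream[i + 2] == 0x00:
            xy = stream[i + 3]
            events.append({
                "pos": i,
                "F": (xy >> 6) & 1,          # field identification
                "V": (xy >> 5) & 1,          # 1 = vertical blanking
                "EAV": bool((xy >> 4) & 1),  # 1 = end of active video, 0 = start
            })
    return events

# SAV of a field-0 active line (XY = 0x80) followed by its EAV (XY = 0x9D)
print(decode_trs([0xFF, 0x00, 0x00, 0x80, 0x10, 0x20, 0xFF, 0x00, 0x00, 0x9D]))
```

The bytes between an SAV and the following EAV are the active pixels, which is how you regenerate AVID without the pin.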
08-13-2016 12:27 AM - edited 08-13-2016 03:26 AM
Thanks for your response.
1. Yes, I noticed that. In order to handle this, I am currently doing the following:
I guess I'm wrong in the above procedure, but I don't understand why.
2. As you mentioned, to de-interleave, does the De-interleaver IP core perform the required operation?
@gszakacs : Thanks. I'll need some time to go through that. But isn't there a simpler way? So you suggest I use the standard ITU-R BT.656 output mode?
08-13-2016 01:51 PM
The simpler way is to attach the AVID pin of the TVP5150AM1 to the FPGA and use that to determine the incoming line start and end as well as which bytes contain luma vs. chroma data. You'd still need another way to determine the video framing, which could come from the other output pins (VSYNC and FID) of the TVP5150AM1. On the other hand, the logic required to decode the 656 data stream is not very large in an FPGA, and it would save you those extra pins if you need them for something else.
08-15-2016 10:27 PM
@gszakacs : Again, thanks for your reply.
Yes. I meant I didn't understand the purpose of decoding the 656 standard data stream.
As I mentioned earlier, I am currently using the decoder's other mode, which outputs discrete sync signals. FYI, I already have a complete video processing design that passes through all these IP cores (including a custom IP core) and displays output video on a monitor. But the output video looks noisy: the pixel values keep changing and aren't stable, even when pointing at a stationary wall.
The noise is because the custom IP core requires an RGB image as input. So basically, I have to convert the output of the above-mentioned decoder to an RGB video stream (using the color space conversion IP core). This is my aim.
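The CSC core does this conversion in hardware; as a sanity check on what it computes per pixel, here is a Python sketch of the standard BT.601 studio-range YCbCr-to-RGB conversion (coefficients from the standard; the IP core's fixed-point implementation may round slightly differently):

```python
def ycbcr_to_rgb(y, cb, cr):
    """Convert one BT.601 studio-range YCbCr pixel (Y in 16..235,
    Cb/Cr in 16..240) to 8-bit full-range RGB."""
    c = y - 16
    d = cb - 128
    e = cr - 128
    clamp = lambda v: max(0, min(255, int(round(v))))
    r = clamp(1.164 * c + 1.596 * e)
    g = clamp(1.164 * c - 0.392 * d - 0.813 * e)
    b = clamp(1.164 * c + 2.017 * d)
    return r, g, b

# Y=126 with neutral chroma (Cb=Cr=128) should give a mid gray
print(ycbcr_to_rgb(126, 128, 128))
```

If the values coming out of your design don't roughly match this on known test colors, the noise may be a byte-ordering or range (studio vs. full swing) mismatch rather than a CSC problem.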
Please give me your feedback.