efpkopin
Adventurer
876 Views
Registered: ‎01-20-2017

DisplayPort RX Subsystem native mode timing constraints


I'm implementing a DisplayPort RX Subsystem block operating via a 'Native' Video Interface.  While I am successfully receiving a DisplayPort video signal, my problem is that the output video is constantly out of sync (i.e. the video rolls continuously, as if the vsync signal does not correspond properly to the start of each frame).  My design does not meet timing, so I'm trying to add the missing constraints now.

My understanding is as follows:

1. We want the rx_vid_clk and rx_vid_rst signals to be synchronous to each other.  We achieve this by using a Processor System Reset.

2. And I am assuming that the native enable, hsync, vsync, and pixel outputs from the dp_rx_subsystem are synchronous with the rx_vid_clk signal.

Is that true?  Do I have to add any additional timing constraints to ensure that these signals are routed correctly on the FPGA?

 

 

 

1 Solution

Accepted Solutions
watari
Professor
810 Views
Registered: ‎06-16-2013

Hi @efpkopin 

 

> And I am assuming that the native enable, hsync, vsync and the pixel outputs from the dp_rx_subsystem are synchronous with the rx_vid_clk signal.

Is that true? 

 

Yes. Would you refer to Figure 2-6 on page 15 of the following PDF?

https://www.xilinx.com/support/documentation/ip_documentation/dp_rx_subsystem/v2_1/pg233-displayport-rx-subsystem.pdf

 

>Do I have to assign any additional timing constraints to ensure that these signals are routed correctly on the FPGA?  

 

It depends on your current timing constraints.

Check the timing paths at the clock-domain boundaries to confirm whether they are covered by constraints or not.
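To make that boundary check concrete, here is a minimal XDC sketch of the kind of constraints involved. The clock names, the 200 MHz period, and the assumption that every crossing is properly synchronized are all illustrative, not taken from this thread; adapt them to the actual design.

```tcl
## Hypothetical clock names/periods -- adjust to your design.
## Define the video clock if the IP does not already create it:
create_clock -name rx_vid_clk -period 5.000 [get_ports rx_vid_clk]

## If rx_vid_clk is truly asynchronous to another clock in the design
## (e.g. an AXI clock), declare the relationship so cross-domain paths
## are not analyzed as synchronous. This is only safe if every crossing
## goes through a proper synchronizer or dual-clock FIFO:
set_clock_groups -asynchronous \
    -group [get_clocks rx_vid_clk] \
    -group [get_clocks axi_clk]
```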

 

Best regards,


9 Replies

florentw
Moderator
787 Views
Registered: ‎11-09-2015

HI @efpkopin 

You have to be careful with the native video interface from the DP RX controller.

First, when using the AXI4-Stream interface, we usually use a faster clock for the interface. That is fine because the VDMA buffers the data afterwards. But with the native interface, you need to make sure that the clock you use really is the video pixel clock.

Then, note that the PLL inside the FPGA is not accurate enough to support arbitrary M and N values, so you might need an external PLL.

From PG233:

[screenshot from PG233]
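For reference, the recovered pixel clock is derived from the link symbol clock and the M/N values the source transmits (f_pixel = f_link_symbol × M / N). A quick sketch of that arithmetic; the 270 MHz HBR symbol clock and the M/N values below are illustrative examples, not values from this thread:

```python
# Pixel-clock recovery arithmetic for DisplayPort (f_pixel = f_link * M / N).
# The link symbol clock and M/N values below are illustrative examples.
def recovered_pixel_clock(link_symbol_hz: float, m_vid: int, n_vid: int) -> float:
    """Recovered pixel clock in Hz from the lnk_m_vid / lnk_n_vid ratio."""
    return link_symbol_hz * m_vid / n_vid

# HBR link: 2.7 Gbps per lane -> 270 MHz symbol clock; hypothetical M/N.
f_pix = recovered_pixel_clock(270e6, m_vid=32928, n_vid=32768)
print(f"{f_pix / 1e6:.2f} MHz")  # → 271.32 MHz
```

An arbitrary M/N ratio like this is exactly what the internal PLL may not be able to synthesize, which is why an external PLL can be needed.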

Finally, you need to check that the sync signals match what your other interface expects.


Florent
Product Application Engineer - Xilinx Technical Support EMEA
**~ Don't forget to reply, give kudos, and accept as solution.~**
efpkopin
Adventurer
772 Views
Registered: ‎01-20-2017

@florentw @watari, thank you for your responses.  Specifically, in response to @florentw:

 

I am using a FIFO line buffer and an rx_vid_clk of 200 MHz.  The pixel data is coming in from the GPU at a pixel rate of 553.3 MHz.  Because the data comes out of the DP_RX block via 4 lanes, my 200 MHz x 4 interface should be fast enough.  As @watari suggests, I am using the interface as defined in Figure 2-6 of PG233 (I clock the data into my FIFOs using a 200 MHz clock that is 180 degrees out of phase with rx_vid_clk while rx_vid_enable is high).  I assume that in this set-up I don't have to worry about the 'M' and 'N' values coming out of the DP_RX core (lnk_m_vid & lnk_n_vid), as specified in the middle of page 13 of PG233:

[screenshot: PG233 note on the clock definition, page 13]

Are my above assumptions correct?
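A quick sanity check of the bandwidth reasoning above, using the numbers from this thread (4 pixels per rx_vid_clk cycle at 200 MHz against a 553.3 MHz source pixel rate):

```python
# Throughput check for the native interface (numbers from the thread).
pixel_rate_hz = 553.3e6   # source pixel rate from the GPU
rx_vid_clk_hz = 200e6     # rx_vid_clk frequency
pixels_per_clk = 4        # native interface delivers 4 pixels per clock

interface_rate_hz = rx_vid_clk_hz * pixels_per_clk   # 800 Mpixel/s
streaming_clk_hz = pixel_rate_hz / pixels_per_clk    # effective ~138.3 MHz

assert interface_rate_hz >= pixel_rate_hz            # 800 M >= 553.3 M: keeps up
print(f"streaming clock ~ {streaming_clk_hz / 1e6:.1f} MHz, "
      f"margin {rx_vid_clk_hz / streaming_clk_hz:.2f}x")
# → streaming clock ~ 138.3 MHz, margin 1.45x
```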

 

watari
Professor
766 Views
Registered: ‎06-16-2013

Hi @efpkopin 

 

> as specified in the middle of page 13 of pg233:

 

As you know, the 'M' and 'N' values are used to generate the recovered pixel clock.

Please refer to the following data flow; there is a different clock at each stage.

 

# Data flow for DP Rx

Packet data @ each lane -> Streaming data @ each stream -> Recovered pixel data @ pixel clock

 

In this case (the middle of page 13 of PG233), the point is to prevent FIFO overflow or underflow by using a faster clock (pixel clock >= streaming clock).

So, if you choose the native video interface instead of streaming, and you don't mind changing the pixel clock frequency with a PLL whenever M and/or N change, you only need to follow Figure 2-6 of PG233. The pixel clock is variable.

If changing the pixel clock frequency is a problem, I suggest you use a faster fixed clock.

 

If you are interested in this mechanism and concept, please refer to the official VESA DisplayPort specification.

 

Best regards,

efpkopin
Adventurer
757 Views
Registered: ‎01-20-2017

@watari, I'm a bit confused.  Using Figure 2-6 of PG233 as a reference:  I'm using a 200 MHz rx_vid_clk, and there are four 'lanes' of pixel values coming out: rx_vid_pixel_ii[47:0].  But the incoming data stream from the DisplayPort source arrives at 553 MHz, only 1 pixel at a time.  Thus, four pixels come in at an effective clock rate of 553/4 = 138 MHz <= this would seem to correlate with the 'streaming clock' that you mention, correct?  Based on this, my pixel clock is about 45% faster (200/138) than the streaming clock.  Don't you agree?

watari
Professor
748 Views
Registered: ‎06-16-2013

Hi @efpkopin 

 

>Thus, four pixels come in at an effective clock-rate of 553/4 = 138 MHz <= this would seem to correlate with the 'streaming clock' that you mention, correct?

Yes, if you use the native video interface.

 

>Based on this, I feel like my pixel clock is about 45% faster (200/138) than the streaming clock. Don't you agree?

 

I'm a little confused. In your case, that is correct (using a faster clock as the transfer clock) if you use, e.g., AXI4-Stream.

But if I were you, I would use 553 MHz / 4 as the pixel clock. (The unit is 4 pixels per clock.)

 

As you know, in native video mode the pixel clock is variable, and it is defined by the video timing.

For example, 1920x1200@60Hz (defined by VESA) and 1080p60 (defined by CEA) have different clock frequencies.

 

Applied to your case, there are some differences between the original video timing and your recovered timing, and (my) concerns are as below.

 

- The vertical frequency differs between them.

- The vertical blanking line count differs, which matters if you use a gen-lock mechanism.

- This may cause issues in your system, not in the DP itself; it depends on your target design.

 

Sorry for the inconvenience. It's hard to explain without some figures.

 

Best regards,

florentw
Moderator
697 Views
Registered: ‎11-09-2015

Hi @efpkopin 

I am a little confused. If you have a FIFO, then how do you reconstruct the video interface? Do you have a VTC (Video Timing Controller)? Otherwise, how can you guarantee that there are no gaps in the data enable?

What is the final endpoint for the native interface?

How are you recreating the pixel clock?


Florent
Product Application Engineer - Xilinx Technical Support EMEA
**~ Don't forget to reply, give kudos, and accept as solution.~**
efpkopin
Adventurer
652 Views
Registered: ‎01-20-2017

Hi @florentw, I'm not sure this will answer the question - but I reconstruct the video interface as follows:

- I recognize that the pixel data comes into the FIFO in bursts (i.e. the rx_vid_enable signal is only high for valid pixel inputs, much as illustrated in Figure 2-6).

- However, once a line of pixels has been stored in the FIFO, I pull the pixels out continuously and send them out to my display.

- We have a custom display that uses its own video-out protocol <= I have a logic core that controls writing this data to the display.
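The burst-in / continuous-out scheme described above can be modelled in a few lines. This toy model (the names and the tiny 8-pixel line width are illustrative, not from my design) just shows the idea: writes gated by the enable, reads drained continuously once a line is complete.

```python
from collections import deque

LINE_WIDTH = 8  # pixels per line (tiny, for illustration)
line = list(range(LINE_WIDTH))

# Write side: pixels arrive in bursts, with rx_vid_enable low between them
# (as in Figure 2-6); only enabled samples are written into the FIFO.
stream = ([(p, True) for p in line[:4]]      # first burst of 4 pixels
          + [(None, False)] * 3              # idle cycles, enable low
          + [(p, True) for p in line[4:]])   # second burst
fifo = deque()
for pixel, enable in stream:
    if enable:
        fifo.append(pixel)

# Read side: once a full line is buffered, drain it continuously
# toward the display at the display's own rate.
assert len(fifo) == LINE_WIDTH
out = [fifo.popleft() for _ in range(LINE_WIDTH)]
print(out)  # → [0, 1, 2, 3, 4, 5, 6, 7]
```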

 

Does that answer your question?

florentw
Moderator
629 Views
Registered: ‎11-09-2015

Hi @efpkopin,

That looks fine; I understand better now.

On your side, do you need more clarification? I guess the initial questions were covered, right?

Regards 


Florent
Product Application Engineer - Xilinx Technical Support EMEA
**~ Don't forget to reply, give kudos, and accept as solution.~**