System level query on LVDS link speeds

Hello,

      Hope everyone is keeping well. 

This is a query about interpreting datasheet values for setup and hold times, length matching, and source-synchronous data fed from an external device. 

The referenced IC is the LTC2175-14 (a 125MSps serial LVDS output ADC) connected to an ML605 board (Virtex-6, speed grade -1). The system works, but we are trying to make sense of some of the numbers. 

In the two-lane, 16-bit serialization mode, the LTC2175-14 outputs serial LVDS data in DDR mode at 1Gbps per lane (so the DCO clock from the ADC runs at 500MHz). 

As per DS152 (the Virtex-6 switching characteristics data sheet), the -1 grade device on the board should support 1.1Gbps (under "DDR LVDS receiver", -1 column) across all PVT conditions, and in our case the data is received at 1Gbps, which is within that limit. 

[Screenshot: DS152 maximum data-rate table]
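As a quick sanity check of these rates, here is a minimal Python sketch using only the numbers above (the 16 bits per sample follow from the 16-bit serialization mode):

```python
# Per-lane LVDS bit rate in two-lane, 16-bit serialization mode,
# checked against the DS152 DDR LVDS receiver limit for a -1 device.
F_SAMPLE = 125e6          # ADC sample rate, samples/s
BITS_PER_SAMPLE = 16      # 16-bit serialization mode
LANES = 2                 # two-lane output mode
DS152_LIMIT = 1.1e9       # bps, DDR LVDS receiver, -1 speed grade

lane_rate = F_SAMPLE * BITS_PER_SAMPLE / LANES   # 1.0e9 bps
dco_freq = lane_rate / 2                         # DDR: one bit per clock edge

print(f"lane rate = {lane_rate/1e9:.1f} Gbps, DCO = {dco_freq/1e6:.0f} MHz")
print(f"within DS152 limit: {lane_rate <= DS152_LIMIT}")
```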

Digging further into the length-matching requirements and the setup and hold times for the ISERDES: the minimum setup and hold times allowed are 0.09/0.11, i.e. Tsu = 90ps and Thd = 110ps (the last row in the screenshot). The setup and hold times are different when using the IODELAY (at tap setting 0): 0.14/0.07 (the setup time has increased but the hold time has decreased). 

Question 1) What is the reason for this? 

I suppose the ISERDES is inevitably used with the IODELAY, so the second pair of values (0.14/0.07) should be used when designing. 

The IODELAY also has a jitter of +/-5ps per tap, which will have to be accounted for. 

[Screenshot: DS152 ISERDES setup/hold table]
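To compare the two cases, here is a small sketch of the stable-data window the ISERDES would need at its D pin. Treating the +/-5ps/tap jitter as linearly accumulating is my own worst-case assumption, not a datasheet formula:

```python
# Stable-data window required at the ISERDES D pin (component-level view).
# Setup/hold values are the DS152 numbers quoted above, in ns.
TAP_JITTER = 0.005   # ns, +/-5ps of IODELAY jitter per tap (assumed additive)

def required_window(tsu, thd, taps=0):
    # Setup + hold, widened by the accumulated tap jitter on both sides.
    return tsu + thd + 2 * TAP_JITTER * taps

print(required_window(0.09, 0.11))           # ISERDES alone: 0.20 ns
print(required_window(0.14, 0.07, taps=10))  # via IODELAY, e.g. 10 taps: 0.31 ns
```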

Question 2) For regular LVDS (SDR or DDR), is table 49 the correct table to refer to? Here the setup and hold time requirements with and without IODELAY are 0.1/0.54 and 0.14/0.42 respectively. Is this correct? 

[Screenshot: DS152 table 49]

-----------------------------------------------------------------------------------------------

Now to understand the timing and length matching requirements: 

The IC in reference has the following spec: 

[Screenshot: LTC2175-14 timing specifications]

The DDR data is center-aligned (so one bit is valid around the rising edge of DCO and the next around the falling edge). 

The source-synchronous data and clock (DATA and DCO) have rise and fall times of 0.17ns, and the DATA-to-DCO delay is nominally 0.5tSER (tSER = 1ns in our case), with a range of 0.35tSER to 0.65tSER. This translates to a variation of 0.3tSER = 0.3ns between the data transition and the clock edge. 

The total duration during which the data may NOT be usable (due to the following contributions) 

= 0.3ns (uncertainty between data and DCO) + 0.17ns (rise time of data) + 0.17ns (rise time of clock) + 0.06ns (DCO jitter) 

= 0.7ns. 
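For clarity, the same budget as a small Python sketch (numbers straight from the datasheet values above), which also shows the apparent contradiction discussed next:

```python
# Source-side uncertainty budget per bit at 1 Gbps (all values in ns).
T_SER = 1.0
skew = (0.65 - 0.35) * T_SER    # DATA-to-DCO variation: 0.3 ns
t_rise_data, t_rise_dco = 0.17, 0.17
t_jitter_dco = 0.06

uncertain = skew + t_rise_data + t_rise_dco + t_jitter_dco
nominal = 0.5 * T_SER           # nominal data-transition-to-DCO-edge spacing

print(f"uncertain interval = {uncertain:.2f} ns")  # 0.70 ns
print(f"nominal spacing    = {nominal:.2f} ns")    # 0.50 ns < 0.70 ns (!)
```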

Now this is where the confusion arises:

The approximate time between the data becoming valid and the rising edge of DCO is 0.5ns, while the uncertain interval itself is 0.7ns. 

The design works and is quite stable (tested over a period of several hours and across multiple boards).

The following figure illustrates my understanding. The first line is DCO (the clock), with the shaded area covering both the rise time and the clock jitter (0.17 + 0.06 = 0.23ns). The second line shows the data, with the shaded area covering the variation in time between the rising edges of the data and clock, plus the rise time itself (0.3 + 0.17 = 0.47ns).

Please advise on where I am going wrong in my understanding. 

[Hand-drawn timing diagram of DCO and DATA with shaded uncertainty regions]

--------------------------------------------------------------------------------------------

Length matching requirement:

Consider a DDR LVDS interface at 125MSps (so data is at 250Mbps). In this case the time between the data transition and the rising edge of the clock is 2ns. Going by the previous rise-time and jitter numbers, we have about 2 - 0.7 = 1.3ns left. Subtracting the minimum setup time requirement of 0.1ns per the ILOGIC table gives 1.3 - 0.1 = 1.2ns, i.e., data within this window will be correctly read by the FPGA. 

To translate this into length-matching requirements: 1mm of trace corresponds to about 6.5ps, 10mm to 65ps, and 50mm to 325ps. So even a difference of 50mm between the differential pairs (pair to pair) will not matter. Question: Is this correct? 
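As a quick sketch of that budget and its trace-length equivalent (the 6.5ps/mm figure is the same board-propagation estimate used above):

```python
# Margin at 250 Mbps (DDR at 125 MSps) and the pair-to-pair mismatch it allows.
BIT_PERIOD = 4.0               # ns at 250 Mbps
EDGE_TO_CLK = BIT_PERIOD / 2   # center-aligned clock: 2.0 ns
UNCERTAINTY = 0.7              # ns, same source-side budget as above
TSU = 0.1                      # ns, minimum ILOGIC setup time from the table
PS_PER_MM = 6.5                # propagation delay estimate for the traces

margin = EDGE_TO_CLK - UNCERTAINTY - TSU   # 1.2 ns
max_mismatch = margin * 1000 / PS_PER_MM   # ~185 mm

print(f"margin = {margin:.1f} ns, allowed mismatch ~ {max_mismatch:.0f} mm")
```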

There will also be package pin delays, of course (Question: where can these be obtained, and are they consistent across devices?). 

Is there anything else that has to be considered?

Looking forward to your help in understanding these aspects. Thanks in advance. 

-----------------------------------------------------------------------------------------------

Hello,

    Looking forward to your inputs. Thanks,

 

-----------------------------------------------------------------------------------------------

The timing analysis you did probably appeared reasonable but, unfortunately, misses the biggest component of uncertainty in LVDS capture: the clock distribution mechanism.

The timing numbers for the ISERDES specify the setup and hold requirements of the D pin of the ISERDES with respect to the CLK pin of the ISERDES. But neither of these is a pin of the FPGA. The data has to get through the IBUF, possibly through an IDELAY, and to the D pin of the ISERDES. On the clock path, the clock needs to go from the clock pin of the FPGA, through the IBUF on that pin, possibly through an IDELAY, through some kind of clock buffer (presumably the BUFIO), through the clock network, and then end at the CLK pin of the ISERDES. All of these components add not only delay but also uncertainty. From the individual component delays alone, it is not possible to do manual static timing of these paths.

For these reasons, the FPGA data sheet has "Device Pin to Pin Input Parameter Guidelines". These give "guidelines" for the entire pin-to-pin requirements that include all of these sources. Take a look at this post on input capture clocking topologies, which also makes reference to the timing symbols used in the datasheet to "guide" you to the performance of these mechanisms.

Assuming you are using the BUFIO, table 74 of DS152 (for the Virtex-6) gives Tpscs/Tphcs which, for the -1 speed grade, is -0.28/1.33, meaning it needs a guaranteed -0.28 + 1.33 = 1.05ns of stable data for capture. Since you are running at 1Gbps, your bit period is only 1.00ns, and the stable data window will be measurably smaller than that: it looks like the ADC alone takes 70% of that bit period in uncertainty, meaning your stable data window is only 0.3ns (even before considering signal integrity and jitter). The net result is that this is WAY WAY WAY too small for static capture.
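In numbers, that check looks like this (a quick sketch; the 0.7ns comes from your own budget above):

```python
# Pin-to-pin static capture check, table 74 of DS152 (-1, BUFIO), in ns.
TPSCS, TPHCS = -0.28, 1.33
required = TPSCS + TPHCS          # 1.05 ns of stable data needed at the pins

bit_period = 1.0                  # ns at 1 Gbps
available = bit_period - 0.7      # 0.30 ns left after the ADC-side uncertainty

print(f"need {required:.2f} ns, have {available:.2f} ns -> static capture fails")
```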

Now we need to reconcile this with table 41, which says that you can receive 1.1Gbps. First, this is specifically for SPI-4.2, which (I am willing to bet) provides WAY more than 30% of the bit period as valid data. But regardless, there is still no way to statically capture even a 900ps window (1.1Gbps). And that is the key word: you can only get the speeds documented in table 41 with dynamic calibration. The datasheet tells you that the minimum real window required for sampling is tSAMP_BUFIO, which for a -1 is 400ps. This means that with a perfect dynamic calibration mechanism, you can capture a data stream with valid stable data as small as 400ps. It does not, however, actually tell you how to design this perfect dynamic calibration mechanism...
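And the same comparison for the dynamic case (again just restating the datasheet numbers):

```python
# Dynamic-calibration floor vs. the ADC's guaranteed stable window, in ns.
T_SAMP_BUFIO = 0.4                 # minimum capturable window, -1 speed grade
adc_window = (0.65 - 0.35) * 1.0   # 0.30 ns guaranteed across PVT

print(f"ADC guarantees {adc_window:.2f} ns, FPGA needs {T_SAMP_BUFIO:.2f} ns "
      f"-> guaranteed capture: {adc_window >= T_SAMP_BUFIO}")
```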

Furthermore, since this requires dynamic calibration (also called dynamic phase adjust or DPA), static timing analysis is meaningless...

So, the takeaways are:

  • There is no way you can reliably capture data coming from this device statically
    • So static timing analysis is meaningless
  • Even if we go dynamic, if we are reading the specs properly, this can't be reliably captured, since the device only guarantees 300ps of stable data whereas the FPGA requires 400ps...
    • This isn't really fair, though: even though the device says 0.35tSER to 0.65tSER is the only guaranteed stable data window, this spec is across PVT; I doubt there is any single PVT corner where the window is only the 300ps computed above
    • But since this is the only data the device gives you, there is simply no guarantee that data from this device is reliably capturable in the FPGA

Avrum
