Texas Instruments ADS4245 ADC LVDS DDR Interface


Hi,

I am confused about the ADC datasheet giving setup and hold time specifications for its digital data outputs. Normally, setup and hold times are defined for receiver inputs.

In the datasheet the following specs are given (ADS4245 datasheet): [timing specification table image]

As I understand from the waveform, this interface seems to be a source-synchronous, center-aligned data transfer. I can't understand why the hold time is the same for all frequencies. I was expecting the hold time values to get wider with decreasing frequency, like the setup time. I asked TI but have had no answer yet.

Additionally, if this hold time is 0.35 ns for all frequencies, does it mean that I need some clock or data shifting in order to make the sampling clock rising edge happen at the center of the data eye? (I am only considering the ADC output clock and data relationship and assume that the traces are all matched.)

After understanding the capture method and whether or not to shift the clock/data, I am planning to apply proper timing constraints. However, I first want to understand what the setup and hold times actually tell me.

In particular, for the hold time the ADC datasheet gives the definition below:

"rising edge of clkoutp to data becoming invalid."

Does it mean that data becomes invalid 0.33 ns after the rising edge of clkoutp? (I don't think this hold time means that the data becomes invalid, but I could not understand what it does tell.)

What happens to the data if it is becoming invalid? :)

If I am working at low frequencies, e.g. 10 MHz, the next transition of the data is about 50 ns later, and it does not seem logical that the data stays valid for only 0.33 ns after the rising clock edge.

If the hold time really means that the data becomes invalid, it is still complicated to capture the data even if you work at low sampling frequencies.

 

I think the skew specification, which is given for edge-aligned interfaces, is much more understandable. Anyway, can someone clarify my confusion about the setup/hold time specifications of ADC data outputs?

Mustafa

mustafasu

Accepted Solution

Any device that sends data must specify the limits of the timing relationship between the clock and data. While it is more common to specify this as min and max "clock to output" delay, there is nothing wrong (or even uncommon) about specifying it as a minimum setup and hold time as this chip does. In fact, this tends to be relatively common with devices with clock forwarded interfaces.

The datasheet is telling you what you need to know. It is saying that across all legal process, voltage and temperature (PVT) conditions

  • The data will be available no later than tSU before the clock edge
  • The data will remain stable at least as long as tH after the clock edge

These can easily be converted to a tCO(min) and tCO(max) - the minimum and maximum clock to output time

tCO(min) is the same as tH - the earliest the data can change is tH after the clock edge - this is the same definition as tCO(min)

tCO(max) is basically the opposite of tSU - in concept tCO(max) = tPER/2-tSU

This relationship also shows why tH doesn't change with frequency (which is also common) - the minimum clock to output time is generally not dependent on frequency.

On the other hand, you can see that if tCO(max) were also a constant, tSU would increase as the half period increases. In fact, if you look at the various frequency points they give you and calculate tCO(max) = tPER/2 - tSU, the result is fairly close to a constant - it varies between 1.63ns and 1.79ns.
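
To make that concrete, here is a minimal sketch of the conversion. The (sample rate, tSU) pairs below are illustrative placeholders only - the real numbers must be read from the ADS4245 timing table - and tH = 0.35 ns is the hold time discussed in this thread:

```python
# Sketch: convert the datasheet's setup/hold spec into tCO(min)/tCO(max).
# The (sample rate, tSU) pairs are placeholders, not the actual ADS4245 values.

example_points = [
    (65.0, 6.0),    # MSPS, tSU in ns (placeholder)
    (125.0, 2.3),   # placeholder
    (250.0, 0.25),  # placeholder
]

t_h = 0.35  # ns; tCO(min) = tH and is constant across frequency

for fs_msps, t_su in example_points:
    t_per = 1000.0 / fs_msps      # output clock period in ns (CLKOUT runs at fs)
    t_half = t_per / 2.0          # DDR: one data bit per half period
    t_co_max = t_half - t_su      # latest the new data can appear after an edge
    print(f"{fs_msps:5.0f} MSPS: tCO(min) = {t_h:.2f} ns, tCO(max) = {t_co_max:.2f} ns")
```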

However, it isn't really that simple... Since the duty cycle isn't guaranteed to be exactly 50/50 (although the datasheet isn't helpful when it just gives a Nominal value of 46% for the duty cycle - I am not even sure what this means) you need to consider what happens when it isn't 50/50. Since the half period can be smaller than tPER/2, you need to derate this a bit more...

With this information - regardless of whether it is given as tCO(min)/tCO(max) or tSU/tH there is enough information to create proper input constraints for the FPGA.
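
As a rough sketch of what such constraints could look like (the clock and port names below are placeholders, and the delay values are the tCO numbers worked out above rather than guaranteed datasheet figures), one common formulation for a DDR, clock-forwarded input is:

```python
# Sketch: turn tCO(min)/tCO(max) into set_input_delay values for a DDR,
# source-synchronous input. "clkout_p" and "adc_data_p" are placeholder names;
# substitute the actual forwarded-clock and data port names from the design.

t_co_min = 0.35   # ns, equals tH
t_co_max = 1.79   # ns, worst case of tPER/2 - tSU over the listed rates

clk = "clkout_p"
ports = "[get_ports {adc_data_p[*]}]"

# Usual pattern: -max = tCO(max), -min = tCO(min), applied for both clock edges.
print(f"set_input_delay -clock {clk} -max {t_co_max} {ports}")
print(f"set_input_delay -clock {clk} -min {t_co_min} {ports}")
print(f"set_input_delay -clock {clk} -clock_fall -max {t_co_max} {ports} -add_delay")
print(f"set_input_delay -clock {clk} -clock_fall -min {t_co_min} {ports} -add_delay")
```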

Additionally, if this hold time is 0.35 ns for all frequencies, does it mean that I need some clock or data shifting in order to make the sampling clock rising edge happen at the center of the data eye?

[First, I presume you mean just "edge" not "rising edge" - this is a DDR interface so timing is with respect to both the rising and falling edges]

It isn't a given that the "best" place to sample a data window is in the middle. This would only be true if your receiving device (the FPGA) had a perfectly centered required window - that the receiver's setup requirement is the same as its hold requirement. This is usually not true, and is known not to be true for almost all capture mechanisms in an FPGA. Therefore, unless the windows are so large that you don't have to worry about margins, you almost always have to do some kind of clock/data phase adjustment at the receiver to maximize the margin on the interface; the setup margin is the amount of time the data is valid before the start of the required setup time and the hold margin is the amount of time the data is valid after the end of the hold requirement (i.e. the slack on the checks - how much you pass timing by).

Does it mean that data becomes invalid 0.33 [presumably 0.35] ns after the rising edge of clkoutp?

No device can say exactly when it will change from the old data to the new data - it is always a range. What this is saying is that at some PVT condition, the device may start making the change from the old data to the new data as early as 0.35ns after the clock edge, but at other PVT conditions it may not complete the transition until tSU before the next clock edge. So between these two points (tH after the edge to tSU before the next edge, or written differently between tCO(min) and tCO(max) after an edge) the data is potentially in transition and cannot be sampled.

If I am working at low frequencies, e.g. 10 MHz, the next transition of the data is about 50 ns later, and it does not seem logical that the data stays valid for only 0.33 [presumably 0.35] ns after the rising clock edge.

Not only is it logical, it is often the case.

If the hold time really means that the data becomes invalid, it is still complicated to capture the data even if you work at low sampling frequencies.

Yes! That is the reality.

Avrum

4 Replies

This is more a question about the TI chip; you might be better off asking the question on the TI forums.
<== If this was helpful, please feel free to give Kudos, and close if it answers your question ==>

I would suggest that the data is being clocked out of the ADC by CLKOUT. This would explain why the setup time increases with lower frequency while the hold time does not change. You need to note that this is showing a double data rate (DDR) interface, where the data is being clocked out on both the rising and falling edges of CLKOUTP.

@bruce_karaffa, @avrumw

Firstly, because I was writing the message quickly, I forgot to mention that this is a double data rate interface.

I am aware of both the rising and falling edges since we use a DDR interface, and th = 0.35 ns as in the datasheet, which I wrote as 0.33 ns by mistake.

After your explanations, I realized that the given ts/th times are a different style of specification, equivalent to tco(min)/tco(max). Using the explanations avrumw wrote, it can be seen that tco(min) is 0.35 ns and tco(max) is between 1.63 ns and 1.79 ns. It is more comprehensible this way when building the sampling architecture and timing constraints on the FPGA.

Some ADCs give tskew (clock to data), for example the Analog Devices AD9467.

tskew=> min=-200 ps, max=200 ps 

For tskew, should we also think about converting it to tCO values, or is it common to use the negative and positive tskew values directly?

I think my confusion was coming from the center-aligned and edge-aligned definitions.

For example, the ADS4249 ADC parameters are similar to the ADS4245, and in one of the application notes it is mentioned that this interface is center aligned. The center-aligned definition was steering me to think that the ADC always provides the clock centered in the data eye at its outputs. I think that is not always the case; however, it does mean that the clock edge (both rise and fall) is always outside the transition region of the data, right?

So, in order to clarify the center-aligned and edge-aligned modes, are the following definitions sensible?

If ts/th values are given and the equivalent tco(min)/tco(max) values are positive, then the clock edge is outside the transition region, and we can define this interface as center-aligned source synchronous.

If tskew is given with negative and positive values, then the clock edge is somewhere within the transition region, and we can define this interface as edge-aligned source synchronous.

@avrumw, I will read the other topics in which you explain ADC data capture mechanisms, including edge-aligned vs. center-aligned interfaces, input delay and clock constraints, using IDELAY or an MMCM, and capture mechanisms such as BUFG capture, BUFIO capture, and MMCM capture.

After setting the constraints properly and getting an implementation without timing errors, is it also a good idea, at initial power-up, to apply a test pattern from the ADC with alternating 1s and 0s, find the upper boundary at which the data becomes corrupted, find the lower boundary at which the data becomes corrupted, and then shift the clock to the middle of the upper/lower boundaries?
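
For reference, a rough sketch of what that boundary search could look like; set_delay_tap() and pattern_ok() are hypothetical stand-ins for whatever register interface the design exposes (here they simply simulate a passing window so the snippet runs):

```python
# Sketch of the described power-up calibration: sweep the capture delay
# (e.g. IDELAY taps), find the tap range over which the known ADC test
# pattern is received without corruption, then park the delay in the middle.

NUM_TAPS = 32          # e.g. an IDELAYE2 has 32 taps
_current_tap = 0

def set_delay_tap(tap: int) -> None:
    """Stand-in for writing the IDELAY tap value (real design: a register write)."""
    global _current_tap
    _current_tap = tap

def pattern_ok() -> bool:
    """Stand-in for a checker comparing captured data against the test pattern."""
    return 8 <= _current_tap <= 20   # simulated eye: taps 8..20 capture cleanly

passing = []
for tap in range(NUM_TAPS):
    set_delay_tap(tap)
    if pattern_ok():
        passing.append(tap)

if not passing:
    raise RuntimeError("no tap captures the test pattern cleanly")

low, high = min(passing), max(passing)
set_delay_tap((low + high) // 2)   # centre the capture point in the passing window
print(f"eye from tap {low} to tap {high}; centred at tap {(low + high) // 2}")
```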

Thank you for the explanations.

mustafasu