

Serial ADC timing constraints


I know this question has been asked before, but the answers I have found on the forum have not cleared up my confusion. See, for example:

https://forums.xilinx.com/t5/Timing-Analysis/Constraining-external-serial-ADC-interface/m-p/105547#M1297

https://forums.xilinx.com/t5/Timing-Analysis/OFFSET-CONSTRAINS-for-ADC-interface/m-p/243716#M3098

 

Our FPGA (Artix-7) has an ADC input (TI ADS1601) that consists of a serial clock, data bits, and a frame synchronization signal. We supply a 24MHz clock to the FPGA, which goes to the MMCM to generate a 128MHz SysClk and a 16MHz ADC clock. The ADC clock then exits the FPGA to the ADC, where it is used to generate SerClk (same frequency, but possibly phase shifted by up to 15ns) and Frame Sync (FSO, which is one SerClk pulse every 16 SerClk periods and could be phase shifted by up to 5ns). See image below:

 

[Attached image: FpgaTimingQuestion.png]

 

We are having trouble understanding how to constrain the design so that the tools (Vivado) can analyze the timing without errors.

We have tried several different clock constraint schemes but each has various warnings, critical warnings, timing issues, and/or unsafe clock interactions.

 

 

First person with the answer gets a Kudos!

 

 

Accepted Solution

Re: Serial ADC timing constraints


So first you have to decide on an architecture, and as I see it you have two choices:

 

1) Treat this as a source synchronous interface with its own clock (SerClk)

2) Sample the data on the clock that generates the clock to the ADC (the internal 16MHz clock generated by the MMCM)

 

At these frequencies both can probably be made to work... But they are different architectures and use different resources, and need different constraints.

 

Before we get to the differences between them, I want to talk about your forwarded clock. I see that you are generating your forwarded clock "ADC Clock" using an ODDR, which is the right thing to do (I am assuming there are BUFGs on the outputs of the MMCM). An FPGA-generated clock is fine for "digital" purposes, but any imperfection on an ADC clock will affect the accuracy of the ADC; any jitter on the clock will manifest as error in the samples. However, this clock is so slow that the error is probably insignificant. At faster ADC rates, though, you shouldn't do this.

 

Let's look at #2 first. In this situation, you simply ignore SerClk - you don't even need it. There is a timing relationship between the internal clock from the MMCM and the data on SerDat and FSO (which is also data), and you use a simple IOB flip-flop to capture SerDat and FSO. To do this, you need to specify the timing relationship.

 

First create a generated clock on the ADC_clk port

 

create_generated_clock -name my_ADC_clk -source [get_pins my_ODDR/C] -divide_by 1 [get_ports ADC_clk]

 

Next specify the min and max time that the data can change.

 

We know that the delay from ADC_clk to SerClk is a maximum of 15ns - I will assume the minimum is 0ns (though the datasheet doesn't explicitly say so, which makes me uncomfortable). We also know that SerDat is valid no later than 5ns after the latest SerClk (15ns + 5ns = 20ns) and is held for at least 20ns after the earliest falling edge of SerClk; the earliest falling edge is 15ns after the earliest rising edge (so 15ns + 0ns + 20ns = 35ns). So, with respect to ADC_clk, the data is valid from 20ns after the rising edge to 35ns after the rising edge. Now adjust for board propagation delay - let's say the board delay is 0.3ns min and 0.5ns max. We add 1.0ns (0.5ns in each direction) to the 20ns for the max constraint and 0.6ns (0.3ns in each direction) to the 35ns for the min constraint; hence 21ns and 35.6ns. This means the data becomes valid no later than 21ns after the clock edge and goes invalid as early as 62.5ns - 35.6ns = 26.9ns before the next clock edge (which is -26.9ns after that edge).
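As a sanity check, the same arithmetic can be written out in the Vivado Tcl console. This is just a sketch of the numbers above; the board delays are the assumed values, not datasheet figures.

```tcl
# Valid-window arithmetic for option #2 (all values in ns, from the text above)
set t_clk2serclk_max 15.0 ;# ADC_clk -> SerClk, datasheet max (min assumed 0)
set t_dat_valid_max   5.0 ;# SerClk rise -> SerDat valid, max
set t_dat_hold_min   20.0 ;# SerDat held after SerClk falling edge, min
set t_board_min       0.3 ;# assumed one-way board delay, min
set t_board_max       0.5 ;# assumed one-way board delay, max
set t_period         62.5 ;# SerClk / ADC_clk period

# Latest the data becomes valid after ADC_clk (clock out + data back, both max)
set in_max [expr {$t_clk2serclk_max + $t_dat_valid_max + 2*$t_board_max}]  ;# 21.0

# Earliest the data goes invalid: falling edge (15ns after the earliest rising
# edge, which is 0ns after ADC_clk) + hold + board delay, measured back from
# the next clock edge
set t_invalid_min [expr {15.0 + 0.0 + $t_dat_hold_min + 2*$t_board_min}]   ;# 35.6
set in_min [expr {-($t_period - $t_invalid_min)}]                          ;# -26.9
```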

 

So the constraints are

 

set_input_delay -clock my_ADC_clk -max 21 [get_ports SerDat]

set_input_delay -clock my_ADC_clk -min -26.9 [get_ports SerDat]

 

So now the constraints are correct. That being said, the interface can't possibly work with a rising-edge capture; the valid window is nowhere near the rising edge of the clock, and hence it will fail. However, if you capture the data with the falling edge of the internal clock (which requires no change to the timing constraints), it can work.

 

The timing of FSO is more difficult, since the spec doesn't give you enough information. It says the maximum SerClk-to-FSO delay is 5ns, and the "Typical" number is 1ns. As a rigorous ASIC and FPGA designer, I ignore "Typical" numbers - they are not a promise, and hence are not meaningful. In this case you will just have to assume that FSO is no worse than SerDat (although if I were buying these in huge quantities, I would complain to the manufacturer that "Typical" numbers are meaningless - tell me the min and max... ADC vendors are particularly bad at this).
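Putting option #2 together, the constraint set might look like the sketch below. The port and cell names (ADC_clk, SerDat, FSO, my_ODDR) are assumptions carried over from this discussion - substitute the names from your own design. FSO gets the same input delays as SerDat, per the assumption above.

```tcl
## Option #2: capture SerDat/FSO on the internal 16MHz MMCM clock (sketch)

# The forwarded clock, defined on the port driven by the ODDR
create_generated_clock -name my_ADC_clk -source [get_pins my_ODDR/C] \
    -divide_by 1 [get_ports ADC_clk]

# SerDat valid window relative to my_ADC_clk: 21ns to 35.6ns (i.e. -26.9ns)
set_input_delay -clock my_ADC_clk -max 21    [get_ports SerDat]
set_input_delay -clock my_ADC_clk -min -26.9 [get_ports SerDat]

# FSO is also data; assume it is no worse than SerDat
set_input_delay -clock my_ADC_clk -max 21    [get_ports FSO]
set_input_delay -clock my_ADC_clk -min -26.9 [get_ports FSO]
```

Remember that with these constraints the capture flip-flops should use the falling edge of the internal 16MHz clock, as discussed above.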

 

Now let's look at option #1. This is a completely different architecture.

 

Everything up to the ODDR is the same. However, in this case, we now ignore the internal 16MHz clock and focus only on SerClk; we do not try to specify any relationship between these two - we treat them as asynchronous (well, technically mesochronous).

 

In this architecture, SerClk must be on a clock capable I/O and either drive a BUFG, a BUFH or a BUFR directly. The output of this buffer is then used to clock the capture interface of the ADC.

 

Regardless of which buffer we use, we define the clock on SerClk as a primary clock

 

create_clock -name my_SerClk -period 62.5 [get_ports SerClk]

 

(We can get fancy with the duty cycle later, if needed)

 

The data becomes valid as late as 5ns after the clock and remains valid until at least 25ns + 15ns = 40ns (tCPW + tDH) after the clock, which is 22.5ns before the next clock edge (-22.5ns). Since the board delay is 0.3ns to 0.5ns (and now only the one-way difference matters), we add 0.2ns to the max and subtract 0.2ns from the min, resulting in 5.2ns and -22.7ns:

 

set_input_delay -clock my_SerClk -max 5.2 [get_ports SerDat]

set_input_delay -clock my_SerClk -min -22.7 [get_ports SerDat]

 

Again, this will probably work with a falling-edge capture. And again, FSO is handled the same way.

 

But here, we need to worry more about the duty cycle. The duty cycle of the internal FPGA clock is guaranteed to be pretty close to 50/50, and the tools understand what it is anyway, so this is fine. But SerClk can have a very odd duty cycle; they only promise 25ns high and 25ns low (out of the 62.5ns period). This is messy - let's go back to the clock.

 

The clock falling edge would nominally be at 31.25ns, but the datasheet says it can come as early as 25ns (corresponding to a high time of tCPW), which is 6.25ns early. Conversely, a low time of 25ns means it can come 6.25ns late. So let's define that.

 

set_clock_latency -source -fall -min -6.25 [get_clocks my_SerClk]

set_clock_latency -source -fall -max 6.25 [get_clocks my_SerClk]

 

Now the clock is correctly defined.

 

With this, we could go back and get a more precise definition of the set_input_delay values relative to the falling edge, but that is more complicated...

 

BUT, when defined this way, we have not defined (and cannot define, and should not define) any relationship between my_SerClk and the internal clock. We must now assume they are mesochronous - i.e. same frequency, but no known phase relationship. Of course, you could continue to operate on SerClk for the rest of your datapath. But if you want to transfer the data back to the internal 16MHz clock, you must use a clock crossing FIFO; no other clock domain crossing circuit can handle a new data word on every destination clock period. A small FIFO will do (using distributed RAMs, for example), and there are ways of handling the FIFO so that you don't need to continually check the empty flag (see this post on managing the empty flag)

 

If you choose to transfer it to the 128MHz domain, then you have more flexibility, but you still need a clock domain crossing circuit - you might be able to get away with one of the XPM_CDC macros. But even here, the XPM_CDC macro is clocked with legal clocks on both sides: SerClk (after the BUFG/BUFH/BUFR) on one side and the 128MHz clock (after a BUFG, probably) on the other.
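Putting option #1 together (again a sketch; SerClk, SerDat, and FSO are assumed port names, and the internal clock names come out of your MMCM, so substitute your own). The set_clock_groups line is my addition: it is only safe once every SerClk-to-internal transfer goes through a proper CDC circuit (the FIFO or an XPM_CDC macro), at which point it tells the tool to stop timing those crossings.

```tcl
## Option #1: treat SerClk as a primary clock (sketch)

create_clock -name my_SerClk -period 62.5 [get_ports SerClk]

# Model the worst-case duty cycle: the falling edge may arrive 6.25ns early
# (25ns high time) or 6.25ns late (25ns low time)
set_clock_latency -source -fall -min -6.25 [get_clocks my_SerClk]
set_clock_latency -source -fall -max  6.25 [get_clocks my_SerClk]

# Data and frame sync relative to the rising edge of my_SerClk
# (5 + 0.2 = 5.2 ; -(62.5 - (25 + 15)) - 0.2 = -22.7)
set_input_delay -clock my_SerClk -max 5.2   [get_ports {SerDat FSO}]
set_input_delay -clock my_SerClk -min -22.7 [get_ports {SerDat FSO}]

# ONLY after every crossing is protected by a FIFO / XPM_CDC: declare the
# domains asynchronous so the tool does not try to time the crossings
set_clock_groups -asynchronous \
    -group [get_clocks my_SerClk] \
    -group [get_clocks {clk_16MHz clk_128MHz}]
```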

 

Avrum


Re: Serial ADC timing constraints


This is a very odd way to deal with the return data from an ADC. Normally, the SerClk is used as a "real" clock which is then used to sample the SerData and FSO.

 

What it looks like you are trying to do here is oversample the return interface. At these frequencies (really slow), this should also be possible.

 

But first - why are you doing it this way? While the interface is slow enough to oversample, it is also trivially easy to meet timing "conventionally". Why are you trying to oversample it?

 

If there is a reason to oversample it (and I can think of a few), then which XPM_CDC are you using, and how is it configured and used? There are multiple ways of trying to do this, but the key questions are:

  - are you using SerClk as a "clock" to the XPM_CDC

      - i.e. are you trying to pretend this is a "clock to clock transfer"

  - are you trying to get the XPM to oversample the SerClk to figure out when to sample FSO and SerData

 

Neither of the two solutions I can see would have the SerClk and FSO go to the XPM_CDC, but not the SerData.

 

So before we start looking at constraints, you need to tell us exactly how you are planning to capture this and/or why you are using this method rather than a "conventional" static capture.

 

Avrum


Re: Serial ADC timing constraints


The reason we have the XPM_CDC there is that we could not figure out how to tell Vivado the relationship between SerClk, SerData, and FSO. We were using SerClk directly as a clock for the data, and FSO directly as a clock to count samples. However, the tools kept giving us timing errors that seemed to indicate they didn't know the relationship between SerClk, FSO, and the data. When we synchronize the clocks, the timing issues are handled better by the tools.

 

We are not trying to oversample - just to get the tools to time things correctly. We would love to go back to using the signals directly, as that makes things easier, but we don't know how to handle the timing warnings and errors we get.

 

Our current constraint file looks something like the one below. (We have actually tried several things.)

 

# The below line is commented out because the Clock Generation Module seems to create its own timing constraint for the input clock
#create_clock -period 41.667 -name INPUT_CLK -waveform {0.000 20.834} [get_ports i_ClockIn]

create_clock -period 1000.000 -name ADCFso -waveform {0.000 62.500} [get_ports i_ADCFso]
create_clock -period 62.500 -name ADCSerClk -waveform {0.000 31.250} [get_ports i_ADCSerClk]
set_clock_latency -source 15.0 [get_clocks ADCSerClk]
set_clock_latency -source 20.0 [get_clocks ADCFso]

 

create_generated_clock -name ADCSerClkGenSync -source [get_pins {xpm_cdc_SerClk/syncstages_ff_reg[0]/D}] -multiply_by 1 [get_pins {xpm_cdc_SerClk/syncstages_ff_reg[1]/Q}]


Re: Serial ADC timing constraints


avrumw, thank you for your detailed post. I will go over this tomorrow, try your suggestions, and let you know if we hit any problems. I think your details are just what we were looking for.


Re: Serial ADC timing constraints


Using the examples given by avrumw, we are much further along the path now. Thank you again for your help!
