18,703 Views
Registered: ‎07-26-2013

How to properly synchronize ADC data in Virtex 6 using OFFSET IN constraint and/or IDELAY?

Hi,

 

After spending the last couple of days trying to get the parallel LVDS interface of an ADC to synchronize properly within a Virtex 6 FPGA, I'm now at a point where I need some help or advice. Even though this issue comes up frequently, I was not able to get a satisfying result with the help of the other threads addressing the same problem.

 

My setup:

 

- Virtex 6 device (195T)

- 12 Bit AD9230-210 ADC (Analog Devices) connected via parallel LVDS

- ADC is clocked with a 200MHz output of an MMCM (I'm fully aware of the downsides of this setup but unfortunately, I have no possibility to clock the ADC with an external oscillator)

- ADC is running with SDR

 

The ADC can output a pseudo-noise sequence (PN9) with which I can validate proper synchronization of all parallel data bits (using ChipScope and Matlab). Up to now, I have not been able to get all 12 bits properly synchronized within the FPGA at the same time.

 

The best result was achieved by declaring OFFSET IN constraints on the data pins and the DCO signal of the ADC. The DCO signal is the feedback clock, such that this is a source-synchronous design. My constraints are declared as follows:

 


 

# DCO signal from ADC (200MHz)
NET "ak22" TNM_NET = "ak22";
TIMESPEC TS_ak22 = PERIOD "ak22" 5 ns LOW 50%;
NET "ak22" CLOCK_DEDICATED_ROUTE = FALSE;

 

# group parallel data bits of ADC
NET "af20" TNM = rf_adc_data;
NET "ag22" TNM = rf_adc_data;
NET "aj20" TNM = rf_adc_data;
NET "am21" TNM = rf_adc_data;
NET "ap20" TNM = rf_adc_data;
NET "ae21" TNM = rf_adc_data;
NET "am20" TNM = rf_adc_data;
NET "ac20" TNM = rf_adc_data;
NET "an19" TNM = rf_adc_data;
NET "af19" TNM = rf_adc_data;
NET "ap19" TNM = rf_adc_data;
NET "ak19" TNM = rf_adc_data;

 

# offset in constraint between data groups and DCO signals
TIMEGRP "rf_adc_data" OFFSET = IN -0.4 ns VALID 4.8 ns BEFORE "ak22" RISING;


 

The cryptic combinations of letters and numbers are the pin names as shown within the FPGA Editor. The CLOCK_DEDICATED_ROUTE constraint was necessary because the DCO signal is not connected to a BUFIO-capable pin. I'm routing the DCO signal directly through a BUFG and then capturing the data with a 2-stage register. The clock domain crossing towards the internal global clock is done with a DRAM-FIFO.

 

My first question relates to the OFFSET IN values. The datasheet of the ADC specifies a 'Data to DCO Skew' of -0.3 to 0.5 ns and a 'Rise/Fall Time' of 0.2 ns. Is the -0.4 ns = -0.5 ns + 0.2/2 ns correct? The VALID time is simply 5 ns (period at 200 MHz) - 0.2 ns; correct me if I'm wrong.
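For what it's worth, the arithmetic can be checked mechanically. A quick Python sketch of the numbers above; whether -0.4 ns is the right reading of the skew spec is exactly what I'm asking:

```python
# Back-of-the-envelope check of the OFFSET IN numbers above
# (datasheet values as quoted in this post; the -0.4 ns interpretation
# is the open question, not a confirmed answer).
period_ns = 5.0            # DCO period at 200 MHz
rise_fall_ns = 0.2         # datasheet 'Rise/Fall Time'
skew_ns = (-0.3, 0.5)      # datasheet 'Data to DCO Skew' range

offset_ns = -0.5 + rise_fall_ns / 2   # -0.4 ns, as used in the constraint
valid_ns = period_ns - rise_fall_ns   # 4.8 ns, as used in the constraint
```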

I suppose that PAR takes these values into account, because playing with them results in different delay values on the respective data signals (used FPGA Editor to find out).

However, there still exist some bit errors that I must get rid of.

 

What is a suitable way to enable proper synchronization of the ADC's data? Must I additionally insert variable IDELAY elements on the clock and data signals? How would the OFFSET IN constraint change if I do so?

 

Eventually, I must properly synchronize six of these ADCs, which I assume is not a big deal once one of them works correctly.

 

Maybe someone has suggestions on this issue. I'm really new to FPGA-related topics, so it's very likely that I'm missing important information.

 

Regards,

Christian

9 Replies
Scholar
Registered: ‎02-27-2008

Christian,

 

Are all the traces to the ADC equal delay (length)?

 

Are all the output IO pins used in the same bank?

 

If you have unequal traces, or pins not in the same bank, I would respin the pcb to fix that.

 

Use of a non-clock route for a clock means that the delay is not really known for the constraints, so that is not a good thing.  As you cannot put it on a clock-capable pin, you will see different behavior as the chip process varies (from unit to unit).  Bad practice.  I would respin the pcb to place the feedback clock on the proper pin.  Or, I would avoid using the feedback clock, if at all possible.

 

 

Austin Lesea
Principal Engineer
Xilinx San Jose
Registered: ‎07-26-2013

Hi Austin,

 

thanks for your fast reply.

 

Unfortunately, a respin of the pcb is not an option for now.


Are all the traces to the ADC equal delay (length)?


 

I will try to find out whether the traces are of equal length.

 


Are all the output IO pins used in the same bank?


 

What do you mean by "output IO pins"? I would have considered them "input pins".

Anyway, they are all used in the same bank.

 


Use of a non clock route for a clock means that the delay is not known really known for the constraints, so that is not a good thing.  As you cannot put it on a clock capable pin, you will have different behavior if the chip process varies (from unit to unit).  Bad practice.  I would respin the pcb to place the feedback clock on the proper pin.  Or, I would avoid using the feedback clock, if at all possible.


 

Following your statement, PAR will not take the delay from the IO pin to the BUFG into account. Is it possible to check the value using the FPGA Editor and then incorporate it into the OFFSET IN constraint?

 

Before working on the constraints approach, I was already testing a design where the data from the ADC is delayed by an adjustable amount (utilizing IODELAYE1) and then clocked directly by the internal 200 MHz clock (MMCM output), but this didn't result in proper synchronization.

 

From my point of view, using the constraints already works far better than with the delay elements only. All I need is some fine tuning in order to get rid of the bit errors, which do occur on some bits only (8 out of 12 bits are working).

 

Regards,

Christian

Scholar
Registered: ‎02-27-2008

c,

 

The non-clock route worst case delay is taken into account.  But, that worst case delay will almost never be present.


As opposed to clock paths, which have their delay engineered and matched, general interconnect, when used for a clock, will lead to eventual pain (it will not be as slow, or as fast, as desired, almost guaranteed).

 

As you note, yes they are inputs (from the ADC, to the FPGA)....

 

The same bank implies that at least the delay between each input is likely to be small, as opposed to if the pin is in another bank, where the skew (difference in delays) is larger.

 

"Can't respin the board" means you will have a 'hack' -- you will make it work for this board, but making it work for the next board may require 'hacking' each design until they each work.

 

Not dealing with the inevitable will only lengthen the misery.  Been there, done that.  Sometimes the boss says "no changes allowed" but that is equivalent to "better start circulating your resume" (as you won't be in business if you are not allowed to correct mistakes).

 

Engineers who never make mistakes, have never made a product.  The best engineers never make the same mistake again, ever.

 

One of the best engineers I know used to brag that he could make mistakes faster than anyone else (hence, he was always on time, and successful with his projects).  So, it is not about avoiding all mistakes.  This is totally lost on managers that do not understand what engineering is all about.  Finding, and fixing, mistakes, even if you are Boeing, Airbus, or Mercedes, is key to success.  (Note that NONE of them EVER makes the same mistake twice.)

 

 

 

 

 

Austin Lesea
Principal Engineer
Xilinx San Jose
Explorer
Registered: ‎02-27-2008

I would suggest working more on your first approach, as I've used it successfully in other applications. Ignore the DCO from the ADC, and just use the 200 MHz clock from the MMCM (no FIFO necessary). Since the IODELAY can only get you 2.5 ns of delay, you'll have to use an IDDR input register to cover the whole data period. Command the ADC to output alternating ones and zeroes (looks like test_io register = 0x07). Monitor the IDDR outputs every other clock, and advance the IODELAY elements from 0 to 31. You should be able to build a 64-entry table of the IDDR outputs for each ADC data pin. Find the transition edge for each input, and use a delay 2.5 ns away from that. If your board delays aren't too mismatched, it should be the same transition (rising/falling) for all of the inputs, and they shouldn't be too different.
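That sweep can be sketched as follows. This is a hypothetical Python model of the logic, not FPGA code: the 78 ps tap size assumes an IDELAYCTRL reference clock of 200 MHz, and the simulated table stands in for real IDDR samples:

```python
# Hypothetical model of the calibration loop: sweep the 32 IODELAY taps,
# record the IDDR samples into a 64-entry table (taps 0-31 from the
# rising-edge output, then taps 0-31 from the falling-edge output),
# find the transition, and sample half a bit period away from it.
TAP_PS = 78            # approx. IODELAYE1 tap size with a 200 MHz refclk
HALF_PERIOD_PS = 2500  # half of the 5 ns data period at 200 MHz SDR

def pick_entry(table):
    """Return (edge_index, chosen_index) for a 64-entry sample table."""
    # first index where the sampled value flips
    edge = next(i for i in range(1, len(table)) if table[i] != table[i - 1])
    taps_per_half = HALF_PERIOD_PS // TAP_PS   # ~32 taps = 2.5 ns
    return edge, (edge + taps_per_half) % len(table)

# Simulated sweep where the sampled value flips at entry 10:
table = [0] * 10 + [1] * 54
edge, chosen = pick_entry(table)
```

Entries 0-31 of the table would come from one IDDR output and 32-63 from the other, so the chosen index tells you both the tap setting and which IDDR output to take.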

Ideally, you'd have the FPGA perform this calibration at every start-up, to eliminate variation from re-compiles or device to device. For a quick and dirty prototype, you could hard-code those delays.

-Greg
Registered: ‎07-26-2013

Hi Austin,


austin wrote:

 

"Can't respin the board" means you will have a 'hack' -- you will make it work for this board, but making it work for the next board may require 'hacking' each design until they each work.


 

 

Thanks for your suggestions on this issue. The problem here is that we are short on time, and since I'm working on a prototype, I would be perfectly happy with a 'hack'.

 

Anyway, I will keep your advice in mind regarding engineers' mistakes and how to learn from them. I fully agree with your statement, but as mentioned before, a respin of the pcb is not feasible under the given conditions.


gbredthauer wrote:
Since the IODELAY can only get you 2.5 ns of delay, you'll have to use an IDDR input register to cover the whole data period. Command the ADC to output alternating ones and zeroes (looks like test_io register = 0x07). Monitor the IDDR outputs every other clock, and advance the IODELAY elements from 0 to 31. You should be able to build a 64-entry table of the IDDR outputs for each ADC data pin. Find the transition edge for each input, and use a delay 2.5 ns away from that.

Greg,

 

I have already used IODELAY and the alternating ones and zeros to find the correct delay values, though in a slightly different manner. I have added some logic that checks whether the alternating pattern is correctly sampled within the FPGA. I can then read out a boolean which indicates correct/incorrect sampling. However, even when the zero-one pattern was properly sampled (verified with ChipScope), a simple sine wave reveals bit errors, as does the pseudo-noise sequence (PN9). Apparently, the zero-one pattern doesn't work well with this design.

 

The idea of using an IDDR sounds promising. For my understanding: I can advance the IODELAY elements from 0 to 31 and the 64-entry table will be filled by the outputs Q1 and Q2 (Q1 for the first 32 entries and Q2 for the last 32 entries), correct me if I'm wrong. Eventually, I want to synchronize all of the data to the rising edge of the internal 200MHz clock. Since the ADC works with SDR, do I need to run the IDDR in SAME_EDGE mode in order to achieve this goal?

 

Regards,

Christian

Explorer
Registered: ‎02-27-2008

Correct, you'd want to use SAME_EDGE for this application.  Once you've gotten the delays correct, the next step would be a slow sine wave or ramp (too bad the ADC you're using doesn't have the typical ramp test pattern).  Record a chunk of data using chipscope or similar.  At this point you can check to see if any of the bits are delayed relative to the others by a full clock cycle.  I would not expect that to happen, since a 5 ns delta is pretty large, but if they are, you can add flip-flops to align everything again.  Good luck!
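To sketch the check I mean (illustrative Python, not FPGA code, and all names made up): flag any bit column of the captured data that matches the expected data one clock late; those are the bits that would need an extra flip-flop stage.

```python
# Flag bit columns of 'captured' that match 'expected' one clock late.
def misaligned_bits(expected, captured, width=12):
    late = []
    for b in range(width):
        exp_col = [(w >> b) & 1 for w in expected]
        cap_col = [(w >> b) & 1 for w in captured]
        # one-clock-late match, excluding constant columns
        if cap_col[1:] == exp_col[:-1] and cap_col[1:] != exp_col[1:]:
            late.append(b)
    return late

# Example: a slow ramp where bit 2 arrives one clock late.
expected = list(range(16))
captured = []
for i, w in enumerate(expected):
    late_bit2 = ((expected[i - 1] >> 2) & 1) if i else 0
    captured.append((w & ~(1 << 2)) | (late_bit2 << 2))
```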

 

-Greg

Registered: ‎07-26-2013

Hi,

 

Just for your information: the problem with the ADC synchronization was caused by incorrect termination of the digital lines from the ADC to the FPGA. I've added DIFF_TERM = "TRUE" to the respective pins in the ucf file. With the internal 100 ohm termination enabled, the ADC data can be synchronized as expected using IODELAY elements. Greg, thanks for your suggestion of using IDDR registers; it was very helpful.
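For anyone finding this later, the fix amounts to one extra line per LVDS input in the ucf (pin names taken from the constraints in my first post; the remaining data pins get the same treatment):

```
# enable the internal 100 ohm differential termination
NET "af20" DIFF_TERM = "TRUE";
NET "ag22" DIFF_TERM = "TRUE";
# DCO input
NET "ak22" DIFF_TERM = "TRUE";
```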

 

Christian

Adventurer
Registered: ‎02-18-2014

Christian,

 

Hi, I am working with data from a parallel ADC from TI (ADS54RF63), sending it to a Virtex 6 ML605 board, and then to a DAC from TI (DAC34SH84). I have quite a few fundamental issues with the implementation. I am more of a novice at working with Xilinx boards.

 

It would be extremely helpful if I could get in touch with you over mail, so as to get a few issues with respect to implementation solved.

 

Thanks a lot in advance,

 

Basil.

badaboss@gmail.com

Adventurer
Registered: ‎02-18-2014

Hi,

 

Sorry to repost again, but I badly need some help with my hardware setup involving a Virtex 6 ML605 board, a parallel 12-bit LVDS ADC (ADS54RF63), and a 16-bit DAC board (DAC34SH84).

 

-- I am using the crystal on the DAC board to generate a 250 MHz clock for the DAC and the FPGA.

-- I am using the MMCM, then a BUFG, then an OBUFDS to send out a differential clock through the J57 pin (USER_GPIO) on the ML605.

-- I am using one leg of this differential pair to clock my ADC board, which internally converts the single-ended clock to differential.

 

But, 

1. The voltage levels from the J57 pin on the FPGA are less than the minimum required by the ADC board. Instead of 500 mVp-p, I obtain only 320 mVp-p. Any idea how I could improve this?

 

2. I am using a latch inside the FPGA to take in the ADC digital data; the latched output is sent directly to the DAC input. This latch is clocked by the FPGA clock (generated from the DAC board, 250 MHz). As of now, this system is not working, and I assume this is because the clock levels are too low. But even then, I am quite skeptical about this design working. Any inputs/suggestions would be highly valued.

 

Thanks a lot,
Basil.
