dmiller
Visitor
Registered: 03-08-2017

Achieving full SelectIO performance with source-synchronous signals

I originally posted this in the wrong forum and I can't edit it -- probably because I am too new.  I apologize if you are seeing this again. 

 

Hello all --

 

I have a design where I need to read in source-synchronous DDR data with a 500 MHz clock, so a 1 Gb/s bit rate per pin.  The clock arrives centered on the data.  I have played with the timing tools, and it looks to me like there is no way to do this with static delays.  I did some searching on the forums and found some posts that seemed to confirm my findings.
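To put numbers on it, here is my back-of-the-envelope (assuming the standard 200 MHz IDELAYCTRL reference clock):

```latex
t_{\mathrm{bit}} = \frac{1}{2 \times 500~\mathrm{MHz}} = 1~\mathrm{ns},
\qquad
t_{\mathrm{tap}} = \frac{1}{2 \times 32 \times 200~\mathrm{MHz}} \approx 78~\mathrm{ps}
```

So the whole 1 ns bit cell spans only about 13 IDELAY taps, which doesn't leave much room for tap uncertainty and drift over voltage and temperature.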

 

Assuming I am right that I will need dynamic adjustment of SelectIO timing, are there any examples that someone can point me to that address reading high-speed source-synchronous DDR data?

 

We are using an XC7Z020-2 FPGA and an Analog Devices HMCAD1511 ADC in the system.

 

Thanks

 

Dave

austin
Scholar
Registered: 02-27-2008

dmiller
Visitor
Registered: 03-08-2017

Thanks for the link.  They have a fast DDR DAC, but that's outputs, not inputs.  The DDR ADC they have is 250 MSPS, so a 125 MHz clock; I think that can be done with static timing, so it might not be what I am looking for.  I'm looking for an example where someone is using SelectIO and dynamic timing adjustment to capture data at (or at least near) the full rated data rate.

 

I have seen notional descriptions of how to do this, for example using the PLL to precisely adjust clock delays.  I am picturing a scheme where I adjust the clock delay using a PLL and watch the frame signal transition.  With 360 degrees of phase adjustability, I should be able to watch a transition move completely across a bit.  By noting the phase where the transition first happens and the phase where it last happens at a given bit, and setting the phase to the middle of the two, I should be able to optimize the timing.  Is this similar to schemes that are being used, or am I on the wrong track here?
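To make that concrete, here is roughly what I am picturing, sketched in C as it might run on the Zynq PS.  None of these helpers are a real API: mmcm_phase_step() stands in for pulsing the MMCM's PSEN (with PSINCDEC high) through some fabric shim and waiting out its fixed latency, and frame_sample() stands in for reading back the frame (FCLK) bit captured in the IOB.

```c
#include <stdint.h>

/* Fine steps per output-clock period: 56 per VCO cycle times the CLKOUT
 * divider.  448 is a made-up example; set it from your MMCM configuration. */
#define STEPS_PER_PERIOD 448

extern void    mmcm_phase_step(void);  /* advance clock phase by one fine step   */
extern uint8_t frame_sample(void);     /* captured frame bit; in practice, read
                                          many times and require a stable value */

void center_clock_on_frame(void)
{
    int first_flip = -1, last_flip = -1;
    uint8_t prev = frame_sample();

    /* Sweep one full period and record where the captured frame bit flips;
     * those two phases bracket the stable part of the bit cell. */
    for (int step = 1; step <= STEPS_PER_PERIOD; step++) {
        mmcm_phase_step();
        uint8_t cur = frame_sample();
        if (cur != prev) {
            if (first_flip < 0)
                first_flip = step;
            last_flip = step;
        }
        prev = cur;
    }

    /* A full sweep wraps back to the starting phase, so stepping forward to
     * the midpoint of the two flips lands in the middle of the bit. */
    if (first_flip >= 0) {
        int target = (first_flip + last_flip) / 2;
        for (int step = 0; step < target; step++)
            mmcm_phase_step();
    }
}
```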

 

Thanks

 

 

gszakacs
Professor
Registered: 08-14-2007

I've used pretty much the method you describe.  Some notes:

 

The number of fine phase steps per VCO cycle is fixed (56 on 7-series, i.e. each step is 1/56 of the VCO period).  That means the number of steps per cycle of the MMCM output clock depends on the output divider.

 

The time for the MMCM to acknowledge a fine phase step is fixed.  This means you can ignore the acknowledge signal (PSDONE) and just provide the increment every N clock cycles.  I seem to remember using 16 for N.

 

The clock you use for the state machine that increments (and decrements) the phase does not need to be related to the clock you're adjusting.  I seem to recall a maximum of 200 MHz for this clock; check your data sheet.

 

Using fine phase delay, the phase wraps back to 0 at 360 degrees, so it isn't necessary to reset or back up the phase.  In my design I ran the phase through two complete cycles, which made it easy to find the largest data "eye" even when it wraps through 0 degrees.  I used only phase increment.  When the full 2 x 360 degrees was finished, I continued to increment the phase to the center of the best "eye" found.  This process is easiest if your source can put out a standard pattern during the sweep; if I'm not mistaken, your ADC can do this.  In my design I was using an image sensor that had a test pattern of alternating all ones and all zeros.
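In pseudo-C the sweep amounted to something like this (a minimal sketch; the helper names are invented for the example: pattern_ok() stands for "the deserialized test pattern matched at this phase", and mmcm_phase_step() is the every-N-clocks PSEN pulse described above):

```c
#include <stdint.h>

#define STEPS_PER_PERIOD 448                /* 56 x CLKOUT divide; depends on your MMCM settings  */
#define SWEEP_STEPS (2 * STEPS_PER_PERIOD)  /* two revolutions so an eye wrapping 0 deg shows whole */

extern void mmcm_phase_step(void);          /* PSINCDEC = 1, pulse PSEN, wait N clocks */
extern int  pattern_ok(void);               /* test pattern received correctly here?   */

void sweep_and_center(void)
{
    int best_start = 0, best_len = 0;       /* widest run of good phases seen so far */
    int run_start = 0, run_len = 0;

    for (int step = 0; step < SWEEP_STEPS; step++) {
        if (pattern_ok()) {
            if (run_len == 0)
                run_start = step;
            if (++run_len > best_len) {
                best_len = run_len;
                best_start = run_start;
            }
        } else {
            run_len = 0;
        }
        mmcm_phase_step();
    }

    /* Two full revolutions return to the starting phase.  Keep incrementing
     * (never decrement) until we sit in the middle of the widest eye. */
    int target = (best_start + best_len / 2) % STEPS_PER_PERIOD;
    for (int step = 0; step < target; step++)
        mmcm_phase_step();
}
```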

-- Gabor
beged
Visitor
Registered: 03-10-2017

Hi Gabor, whereabouts in the big wide world are you?  Regards, /Bertalan
dmiller
Visitor
Registered: 03-08-2017

Thanks Gabor.  I appreciate the information.

 

(Are you halfway around the world?)

morgan198510
Voyager
Registered: 04-21-2014

Overly simplified, but I think you'll get the idea (a rough sketch follows the list):

1.  After configuring the FPGA, put your ADC into a test-pattern mode.

2.  Control your SelectIO IDELAY taps (and framing if necessary).

3.  See which delay settings result in good data.

4.  Select the center of that range of "good" delays.

5.  If necessary, characterize the delay shift needed versus die temperature, monitor the temperature via the XADC, and change the number of taps dynamically.

6.  In any event, test over the required temperature range.
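A minimal sketch of steps 1 through 4, with placeholder names rather than a real driver API (idelay_set() stands in for loading the 5-bit tap count into the data-lane IDELAYE2 primitives, pattern_ok() for checking the ADC test pattern after the new setting takes effect):

```c
#include <stdint.h>

#define NUM_TAPS 32u   /* IDELAYE2 has 32 taps, about 78 ps each with a 200 MHz reference */

extern void adc_enable_test_pattern(void);   /* step 1 */
extern void idelay_set(uint32_t taps);       /* step 2 */
extern int  pattern_ok(void);

int calibrate_idelay(void)
{
    int first_good = -1, last_good = -1;

    adc_enable_test_pattern();

    /* Step 3: walk every tap setting and record the window of good data. */
    for (uint32_t t = 0; t < NUM_TAPS; t++) {
        idelay_set(t);
        if (pattern_ok()) {
            if (first_good < 0)
                first_good = (int)t;
            last_good = (int)t;
        }
    }

    if (first_good < 0)
        return -1;   /* no working tap: clock-phase adjustment needed too */

    /* Step 4: park in the center of the good window. */
    idelay_set((uint32_t)((first_good + last_good) / 2));
    return 0;
}
```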

 

We have developed something very similar three or four times, without needing the XADC approach.  I can't give away the design, but PM me if this is something you'd want professional help with.

 

Alternatively, generate a MIG in a dummy project, e.g. for a 7-series dev card, and take a look at the read/write leveling controllers in the source code.  You'll have to adapt the idea to your application, but the approach is there to study.

***Many of us who help you are just FPGA enthusiasts, and not Xilinx employees. If you receive help, and give kudos (star), you're likely to continue receiving help in the future. If you get a solution, please mark it as a solution.***
gszakacs
Professor
Registered: 08-14-2007

We're all halfway around the world from somewhere ;-)

 

I'm in New Hampshire just north of Boston.

-- Gabor