Scholar
1,175 Views
Registered: 08-01-2012

Hold failures on DDR input paths after migration from 2017.2 to 2018.2.1

I am migrating a design (with two variants) from 2017.2 to 2018.2.1.

In 2017.2 I never had any issues with the following, but after migration I regularly get hold violations on these paths. The XDC for them is:

 

#eye jitter
set_property CLOCK_BUFFER_TYPE NONE [get_ports MEZZ_EYE_CLK_P]

set_input_delay -clock [get_clocks MEZZ_EYE_CLK_P] -clock_fall -min -add_delay 2.600 [get_ports {MEZZ_EYE_N_LK[*]}]
set_input_delay -clock [get_clocks MEZZ_EYE_CLK_P] -clock_fall -max -add_delay 4.870 [get_ports {MEZZ_EYE_N_LK[*]}]
set_input_delay -clock [get_clocks MEZZ_EYE_CLK_P] -min -add_delay 2.600 [get_ports {MEZZ_EYE_N_LK[*]}]
set_input_delay -clock [get_clocks MEZZ_EYE_CLK_P] -max -add_delay 4.870 [get_ports {MEZZ_EYE_N_LK[*]}]

set_property CLOCK_BUFFER_TYPE NONE [get_ports MEZZ_JIT_TFT_CLKP]

#ADC generating DDR data
#ADC DCO Clock to Data Skew: 0.3->0.7ns
#ADC DCO Skew Register set to 9F = clock delay of 2.8ns
#ADC 1/2 clock(76.25MHz) = 6.557377ns
#ADC 1/4 clock(76.25MHz) = 3.2786885ns
#Smallest skew: 3.1ns and (6.557377ns-3.1ns)=3.457ns
#Largest skew: 3.5ns and (6.557377ns-3.5ns)=3.057ns
#Track Propagation delay (+/-0.00332m)/(146385010m/s)=0.0226ns
#Worst case: 3.1-0.0226=3.077 and 3.057-0.0226=3.034

set_input_delay -clock [get_clocks {MEZZ_JIT_TFT_CLKP}] -min -add_delay 3.034 [get_ports {MEZZ_JIT_TFT_P[*]}]
set_input_delay -clock [get_clocks {MEZZ_JIT_TFT_CLKP}] -max -add_delay 3.48 [get_ports {MEZZ_JIT_TFT_P[*]}]
set_input_delay -clock [get_clocks {MEZZ_JIT_TFT_CLKP}] -clock_fall -min -add_delay 3.034 [get_ports {MEZZ_JIT_TFT_P[*]}]
set_input_delay -clock [get_clocks {MEZZ_JIT_TFT_CLKP}] -clock_fall -max -add_delay 3.48 [get_ports {MEZZ_JIT_TFT_P[*]}]

set_input_delay -clock [get_clocks {MEZZ_JIT_TFT_CLKP}] -min -add_delay 3.034 [get_ports {MEZZ_JIT_P_LK[*]}]
set_input_delay -clock [get_clocks {MEZZ_JIT_TFT_CLKP}] -max -add_delay 3.48 [get_ports {MEZZ_JIT_P_LK[*]}]
set_input_delay -clock [get_clocks {MEZZ_JIT_TFT_CLKP}] -clock_fall -min -add_delay 3.034 [get_ports {MEZZ_JIT_P_LK[*]}]
set_input_delay -clock [get_clocks {MEZZ_JIT_TFT_CLKP}] -clock_fall -max -add_delay 3.48 [get_ports {MEZZ_JIT_P_LK[*]}]
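
For reference, a quick way to cross-check what the tools actually applied to these ports is report_timing from the Tcl console (a minimal sketch using standard Vivado options; the -name argument is arbitrary):

# Sanity check: report both min and max analysis from the DDR data ports
report_timing -from [get_ports {MEZZ_JIT_TFT_P[*]}] -delay_type min_max -max_paths 2 -name ddr_in_check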

The hold failure appears to occur because the tool places the registers following the DDR registers some distance away.

 

[Attachments: scematic.png, long.PNG, distance.PNG]

 

In 2017.2 the paths never caused an issue; here is a layout example of the same registers:

 

[Attachment: 2017.2.PNG]

 

I have had some success putting the registers in a Pblock near the DDR registers (see the sketch below), but this seems rather excessive just to constrain a handful of registers to meet a hold requirement. Is there anything else I could or should be doing to prevent these hold failures from occurring in the first place?
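
A minimal sketch of that Pblock workaround (the cell name pattern and SLICE range are placeholders, not from the actual design):

# Hypothetical Pblock pinning the follow-on registers near the IOB column
create_pblock pblock_ddr_capture
add_cells_to_pblock [get_pblocks pblock_ddr_capture] [get_cells -hierarchical -filter {NAME =~ *jit_tft_capture_reg*}]
resize_pblock [get_pblocks pblock_ddr_capture] -add {SLICE_X0Y120:SLICE_X7Y179}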

 

Any input welcome.

8 Replies
Guide
1,145 Views
Registered: 01-23-2009

Re: Hold failures on DDR input paths after migration from 2017.2 to 2018.2.1

The hold failure appears to occur because the tool places the registers following the DDR registers some distance away.

 

The failures you are showing are not consistent with this conclusion. The failure is on the input capture flop - starting at the port and ending at the ISERDES (at least as far as I can see). If that is the case, then it is a clock architecture/constraint problem; the clock architecture you are using cannot capture the input given the constraints you have given it.

 

In fact, there should be no (or little) variability in these paths run to run (or even tool version to tool version) - all resources are fixed: the IOBs, the BUFGCE (at least within the handful of nearly identical ones in the bank), the IDELAY, and the ISERDES. The only way to make this interface work is to manually fix it - ensure that your IDELAYs are set properly. In UltraScale, the CLOCK_ROOT may make a difference, but my understanding is that the connection from the BUFGCE to the ISERDES in the same bank does not go through the CLOCK_ROOT (it stays in the IOB column).
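
As an illustration of that IDELAY check (standard Vivado Tcl; the exact delay property name depends on the family - e.g. IDELAY_VALUE on a 7 series IDELAYE2 versus DELAY_VALUE on an UltraScale IDELAYE3 - so verify with report_property first):

# List the IDELAY primitives and dump their current settings
foreach c [get_cells -hierarchical -filter {REF_NAME =~ IDELAY*}] {
    puts [get_property NAME $c]
    report_property $c
}
# Example only (hypothetical instance path): pin an UltraScale IDELAYE3 to a fixed delay in ps
# set_property DELAY_VALUE 300 [get_cells path/to/idelaye3_inst]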

 

So, I can't explain why there is a difference in tool versions - there shouldn't be.

 

But you will need to redesign/retune/debug this path as a normal timing failure...

 

It might help if you post the detailed timing path report for the failing hold check...

 

Avrum

Scholar
1,133 Views
Registered: 08-01-2012

Re: Hold failures on DDR input paths after migration from 2017.2 to 2018.2.1

Thanks @avrumw

I am confused. I'm not usually involved in the I/O side of things, but was handling the migration. We have hundreds of builds on 17.2 without an issue here, but in 18.2 it seems to be happening fairly regularly - and even more oddly, it's happening more often with one of the two variants.
Scholar
1,120 Views
Registered: 08-01-2012

Re: Hold failures on DDR input paths after migration from 2017.2 to 2018.2.1

Hi @avrumw

 

Here are the path reports for the same path in 2017.2 (pass)

 

[Attachment: path_2017.2.png]

 

and failing in 2018.2

[Attachment: path_2018.2.1.png]

 

I only assumed placement was affecting it, as it would sometimes be OK with different seeds. Hence my assumption that the tool was trying to fix hold or setup for the follow-on path and broke the hold for this path.

Guide
1,114 Views
Registered: 01-23-2009

Re: Hold failures on DDR input paths after migration from 2017.2 to 2018.2.1

So, first, these aren't the same report - the one from 2017.2 is the setup check (which passes) and the one from 2018.2.1 is the hold check (which fails). They are also not at the same timing corner. Ideally you should show us the (passing) hold check for 2017 that corresponds to the failing hold check in 2018 - look for the path that has the Path Type : Hold (Min at Slow Process Corner).
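
For example (a minimal report_timing sketch with assumed options; the port names are taken from the original constraints), the matching hold check can be pulled directly:

# Pull the worst hold (min-delay) paths with the clock network expanded
report_timing -from [get_ports {MEZZ_JIT_TFT_P[*]}] -delay_type min -path_type full_clock_expanded -max_paths 4 -name hold_check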

 

But the difference is still clear - in 2017.2 there is no clock buffer!

 

The clock input comes right from the IBUFDS to the ISERDES. This is an "illegal" (or at least highly discouraged) clocking scheme. This results in the clock being a "local clock" - running through fabric routing to get to the ISERDES. The tools should not have implemented it like this - they should have inserted a clock buffer.

 

In 2018.2.1, that is what the tool did - it inserted a BUFGCE. This is "correct". However, as you are seeing, the timing is significantly different between the design with the BUFGCE and the one without; the clock insertion time is probably significantly longer (but constant from run to run), which makes the hold time harder to meet but the setup time easier.

 

Due to this difference in the clocking architecture (with the first one being "illegal"), it is understandable why the timing is significantly different...
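
One way to confirm which clocking structure each build actually produced (a sketch using standard Vivado Tcl, not taken from the original reply):

# Any global buffers inserted on the capture clock will show up here
get_cells -hierarchical -filter {REF_NAME == BUFGCE}
# Walk each clock network from its primary source to its loads, buffers included
report_clock_networks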

 

Avrum

Scholar
1,107 Views
Registered: 08-01-2012

Re: Hold failures on DDR input paths after migration from 2017.2 to 2018.2.1

@avrumw

 

Hi, I noticed I'd run a setup check for the 17.2 report and replaced it, but likely after you replied (or while you were reading). Please can you have another look?

 

Thanks

Xilinx Employee
1,037 Views
Registered: 05-08-2012

Re: Hold failures on DDR input paths after migration from 2017.2 to 2018.2.1

Hi @richardhead. Are the text versions of these reports available? This would help with the comparison. When selecting the timing path in the IDE, you can right-click and choose the option to report timing from source to destination. You can then take the Tcl command from the console and replace the -name with -file.
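
As a concrete illustration of that swap (the port index and file name here are placeholders):

# Same hold report, written to a text file instead of a GUI tab
report_timing -from [get_ports {MEZZ_JIT_TFT_P[0]}] -delay_type min -path_type full_clock_expanded -file hold_path_2018_2.rpt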

 


-------------------------------------------------------------------------
Don’t forget to reply, kudo, and accept as solution.
-------------------------------------------------------------------------
Scholar
1,027 Views
Registered: 08-01-2012

Re: Hold failures on DDR input paths after migration from 2017.2 to 2018.2.1

Hi All

 

It looks as though, even though there is a difference (2018.2 inserting a BUFGCE), our input timing specs may actually be wrong. We are trying to fix this, possibly by offsetting the clock further against the data to compensate.

 

 

Moderator
969 Views
Registered: 01-16-2013

Re: Hold failures on DDR input paths after migration from 2017.2 to 2018.2.1

@richardhead

 

Any update on this thread?

 

--Syed

---------------------------------------------------------------------------------------------
Kindly note- Please mark the Answer as "Accept as solution" if information provided is helpful.
Give Kudos to a post which you think is helpful and reply oriented.

Did you check our new quick reference timing closure guide (UG1292)?
---------------------------------------------------------------------------------------------