Voyager
Registered: 04-12-2012

Setting output delay

Jump to solution

Hello,

I want to constrain my system (attached) for input and output delays.
The input delay is straightforward. But I have my doubts about the output delay.

This is what I did for the input delay:

create_clock -name clock_in -period 10.000 [get_ports {clock_in}]
set_input_delay -clock clock_in -max 1.0 [get_ports data_in]
set_input_delay -clock clock_in -min 0.1 [get_ports data_in]

Now, what should I do to specify the output delay?
What clock should be used for reference?

clock_in?

set_output_delay -clock clock_in -max -1.0 [get_ports data_out]
set_output_delay -clock clock_in -min -0.1 [get_ports data_out]

Or clock_out?

set_output_delay -clock clock_out -max -1.0 [get_ports data_out]
set_output_delay -clock clock_out -min -0.1 [get_ports data_out]

system.png
Accepted Solutions

Guide
Registered: 01-23-2009

I think we are all confused. It would really help if you posted the specifications for the receiving device - you have described it a couple of different ways, and we are not sure what it really looks like.

Our clock is 100 MHz. And the Rx device wants 0.1 ns of hold time and 9 ns for setup.

If this is really it - the device specifies a 9ns setup requirement and a 0.1ns hold requirement to the rising edge of the clock that it receives, then you are right - this is all we need to know; we don't care what the device does internally, as long as it says "if you give me this, then I am fine" - the fact that it may or may not use the falling edge internally for this capture is irrelevant (and this whole discussion went off the rails discussing a clock inversion or falling edge of the clock).

HOWEVER, if this is what the device really needs then you have a big problem. Your constraints are not consistent with this requirement. This is saying that the FPGA must change its output between 0.1ns after the clock and 9ns before the next clock, which means 1ns after the previous clock (minus jitter). This allows an uncertainty of only 0.9ns and it is not symmetrical - this will be VERY hard (if not impossible) to meet in the FPGA. Most clock forwarded interfaces end up with an uncertainty of around +/-600ps, which is a 1.2ns uncertainty - more than you are allowing here, and the asymmetry will require some kind of a delay.
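A quick sketch of that arithmetic (using the 100 MHz / 9 ns / 0.1 ns numbers from this thread):

```python
# Allowed output-transition window implied by a 9 ns setup and
# 0.1 ns hold requirement on a 10 ns (100 MHz) clock
period = 10.0   # ns
setup = 9.0     # ns, setup requirement of the receiving device
hold = 0.1      # ns, hold requirement of the receiving device

earliest = hold           # output may change no earlier than 0.1 ns after an edge
latest = period - setup   # and no later than 1.0 ns after the same edge
window = latest - earliest
print(window)   # 0.9 ns of allowed uncertainty, and it is not symmetrical
```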

For set_output_delay, the -max is the setup requirement of the device, and the -min is the negative of the hold requirement of the device, so the constraints would really be

set_output_delay -clock clock_out -max 9.0 [get_ports data_out] 
set_output_delay -clock clock_out -min -0.1 [get_ports data_out]

This assumes that "clock_out" is the generated clock

create_generated_clock -name clock_out -source [get_pins <instance_of_ODDR>/C] -divide_by 1 [get_ports clock_out]

But again, this will not pass with just the data coming from the IOB flip-flop and the clock coming from an ODDR.

Avrum

25 Replies

Guide
Registered: 01-23-2009

The answer with constraints is always "what the system requires". Constraints are the mechanism for communicating the requirements of the system outside the FPGA to the FPGA tools. Simply looking at a schematic of what you have in the FPGA is insufficient to decide how to constrain it.

So the clock to use for the set_output_delay constraint is determined by what clock is used for the device connected to the outputs of FPGA, just as the values for the set_output_delay command are derived from the setup and hold requirements of that connected device. Similarly, the clock to use for the set_input_delay is determined by which clock is used to clock the device connected to the inputs of the FPGA (which may well be shared with the FPGA clock) and the values are the min and max delays of that device.

Now a couple of comments. You shouldn't forward a clock through the FPGA without using an ODDR to "mirror" the clock. See this post on generating output clocks. It also shows the required create_generated_clock command for creating the output clock (remember, the clock used for a constraint is a clock object, not a pin).

Finally, your constraints don't look reasonable. Your input delays have a variation of 0.9ns on a 10ns window - I know of very few devices that will generate such a generous output window.

Conversely, your output constraints are just wrong - your -max is smaller than your -min, and that can never be true (but the tools will accept it). What you are telling the tools is "The output will become valid 1ns after the rising edge and remain valid until 0.1ns after the clock", meaning that the data is never valid (it's actually valid for a duration of -0.9ns).
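The negative window can be checked with a couple of lines of arithmetic (just restating the reasoning above):

```python
# The original (incorrect) constraints:
#   set_output_delay -clock clock_in -max -1.0 ...
#   set_output_delay -clock clock_in -min -0.1 ...
out_max = -1.0   # ns
out_min = -0.1   # ns

# Read back from the FPGA's side: the output becomes valid -(-max) after
# the launching edge and only stays valid until -(-min) after it.
valid_from = -out_max    # 1.0 ns after the edge
valid_until = -out_min   # 0.1 ns after the edge
print(valid_until - valid_from)   # -0.9 -> the data is never valid
```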

Avrum

Scholar
Registered: 08-07-2014

Adding to the excellent explanation by Avrum, to answer your question very briefly, you have to use the internal clock for your constraints when specifying the set_output_delay. This is the clock that is driving the flop from which data is output.

Again, as Avrum mentioned, clock output should be done via an ODDR. Note that the clock output of the ODDR is NOT used as the "clock" while specifying the set_output_delay.  It should be the internal clock that clocks the ODDR (the same internal clock is used to clock the flop from which data is output.....assuming the data output and clock output are in sync).

Discard the flop example picture you posted (as you now know that clock fwd is not done that way).

E.g. - Look at the constraints for the Xilinx gmii2rgmii IP core to understand it better. It also has a short documentation.

 

--------------------------------------------------------------------------------------------------------
FPGA enthusiast!
All PMs will be ignored
--------------------------------------------------------------------------------------------------------
Guide
Registered: 01-23-2009

Note that the clock output of the ODDR is NOT used as the "clock" while specifying the set_output_delay.  It should be the internal clock that clocks the ODDR

I don't agree. This is the whole point of the "create_generated_clock" command. Using the create_generated_clock command, you can create a static timing analysis clock (a true clock) on the output of the FPGA (see the link in my previous response). With the "-divide_by 1" option of the create_generated_clock, a new clock is generated on the output with the same frequency as the input clock, but that includes the propagation delay of the input clock to the generated clock as part of the "Source Clock Delay" of the static timing analysis.

In fact, this is critical to constraining a system that uses this forwarded clock as the clock for the destination. As far as the destination device is concerned, it can only specify a relationship between the clock that it receives (which is the forwarded clock) and the data that it receives (which is the data). The only meaningful constraints can be derived from this relationship - the setup and hold requirements of the external chips data with respect to the clock it receives from the FPGA (along with any board propagation delays) determine the -min and -max delays of the set_output_delay command with respect to the forwarded clock.

Avrum

Voyager
Registered: 04-12-2012

Avrum, 

This is a theoretical question, so the exact values of the delays are less relevant.

But I want to address this:

Conversely, your output constraints are just wrong - your -max is smaller than your -min, and that can never be true (but the tools will accept it). What you are telling the tools is "The output will become valid 1ns after the rising edge and remain valid until 0.1ns after the clock, meaning that the data is never valid (it's actually valid for a duration of -0.9ns).

The -max delay was negative (-1.0) when it should've been positive. The -min can remain -0.1.

set_output_delay -clock clock_out -max 1.0 [get_ports data_out] 
set_output_delay -clock clock_out -min -0.1 [get_ports data_out]

What I intend to express with the above is: 

1. At worst, the receiving device expects "data_out" to be stable 1 ns after the positive clock edge, to allow a window of 9 ns for setup (yes, I know, it's overkill).

2. The hold-time requirement of the receiving side requires "data_out" to remain stable for at least 0.1 ns. (The constraint for the hold time (-min) uses a negative value, -0.1, because of the way hold time is calculated, but it actually expresses a positive delay.)

Is this correct?

Registered: 01-22-2015

@shaikon 

Here is what Avrum (my teacher) is telling you, but in slightly different words.

It is important to understand that the set_output_delay constraint is used to describe delays (for data and clock) that are outside the FPGA.  The set_output_delay constraint is not used to adjust a delay in order to make a path pass timing analysis.  The following example will help clarify.

The image below shows circuits that are used for typical source synchronous output from the FPGA.
src_sync_output.jpg

Delays outside the FPGA that need to be described by set_output_delay are:

  • t_pxd = circuit board trace delay for the data
  • t_pxc = circuit board trace delay for the clock
  • t_sux = setup time for the external register that receives the clock and data
  • t_hdx = hold time for the external register that receives the clock and data

These quantities are used in a pair of set_output_delay constraints as follows:

set_output_delay -clock FCLK -max  t_max  [get_ports data_out ]
set_output_delay -clock FCLK -min  t_min  [get_ports data_out ]

where

t_max = max(t_pxd - t_pxc) + t_sux
t_min = min(t_pxd - t_pxc) - t_hdx

and the name, FCLK, of the forwarded clock comes from your create_generated_clock constraint that may look something like the following:

create_generated_clock -name FCLK -source [get_pins ODDR1/C] -invert -divide_by 1 [get_ports clk_out]
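Plugging purely hypothetical numbers into the t_max/t_min formulas above (all values here are illustrative assumptions, not from any datasheet):

```python
# Hypothetical external delays in ns -- illustrative assumptions only
t_pxd_max, t_pxd_min = 0.8, 0.6   # board trace delay range for the data
t_pxc = 0.5                       # board trace delay for the clock (assumed fixed)
t_sux = 2.0                       # setup time of the external register
t_hdx = 0.5                       # hold time of the external register

t_max = (t_pxd_max - t_pxc) + t_sux   # max(t_pxd - t_pxc) + t_sux
t_min = (t_pxd_min - t_pxc) - t_hdx   # min(t_pxd - t_pxc) - t_hdx

print(f"set_output_delay -clock FCLK -max {t_max:.2f} [get_ports data_out]")
print(f"set_output_delay -clock FCLK -min {t_min:.2f} [get_ports data_out]")
```

With these numbers the pair comes out to -max 2.30 and -min -0.40.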

After placing these constraints in the Vivado .xdc file, you run timing analysis to see if the interface passes.  If the interface fails timing analysis, then you fix the problem by changing your circuit  - and not (solely) by changing your constraints.

One circuit change that can help the interface to pass timing analysis is to use the ODELAY primitive on either the forwarded-clock or the forwarded-data.  Other circuit changes are available to help the interface pass timing analysis.

Mark

Voyager
Registered: 04-12-2012

Thanks Mark.
This is understood.

But why did you use "-invert" for the ODDR generated clock?

Registered: 01-22-2015

@shaikon 

But why did you use "-invert" for the ODDR generated clock?
Normally, when using an ODDR to forward a clock, we connect the ODDR inputs as D1=1 and D2=0.  However, Fig. 15.5 shows the opposite (i.e. D1=0, D2=1), which causes the forwarded clock to be inverted.  Hence, the use of “-invert” in create_generated_clock.

If you don’t invert the forwarded clock, then the external register will receive clock and data that are almost edge-aligned, which is usually bad for timing analysis.  However, if you invert the forwarded clock then the external register will receive clock and data that are almost center-aligned, which is usually good for timing analysis.
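The margin arithmetic behind this, in ideal numbers (a sketch assuming a 100 MHz clock and ignoring skew, jitter, and setup/hold):

```python
period = 10.0   # ns, assumed 100 MHz forwarded clock

# Edge-aligned: clock and data edges leave the FPGA together, so the
# capture edge lands right where the data is changing -- ~0 margin.
edge_aligned_margin = 0.0

# Center-aligned (inverted forwarded clock): the capture edge lands in
# the middle of the data eye, splitting the period between setup and hold.
setup_margin = period / 2
hold_margin = period / 2
print(edge_aligned_margin, setup_margin, hold_margin)   # 0.0 5.0 5.0
```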

edge_center_align.jpg

Voyager
Registered: 04-12-2012

Normally, when using an ODDR to forward a clock, we connect the ODDR inputs as D1=1 and D2=0.  However, Fig. 15.5 shows the opposite (ie. D1=0, D2=1), which causes the forwarded clock to be inverted.  Hence, the use of “-invert” in create_generated_clock.

So we are "Normally" doing it wrong?

Registered: 01-22-2015

So we are "Normally" doing it wrong?

By "Normally", I mean that we typically use the ODDR (with D1=1, D2=0) to send a copy (uninverted version) of the clock out of the FPGA for other purposes (eg. inspection, troubleshooting).  And sometimes the source-synchronous interface feeds an external device that is not a simple register and wants edge-aligned data.  However, for a source-synchronous interface that feeds an external register (or something that has setup/hold like a register), it is usually best to configure the ODDR so it inverts the forwarded clock.

Voyager
Registered: 04-12-2012

Thanks Mark.

In my case the receiving device shifts the clock 180 degrees using a PLL, so I don't think it's appropriate here.

Registered: 01-22-2015

@shaikon 

...the receiving device shifts the clock 180 degree using a PLL ...

Yes, in your case, the ODDR should NOT invert the forwarded clock.  However, you must somehow tell Vivado that the forwarded clock is being shifted by 180degs.

Let's say that your forwarded clock has a frequency of 100MHz, which is a period of 10ns.  So, the 180deg clock shift is equivalent to a delay of 5ns for either the clock or data.  This 5ns delay is one of those external delays that you must tell Vivado about using set_output_delay.

An easy way to include this 5ns delay in the set_output_delay constraints is to say that it is the trace difference, (t_pxd - t_pxc), that is creating the delay rather than the external device.  This imaginative thinking (aka a lie) is OK and does no harm.  So, if your clock is 100MHz, then t_max and t_min can be computed as shown below:

t_max = max(t_pxd - t_pxc) + t_sux  ~  5.0 + t_sux
t_min = min(t_pxd - t_pxc) - t_hdx  ~  5.0 - t_hdx
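The same computation in a few lines (t_sux and t_hdx here are placeholders for the receiving device's datasheet values):

```python
freq_mhz = 100.0
period = 1000.0 / freq_mhz   # 10.0 ns
shift = period / 2           # 180 degrees of a 10 ns clock -> 5.0 ns

t_sux = 2.0   # placeholder: setup time of the receiving device, ns
t_hdx = 0.5   # placeholder: hold time of the receiving device, ns

t_max = shift + t_sux   # max(t_pxd - t_pxc) + t_sux ~ 5.0 + t_sux
t_min = shift - t_hdx   # min(t_pxd - t_pxc) - t_hdx ~ 5.0 - t_hdx
print(t_max, t_min)     # 7.0 4.5
```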

 

Voyager
Registered: 04-12-2012

However, you must somehow tell Vivado that the forwarded clock is being shifted by 180degs.

Mark, I don't see why this is correct.

What I want to achieve is for the output data to be perfectly edge aligned to the clock (at the output of OUR FPGA)...such that when the receiving side shifts it 180 degrees it'll become center aligned.

The 180 shift happens INTERNALLY in the receiving end. If we could somehow examine the input delay constraints of the remote device, we'd see that it expects to get the data edge aligned to the clock. So why should we design OUR output delay constraint to reflect a different situation?

Registered: 01-22-2015

Shai,

What I want to achieve is…
Yes, thinking about these interfaces is difficult.  However, I find it helpful to remember that set_output_delay is used to describe what you “got” and not what you “want”.

When describing the 180deg clock shift, the lie we used is equivalent to the truth.  (Whoa!  I hope my kids don't try that argument on me.)

That is, at the external device, we can imagine that the clock enters and is routed through a phase-shifter before it reaches the register that receives both clock and data from the FPGA.  In our microwave radars, our microwave phase shifters are sometimes just coils of coaxial cable.  So, it is equivalent to think of signal phase shift as signal delay caused by propagation of the signal through an extra length of trace/cable.

We have exactly your design on one of our boards.  That is, the FPGA uses source-synchronous output to feed a DAC and the DAC shifts the clock by 180degs (well, actually 90degs in our case because our interface is DDR and yours is SDR). Anyway, what I have described to you is exactly what Avrum described to me many years ago – and our design has been working well ever since.   Sorry, I can’t find that Forum post between Avrum and me.

In summary, I recommend that you do the following:

  • setup the ODDR with D1=1 and D2=0
  • remove “-invert” from the create_generated_clock constraint
  • calculate t_max and t_min as I have described and use these values in your pair of set_output_delay constraints
  • run timing analysis and see if you get positive slack for both setup and hold of the interface

You do have a single data rate (SDR) interface, right?   What is the frequency of the forwarded clock?

Voyager
Registered: 04-12-2012

However, I find it helpful to remember that set_output_delay is used to describe what you “got” and not what you “want”.

I completely agree with this statement and without the 180 degree shift - it's EXACTLY what I do...I AM describing what I got! Up to the input of the receiving device's pins, that is...

1. What happens on the board (any trace length imbalance) should be expressed in the output delay constraints of the Tx.

2. But what happens afterwards (on the Rx device's silicon) IS NOT in the scope of the Tx device's constraints.

IMO, expressing the 180 degree phase shift in the constraints of the Tx device - will ruin the precise edge alignment you got with the ODDR. Because you're essentially telling Vivado that the Rx device is expecting to get the data 180 degrees after the clock edge - which IS NOT true. The Rx device is expecting to get the data phase aligned to the clock - and then manipulate the clock (on chip) to make it center aligned.

 

 

 

Guide
Registered: 01-23-2009

I completely agree with this statement and without the 180 degree shift - it's EXACTLY what I do...I AM describing what I got! Up to the input of the receiving device's pins, that is...

I'm not sure I am following this thread completely, but I am worried that there may be some confusion here.

Most constraints are there to describe to the tool the environment in which the FPGA is working. In other words, they describe the stuff outside the FPGA; not the stuff inside.

Specifically, output constraints (set_output_delay commands) communicate to the tools what the external device needs taking into account the timing of the board. So it is what the external stuff needs.

This is completely true of create_clock (applied to port) and set_input_delay and set_output_delay - these describe the external conditions of the FPGA; what the external clock is providing to the FPGA, what the external devices are providing to the FPGA as inputs and what the external device requires of the outputs of the FPGA (taking into account any effects of the board - specifically including board propagation delays).

There are other constraints that are "internal exceptions" - things that describe special timing characteristics of the internals of the FPGA. These would include any exceptions around clock domain crossing circuits, multi-cycle path constraints, and false path constraints on pseudo-static inputs and true false paths (these are the majority of cases where internal exceptions are required).

The create_generated_clock command is sort of like an internal exception. This command is used to describe derived clocks - these can be internal clocks (like ones from a fabric clock divider [not recommended] or from a BUFGCE/BUFHCE generated clock) or, as is the case here, a generated clock that is coming out of the FPGA (the create_generated_clock through the ODDR). In the case of the ODDR clock, you need to inform the tool if the propagation path through the ODDR is a "pure propagation delay" (where the rising edge of the internal clock generates the rising edge of the external clock) or an inverted path (where the falling edge of the internal clock generates a rising edge of the external clock).

But I want to be very clear - the set_input_delay and set_output_delay commands describe the timing conditions/requirements outside the FPGA. You use this to constrain and validate that the design you implemented can meet the requirements of the devices (and board) outside the FPGA.

Avrum

Registered: 01-22-2015

Up to the input of the receiving device's pins that is...
Fundamentally, timing analysis ensures the correct transfer of data from register to register.  So, we must describe the source-synchronous-output-interface to Vivado as if it were a register to register transfer of data – and not, for example, as an FPGA to DAC transfer of data.

The template for t_max and t_min that I have shown to you can be derived from the fundamental timing analysis of a register-to-register transfer of data. 

That is, if you were to do this analysis, then you would find that the combination of terms found in t_max are part of the setup analysis and the combination of terms found in t_min are part of the hold analysis.  Since the terms in t_max and t_min are located outside the FPGA, Vivado has no knowledge of them.  Therefore, set_output_delay constraints are used to quantify the terms found in t_max and t_min so that timing analysis has the information it needs to do its job.

You may find <this post> to be helpful with the understanding of things.

Voyager
Registered: 04-12-2012

Avrum:

In the case of the ODDR clock, you need to inform the tool if the propagation path through the ODDR is a "pure propagation delay" (where the rising edge of the internal clock generates the rising edge of the external clock) or an inverted path (where the falling edge of the internal clock generates a rising edge of the external clock)

In my example I'm using the ODDR to generate a non-inverted clock. And I specify the "pure propagation delay" exactly like you said.

Later, when this clock gets into the Rx device - it's shifted 180 degrees by a PLL. And this version of the clock is used to sample the data. What I argue is that this phase shifting done by the Rx device's PLL SHOULD NOT be expressed when setting the output delay constraints of the Tx device.

Mark disagrees: 

Yes, in your case, the ODDR should NOT invert the forwarded clock.  However, you must somehow tell Vivado that the forwarded clock is being shifted by 180degs.

Guide
Registered: 01-23-2009

Yes, in your case, the ODDR should NOT invert the forwarded clock.  However, you must somehow tell Vivado that the forwarded clock is being shifted by 180degs.

I agree with Mark. The create_generated_clock -invert command implies that you are inverting the clock "inside" the FPGA, and you are not, so you should not use the -invert flag.

Now, if the ADC does do something "odd" with the clock - i.e. the clock to data relationship at the pins of the ADC is not expressed with respect to the rising edge of the clock at the ADC pins - then you must somehow represent this. It would be easier to say how to do this if we saw the datasheet of the ADC - it is unusual for the receiving device to "invert" the clock. There are a couple of solutions:

  • Modify the values of the setup and hold requirements of the ADC to be referenced to the rising edge of the clock at the pins of the ADC (can be messy)
  • If this is an SDR interface, then "inverting" the clock means that the constraints are given with respect to the falling edge of the ADC clock; there is a format for this
    • set_output_delay -clock <clock_name> -clock_fall <value> [get_ports]

(And probably a few others).

Avrum

Voyager
Registered: 04-12-2012

I don't really understand what ADC you're referring to. The scenario is simple. I'll describe it again:

1. I have an FPGA sending clock + data to another device.

2. An ODDR will be used for the output clock of my FPGA (to make sure it's aligned).

3. The receiving device shifts the clock 180 degrees and uses this shifted version of the clock to sample the data.

 

I'm writing the output delay constraints for my FPGA.

Mark's note:

Yes, in your case, the ODDR should NOT invert the forwarded clock.  However, you must somehow tell Vivado that the forwarded clock is being shifted by 180degs.

I agree with the first part of the statement (about not having to invert the clock) but not with the second part (in bold) about having to tell Vivado that the receiving device shifts the clock after it receives it.

Guide
Registered: 01-23-2009

I don't really understand what ADC you're referring to.

Sorry - I assumed the "another device" was an Analog to Digital Converter (one of the most common devices connected to an FPGA via a clock forwarded interface). Other than misnaming your "another device" as an ADC, I think my response answers your questions.

Avrum

Voyager
Registered: 04-12-2012

I'm still having trouble understanding this.

This is how I see it - please tell me where I'm wrong:

1. From the datasheet of the Rx device - we learn that it expects to get the data edge aligned to the clock.

2. So we comply, sending the data edge aligned to the clock, plus the clock itself. And we do a very good job by forwarding the clock using the ODDR - without inverting it.

3. Our board designer and his friend the PCB layout designer also did a very good job. They routed the data and clock lines from our FPGA to the Rx device such that they have ZERO skew between them. 

4. Our clock is 100 MHz. And the Rx device wants 0.1 ns of hold time and 9 ns for setup.  So this is how we write it:

set_output_delay -clock clock_out -max 1.0 [get_ports data_out] 
set_output_delay -clock clock_out -min -0.1 [get_ports data_out]

Now, IMO we are done!

We shouldn't care one bit if the Rx device does something "odd" with the clock afterwards.

It can shift the clock 180 degrees or 158.53 degrees or ANY other favorite number of degrees to meet some strange requirement.

We satisfied the requirement for the data to be edge aligned to the clock and we were able to meet the setup and hold requirements!

 

As the chef, I served the food to the customer EXACTLY how he requested it. If he wants to add extra ingredients he brought from home - he can do so with or without telling me...it won't break my heart (or timing, in our case).

Registered: 01-22-2015

@shaikon 

We satisfied the requirement for the data to be edge aligned to the clock…
Your hardware thinking is correct - and hardware thinking is what matters most.

However, you are still missing the purpose/function of the set_output_delay constraints.

The purpose of the create_generated_clock and set_output_delay constraints is that they allow timing analysis to run on the circuits of the interface. 

The create_generated_clock and set_output_delay constraints DO NOT directly affect operation of the hardware.  That is, once you get the hardware working perfectly and everything is locked in position, you can then erase the create_generated_clock and set_output_delay constraints and the hardware will continue to work perfectly.

Timing analysis will consider details of hardware thinking that you cannot (eg. PVT variations, delay/skew of things inside the FPGA).  However, timing analysis works correctly only if you tell it everything about the interface – including the fact (in your case) that the receiving device is shifting the clock by 180degs.  Using the added details provided by timing analysis, we can improve our hardware thinking and we can improve the actual hardware of the interface (eg. make it more tolerant of PVT variations).

As the chef I served the food to the customer EXACTLY how he requested it. If he wants to add extra ingredients he brought from home - he can do so with or without telling me...it won't break my heart (or timing in our case).
I like that you are trying to make this analogy – but it is not a good analogy – because the customer is adding hardware. Your analogy is like saying that some receiving devices (customers) in the source-synchronous interface (restaurant) can choose to add the 180-deg phase shift (ingredient) and some can choose not to add it.  In one case the customer will destroy the interface (food).

Voyager
Registered: 04-12-2012

once you get the hardware working perfectly and everything is locked in position, you can then erase the create_generated_clock and set_output_delay constraints and the hardware will continue to work perfectly.

One of the most important uses of the constraints is to GET to this point (without having to lock hardware into position). I.e., they're not just for analysis purposes - they impact the way the P&R tool works.

In our scenario we read the Rx device's datasheet and see that it expects to get the data edge aligned. 

If we worked at Xilinx and had access to the TSMC libraries of our FPGA device, we could manually floorplan our interface and achieve the same results.

But we don't - so instead we use an ODDR for the clock and write this for the output delay:

set_output_delay -clock clock_out -max 1.0 [get_ports data_out] 
set_output_delay -clock clock_out -min -0.1 [get_ports data_out]

You say that this effort isn't enough:

However, you must somehow tell Vivado that the forwarded clock is being shifted by 180degs

And you justify this by:

Timing analysis will consider details of hardware thinking that you cannot (eg. PVT variations, delay/skew of things inside the FPGA).

And this is where (IMO) you wrongly over-extend our responsibility towards the Rx device's silicon (which actually does the clock shifting). The reason I think it's wrong is because...yes, Vivado knows about "PVT variations, delay/skew of things inside the FPGA" - but it knows NOTHING about the silicon of the Rx device - it only knows how it requested to get the data relative to the clock.

*This isn't the same as having a passive coil component on the PCB shift the clock.


Voyager
Registered: 04-12-2012

If this is really it - the device specifies a 9ns setup requirement and a 0.1ns hold requirement to the rising edge of the clock that it receives, then you are right - this is all we need to know; we don't care what the device does internally, as long as it says "if you give me this, then I am fine" - the fact that it may or may not use the falling edge internally for this capture is irrelevant (and this whole discussion went off the rails discussing a clock inversion or falling edge of the clock).

THANK YOU!

I was a bit off with the exact constraint values - thanks for correcting me.

What you wrote above is exactly the point I was trying to convey to Mark.

 
