Participant sean.durkin

IO timing constraints when using ODDRs for SDR interface


Hi *,

 

this might be a stupid or irrelevant question, but here goes:

 

I inherited some code for a source-synchronous bus interface (FPGA <-> external device, clock driven by the FPGA) that I now want to migrate to Vivado, including the input/output delay constraints.

 

Timing always fails with hold-time violations, and I think I know why. What I think I need is some sort of false-path constraint or multicycle constraint or similar... Let me explain the system and the problem:

 

- Bidirectional, SDR data bus connecting the FPGA to an external device

- Clock for the bus (~100 MHz) is generated inside the FPGA and forwarded through an ODDR, inverted (so for the forwarded clock I use "create_generated_clock" with the "-invert" flag; this is the clock that is used to specify the input/output delay constraints, see the sketch after this list)

- The bus data is also output via ODDRs, even though this is an SDR bus. I guess the original designer manually instantiated ODDRs to make absolutely sure IO flip-flops would be used. The ODDR is really used as an SDR flip-flop by connecting the same data to both the D1 and D2 inputs, so to the outside it basically behaves like an SDR flip-flop.
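
For reference, the forwarded-clock and delay constraints look roughly like this (a minimal sketch; the cell, port and clock names as well as the delay values are placeholders, not the actual values from the design):

# Forwarded bus clock: driven by an ODDR, inverted with respect to the internal clock
create_generated_clock -name bus_clk_fwd -source [get_pins clk_fwd_oddr/C] -divide_by 1 -invert [get_ports bus_clk_out]

# Output data, referenced to the forwarded clock (delay values are made up)
set_output_delay -clock [get_clocks bus_clk_fwd] -max 2.000 [get_ports {bus_data[*]}]
set_output_delay -clock [get_clocks bus_clk_fwd] -min -1.000 [get_ports {bus_data[*]}]

# Input data from the external device, also referenced to the forwarded clock
set_input_delay -clock [get_clocks bus_clk_fwd] -max 6.000 [get_ports {bus_data[*]}]
set_input_delay -clock [get_clocks bus_clk_fwd] -min 1.500 [get_ports {bus_data[*]}]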

 

The problem is that for timing analysis, this is still an ODDR, using BOTH clock edges, even though the data can in reality only change on the rising edge.

Timing always fails for the data pins with hold time violations, because Vivado of course sees an ODDR and then also analyses the worst case of data changing on the FALLING edge of the clock, which would then violate hold time.

 

This can't really happen, since the data always stays the same on the falling edge (it's just the same data as for the rising edge being output again), but it's considered in the timing analysis. If this really were a DDR interface, I would have to introduce an output delay to make sure both setup and hold time requirements are met, but in this case timing should really be OK without that, shouldn't it?
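
The failing paths can be inspected with a min-delay (hold) report on the data ports, something like this (port name is a placeholder); the worst paths are the ones launched on the falling clock edge:

report_timing -to [get_ports {bus_data[*]}] -delay_type min -max_paths 4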

 

The question is: How do I handle this elegantly?

 

The way I see it, there are several possibilities:

 

- Don't set input/output delay constraints at all for these ports, since the way the interface is designed makes sure everything is OK, and in that case those constraints are more or less useless anyway (placement and routing are locked down anyway, it's not like the tools have much of a choice) -> I don't like that, I want everything to be constrained properly

- Don't use ODDRs for the data outputs, but regular registers instead, and then either hope the tools will push those into the IOBs automatically or set an HDL attribute/XDC constraint that makes sure they do (see the sketch after this list); or trust that the input/output delay constraints make the tools place the flip-flops in a way that meets timing -> too much trust in the tools and the constraint set; I also prefer to manually instantiate IO stuff to make sure everything is implemented the way I want it to be

- Set some sort of false path (or maybe multicycle path?) to force the tools to ignore the ODDR's falling clock edge in analysis, because it is not relevant in this special case -> that would be my favourite, but I've been fiddling around with this for quite some time and couldn't manage it...
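
For completeness, the XDC constraint mentioned in the second option would be something along these lines (sketch only, the cell name is a placeholder):

# Ask the tools to place the output registers in the IOBs instead of instantiating ODDRs by hand
set_property IOB TRUE [get_cells {data_out_reg[*]}]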

 

The same of course applies for the IDDRs (since it's a bidirectional bus), same problem here.

 

How do you normally do this? What's the recommended way to do this?

 

Greetings,

Sean

Accepted Solution
Historian avrumw

Re: IO timing constraints when using ODDRs for SDR interface


The basis of the answer is

 

set_false_path -fall_from [get_clocks <internal_clock>] ...

 

When you specify a path from a clock (as opposed to a pin, or cell) the "fall_from" means the falling edge of the clock (in all other cases it means the falling edge of the data).

 

But you can't just use this on its own, otherwise it will disable timing on all internal paths that use the falling edge of the clock (there may or may not be other ones, but it is not safe to assume this is the only one). So you need to make sure that it only covers this path. I haven't tried this, but it might work to say

 

set_false_path -fall_from [get_clocks <internal_clock>] -to [get_ports <output_ports>]

 

The output port is a static timing path endpoint, so this may work. This would work (in reverse) for the inputs, using -fall_to instead of -fall_from (and using -from for the ports).
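
So for the input side, that would be something like this (untested, using the same placeholder style):

set_false_path -from [get_ports <input_ports>] -fall_to [get_clocks <internal_clock>]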

 

If not, then I would normally say that you could specify the path by using a separate clock for the input or output interface, and then do the constraint from clock to clock. For the input path this could be done by creating a virtual clock that is identical to your input clock - similar to what is done for DDR interfaces in "the right way" (see this post). For your output interface, you already have a separate clock for the output - the generated clock - that should do:

 

set_false_path -fall_from [get_clocks <internal_clock>] -to [get_clocks <virtual_clock>]
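
where the virtual clock for the input side could be created with something like this (the period here is just an example and should match the real input clock):

create_clock -name <virtual_clock> -period 10.000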

 

Let me know if this works.

 

Avrum

Participant sean.durkin

Re: IO timing constraints when using ODDRs for SDR interface


@avrumw wrote:

The basis of the answer is

 

set_false_path -fall_from [get_clocks <internal_clock>] ...

 

Let me know if this works.

 

Avrum


Hi Avrum,

 

Works like a charm. That was easier than I thought it would be...

 

Thanks!

 

Sean
