
Visitor leileier
Registered: 07-31-2018

how to write input/output delay with clock mux

We have a question about set_output_delay/set_input_delay. As shown in the picture below, CLKMUX is the mux output of CLK_DIV2 and CLK_DIV4; CLK_DIV2 and CLK_DIV4 are the half-rate and quarter-rate clocks of CLK_M.

DOUT is the data input/output. I have already used the "create_generated_clock -add" option and defined two generated clocks, CLKMUX_DIV2 and CLKMUX_DIV4, on CLK_IO.

We need to set input and output delays on DQ_IO, but I don't know how to write set_output_delay/set_input_delay, because the clock is the mux output and there are two clocks defined on CLK_IO.
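Roughly, the two generated clocks I defined look like this (the pin names here are simplified; the real -source arguments point at the actual divider/mux pins in my design):

```tcl
# Two generated clocks defined on the same CLK_IO port with -add
# (simplified sketch; -source pins are placeholders):
create_generated_clock -name CLKMUX_DIV2 -source [get_ports CLK_M] \
    -divide_by 2 [get_ports CLK_IO]
create_generated_clock -name CLKMUX_DIV4 -source [get_ports CLK_M] \
    -divide_by 4 -add -master_clock CLK_M [get_ports CLK_IO]
```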

I tried to add the following constraints with the two generated clocks, but got a critical warning from Vivado.

set_output_delay -clock [get_clocks CLKMUX_DIV2] -min -1.700 [get_ports DQ_IO]
set_output_delay -clock [get_clocks CLKMUX_DIV4] -min -1.700 [get_ports DQ_IO]

Could someone tell me how to constrain this IO?  Thanks

I found a -add_delay option in set_input/output_delay. Does that work in my case?



4 Replies
Mentor watari
Registered: 06-16-2013

Re: how to write input/output delay with clock mux

Hi @leileier


I suggest that you define OUT_CLK_IO_xxx (one generated clock per clock source) with the "create_generated_clock" command on CLK_IO.

These defined clocks are then used as the reference clocks for "set_output_delay".


Would you try it?
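For example, something like this (all names and pins here are placeholders, to be replaced with the ones in your design):

```tcl
# One generated clock per possible clock source, both defined on CLK_IO
# (placeholder names and pins):
create_generated_clock -name OUT_CLK_IO_DIV2 \
    -source [get_pins clk_mux_inst/I0] -divide_by 1 [get_ports CLK_IO]
create_generated_clock -name OUT_CLK_IO_DIV4 \
    -source [get_pins clk_mux_inst/I1] -divide_by 1 \
    -add -master_clock CLK_DIV4 [get_ports CLK_IO]
```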


Best regards,

Registered: 01-16-2013

Re: how to write input/output delay with clock mux


I would like to share some inputs which are not directly related to your constraint question, but are important for your design.

1) It looks like you are using a clock divider built from fabric logic (perhaps a counter made of FFs) to divide the clock. This is not recommended practice in FPGA design. If you have one master clock and need divide-by-2 and divide-by-4 clocks, use an MMCM/PLL for clock synthesis.

2) You cannot connect a BUFGMUX (or clock mux) output directly to an IO pin. This is not how one should forward a clock. You need to use the ODDR technique for proper clock forwarding.

3) Your output data looks like it is triggered by CLK_M and not by the BUFGMUX (clock mux) clock. Is that correct?


Guide avrumw
Registered: 01-23-2009

Re: how to write input/output delay with clock mux

My first response to you is "don't do any of this".

This design is inherently unsafe for a number of reasons. As others have mentioned:

  • Don't generate divided clocks using fabric logic
  • Don't send a clock directly to an I/O pin

Furthermore, I would go a step further in this case and say "Don't MUX the clock".

First, I see that the data you have coming out of the device is coming directly from a flip-flop. This is the right way of doing it. Furthermore, you should set the IOB property of the port to TRUE to ensure that the flip-flop gets packed into the IOB (hence is an IOB flip-flop).

From what I can tell from your description, your problem primarily concerns one interface rather than the whole FPGA; you have an interface that operates at F, F/2, or F/4. In a case like this, I would not use any kind of clock gating at all - I would use pure synchronous logic.

First, (and related to what others have told you), do not send a clock out of the FPGA directly - use an ODDR. Tie the C of the ODDR to your internal clock. To send a clock of frequency F out of the port, have the D1 of the ODDR driven to 1 and the D2 to 0; this will have the output of the ODDR go to 1 on the rising edge of the clock and to 0 on the falling edge of the clock - mirroring the clock out of the FPGA. This, on its own, has the advantage that the clock remains on dedicated clocking resources instead of having to move into the fabric routing to get to the OBUF.

Now, if you want to have this run at F/2 then

  • change the data on your data output every second clock and
  • change the values of the D1 and D2 on the ODDR
  • On even clocks have them both be 1, and on odd clocks have them both be 0

This will have the effect of "mirroring" a half-speed clock - effectively you are clocking out the pattern 1100 (one bit per half internal clock period), which will have the shape of a clock divided by 2.

The same can be done for F/4, but do 11 for two internal clock periods and 00 for the other two.

This will allow you to generate this interface at F, F/2 and F/4 without doing any nasty asynchronous stuff.
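As a rough sketch of the idea (signal and instance names here are mine, not from your design, "rate" is an assumed control input, and the counter phase may need adjusting):

```verilog
// Forward clk, clk/2 or clk/4 through an ODDR (sketch only).
// rate: 0 -> F, 1 -> F/2, other -> F/4 (assumed 2-bit input).
reg [1:0] cnt = 2'b00;
reg       d1  = 1'b0, d2 = 1'b0;

always @(posedge clk) begin
    cnt <= cnt + 1'b1;
    case (rate)
        2'd0:    begin d1 <= 1'b1;    d2 <= 1'b0;    end // mirror clk: 1 0 1 0 ...
        2'd1:    begin d1 <= ~cnt[0]; d2 <= ~cnt[0]; end // 1 1 0 0 -> clk/2
        default: begin d1 <= ~cnt[1]; d2 <= ~cnt[1]; end // 1 1 1 1 0 0 0 0 -> clk/4
    endcase
end

// SAME_EDGE lets both D1 and D2 be driven from the rising-edge domain.
ODDR #(.DDR_CLK_EDGE("SAME_EDGE")) fwd_clk_oddr (
    .C(clk), .CE(1'b1), .R(1'b0), .S(1'b0),
    .D1(d1), .D2(d2), .Q(clk_fwd)   // clk_fwd then drives the clock output pad
);
```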

As for how to constrain it, why are you doing this? Is it because this interface can be connected to different devices with different speed requirements, or is it something to do with power saving?

If it is for power saving then simply constrain the interface at the fastest speed - if it works for the fastest, it will work for the slower ones. This is done by creating a generated clock on the output clock port

create_generated_clock -name out_clk -source [get_pins <ODDR_instance>/C] -divide_by 1 [get_ports <clock_output_port>]

Now simply constrain your outputs using the required setup and hold of the external device against the generated clock. Of course, there is nothing that says this will pass - if your device needs anything more than about -600ps of hold time (i.e. the data can go away as early as 600ps before the edge of the clock) then this will fail; this interface is "edge aligned" which means that the data will change at the same time as the clock +/- some skew (around 600ps in most cases).
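As a sketch, with placeholder setup/hold numbers standing in for the external device's datasheet values:

```tcl
# Assume the external device needs Tsu = 1.5 ns and Th = 0.5 ns (placeholders).
set_output_delay -clock out_clk -max 1.500 [get_ports DQ_IO]   ;# setup: Tsu
set_output_delay -clock out_clk -min -0.500 [get_ports DQ_IO]  ;# hold: -Th
```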

If this is being done to connect to different devices, some of which are slower, then this is more difficult. An FPGA can only be constrained for one device at a time - you may have to go through each device and find out which one is the worst, and constrain that one.


Visitor leileier
Registered: 07-31-2018

Re: how to write input/output delay with clock mux

Thanks for your detailed reply and good advice.

I know it is not good FPGA practice, but I should say that we use the FPGA as an emulation platform for an ASIC. So the coding is not targeted at the FPGA, and the code implemented in the FPGA must be the same as the ASIC's. I just need to make sure the interface can work stably, even at poor speed.

I finally got this problem solved by adding

set_output_delay -clock [get_clocks CLKMUX_DIV2] -min -1.700 [get_ports DQ_IO]
set_output_delay -clock [get_clocks CLKMUX_DIV4] -min -1.700 [get_ports DQ_IO] -add_delay
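The corresponding -max constraints follow the same -add_delay pattern (the 2.0 ns numbers below are just illustrative, not my actual board values):

```tcl
set_output_delay -clock [get_clocks CLKMUX_DIV2] -max 2.000 [get_ports DQ_IO]
set_output_delay -clock [get_clocks CLKMUX_DIV4] -max 2.000 [get_ports DQ_IO] -add_delay
```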
