01-20-2017 07:06 AM
My design generates multiple derived clocks from the primary clock using a PLL. The input data signals drive FFs clocked by the different generated clocks. Can the input delays be constrained using the generated clocks, as shown below, or should virtual clocks be used instead?
set_input_delay -clock [pll_clock_1] -clock_fall -min -add_delay 10.000 [DIN_1]
set_input_delay -clock [pll_clock_2] -min -add_delay 10.000 [SPI_IN_1]
P.S. pll_clock_1 and pll_clock_2 are the generated clocks from the PLL; DIN_1 and SPI_IN_1 are the input data signals.
01-20-2017 08:11 AM
While it is structurally legal to do so - a set_input_delay can use any clock defined in the clock database - it is largely meaningless to do so...
The point of the set_input_delay command is to describe to the tool the portion of the static timing path for the input that exists outside the FPGA. This generally is only meaningful if you describe the external delay with respect to an external timing reference. The numbers used for the constraints should be extracted from characteristics of the external device - usually from the datasheet.
If you define them with respect to an internal clock, where can you get meaningful values for the input delay values? They clearly can't be derived solely from the external device since they contain a dependency on the portion of the clock path that exists inside the FPGA...
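For contrast, here is roughly what a meaningful input constraint looks like. Everything in this sketch is hypothetical - the external clock port CLK_IN, the 20 ns period, and the delay values, which would normally be the external device's Tco plus board trace delay taken from its datasheet:
create_clock -name ext_clk -period 20.000 [get_ports CLK_IN]
set_input_delay -clock [get_clocks ext_clk] -max 12.000 [get_ports DIN_1]
set_input_delay -clock [get_clocks ext_clk] -min 3.000 [get_ports DIN_1]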
Second, there are a couple of problems with your constraints as written (a corrected sketch follows this list):
- they specify only min, and not max
- they use -add_delay even though they are the only constraint on each port
- you would either need to use "-clock pll_clock_1" or "-clock [get_clocks pll_clock_1]" - the format you have is illegal
- the ports to be constrained must be referenced as [get_ports DIN_1] and [get_ports SPI_IN_1]
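For reference, a structurally legal version of your constraints would look something like this - though, as explained above, the values would still be hard to justify against an internal clock, and the -max numbers here are placeholders I invented since your post gives only -min:
set_input_delay -clock [get_clocks pll_clock_1] -clock_fall -max 12.000 [get_ports DIN_1]
set_input_delay -clock [get_clocks pll_clock_1] -clock_fall -min 10.000 [get_ports DIN_1]
set_input_delay -clock [get_clocks pll_clock_2] -max 12.000 [get_ports SPI_IN_1]
set_input_delay -clock [get_clocks pll_clock_2] -min 10.000 [get_ports SPI_IN_1]
Note that with both -max and -min given for each port on a single clock edge, -add_delay is not needed.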
Lastly, let me speculate about what you are doing. From the name, I am assuming these are SPI signals. SPI usually runs very slowly (with respect to the internal clock) and hence is often oversampled by a faster clock. If this is the case, then the actual inputs are treated as asynchronous inputs, and hence have no timing relationship to any clock in the design (internal or external) - they are false paths (or paths that should be covered by some other exception).
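If they really are asynchronous, the simplest such exception is a false path - a sketch, assuming the port names from your post:
set_false_path -from [get_ports {DIN_1 SPI_IN_1}]
That said, the set_max_delay -datapath_only approach described below is usually preferable, since it still bounds the routing delay.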
You probably have two goals with this interface:
1) constrain the maximum insertion delay of the signal (to ultimately constrain the skew on different bits)
2) keep the tools happy
To accomplish 2), you need a set_input_delay. The easiest thing to do is to create a virtual clock and specify an (essentially meaningless) constraint
create_clock -name virt_spi_clk -period 1000;
set_input_delay -clock virt_spi_clk 100 [get_ports {DIN_1 SPI_IN_1}]
Now check_timing will be happy.
But to accomplish 1) what you want is a set_max_delay -datapath_only constraint
set_max_delay -datapath_only -from [get_ports {DIN_1 SPI_IN_1}] 10;
This will constrain the insertion delay from the pad to the internal flip-flops (regardless of what clock they are running on) to 10ns.
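To sanity-check that the exception took effect (this assumes Vivado), you can report the constrained path, for example:
report_timing -from [get_ports DIN_1] -max_paths 1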
Avrum
01-20-2017 08:24 AM
Thanks for the response. Makes sense to me now. It was pseudo-syntax I wrote to keep things simple for the forum. Also, it is not an SPI interface; I chose the name randomly for this post. The actual design has different interfaces running at high speed.