01-11-2021 11:28 PM
01-12-2021 04:01 AM
Can you define the document you are referring to that defines "stage 1"?
Here are the Timing Baselining documents I reference:
01-13-2021 10:50 AM
Can baselining be stopped at stage 1 if timing is met (without specifying I/O constraints)? I remember seeing both yes and no in two different places in the Xilinx docs.
ABSOLUTELY NOT! (Trying to be clear here).
A design is only "timing closed" when all constraints have been accurately written: clocks, I/O, exceptions, jitter, etc.
The only point of baselining is this: if you try to debug all aspects of timing in one run, you can spend a long time having the tool optimize against incorrect constraints, and it can be hard to figure out what to fix when you get failing paths. The baselining process starts with the most basic constraints, which you then debug; then you add the next set, which you debug; and then you ultimately complete all the constraints, which you debug again.
In fact it is expected that you get no failing paths after each step - that is the trigger to move on to the next step. But you must continue until you have a complete and accurate set of constraints before your design can be considered to meet timing.
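As a rough sketch, the staged constraint sets described above might look like this in XDC (the port name and clock period here are hypothetical, for illustration only):

```tcl
## Stage 1: clocks only - iterate until there are no failing paths
create_clock -period 10.000 -name sys_clk [get_ports sys_clk_p] ;# hypothetical 100 MHz input clock

## Stage 2: add the I/O constraints - iterate again
# set_input_delay / set_output_delay for every synchronous I/O port

## Stage 3: complete the constraints - iterate again
# set_clock_groups, set_false_path, set_max_delay, input jitter, etc.
```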
01-13-2021 02:04 PM
Hi @avrumw ,
Thanks for the clarification.
I have a concern with respect to IO delay constraints though.
Basically, to specify I/O constraints in Vivado we need to know the delays of the external devices, right? I am not quite clear on how this can be done.
While there are plenty of examples/videos on assigning the values (which is quite straightforward), none of them clearly mention where the delays can be obtained for different interfaces. And for the more complicated interfaces, where the core generator does the IP generation (or external IPs are used), the constraint files are provided. But if the designer were to find the I/O delays themselves, where can they be obtained?
Trace delays we can obtain from the FPGA board user manual, I am guessing? But where do we get the values for the I/O delays for different interfaces, say GT, or DDR, or PCIe, or even something simpler?
01-13-2021 05:27 PM
But if the designer were to find the I/O delays themselves, where can they be obtained?
I/O delays must be extracted from the datasheet of the devices driving or being driven by the FPGA. Each device that uses synchronous interfaces must have a datasheet that specifies the relationship between the shared clock (between the FPGA and the external device) and the data. The timing from these datasheets needs to be modified by the timing of the board, and then must be turned into the appropriate set_input_delay and set_output_delay commands.
But where do we get the values for the I/O delays for different interfaces, say GT, or DDR, or PCIe, or even something simpler?
All of these are bad examples.
Any interface that comes through the Gigabit Transceivers (GT and PCIe) has no timing constraints. These are not synchronous interfaces; they are self-timed (using clock/data recovery).
A DDRx_SDRAM is also not a good example; the I/O of these interfaces are not conventional; they use dynamic calibration, on-die calibrated termination, and odd timing (bidirectional DQS strobes). The DDRx_SDRAM interfaces pretty much must be generated by the Memory Interface Generator (MIG) and will be properly constrained by the MIG.
Constraints are needed for (pretty much) all other interfaces that use conventional I/O (i.e. not through the GT); interfaces between pairs of FPGAs, interfaces between FPGAs and DAC/ADC components, interfaces to/from physical layer chips (USB, Ethernet, ...) and interfaces between the FPGA and any other on-board device that uses conventional interfaces and synchronous clocking. For all these, the external device must have a datasheet that documents the timing.
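For the output direction the same process applies in reverse (again, hypothetical names and numbers): if the external device's datasheet specifies a setup time of 2.0 ns and a hold time of 0.5 ns at its input, and the trace delay is 0.2 to 0.5 ns, the constraints become:

```tcl
# Hypothetical values from the external device's datasheet and the board layout:
#   output delay max = Tsu(ext) + trace(max) = 2.0 + 0.5 =  2.5 ns
#   output delay min = trace(min) - Th(ext)  = 0.2 - 0.5 = -0.3 ns
set_output_delay -clock [get_clocks ext_clk] -max 2.5 [get_ports {data_out[*]}]
set_output_delay -clock [get_clocks ext_clk] -min -0.3 [get_ports {data_out[*]}]
```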