05-07-2019 02:16 PM
We recently upgraded from Vivado 2017.4 to 2018.3 and the time it takes to open a synthesized design has increased dramatically. The file it is getting stuck on is xpm_cdc_async_rst.tcl. Opening the file reveals a single line:
set_false_path -through [get_ports src_arst] -to [all_registers]
The [all_registers] command seems suspect to me: it looks like every time this file is sourced, the entire design is searched.
Has anyone else run into this problem, and has anyone found a workaround?
05-07-2019 07:15 PM
Are you migrating a post-synthesis design to 2018.3 or an RTL based design?
Where is the XPM in your design? Are you using the XPM in your own RTL or is it in a Xilinx IP?
If it is an RTL based design and the XPM is in a Xilinx IP, please upgrade the IP to the 2018.3 version.
-vivian
05-07-2019 11:06 PM
Hi @jake.freeman ,
Do not use all_registers for your constraints if you want to reduce runtime.
The all_registers command queries all of the cells connected to a particular clock or cell, depending on the options you use with it. The returned list can be very large even though only a few objects actually need to be constrained, and this can hurt runtime.
Following is an example of timing exceptions that can negatively impact the runtime:
set_false_path -from [get_ports din] -to [all_registers]
If the din port only drives sequential elements, there is no need to name the sequential cells explicitly. The constraint can be written more efficiently:
set_false_path -from [get_ports din]
If the false path is needed, but only a few paths exist from the din port to any sequential cell in the design, then the constraint can be made more specific (all_registers can return thousands of cells, depending on how many registers the design uses):
set_false_path -from [get_ports din] -to [get_cells blockA/config_reg[*]]
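To get a sense of how expensive such a query is in a particular design, the size of the returned collection can be checked from the Vivado Tcl console of an open design. A quick sketch (the blockA/config_reg[*] path is a hypothetical example from above — substitute the actual cells of interest):

```tcl
# Count how many objects all_registers returns in the open design.
# On a large design this can be tens of thousands of cells, which is
# why "-to [all_registers]" makes sourcing constraints slow.
puts "all_registers returns [llength [all_registers]] cells"

# Compare with a scoped query, which returns only the cells that
# actually need the exception.
puts "scoped query returns [llength [get_cells {blockA/config_reg[*]}]] cells"
```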
05-08-2019 08:29 AM
Thanks for the quick responses. Here's some answers to your questions.
Q. Are you migrating a post-synthesis design to 2018.3 or an RTL based design?
A. No, it's an RTL based design using many Xilinx IP in a block design.
Q. Where is the XPM in your design? Are you using the XPM in your own RTL or is it in a Xilinx IP?
A. In Xilinx IPs
Q. If it is an RTL based design and the XPM is in a Xilinx IP, please upgrade the IP to the 2018.3 version.
A. I have already upgraded all IPs to 2018.3
And regarding "Do not use all_registers for your constraints if you want to reduce runtime": I completely agree. I believe it's the Xilinx IPs that are using the XPM, and that is where the constraint comes from.
Any thoughts? Thanks a lot.
05-08-2019 11:44 PM
Which IP is it?
-vivian
05-09-2019 10:29 AM
Looking at the timing constraints window, I see it's referenced 176 total times by the following:
- AXI Interconnect 2.1
- AXI Clock Converter 2.1
- FIFO Generator 13.2
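A way to cross-check this from the Tcl console might be the following sketch. It assumes the synthesized design is open and that the ORIG_REF_NAME property survives synthesis for these macro instances:

```tcl
# List every instance of the XPM macro in the synthesized netlist.
set xpm_cells [get_cells -hierarchical -filter {ORIG_REF_NAME == xpm_cdc_async_rst}]
puts "Found [llength $xpm_cells] xpm_cdc_async_rst instances"

# Print the parent of each instance to see which IP it lives in.
foreach c $xpm_cells {
    puts [get_property PARENT $c]
}
```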
Thanks,
-Jake
05-09-2019 09:33 PM
Hi @jake.freeman ,
Which OS version are you using for Vivado 2018.3?
Is it possible to share your testcase so I can reproduce this issue at my end? If so, I will send the ezmove FTP link through which you can provide an archived testcase.
05-10-2019 10:00 AM
We are using Ubuntu 16.04 LTS. Unfortunately I cannot share the actual design; I'd have to create a generic test case using those IPs.
Thanks a lot,
-Jake
05-10-2019 10:15 AM
Hi @jake.freeman ,
It would help us debug this issue if you could provide a testcase (even a generic one) that reproduces it at our end.
05-13-2019 03:50 AM
I see the problem in XPM with those IPs.
What is the runtime difference of opening synthesized design between 2017.4 and 2018.3?
-vivian
05-13-2019 04:09 AM - edited 05-13-2019 04:10 AM
Can you provide the 2018.3 log file that demonstrates the long runtime of opening the synthesized design, along with the messages printed during this process?
Can you provide the post-synthesis DCP for us to reproduce the long runtime?
-vivian
05-13-2019 09:09 AM
@viviany Thanks for all your help. The time to open my synthesized design in 2017.4 was 12 minutes, and it jumped to ~25 minutes for 2018.3.
05-15-2019 02:59 AM
In 2019.1, we have a better solution for you.
You can change the xpm_cdc_async_rst.tcl file in below folder.
....../Vivado/2019.1/data/ip/xpm/xpm_cdc/tcl/xpm_cdc_async_rst.tcl
Change
"set_false_path -through [get_ports src_arst] -to [all_registers]"
to
"set_false_path -through [get_ports -no_traverse src_arst]"
2019.1 will be released soon.
However, in 2018.3 this solution is not available.
I suggest you move to 2019.1.
-vivian
06-18-2019 02:51 PM
Vivian,
I'm the local Avnet FAE working with Jake on this issue. Could you explain why the 2019.1 fix you suggested will not work in 2018.3? Is the "-no_traverse" option new to 2019.1?
It is too late in the development cycle for the customer to move to 2019.1. What workarounds are available for 2018.3?
06-19-2019 01:00 AM
As you know, only a small amount of HDL is needed to create a circuit equivalent to the reset synchronizer produced by the xpm_cdc_async_rst macro. Would the customer not prefer that solution to a workaround for Vivado 2018.3?
Mark
06-19-2019 08:54 AM
Mark,
The constraint created in the xpm_cdc_async_rst.tcl file is used by Xilinx IP. I can see in the constraints window that it is used by at least the AXI width converter and AXI clock-crossing IPs. Is there a way to force these IP to use custom HDL rather than the Xilinx macro?
Thanks,
Jake
06-19-2019 09:24 AM
Is there a way to force these IP to use custom HDL rather than the Xilinx macro?
I must defer to Xilinx and the FAE on that one - sorry.
Mark
06-19-2019 08:14 PM
@jdehaven wrote:Vivian,
I'm the local Avnet FAE working with Jake on this issue. Could you explain why the 2019.1 fix you suggested will not work in 2018.3? Is the "-no_traverse" option new to 2019.1?
It is too late in the development cycle for the customer to move to 2019.1. What workarounds are available for 2018.3?
Yes, the -no_traverse option is new to 2019.1.
In 2018.3, it will not be easy to resolve this issue.
Can the customer figure out which IPs using xpm_cdc_async_rst.tcl cause the long runtime?
If there are just one or two, we may be able to resolve it at the IP top level.
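One possible 2018.3-era direction, offered here only as a sketch rather than an official fix: if Vivado exposes the generated constraint file through get_files, it could be disabled and its intent re-applied with a scoped constraint instead of all_registers. The pin pattern below is an assumption — match it to the actual src_arst pins of the affected XPM instances, and verify timing afterward:

```tcl
# Disable the XPM-generated Tcl constraint file, if it is visible to
# get_files (the exact file can be checked in the Timing Constraints
# window first).
set_property IS_ENABLED FALSE [get_files -all xpm_cdc_async_rst.tcl]

# Re-apply the intent with a scoped false path. The filter pattern is
# a hypothetical example; adjust it to the real instance hierarchy.
set_false_path -through [get_pins -hierarchical -filter {NAME =~ *xpm_cdc_async_rst*/src_arst}]
```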
-vivian