11-22-2019 03:12 PM
2018.3 has better WNS, TNS, and fewer failing endpoints.
2019.2 uses fewer LUTs (6869 vs. 6928).
11-22-2019 04:25 PM
It is difficult to compare the performance of two versions of Vivado when your project is failing timing analysis (i.e., Vivado reports a negative WNS).
It is better to modify your project so that it passes timing analysis in at least one version of Vivado – and then make the comparison.
If your project completes implementation and passes timing analysis with one version of Vivado and not with the other version, then you immediately know which version of Vivado is better.
If your project completes implementation and passes timing analysis with both versions of Vivado, then the comparison becomes more difficult. You cannot make the comparison based on WNS and TNS, because Vivado runs until it just passes timing analysis and then stops. However, you can compare how long each version took to complete implementation, or how many resources (e.g., LUTs) were used by each version.
11-22-2019 04:54 PM
I would also try reviewing why the WNS path failed. Looking at the path report and implementation log should give clues as to why this is occurring.
11-22-2019 05:06 PM
Suppose I add 0.2 ns of extra margin (which can be done and is a known technique to make the tool perform better); now one version is successful and the other is not.
Meeting timing constraints is extremely important compared to whether LUT usage is 86 or 87 percent, so it is a very important comparison metric.
I can make any comparison I want: implementation time, area, or WNS/TNS.
11-22-2019 05:11 PM
I know the critical path; it cannot be changed by modifying the code in that path, as it is a design requirement.
I need to reduce the area in other parts of the design through resource sharing so that the net delay might drop by 0.2 ns.
I will also try changing run strategies.
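Changing the run strategy can be scripted; a minimal sketch in Vivado Tcl, assuming a project with an implementation run named impl_1 (the run name and strategy are illustrative):

```tcl
# Assumes an implementation run named impl_1 exists in the open project.
# Performance_Explore is one of the stock Vivado implementation strategies.
set_property strategy Performance_Explore [get_runs impl_1]
reset_run impl_1
launch_runs impl_1 -jobs 4
wait_on_run impl_1
```

Repeating this over several strategies gives a set of results to compare rather than a single data point.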
11-22-2019 05:49 PM
With the 0.2 ns of extra margin added, 2019.2 now fails and 2018.3 effectively does not.
I tightened the 80 MHz clock (12.5 ns period) by 0.2 ns, making it 81.3 MHz (12.3 ns).
Against the tightened constraint, Vivado 2018.3 missed by only 0.046 ns (so it still meets the real 12.5 ns requirement), whereas Vivado 2019.2 missed by 0.462 ns.
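In XDC terms, the overconstraining step above amounts to tightening the clock period; a sketch, assuming a hypothetical clock port named clk:

```tcl
# Original constraint for the 80 MHz clock (12.5 ns period):
#   create_clock -period 12.500 -name sys_clk [get_ports clk]

# Overconstrained version: 0.2 ns tighter (12.3 ns, ~81.3 MHz),
# asking the tools to leave extra slack on the real 12.5 ns requirement.
create_clock -period 12.300 -name sys_clk [get_ports clk]
```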
11-22-2019 06:27 PM - edited 11-22-2019 06:28 PM
In order to compare performance of two Vivado versions, I usually run 20-25 builds of each with different synthesis and implementation strategies. Then plot a histogram of TNS and WNS, and compare mean and standard deviation of distributions. The distributions often end up bell-shaped with heavy tails. That approach provides more statistically significant results than comparing a single build.
I've developed automated scripts that schedule builds and do analysis in command line. But it's not too hard to do it in Vivado GUI as well.
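A minimal sketch of that kind of analysis, assuming the WNS values (in ns) have already been extracted from each run's timing report; the numbers below are made up for illustration:

```python
import statistics

# Hypothetical WNS results (ns) from several implementation runs of each
# Vivado version, each run using a different strategy/seed.
wns_2018_3 = [-0.046, 0.012, -0.103, 0.055, -0.021]
wns_2019_2 = [-0.462, -0.210, 0.004, -0.330, -0.150]

def summarize(label, wns):
    """Report mean/stddev of WNS and how many runs met timing (WNS >= 0)."""
    mean = statistics.mean(wns)
    stdev = statistics.stdev(wns)
    passed = sum(1 for w in wns if w >= 0)
    print(f"{label}: mean WNS = {mean:+.3f} ns, "
          f"stdev = {stdev:.3f} ns, {passed}/{len(wns)} runs met timing")
    return mean, stdev, passed

summarize("2018.3", wns_2018_3)
summarize("2019.2", wns_2019_2)
```

Comparing means and spreads over many runs smooths out the run-to-run noise that a single build cannot.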
11-22-2019 06:49 PM
Consider I put 0.2 ns extra margin (which can be done and is a known technique to make the tool perform better),
How did you come to this conclusion (or where did you read it)? Xilinx specifically does not recommend overconstraining your design at the implementation phase; there are many documented cases where overconstraining actually produces worse results than a properly constrained design. It will certainly result in a design that takes more resources and consumes more power.
And to your original statement (as others have stated): it is meaningless to compare a single run and draw conclusions about the tool. The synthesis, place, and route processes are all NP-complete problems, which inherently leads to chaotic results. Any change in the input conditions (and clearly the tool version is a change), no matter how small, can lead to radically different results. As @evgenis1 mentioned, you can only look at it statistically, and I know that the Xilinx software QA process tests not one but thousands of designs on each release. If the tool really performed statistically significantly worse, it probably wouldn't have been released.
11-22-2019 11:54 PM
This will be used in a product; we do not care about power (0.273 W vs. 0.276 W) or resource usage (0.014% less), especially when the design does not meet timing.
I overconstrained, the design passed timing, and overconstraining produced better results. Please also document this case for overconstraining.
Let me rephrase the original statement: 2018.3 performed better than 2019.2 for this specific case, for us.
11-23-2019 03:36 AM
There is not much to discuss here. I have used Xilinx parts for many years and I am very happy with them; they work from sub-zero temperatures up to +80 °C without any problem, with the design 87 percent full. I stated a fact for a special case with a specific seed.
Have a nice weekend.
Thank you for your time.
@richardhead You are right
@drjohnsmith I overconstrained by 0.2 ns and the critical path missed the tightened constraint by only 0.046 ns, so the real timing did pass. We are not almost there, we are there: we have a healthy baby from 2018.3 :)