
Saving Compile Time Series 1: General Methods to Save Compile Time

Xilinx Employee

Compile Time Analysis

Compile time is impacted by several factors, including the tool flow, tool options, RTL edits, constraint edits, the part being targeted, and any critical issues the tools encounter during implementation of the design.

On top of this, the machines that are used and their load factors are also key contributors. In this blog, we will only explore factors related to the design and tool flow.

It is also important to note that the techniques described will not be applicable for all users.

For example, on a design that consists of 50 FPGA images with 50 constraint files, constraint changes are likely not practical. But for a single design run, they will be more relevant.

Additionally, individual recommendations will have more impact on some designs than others. For example, if a constraint change is applied to a design that launches 50 runs in parallel, the change will impact all of the runs. On a design with just a single implementation run, the same change will have limited impact.

We will describe the merits and costs of each technique here, but the user will ultimately have to decide whether they are worth implementing in their own use case.


Measuring Compile Time

When comparing compile time before and after a change, it is important to run on similar machines to get an apples-to-apples comparison.

Where this is not practical, you can still gauge the direction of the change by comparing relative rather than absolute numbers. We can compare the time in several different ways.

For a complete Vivado run, we can always search for compile time information in the vivado.log file. For example, you can find lines similar to the following:

place_design: Time (s): cpu = 03:21:34 ; elapsed = 01:58:53 . Memory (MB): peak = 21362.934 ; gain = 3668.312 ; free physical = 12076 ; free virtual = 142273


This line includes information on the total time spent in the place_design phase, along with memory usage. The “cpu” time is the cumulative time spent across the multiple threads assigned to sub-tasks in place_design.

The “elapsed” time is what we care about: the wall-clock time between the launch and completion of the place_design phase.

There are also other lines with a time report in the same format, but with no command name in front of them, for example:

Time (s): cpu = 00:34:50 ; elapsed = 00:17:24 . Memory (MB): peak = 21322.859 ; gain = 3612.184 ; free physical = 42807 ; free virtual = 172805


This is the time spent for each individual phase in a specific step. So to get the total compile time, we just need to add up the compile time reported for each step run either in project mode or non-project mode:

T(synth_design) + T(opt_design) + T(place_design) + T(phys_opt_design) + T(route_design)

One thing to note is that project mode takes additional time to generate multiple report files, and this time should also be added. You will then have a clear idea of which step takes most of the total compile time.
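As a sketch of this bookkeeping, the summation can be automated with a small amount of plain Tcl that scans the log for the per-command lines shown above. The log file name and the command list are assumptions to adjust for your own flow:

```tcl
# Sketch: sum the "elapsed" wall-clock times that Vivado reports for the
# main flow commands in vivado.log. Log name and command list are assumptions.
proc sum_elapsed {logfile} {
    set total 0
    set fh [open $logfile r]
    while {[gets $fh line] >= 0} {
        # Matches e.g. "place_design: Time (s): cpu = ... ; elapsed = 01:58:53 ..."
        if {[regexp {^(synth_design|opt_design|place_design|phys_opt_design|route_design): Time \(s\):.* elapsed = (\d+):(\d+):(\d+)} \
                $line -> cmd h m s]} {
            scan $h %d h; scan $m %d m; scan $s %d s
            set secs [expr {$h*3600 + $m*60 + $s}]
            puts [format "%-16s %6d s" $cmd $secs]
            incr total $secs
        }
    }
    close $fh
    puts "total elapsed: $total s"
    return $total
}
```

Running `sum_elapsed vivado.log` then prints a per-command breakdown along with the total, which makes the slowest step obvious at a glance.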

If you want to investigate how much time is spent for a single command instead of a run step, you can use Tcl commands to track it.

For example, using the command below, you can get 44 milliseconds as the time to run a single get_pins command:

set start [clock milliseconds]; get_pins -filter {NAME =~ *FPGA*/O}; set stop [clock milliseconds]; puts "TIME: [expr {$stop - $start}]"

TCL console output -> TIME: 44


This is helpful when you have a huge constraints file with thousands of lines and want to get a quick view of how much time is being spent for each command.
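To avoid copying that timing boilerplate around every command, one option is a small helper proc; this is a sketch, and the example filter pattern in the usage comment is hypothetical:

```tcl
# Sketch: time any single Tcl command and print the elapsed milliseconds.
proc time_cmd {script} {
    set start [clock milliseconds]
    set result [uplevel 1 $script]
    puts "TIME: [expr {[clock milliseconds] - $start}] ms -- $script"
    return $result
}

# Example usage (the filter pattern is hypothetical):
# time_cmd {get_pins -filter {NAME =~ *FPGA*/O}}
```

Because the helper returns the command's result, it can wrap constraint queries in an XDC without changing their behavior.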

For the incremental flow, we directly generate a table in the log file to count the total compile time for both the default run and incremental run at each individual step, so it is very easy to read.

Be aware that while the incremental run can read and reuse information from the reference checkpoint during synthesis or implementation, the time saved will only come from certain phases. If the netlist changes significantly, less of the reference will be reused and the compile time will be impacted accordingly.



Analyzing Compile Time

Once you have collected the compile time information, the next step is to analyze the data and identify which step has the most impact, so that you can target a resolution.

Some examples are:

  • Example 1:
    Say we identify that the route_design step has consumed most of the compile time.
    By reading the log report, we realize that this is a design with high utilization which causes routing congestion, and so the router’s compile time is quite long.
    Hence, we can rely on report_design_analysis to get the congestion report and figure out which area or module is causing the problem. Accordingly, we can decide to optimize the code for a lower congestion RTL coding style, or rely on the tool’s congestion strategies.

  • Example 2:
    If we have utilized a high number of IP cores or modules which do not need to be updated for each run, we can look at flow optimization.
    For example, for some IP cores instantiated in a design, we can enable IP caching so that we will not need to regenerate them each time, and can save on the IP generation time.
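    As a minimal sketch of the IP caching idea, the cache can be pointed at a directory from Tcl before IP output products are generated; the directory path is an assumption:

```tcl
# Sketch: enable the Vivado IP cache so that unchanged IP cores are reused
# rather than re-synthesized on every run. The directory is an assumption.
config_ip_cache -use_cache_location ./ip_cache
```

    In recent Vivado releases a project-local cache is typically enabled by default, so the command above is mainly useful for pointing several projects at one shared cache location.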
    We can enable a bottom-up development flow for parallel development which will ultimately save time for design implementation integration.
    We can also enable the incremental flow for fast design iterations after completing one flow to get a guiding file.

The content below is broken into 2 sections, based on the 2 different approaches we can take to solving compile time issues.

Each section then links to several blogs. The links to subtopics shown with a red underscore are active now, and this main blog will be updated with additional links as each of the other subtopics becomes ready.


Solving Compile time issues on design specific problems

To resolve design specific compile time issues, we can use the techniques below, which are sorted into 4 categories based on commonly seen root causes and resolutions:

  • Constraints
  • Incremental implementation
  • Tool driven options
  • Using the out-of-context run


Constraints

Having clean, reasonable and precise constraints in your design helps to make efficient use of system memory, and hence reduces the overall compile time. We need to analyze how much compile time is being spent on constraints, understand how that time is spent, and improve the constraint syntax to be more efficient. This will be covered with examples in the Save compile time with efficient constraints blog.

Incremental Flow

Incremental synthesis and incremental implementation flows can be a straightforward and easy-to-manage way to save compile time. You can iterate quickly on subsequent runs when the design change ratio is low, and the flow can also produce more consistent and predictable results. Some prerequisites for adopting this flow, along with information on how to interpret its report, are also included.
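In non-project mode, the flow boils down to loading a reference routed checkpoint before placement; a minimal sketch, in which the checkpoint path is an assumption:

```tcl
# Sketch (non-project mode): reuse a previous routed checkpoint as the
# reference for an incremental implementation run. File path is an assumption.
opt_design
read_checkpoint -incremental ./reference/top_routed.dcp
place_design
route_design
```

The reference checkpoint must come from a routed run of a closely matching netlist for the reuse to pay off.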

Tool and Report Options

Tool driven options can help to minimize design specific issues, such as design DRC violations, inadequate timing constraint coverage, or design congestion, all of which can greatly impact compile time, and should be explored before taking on any further tool optimizations. We can rely on some Vivado reporting commands to generate the reports and do the analysis.

  • Run report_methodology to solve design methodology issues. Some bad practices indicated in the report could impact the compile time, and you can get easy fixes from the report before kicking off the following run. This will be covered in the blog entry Identify issues which have an impact on compile time with report_methodology.
  • Run report_design_analysis to solve timing/complexity/congestion issues. This will help you to better understand the bottlenecks in your design by reading the top critical paths, the design complexity (Rent exponent), and design placement hot spots. It can provide some easy ideas for finding a solution. This will be covered in the blog Identify issues which have an impact on compile time with report_design_analysis.
  • Run report_qor_suggestions to get additional suggestions as lower-level Tcl scripts, which can then be applied to the design directly. This will be covered in the blog Identify issues which have an impact on compile time with report_qor_suggestions.
  • Run report_exceptions to get information on timing exception interactions and coverage. An improper timing exception that over-constrains the design can lead to longer compile time. This will be covered in Identify issues which have an impact on compile time with report_exceptions.
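For reference, the four reports above can all be generated from Tcl on an opened design; this is a sketch, with output file names as assumptions:

```tcl
# Sketch: generate the four reports discussed above. File names are assumptions.
report_methodology     -file methodology.rpt
report_design_analysis -congestion -complexity -file design_analysis.rpt
report_qor_suggestions -file qor_suggestions.rpt
report_exceptions      -file exceptions.rpt
```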

Out-of-Context Run/Block-Level Synthesis

Running design cores in out-of-context mode generates parallel sub-runs, and block-level synthesis makes it possible to define different compile time or performance strategies for different sub-modules. This shortens the design integration time, so the total compile time is reduced. It will be covered in this blog entry.
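In project mode, this is typically controlled through file and cell properties; a sketch, in which the IP file name, instance name, and strategy value are hypothetical, so check your release's documentation for the supported values:

```tcl
# Sketch: mark an IP for out-of-context synthesis (its own parallel run) and
# set a block-level synthesis strategy on a hierarchy cell. Names and the
# strategy value are hypothetical.
set_property GENERATE_SYNTH_CHECKPOINT true [get_files my_ip.xci]
set_property BLOCK_SYNTH.STRATEGY {AreaOptimized_high} [get_cells u_submodule]
```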

Solving Compile Time Issues across Multiple Designs

When the compile time reduction is required across multiple designs, we need some more general methods to apply and iterate based on the design results. These techniques are broken into 2 categories below:

Auto Suggested Flow and Constraints by Vivado

Since the 2019.1 release, Vivado has enabled new features to provide multiple auto-generated strategies in a Tcl format, which can then be directly sourced.

This can help shorten the cycles of sweeping strategies, and makes it easy to find some of the best compile time/performance balanced strategies, without needing to do the manual work of sweeping all designs in parallel.

This feature is enabled in report_qor_suggestions, which will be covered in the blog entry Identify issues which have an impact on compile time with report_qor_suggestions.

Sweep Implementation Directives

This covers how to choose the compile time targeted directives when selecting from the existing strategies, and also provides some suggestions for defining the user’s own compile time reduction strategies.

This will be covered in the blog entry Save compile time with Vivado Compile time strategies.
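As one example of a compile-time-oriented sweep point, the non-project implementation commands each accept a -directive switch; the choices below are illustrative and can cost timing QoR:

```tcl
# Sketch: a compile-time-oriented directive set for a non-project run.
# These choices trade QoR for runtime; sweep them against your own design.
opt_design      -directive RuntimeOptimized
place_design    -directive RuntimeOptimized
phys_opt_design -directive RuntimeOptimized
route_design    -directive RuntimeOptimized
```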


By applying the techniques above, you should be able to analyze the total compile time, scope out where it can be optimized, and ultimately reduce it.


1 Comment
Xilinx Employee

Hi txu,


Thanks for the summary on decreasing compile time. I looked into this a little on my own before this post and I found a few other options to decrease compile time. I have listed them below.


Further Suggestions to Reduce Compile Time

1. Remove the phys_opt_design steps.

2. Change route_design (UG835 v2019.1 p.1490) to use -ultrathreads for threading parallelism.

3. Reduce reporting (UG906 v2019.1 p.183).

4. Change synth_design, opt_design, place_design, and route_design (UG835 v2019.1) to -directive RuntimeOptimized (Quick is an option for place_design, but timing-driven placement is needed in most cases; these directive options can adversely affect timing).

5. Cache timing objects used in constraints per UG894 v2019.1 p.107.