
Explorer
727 Views
Registered: 05-23-2017

Re: array partitioning: long runtime and suboptimal QoR due to large multiplexers

Jump to solution

@nupurs

Thanks for your reply.

I checked the vivado_hls.log file and didn't find the II number when using unrolling. The compilation takes a very long time (at least 14 hours).

INFO: [HLS 200-10] ----------------------------------------------------------------
INFO: [HLS 200-42] -- Implementing module 'read_query_or'
INFO: [HLS 200-10] ----------------------------------------------------------------
INFO: [SCHED 204-11] Starting scheduling ...
INFO: [SCHED 204-11] Finished scheduling.
INFO: [HLS 200-111]  Elapsed time: 1158.04 seconds; current allocated memory: 3.672 GB.
INFO: [HLS 200-434] Only 0 loops out of a total 1 loops have been pipelined in this design.
INFO: [BIND 205-100] Starting micro-architecture generation ...
INFO: [BIND 205-101] Performing variable lifetime analysis.
INFO: [BIND 205-101] Exploring resource sharing.
INFO: [BIND 205-101] Binding ...
INFO: [BIND 205-100] Finished micro-architecture generation.
INFO: [HLS 200-111]  Elapsed time: 1803.52 seconds; current allocated memory: 243.396 MB.

And I just found that it stopped at the linking stage and complained with an error.

Here is the output from the terminal and _x/link/vivado/vivado.log:

(attached screenshots: 2323.jpg, ddd.jpg)

But I didn't find any useful information in them.

Could the previous warning cause this error?

0 Kudos
1 Solution

Accepted Solutions
Explorer
596 Views
Registered: 05-23-2017

Re: array partitioning: long runtime and suboptimal QoR due to large multiplexers

Jump to solution

Finally, I found that this issue is caused by the partitioning and unrolling of a large array.
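For anyone hitting the same wall, a minimal sketch of the usual mitigation (the array, loop, and factor values here are illustrative assumptions, not the original project's code): instead of a complete partition plus full unroll, bound the partition factor and match it to a modest unroll factor so each parallel read lands on its own partition.

```cpp
// Hypothetical kernel: cyclic partition factor matched to the unroll
// factor, so the tool needs 8 narrow accesses per cycle instead of a
// huge multiplexer over thousands of registers. (HLS pragmas are
// ignored by an ordinary C++ compiler.)
static const int N = 4096;

int sum_all(const int data[N]) {
#pragma HLS ARRAY_PARTITION variable=data cyclic factor=8
    int acc = 0;
SUM_LOOP:
    for (int i = 0; i < N; ++i) {
#pragma HLS UNROLL factor=8
        // Eight reads per iteration hit eight distinct partitions,
        // so no wide mux is required.
        acc += data[i];
    }
    return acc;
}
```

Keeping `factor` in the partition pragma equal to the `UNROLL` factor is the key point: full partitioning only pays off when every element is genuinely accessed in parallel.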

0 Kudos
5 Replies
Moderator
688 Views
Registered: 06-24-2015

Re: array partitioning: long runtime and suboptimal QoR due to large multiplexers

Jump to solution

@mathmaxsean


It seems you are using SDAccel, so I am moving this to the SDAccel board.

Thanks,
Nupur
--------------------------------------------------------------------------------------------
Google your question before posting. If someone's post answers your question, mark the post as answer with "Accept as solution". If you see a particularly good and informative post, consider giving it Kudos (click on the 'thumbs-up' button).
0 Kudos
Scholar u4223374
673 Views
Registered: 04-26-2015

Re: array partitioning: long runtime and suboptimal QoR due to large multiplexers

Jump to solution

@mathmaxsean You haven't actually told us anything about your project, but that warning is normally a bad sign. If you've got a very large, fully partitioned array that's being accessed one element at a time, it can consume an extremely large amount of hardware, which then causes problems when the tools try to build the design.
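To make that concrete, here is a minimal sketch (a hypothetical kernel, not the poster's code) of the pattern that warning usually points at: a large array fully partitioned into individual registers, then read with a runtime-variable index, forcing the tool to build an N-to-1 multiplexer for every access.

```cpp
// Hypothetical example of the problematic pattern. The HLS pragma is
// ignored by an ordinary C++ compiler, so this still runs as plain C++.
static const int N = 4096;

int lookup(const int table[N], int idx) {
#pragma HLS ARRAY_PARTITION variable=table complete
    // 'idx' is only known at runtime, so picking one of the N separate
    // registers requires a 4096-to-1 multiplexer in hardware. That mux
    // is what explodes both area and tool runtime.
    return table[idx];
}
```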

0 Kudos
Explorer
659 Views
Registered: 05-23-2017

Re: array partitioning: long runtime and suboptimal QoR due to large multiplexers

Jump to solution

@u4223374

Thanks.

Is there a way I can send the project to you? The project is quite large.

Thanks.

0 Kudos
Scholar u4223374
639 Views
Registered: 04-26-2015

Re: array partitioning: long runtime and suboptimal QoR due to large multiplexers

Jump to solution

@mathmaxsean There's no point sending the project to me since I don't have SDAccel; my experience with this comes from HLS, which is the tool underlying much of SDAccel's FPGA-side functionality. However, if you post the code here then lots of people can have a look at it. It's unlikely that your C code is very large.

0 Kudos