Adventurer
Registered: 05-30-2018

Segmentation Fault when dealing with big array in Vivado HLS - reposting


Hi everyone!

 

I am implementing a CNN (Convolutional Neural Network) on an UltraScale FPGA using Vivado HLS.

 

The fully connected layer has 1152 inputs and 1024 neurons, which means that the weight array for this layer is of size 1152x1024.

 

The array is defined as a 2D extern const array in a header file, and the header is included in the top-level module.

 

The problem is that I get a segmentation fault every time I try to run C simulation.

 

I was working on my laptop (6 GB RAM) at first, then I moved to a server, but I still get the same error.

 

My laptop runs Vivado HLS 2018 on Windows 10, while the server runs Vivado HLS 2015 on Red Hat.

 

By the way, I get the same error when I run the .tcl script from the command line.

Trying to define the array as extern static also didn't solve the problem.

Also, the code works fine when I build it with a regular compiler (e.g. in Code::Blocks), so it doesn't look like a misused pointer or an incorrect definition.

 

Do you have any suggestions / workarounds to overcome this problem please?

 

Thanks in advance.

 

P.S. This is not spam, and if the answer is to allocate it with malloc, then please tell me how to initialize it!

 


Accepted Solutions
Adventurer
Registered: 05-30-2018

Re: Segmentation Fault when dealing with big array in Vivado HLS - reposting


After a lot of trials and hopeless workarounds, I found that using the Clang compiler instead of GCC for C simulation solved the problem.

The huge project finally compiled, though it took some time even on a powerful machine.

 

However, I have read that Clang is only available on Linux.

 

Another hint might be to use static arrays, but I'm not sure.

 

Best regards!

Moderator
Registered: 06-24-2015

Re: Segmentation Fault when dealing with big array in Vivado HLS - reposting


@rashedkoutayni

 

Can you try increasing the heap and stack size and see if it helps?
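For the stack part of that suggestion, a minimal sketch of how the limit could be checked and raised on Linux before launching C simulation from the same shell (the script name run_hls.tcl is just a placeholder):

```shell
# Query the soft stack limit for this shell (in kB); 8192 (8 MB) is a common default
ulimit -s

# To raise it before launching C simulation, run something like the following
# in the same shell (run_hls.tcl is a hypothetical script name):
#   ulimit -s unlimited
#   vivado_hls -f run_hls.tcl
```

Note that the simulator inherits the limit from the shell that launched it, which is why the ulimit call and the tool invocation must happen in the same shell.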

Thanks,
Nupur
Xilinx Employee
Registered: 05-06-2008

Re: Segmentation Fault when dealing with big array in Vivado HLS - reposting


Hello @rashedkoutayni,

 

I also recommend shrinking the arrays to a much smaller size. Verify that this works, then go back to the full-size arrays. This makes the runtime faster and debugging easier.

 

Good Luck,
Chris

Scholar u4223374
Registered: 04-26-2015

Re: Segmentation Fault when dealing with big array in Vivado HLS - reposting


Using a static array almost always solves this problem, because it shifts the data off the (fairly small) stack. It also mimics real HLS behaviour (where the array will be implemented as BRAM and not reset between function calls) properly.

Adventurer
Registered: 05-30-2018

Re: Segmentation Fault when dealing with big array in Vivado HLS - reposting


Hi @u4223374, you're right about static arrays.

However, even when using static arrays with GCC, I get a segmentation fault.

 

So the solution here is static arrays and Clang together, which I'm not able to do in SDSoC.

 

SDSoC uses the aarch64-linux-gnu-g++ compiler when compiling for the ZCU102, and it still gives a segmentation fault 😠

I'm stuck at this point now...

Scholar u4223374
Registered: 04-26-2015

Re: Segmentation Fault when dealing with big array in Vivado HLS - reposting


@rashedkoutayni That's a new one to me. The array is large, but not really large - I've often dealt with much bigger ones (e.g. 1920x1080x24-bit for image processing) using just the static keyword. Any chance of posting the code (or some snippets) for further analysis?

 

With regard to malloc: the "nice" way to initialize the array is to keep the data in a file somewhere and use fread to populate it. However, HLS won't do either malloc or fread at run time, so you then need a separate implementation (protected by appropriate macros) for synthesis. I'm not sure how SDSoC would handle this, as I've never used it.

Adventurer
Registered: 05-30-2018

Re: Segmentation Fault when dealing with big array in Vivado HLS - reposting


@u4223374 

Well, the way I initialize the array is a bit clumsy.

In a header file, I use something like:

static data_type array[ARRAY_SIZE] = {
#include "values.h"
};

where values.h is CSV file (with last comma removed).

 

However, I noticed something else as well:

if data_type is a class (e.g. ap_fixed), the problem is more likely to occur.

But if I use uint8_t or uint16_t, for example, the situation is better (of course, I then have to do a proper reinterpret cast when the array is used).

 

Best,

Rashed

 
