
po092000 (Visitor)
Registered: 07-24-2016

HW function interface maximum array size?

Hi, I have a problem, and this is my code:

 

#pragma SDS data copy(src_2[0:12*12*20])
#pragma SDS data mem_attribute(src_2:NON_CACHEABLE)

#pragma SDS data copy(convolution_filer_2[0:5*5*20*50])
#pragma SDS data mem_attribute(convolution_filer_2:NON_CACHEABLE)

#pragma SDS data copy(dst_2[0:50*8*8])

void Convolution_Layer_2(float src_2[12*12*20], float convolution_filer_2[5*5*20*50], float dst_2[50*8*8]);

 

The error message is:

Function "Convolution_Layer_2" argument "convolution_filer_2" is mapped to RAM interface, but it's size is bigger than 16384. Please specify #pragma SDS data zero_copy(convolution_filer_2) or #pragma SDS data access_pattern(convolution_filer_2:SEQUENTIAL)

 

I think the problem is a limitation on the transfer size.

 

Is there any way to solve this problem, such as setting a maximum transfer size?

Thank you.

1 Reply

Xilinx Employee
Registered: 06-29-2015

Re: HW function interface maximum array size?

Hi po092000,

 

Please check out the SDSoC User Guide, page 93, which gives the maximum BRAM depth as 16384:

http://www.xilinx.com/support/documentation/sw_manuals/xilinx2016_2/ug1027-sdsoc-user-guide.pdf

 

FPGAs have limited on-chip BRAM storage, so the maximum array size for a function argument is limited to 16K elements (16384) in SDSoC. This limitation applies only to arrays on the interface of the function. Arrays stored as BRAMs internal to your function can be of any size (just beware that a very large array might not fit on the device).

 

You have two options to work around this limitation:

  1. If you only read the data once, and in order (sequentially reading each element), you can specify a FIFO interface using the SDSoC pragma #pragma SDS data access_pattern(A:SEQUENTIAL), where 'A' is the argument name. The pragma goes just above your function (the declaration in the header is preferred, but the definition is fine too). This gives you a FIFO interface on the core, and a data mover will copy the data to your core appropriately. If the array is small enough to fit in BRAM on the device, you can add a copy loop that copies from the FIFO argument into a local internal array, which will be implemented as a BRAM for random access (see the sketch after this list).
  2. Alternatively, you can implement an AXI Memory-Mapped interface, which gives the core the ability to read/write PS memory (DDR or cache) directly, using the pragma #pragma SDS data zero_copy(A[offset:size]), where 'A' is the argument name, offset is the first location in the array that you'll read from (most of the time it's zero), and size is the length of the array to copy (also sketched below).
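
Applied to the declaration from your post, the two options might look like this. This is only a sketch: the pragmas are the documented SDSoC forms, while the argument names and sizes simply mirror your code. Note that the two forms are alternatives for the same argument; pick one.

// Option 1: FIFO (streaming) interface; convolution_filer_2 must then be
// read exactly once, in sequential order.
#pragma SDS data copy(convolution_filer_2[0:5*5*20*50])
#pragma SDS data access_pattern(convolution_filer_2:SEQUENTIAL)
void Convolution_Layer_2(float src_2[12*12*20], float convolution_filer_2[5*5*20*50], float dst_2[50*8*8]);

// Option 2: AXI memory-mapped (zero-copy) interface; the core reads
// PS memory (DDR or cache) directly, with no data mover copy.
#pragma SDS data zero_copy(convolution_filer_2[0:5*5*20*50])
void Convolution_Layer_2(float src_2[12*12*20], float convolution_filer_2[5*5*20*50], float dst_2[50*8*8]);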

Be warned that if you choose option 2, your access pattern dictates the achievable bandwidth of your system. The AXI protocol handshake takes at least 4 cycles per read/write operation, in addition to the time it takes the PS memory controller to respond with the data for reads; you may end up with 20+ cycles of latency on each read operation. So be sure to pipeline the loop that accesses this argument, and keep the accesses sequential, in order to get burst reads/writes.

 

If you are able to pipeline the read/write loop on this array and have sequential, incrementing accesses, Vivado HLS will automatically produce a design that emits burst reads/writes and is just as fast as a DMA engine.
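
As a rough sketch of that pattern (the function body, loop label, and buffer name here are illustrative, not taken from your code):

void Convolution_Layer_2(float src_2[12*12*20], float convolution_filer_2[5*5*20*50], float dst_2[50*8*8])
{
    // Local filter copy, implemented as BRAM inside the core. Note that
    // 5*5*20*50 floats is about 100 KB, so this only fits on larger devices.
    static float filter_buf[5*5*20*50];

copy_filter:
    for (int i = 0; i < 5*5*20*50; i++) {
#pragma HLS PIPELINE II=1
        // Sequential, in-order reads in a pipelined loop allow Vivado HLS
        // to infer AXI burst transfers instead of one handshake per element.
        filter_buf[i] = convolution_filer_2[i];
    }

    // ... convolution computation using filter_buf for random access ...
}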

 

Sam
