03-05-2019 09:42 PM - edited 03-05-2019 09:43 PM
I'm working on coding this accelerator: //www.xilinx.com/support/documentation/application_notes/xapp1170-zynq-hls.pdf
It notes (page 10) that I can improve performance by increasing the AXI bit width, which halves the read/write loop latencies. Has anyone done this? Can you share how?
In the .cpp file, using Vivado HLS, I changed the typedef from "typedef ap_axiu<32,4,5,5> AXI_VAL;" to "typedef ap_axiu<64,4,5,5> AXI_VAL;", but that alone didn't work.
Where else do I modify this parameter, and what other considerations apply when I integrate this into the AXI4-Stream interface in Vivado?
03-06-2019 06:53 AM
You need two things:
1) A wider data type on the HLS interface (the function's bus/pointer parameter)
2) In Vivado, on the block diagram, make sure the width of your AXI interfaces is wide enough
03-06-2019 11:10 AM
I understand that widening the data width (from 32 to 64) is done in the AXI stream template parameters: ap_axiu<64,4,5,5>. Yet I get an error about conflicting data widths, because inside my accelerator the width is 32 bits. If I change the data width my accelerator uses from 32 bits to 64 bits, then the unpacking functions (pop/push stream) no longer work.
pop_stream/push_stream use a "union" to convert the types (a one-to-one size conversion) and do the unpacking/packing of the input/output AXI streams. Would I need to replace these functions with my own packing/unpacking? Is there a high-level way of packing multiple words per beat?
Part of the code from the application note, the pop function:
template <typename T, int U, int TI, int TD>
T pop_stream(ap_axiu<sizeof(T)*8,U,TI,TD> const &e)
{
    union { int ival; T oval; } converter;   // one-to-one size conversion
    converter.ival = e.data;
    T ret = converter.oval;
    volatile ap_uint<sizeof(T)> strb = e.strb;
    volatile ap_uint<TD> dest = e.dest;
    return ret;
}
03-06-2019 12:46 PM