m_bridges_etl
Visitor
396 Views
Registered: 03-11-2021

AXI4-Stream Data FIFO depth setting not applied

Hi,

I am trying to use the AXI4-Stream Data FIFO with Vivado 2020.2. 

I edited the TDATA width but left the rest untouched.

The depth set in the IP core generator does not match the setting used to synthesize the core. I use the default of 512, as shown in the image below; however, this value is not the one actually used to generate the core, as explained below.

[Image: Xilinx_AXIS_FIFO_bug.PNG — AXI4-Stream Data FIFO customization GUI with the FIFO depth set to 512]

Looking at the synthesis log, it is clear that a depth of 1024 has been used. Some of the other settings do not appear to match either.

---------------------------------------------------------------------------------
Starting RTL Elaboration : Time (s): cpu = 00:00:05 ; elapsed = 00:00:05 . Memory (MB): peak = 1431.430 ; gain = 187.641
---------------------------------------------------------------------------------
INFO: [Synth 8-6157] synthesizing module 'axis_data_fifo_0' [e:/VivadoProjects/project_12/project_12.gen/sources_1/ip/axis_data_fifo_0/synth/axis_data_fifo_0.v:58]
INFO: [Synth 8-6157] synthesizing module 'axis_data_fifo_v2_0_4_top' [e:/VivadoProjects/project_12/project_12.gen/sources_1/ip/axis_data_fifo_0/hdl/axis_data_fifo_v2_0_vl_rfs.v:54]
	Parameter C_FAMILY bound to: zynquplus - type: string 
	Parameter C_AXIS_TDATA_WIDTH bound to: 64 - type: integer 
	Parameter C_AXIS_TID_WIDTH bound to: 1 - type: integer 
	Parameter C_AXIS_TDEST_WIDTH bound to: 1 - type: integer 
	Parameter C_AXIS_TUSER_WIDTH bound to: 1 - type: integer 
	Parameter C_AXIS_SIGNAL_SET bound to: 3 - type: integer 
	Parameter C_FIFO_DEPTH bound to: 1024 - type: integer 

 

Inspecting the .xci file, I found these two settings:

        <spirit:configurableElementValue spirit:referenceId="MODELPARAM_VALUE.C_FIFO_DEPTH">1024</spirit:configurableElementValue>
...
        <spirit:configurableElementValue spirit:referenceId="PARAM_VALUE.FIFO_DEPTH">512</spirit:configurableElementValue>

 

Am I correct in saying that this is a Vivado bug, or at least a bug in this IP? The main reason this concerns me is that the BRAM usage is doubled, which I cannot afford.

Block RAM: Preliminary Mapping	Report (see note below)
+----------------------------------------------------------------------------------------+----------------------------------+------------------------+---+---+------------------------+---+---+------------------+--------+--------+-----------------+
|Module Name                                                                             | RTL Object                       | PORT A (Depth x Width) | W | R | PORT B (Depth x Width) | W | R | Ports driving FF | RAMB18 | RAMB36 | Cascade Heights | 
+----------------------------------------------------------------------------------------+----------------------------------+------------------------+---+---+------------------------+---+---+------------------+--------+--------+-----------------+
|inst/\gen_fifo.xpm_fifo_axis_inst /xpm_fifo_base_inst/\gen_sdpram.xpm_memory_base_inst  | gen_wr_a.gen_word_narrow.mem_reg | 1 K x 84(READ_FIRST)   | W |   | 1 K x 84(WRITE_FIRST)  |   | R | Port A and B     | 0      | 2      | 1,1             | 
+----------------------------------------------------------------------------------------+----------------------------------+------------------------+---+---+------------------------+---+---+------------------+--------+--------+-----------------+

2 RAMB36 used instead of 1.
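
For reference, if I cannot get the IP to respect the depth, my fallback would be to skip the wrapper and instantiate the xpm_fifo_axis macro directly, since the mapping report above shows that is what the IP is built on. Below is a rough, untested sketch of what I mean; the wrapper module name and the trimmed-down port list are just placeholders for my particular configuration (single clock, 64-bit TDATA, TKEEP and TLAST only, depth 512), and every parameter I do not set is left at its default.

// Untested sketch: direct xpm_fifo_axis instantiation with the depth pinned
// to 512. Module name and port list are placeholders for my configuration.
module axis_fifo_512x64 (
    input  wire        s_axis_aclk,
    input  wire        s_axis_aresetn,
    // slave (write) side
    input  wire [63:0] s_axis_tdata,
    input  wire [7:0]  s_axis_tkeep,
    input  wire        s_axis_tlast,
    input  wire        s_axis_tvalid,
    output wire        s_axis_tready,
    // master (read) side
    output wire [63:0] m_axis_tdata,
    output wire [7:0]  m_axis_tkeep,
    output wire        m_axis_tlast,
    output wire        m_axis_tvalid,
    input  wire        m_axis_tready
);

    xpm_fifo_axis #(
        .CLOCKING_MODE    ("common_clock"),
        .FIFO_MEMORY_TYPE ("auto"),
        .FIFO_DEPTH       (512),   // the depth I actually want
        .TDATA_WIDTH      (64),
        .TID_WIDTH        (1),
        .TDEST_WIDTH      (1),
        .TUSER_WIDTH      (1),
        .PACKET_FIFO      ("false")
    ) u_fifo (
        .s_aclk             (s_axis_aclk),
        .m_aclk             (s_axis_aclk),   // only used in independent-clock mode
        .s_aresetn          (s_axis_aresetn),

        .s_axis_tdata       (s_axis_tdata),
        .s_axis_tkeep       (s_axis_tkeep),
        .s_axis_tstrb       (s_axis_tkeep),  // no separate TSTRB in my design
        .s_axis_tlast       (s_axis_tlast),
        .s_axis_tvalid      (s_axis_tvalid),
        .s_axis_tready      (s_axis_tready),
        .s_axis_tuser       (1'b0),
        .s_axis_tid         (1'b0),
        .s_axis_tdest       (1'b0),

        .m_axis_tdata       (m_axis_tdata),
        .m_axis_tkeep       (m_axis_tkeep),
        .m_axis_tstrb       (),
        .m_axis_tlast       (m_axis_tlast),
        .m_axis_tvalid      (m_axis_tvalid),
        .m_axis_tready      (m_axis_tready),
        .m_axis_tuser       (),
        .m_axis_tid         (),
        .m_axis_tdest       (),

        .injectsbiterr_axis (1'b0),
        .injectdbiterr_axis (1'b0),
        .sbiterr_axis       (),
        .dbiterr_axis       (),
        .prog_full_axis     (),
        .prog_empty_axis    (),
        .almost_full_axis   (),
        .almost_empty_axis  (),
        .wr_data_count_axis (),
        .rd_data_count_axis ()
    );

endmodule

I would rather the IP just respected its own setting, of course, but this would at least pin the depth explicitly.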

 

Thanks in advance for any feedback or tips.

Kind regards,

Matthew

6 Replies
richardhead
Scholar
380 Views
Registered: 08-01-2012

The issue is that the maximum width of a BRAM is 36 bits, so for a 64-bit data bus you immediately need 2 BRAMs. Hence a depth of 512 would only use half of each BRAM.

But it really should limit the depth to the one you specify (you might have chosen 512 for a reason).

m_bridges_etl
Visitor
368 Views
Registered: 03-11-2021

Thank you for the reply, it is really appreciated. From what I have read, the BRAM on the UltraScale+ device I am using can handle at least 64 bits of width.

If I override the depth and set it to 256, it is able to use a single BRAM. See below:

Block RAM: Preliminary Mapping	Report (see note below)
+----------------------------------------------------------------------------------------+----------------------------------+------------------------+---+---+------------------------+---+---+------------------+--------+--------+-----------------+
|Module Name                                                                             | RTL Object                       | PORT A (Depth x Width) | W | R | PORT B (Depth x Width) | W | R | Ports driving FF | RAMB18 | RAMB36 | Cascade Heights | 
+----------------------------------------------------------------------------------------+----------------------------------+------------------------+---+---+------------------------+---+---+------------------+--------+--------+-----------------+
|inst/\gen_fifo.xpm_fifo_axis_inst /xpm_fifo_base_inst/\gen_sdpram.xpm_memory_base_inst  | gen_wr_a.gen_word_narrow.mem_reg | 256 x 84(READ_FIRST)   | W |   | 256 x 84(WRITE_FIRST)  |   | R | Port A and B     | 0      | 1      |                 | 
+----------------------------------------------------------------------------------------+----------------------------------+------------------------+---+---+------------------------+---+---+------------------+--------+--------+-----------------+

 

m_bridges_etl
Visitor
287 Views
Registered: 03-11-2021

I am pretty confident that this is a bug. It could be either in that specific IP core or in Vivado itself; I am not sure which.

I am quite new to using these forums for creating support cases. Can anyone advise how I can get Xilinx to open a support case for this?

m_bridges_etl
Visitor
277 Views
Registered: 03-11-2021

I have managed to find a workaround; it is not ideal, but it could work.

Using the FIFO Generator IP core instead of the AXI4-Stream Data FIFO IP core, I am able to get the correct results. See the image below.

[Image: Xilinx_AXIS_FIFO_workaround.PNG — FIFO Generator configured in AXI4-Stream mode as the workaround]

I configured the FIFO Generator core for AXI4-Stream mode and set all the other options to match what I had for the AXI4-Stream Data FIFO IP core.
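
In case it helps anyone else, this is roughly how the generated core drops into the spot where the AXI4-Stream Data FIFO was, at least in my configuration. Note that the clock and reset ports are named s_aclk and s_aresetn rather than s_axis_aclk and s_axis_aresetn. The instance name and the nets below are just placeholders from my own design; I only enabled TKEEP and TLAST in the AXI4-Stream signal options, and since the port list depends on which options you enable, the generated instantiation template is the thing to check.

// Sketch only: "fifo_generator_0" is my component name, and clk/resetn and
// the s_axis_*/m_axis_* nets come from the surrounding design.
fifo_generator_0 u_axis_fifo (
    .s_aclk        (clk),            // common-clock configuration
    .s_aresetn     (resetn),         // active-low reset
    .s_axis_tvalid (s_axis_tvalid),
    .s_axis_tready (s_axis_tready),
    .s_axis_tdata  (s_axis_tdata),   // [63:0]
    .s_axis_tkeep  (s_axis_tkeep),   // [7:0]
    .s_axis_tlast  (s_axis_tlast),
    .m_axis_tvalid (m_axis_tvalid),
    .m_axis_tready (m_axis_tready),
    .m_axis_tdata  (m_axis_tdata),   // [63:0]
    .m_axis_tkeep  (m_axis_tkeep),   // [7:0]
    .m_axis_tlast  (m_axis_tlast)
);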

The utilization figures when using this method look correct:

Report Cell Usage: 
+------+---------+------+
|      |Cell     |Count |
+------+---------+------+
|1     |LUT1     |    20|
|2     |LUT2     |    11|
|3     |LUT3     |     7|
|4     |LUT4     |    25|
|5     |LUT5     |     4|
|6     |LUT6     |     7|
|7     |MUXCY    |    20|
|8     |RAMB36E2 |     1|
|9     |SRL16E   |     1|
|10    |FDRE     |   118|
|11    |FDSE     |    12|
+------+---------+------+

 

This is not a solution to the original problem, but it is usable.

m_bridges_etl
Visitor
199 Views
Registered: 03-11-2021

I am still trying to solve this issue. Does anyone have any ideas, or know how I can get Xilinx to investigate it?

@richardhead does my reply about the BRAM width make sense? I should have mentioned that I was using an UltraScale part. The documentation states "When used as SDP memory, the read or write port width is x64 or x72".

richardhead
Scholar
187 Views
Registered: 08-01-2012

Yes it does, thanks.

Maybe I need to re-assess some of my FIFOs...
