07-30-2019 02:45 AM
I created a design with a RISC-V processor that instantiates an XPM dual-port memory macro. Simulation works fine, and I can initialize the memory within simulation with readmemb/readmemh. I can also initialize it for implementation using
.MEMORY_INIT_FILE("risc_init.mem"), // String
.MEMORY_INIT_PARAM(""), // String
The only reason to use XPM was to be able to initialize the BRAM after generating the bitstream.
The current BRAM is dual-port, 32 bits wide, and 128 kB in total.
The .mmi file I got from the tool looks like this:
<MemInfo Version="1" Minor="6">
<MemoryArray InstPath="risc_inst_10/TCM_instance/i_dp_memory/dual_port_mem/xpm_memory_tdpram_inst/xpm_memory_base_inst" MemoryPrimitive="auto" MemoryConfiguration="enabled_configuration">
<MemoryLayout Name="risc_inst_10/TCM_instance/i_dp_memory/dual_port_mem/xpm_memory_tdpram_inst/xpm_memory_base_inst" CoreMemory_Width="32" MemoryType="RAM_TDP">
<BRAM MemType="RAMB36" Placement="X1Y28">
<DataWidth_PortA MSB="0" LSB="0"/>
<AddressRange_PortA Begin="0" End="32767"/>
<DataWidth_PortB MSB="0" LSB="0"/>
<AddressRange_PortB Begin="0" End="32767"/>
<Parity ON="false" NumBits="32767"/>
<Option Name="Part" Val="xc7k325tffg900-2"/>
<Rule Name="RDADDRCHANGE" Val="false"/>
In total there are 32 BRAMs in this file; each of them stores one bit of the 32-bit read/write data. At least that's what I think it does.
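A small Python sketch (my own model for illustration, not tool code) of the bit-lane arrangement described above: with a 32768-deep memory, each RAMB36 becomes a 1-bit lane, so bit b of word w lives at address w of BRAM b.

```python
# Hypothetical model of the 1-bit bit-lane layout:
# 32 RAMB36 primitives, each 32768 deep x 1 bit wide.
WIDTH = 32

def split_into_bitlanes(words):
    """Distribute each 32-bit word across 32 one-bit BRAM lanes."""
    lanes = [[0] * len(words) for _ in range(WIDTH)]
    for addr, word in enumerate(words):
        for bit in range(WIDTH):
            lanes[bit][addr] = (word >> bit) & 1
    return lanes

def merge_bitlanes(lanes):
    """Reassemble 32-bit words from the per-bit lanes."""
    depth = len(lanes[0])
    return [sum(lanes[b][a] << b for b in range(WIDTH)) for a in range(depth)]

words = [0x12345678, 0xDEADBEEF, 0x00000000, 0xFFFFFFFF]
assert merge_bitlanes(split_into_bitlanes(words)) == words
```

Because every lane carries one bit of every word, the init strings of all 32 BRAMs have to be written for the content to come out right.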
I tried various .mem files; the latest looks like:
Now I'm calling updatemem:
updatemem -debug -force --meminfo rsic_v_uart_genesys_top.mmi --data risc_init.mem --bit rsic_v_uart_genesys_top.bit --proc risc_inst_10/TCM_instance/i_dp_memory/dual_port_mem/xpm_memory_tdpram_inst/xpm_memory_base_inst --out manipulated.bit > dump.txt
and the dump.txt looks like this (for size reasons I only copied the top and bottom):
****** updatemem v2018.3 (64-bit)
**** SW Build 2405991 on Thu Dec 6 23:36:41 MST 2018
**** IP Build 2404404 on Fri Dec 7 01:43:56 MST 2018
** Copyright 1986-2018 Xilinx, Inc. All Rights Reserved.
source /opt/cae/Xilinx/VivadoHL_2018.3/Vivado/2018.3/scripts/updatemem/main.tcl -notrace
Command: update_mem -meminfo rsic_v_uart_genesys_top.mmi -data risc_init.mem -proc risc_inst_10/TCM_instance/i_dp_memory/dual_port_mem/xpm_memory_tdpram_inst/xpm_memory_base_inst -bit rsic_v_uart_genesys_top.bit -out manipulated.bit -force -debug
Dump the BRAM Initialization Strings.
^^^ Bitlane with BRAM Location: RAMB36_X1Y28
.. 32 BRAM init strings all in here and all zero
Loading bitfile rsic_v_uart_genesys_top.bit
Loading data files...
Updating memory content...
Writing bitstream manipulated.bit...
0 Infos, 0 Warnings, 0 Critical Warnings and 0 Errors encountered.
update_mem completed successfully
update_mem: Time (s): cpu = 00:00:14 ; elapsed = 00:00:15 . Memory (MB): peak = 1079.832 ; gain = 463.270 ; free physical = 6372 ; free virtual = 206387
INFO: [Common 17-206] Exiting updatemem at Tue Jul 30 11:26:13 2019...
So it sets everything to zero. I also validated that in hardware with the ILA.
I don't know what I'm doing wrong, and I don't see any way to do deeper debugging on it.
I'm also not 100% sure that my .mem file is correct. What should the addressing look like?
From my current understanding it should look like this:
12345678 <-- first 32bit vector
@00000001 <-- 1 or 4?
12345678 <-- second 32bit vector
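On the "1 or 4?" question: as far as I can tell, $readmemh in simulation treats @ addresses as word indices, while updatemem (per UG898) describes byte addressing, so the step differs. A small Python sketch (a hypothetical helper, not a Xilinx tool) that generates both variants makes it easy to try each:

```python
def mem_lines(words, word_addressed=True, bytes_per_word=4):
    """Emit .mem lines: an @ address directive before every 32-bit word.
    word_addressed=True steps the address by 1 per word (readmemh-style);
    False steps by bytes_per_word (byte addressing)."""
    step = 1 if word_addressed else bytes_per_word
    out = []
    for i, w in enumerate(words):
        out.append("@{:08X}".format(i * step))
        out.append("{:08X}".format(w))
    return out

# word-addressed: @00000000 / 12345678 / @00000001 / 9ABCDEF0
# byte-addressed: @00000000 / 12345678 / @00000004 / 9ABCDEF0
```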
I would highly appreciate any kind of help.
07-30-2019 07:13 AM
OK, I think I found the solution:
First of all, we are using 2018.3, where I found this: https://www.xilinx.com/support/answers/71948.html
but this does not seem to be the cause.
Anyhow, I changed my .mem file so that it starts with:
Then I realized that updatemem only writes the init vectors of the first RAMB36. In the .mmi file, because my RAM's depth is 32768, Vivado decides to use one BRAM for a single bit of output, thus using 32 RAMB36 instances for 32 bits. In none of the examples I found online was this the case. So I halved the RAM size, and now one RAMB36 holds two bits. Now it is working just fine.
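The arithmetic behind that can be sketched as follows (a simplification, assuming the tool packs full-depth bit lanes, one lane per RAMB36, with 32 Kbit of data per RAMB36, not counting parity):

```python
RAMB36_BITS = 32 * 1024  # data bits per RAMB36 (36 Kbit including parity)

def bram_layout(depth, width):
    """Return (bits_per_lane, number_of_RAMB36), assuming one
    full-depth lane per BRAM."""
    lane_width = max(1, RAMB36_BITS // depth)  # bits each BRAM holds at this depth
    num_brams = -(-width // lane_width)        # ceiling division
    return lane_width, num_brams

# Original: 32768 deep x 32 bits -> 1-bit lanes, 32 RAMB36
assert bram_layout(32768, 32) == (1, 32)
# Halved:   16384 deep x 32 bits -> 2-bit lanes, 16 RAMB36
assert bram_layout(16384, 32) == (2, 16)
```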
This seems to be an ugly bug in the updatemem script. I will try 2019.1 and let you know if that works better.