Using BRAM macros to instantiate large RAMs

I had seen elsewhere in the forums that Vivado generates warnings when the BRAM macros are used for memories larger than officially supported, but that "it seems to work". I just realized that it really doesn't work: it creates a memory that is aliased repeatedly across the requested address range.

 

For example, I have a 32768-deep by 32-bit wide true dual-port memory. When I use the BRAM_TDP_MACRO to create it, I get a memory that is 1024 deep by 32 bits wide, aliased 32 times across the address range. This matches up with the comments in the BRAM_TDP_MACRO header, which say that a 32-bit data width supports a depth of at most 1024 entries.

 

What is the right way to do this? Do I need to instantiate multiple smaller BRAMs plus logic to stitch them together? Inferring the memories would be great, but these large memories take an eternity in Vivado when they are inferred. For now I'm using create_ip in my build script, but we'd like to keep everything within our Verilog RTL database if possible.

 

Thanks for your help,

Dave

3 Replies
Xilinx Employee (accepted solution)
Hi Dave,

For your requirement, instantiating the macros would be a tedious task because the memory is so large.

I would suggest two approaches:
1. Write HDL code and let the tool infer the BRAMs (a template sketch is shown below).
2. Use the Block Memory Generator core available in the IP catalog.
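
As a minimal sketch of option 1, here is a true dual-port RAM written in the style of the standard synchronous inference templates that Vivado synthesis recognizes. The module name, port names, and parameterization are illustrative (not code from this thread); the depth and width match the 32768 x 32 memory described above.

module tdp_ram_32768x32 #(
    parameter ADDR_WIDTH = 15,   // 2**15 = 32768 entries
    parameter DATA_WIDTH = 32
) (
    // Port A
    input  wire                  clka,
    input  wire                  ena,
    input  wire                  wea,
    input  wire [ADDR_WIDTH-1:0] addra,
    input  wire [DATA_WIDTH-1:0] dina,
    output reg  [DATA_WIDTH-1:0] douta,
    // Port B
    input  wire                  clkb,
    input  wire                  enb,
    input  wire                  web,
    input  wire [ADDR_WIDTH-1:0] addrb,
    input  wire [DATA_WIDTH-1:0] dinb,
    output reg  [DATA_WIDTH-1:0] doutb
);

    // One shared array; the synthesizer maps this onto block RAM.
    reg [DATA_WIDTH-1:0] mem [0:(1<<ADDR_WIDTH)-1];

    // Port A: synchronous write and synchronous read
    // (nonblocking assignments give read-first behavior on a write)
    always @(posedge clka) begin
        if (ena) begin
            if (wea)
                mem[addra] <= dina;
            douta <= mem[addra];
        end
    end

    // Port B: same behavior on the second port
    always @(posedge clkb) begin
        if (enb) begin
            if (web)
                mem[addrb] <= dinb;
            doutb <= mem[addrb];
        end
    end

endmodule

Keeping each port as a simple synchronous read/write process on one shared array is what lets the tool recognize a true dual-port block RAM; asynchronous reads or extra logic wrapped around the array tend to push it into distributed RAM or plain registers and LUTs.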

Thanks,
Ram

Visitor

Thanks Ram, I followed your advice. For some smaller memories in my design I let Vivado infer them, and it didn't significantly increase the runtime (from 86 to 90 minutes). The largest memories (four 128 KB RAMs) use the IP generator, because with inference the Vivado runtime really blows up (I never let it finish; I killed it after many hours).

 

Thank you,

Dave

Scholar

Are you sure you're inferring BLOCK memory and not distributed RAM?

 

We infer all RAMs, even some large ones, without any runtime issues. We're using ISE, not Vivado, so that's a different tool, but it shouldn't really matter.

 

If you're accidentally getting distributed memories, then yes, you'll see a big runtime hit.
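
One way to make the intent explicit, assuming Verilog source (the ram_style attribute is honored by both XST and Vivado synthesis; the array below is just an illustrative 32768 x 32 example, not code from this thread):

    // Request block RAM explicitly; "distributed" would request LUT RAM instead.
    (* ram_style = "block" *)
    reg [31:0] mem [0:32767];

The synthesis log and the utilization report should then make it clear whether the array actually landed in block RAM primitives or in LUTs.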

 

The inference happens during synthesis, so you shouldn't see any differences for MAP, Place, and Route.

 

Regards,

 

Mark

 
