06-02-2010 04:26 AM
Hi all,
we are currently building a system that has to hold 16GB of data. For bandwidth reasons we have to use DDR3 DIMMs. The question is: is it possible to connect more than one DIMM to a single MIG-generated memory interface in a Virtex-6? If so, how many DIMMs can be connected (ideally 4, with 4GB per DIMM), and where can I find literature describing this?
best regards,
bebork
06-02-2010 09:26 PM
You can't generate one MIG controller for multiple DIMMs, but MIG does support multi-controller designs. You could generate 4 controllers, one for each DIMM, and then use the top-level wrapper MIG generates in your design. Depending on the FPGA family you are using, this information can be found in UG086, UG388, or UG406.
06-21-2010 02:17 AM
Hi,
thanks for the answer. I tried generating 4 MIG controllers and it looks fine. The disadvantage of this approach is the very high pin count.
Is there a way to generate the design so that the DIMMs physically share the same data bus and most of the address bus pins?
Regards,
bebork
06-21-2010 08:16 PM
For V6, MIG will not generate a DIMM design with more than one socket, even dual rank (and you only get maximum performance with a single-rank DIMM). You could modify the code for your particular application (source is included), but remember that the MIG PHY only calibrates on one rank, so this should not be undertaken lightly.
Earlier you mentioned that you needed multiple DIMM's for bandwidth, but you won't get maximum bandwidth with a shared data bus. Do you really need bandwidth or storage capacity?
06-21-2010 11:12 PM
Pardon my imprecise formulation. Bandwidth for a single DIMM would be enough; 16GB storage capacity is a must. Modifying the MIG-generated code is an idea, but not the best one (you mentioned that only one DIMM will be calibrated by the PHY).
Another question is: Does Xilinx plan to support 16GB RDIMMs like Hynix HMT42GR7AMR4C or Samsung M393B2K70BM1 (both 4 rank modules)?
Is there any other solution to get 16GB running on a Xilinx FPGA with economically justifiable pin count?
Thanks and regards,
bebork
06-22-2010 01:44 PM
It's not trivial to attach the very large amount of memory you're requesting to the FPGA and still get decent performance. The best bet for this is something just coming out called LR-DIMM. It includes a buffer for both the address/control and data lines, so the FPGA only sees one load.
There have been some press releases on LR-DIMM: http://www.eetimes.com/showArticle.jhtml?articleID=218900117
I've seen other proprietary solutions that are similar to the LR-DIMM. Netlist may have something and I bet there are others.
07-01-2010 01:33 AM
Hi again,
After a long search I got a preliminary datasheet for a 16 GB DDR3 LRDIMM. This module has (like the Hynix HMT42GR7AMR4C or Samsung M393B2K70BM1 RDIMMs) 4 ranks, selectable via 4 chip selects. So my question again: does Xilinx plan to support such quad-rank interfaces with the MIG-generated cores?
07-01-2010 09:13 AM
I can't speak to specific future product plans, but I can say that decisions on where to invest are based on customer demand and revenue potential. Let your FAE and sales person know that LR-DIMM is of interest to you. Note that JEDEC has yet to give its final approval to the LR-DIMM standard.
I can say that MIG does output source and with LR-DIMM you don't have to worry about the calibration of multiple ranks, so you are just dealing with logical issues. I have not looked at this in detail, but it might not be too hard to make any needed changes manually.
08-13-2010 03:20 PM - edited 08-13-2010 03:21 PM
Have you considered either SSD (Solid State Disk) or CompactFlash for your application? Either one might beat the 16GB DDR solution for board space, power, cost, and pinout. For either one, running 2 or more in parallel will extend bandwidth to meet your needs.
There are a number of viable solutions out there, and a wide selection of folks selling high-bandwidth NAND flash based products, eager to discuss your needs with you. These are products (with supporting cores/IP) which are available, off the shelf, today.
- Bob Elkind
08-16-2010 01:19 AM
Hi Bob,
thanks for your post. My bandwidth needs are at least 3 Gb/s to and 12 Gb/s from the 16 GB memory, in parallel (15 Gb/s combined). The fastest SSD read performance I have found is 235 MB/s = 1.88 Gb/s. In that case I would need at least 8 SSDs and 8 SATA 3Gb/s interfaces on the FPGA - much too expensive. CompactFlash is slower than SSD, I think.
That leaves high-bandwidth NAND flash. The fastest SLC NAND chip I have found is 320 Mb/s. That would mean 47 chips for 15 Gb/s and 376 FPGA pins for the data bus alone. A few more chips, actually, because the 320 Mb/s excludes the time spent switching between pages. Sounds affordable, but is it manageable in the FPGA? Are there faster NAND devices on the market?
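The chip-count arithmetic above can be checked with a quick back-of-envelope script (a minimal sketch using only figures from this thread; the 8 data pins per NAND chip is the bus width implied by the 376-pin total):

```python
# Back-of-envelope check of the SSD and NAND chip counts above.
# Gb = gigabits, GB = gigabytes, 1 byte = 8 bits.
import math

required_gbps = 15.0  # combined read + write bandwidth target (Gbits/s)

# SSD option: fastest SSD read found was 235 MB/s.
ssd_mbytes_per_s = 235
ssd_gbps = ssd_mbytes_per_s * 8 / 1000          # 1.88 Gb/s per SSD
ssds_needed = math.ceil(required_gbps / ssd_gbps)

# NAND option: fastest SLC NAND found was 320 Mb/s per chip,
# assuming an 8-bit data bus per chip.
nand_mbps = 320
chips_needed = math.ceil(required_gbps * 1000 / nand_mbps)
data_pins = chips_needed * 8

print(ssds_needed)    # SSDs needed at 1.88 Gb/s each
print(chips_needed)   # NAND chips needed at 320 Mb/s each
print(data_pins)      # FPGA pins for the NAND data buses alone
```

The numbers agree with the post: 8 SSDs, 47 NAND chips, 376 data pins, before accounting for page-switch overhead.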
Any suggestions?
Thanks,
bebork
08-16-2010 05:12 AM
Bebork,
Do you need 15 Gb/sec (Gbits/sec) or 15 GB/sec (GBytes/sec) ?
08-16-2010 05:16 AM
I need 15 Gbits/s. For me Gb = Gbits and GB = Gbytes.
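Since the Gb/GB distinction was worth a whole follow-up, a one-liner makes the conversion explicit (a trivial sketch; the only figure taken from the thread is the 15 Gbits/s requirement):

```python
# Gb = gigabits, GB = gigabytes; 1 byte = 8 bits.
required_gbits_per_s = 15
required_gbytes_per_s = required_gbits_per_s / 8
print(required_gbytes_per_s)  # bandwidth requirement in GB/s
```

So the 15 Gb/s requirement is 1.875 GB/s, roughly an order of magnitude below what a single DDR3 DIMM interface can deliver, which matches the earlier statement that one DIMM's bandwidth would be enough.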
11-22-2020 01:58 AM
Hi
I need to use 2GB or 4GB of DDR3 (components) on a Zynq-7000 board, connected to the PL of the Zynq.
Is that possible?
What is the maximum DDR3 density for the PL part of the Zynq-7000?
When should we use multi-controller mode in MIG for DDR3?