
Controller IP mates FPGA to Hybrid Memory Cube using multiple 12.5Gbps links

by Xilinx Employee, 04-21-2014

The Hybrid Memory Cube (HMC) is a 3D IC memory device that delivers 15x the performance of DDR3 SDRAM while using 70% less energy. The HMC communicates with the host CPU or FPGA over multiple high-speed serial links. The design places a logic chip at the bottom of a stack of memory die; that logic chip manages the attached DRAM and connects the module’s DRAM slices to the system’s high-speed serial links through a crossbar structure.
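
To make the topology concrete, here is a toy Python sketch of the idea: requests arriving on any serial link are steered by a crossbar to one of several independent vertical DRAM partitions (vaults). The names and numbers below are illustrative assumptions, not figures from the HMC spec or from Open-Silicon’s design.

    # Toy model of HMC-style request routing. Names and numbers are
    # illustrative; they are not taken from the HMC spec.
    NUM_LINKS = 4     # high-speed serial links between host and cube
    NUM_VAULTS = 16   # independent vertical DRAM partitions ("vaults")

    def vault_for_address(addr: int) -> int:
        """Simple interleave: spread 128-byte blocks across vaults."""
        return (addr >> 7) % NUM_VAULTS

    def route_request(link: int, addr: int) -> int:
        """Crossbar step: any incoming link can reach any vault."""
        assert 0 <= link < NUM_LINKS
        return vault_for_address(addr)

    # Requests arriving on different links can land in different
    # vaults and proceed in parallel.
    print(route_request(0, 0x0000))   # vault 0
    print(route_request(3, 0x0080))   # vault 1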

 

The HMC project illustrates why memory is a killer 3D app. The HMC runs many TSVs (through-silicon vias) up through a stack of DRAM die to tap the inherent parallelism of the multiple DRAM arrays on each die. Each proprietary DRAM die in the HMC stack contains multiple independent memory arrays, and that parallelism translates directly into potential memory throughput. DRAM die have carried multiple on-chip banks for decades, but conventional interfaces expose only a fraction of their combined bandwidth; the massive TSV interconnect finally brings that bandwidth out where the system can use it to achieve significantly more performance.
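
A back-of-the-envelope comparison shows why that matters. In the sketch below, the per-vault bandwidth figure is an assumption chosen for illustration; the DDR3 figure is the standard peak rate of one 64-bit DDR3-1600 channel.

    # Back-of-the-envelope: many parallel DRAM partitions vs. one
    # conventional channel. The per-vault figure is an assumption for
    # illustration, not an HMC datasheet value.
    vaults = 16
    per_vault_gbytes_s = 10.0

    internal = vaults * per_vault_gbytes_s
    print(f"Aggregate internal bandwidth: {internal:.0f} GB/s")   # 160 GB/s

    # One DDR3-1600 channel: 1600 MT/s x 8 bytes per transfer.
    ddr3 = 1600e6 * 8 / 1e9
    print(f"One DDR3-1600 channel:        {ddr3:.1f} GB/s")       # 12.8 GB/s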

 

This additional bandwidth is especially important in a world that increasingly relies on multicore processor designs. Multicore processor chips have insatiable appetites for memory bandwidth, and the HMC demonstrates that 3D IC assembly is one way to deliver it. If this all sounds like a very new sort of memory structure that requires a very new sort of memory controller, you are on the right path.

 

Open-Silicon has just announced HMC controller IP that interfaces to and manages HMC rev2 devices over 10Gbps or 12.5Gbps links. The controller IP supports half-width (8-lane) and full-width (16-lane) HMC operation, presents an AXI4 system interface, and is optimized for Xilinx 7 series programmable logic devices. Here’s a block diagram of the Open-Silicon HMC controller:

[Figure: Open-Silicon HMC Memory Controller IP block diagram]
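
Those lane counts and lane rates translate directly into raw link bandwidth. Here is a minimal sketch of the arithmetic; it counts line rate only, since HMC packet and encoding overheads (which would reduce the usable figure) are not modeled.

    # Raw unidirectional bandwidth for the controller's two link
    # widths and two lane rates. Line rate only; HMC packet and
    # encoding overheads are not modeled.
    def raw_link_gbytes_s(lanes: int, gbps_per_lane: float) -> float:
        return lanes * gbps_per_lane / 8.0

    for lanes in (8, 16):              # half-width, full-width
        for rate in (10.0, 12.5):      # supported lane rates
            print(f"{lanes:2d} lanes @ {rate} Gbps -> "
                  f"{raw_link_gbytes_s(lanes, rate):5.2f} GB/s each way")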

The Open-Silicon HMC controller IP makes use of the multiple GTX (12.5Gbps) and GTH (13.1Gbps) transceivers in Xilinx Virtex-7 and Kintex-7 FPGAs.
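
As a trivial illustration, here is how the quoted lane rates pair up with those transceivers; the limits are just the figures cited in this post, not values from a transceiver datasheet.

    # Which of the quoted 7 series transceivers can carry a given lane
    # rate? Limits are the figures from this post, not a datasheet.
    TRANSCEIVER_MAX_GBPS = {"GTX": 12.5, "GTH": 13.1}

    def transceivers_for(rate_gbps: float) -> list[str]:
        return [name for name, max_rate in TRANSCEIVER_MAX_GBPS.items()
                if rate_gbps <= max_rate]

    print(transceivers_for(10.0))   # ['GTX', 'GTH']
    print(transceivers_for(12.5))   # ['GTX', 'GTH']
    print(transceivers_for(13.1))   # ['GTH']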

 

Open-Silicon is offering an evaluation platform for the new HMC controller IP. The platform is based on the Xilinx Virtex-7 XC7VX690T FPGA and includes a fully validated reference design that integrates the HMC controller with HMC exerciser functions. The HMC exerciser, together with the accompanying software stack, lets you quickly evaluate the performance of Open-Silicon’s HMC technology and of the HMC itself.

 

Note: For more information on serial-attached memories that use high-speed SerDes transceivers, see “Is DDR4 the last SDRAM protocol? Yes, says SemiWiki’s Eric Esteve. Then what are the alternatives?”

About the Author
Steve Leibson is the Director of Strategic Marketing and Business Planning at Xilinx. He started as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He's served as Editor in Chief of EDN Magazine, Embedded Developers Journal, and Microprocessor Report. He has extensive experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.