02-10-2016 07:44 PM
02-15-2016 03:48 AM
Hi, I don't have the answer to your question, but I'm also very interested to see what's possible or needed for such an application.
For a project I'm currently working on, we plan to use an M.2 SSD connected to the Zynq via 4 PCIe lanes. I'm wondering what we need to do on the Zynq-internal side. E.g. is a PCIe root complex all that's required, or do we need additional IP cores (as with SATA)?
And what about the Linux drivers on the PS?
The bandwidth we need to achieve for sequential data in our application is > 250 MB/s, but it would be very nice if it could be a lot higher, e.g. up to 1500 MB/s.
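For what it's worth, you can sanity-check that target against the raw link budget. A quick sketch, assuming a Gen2 x4 link (which is what the Zynq-7000 GT transceivers typically give you):

```python
# Rough PCIe link-bandwidth estimate for a Gen2 x4 link.
# Gen2 runs at 5 GT/s per lane with 8b/10b encoding, so 10 bits on
# the wire carry 8 bits of payload.

lanes = 4
gt_per_s = 5e9          # Gen2 line rate per lane, transfers/s
encoding = 8 / 10       # 8b/10b line-encoding efficiency

bytes_per_s = lanes * gt_per_s * encoding / 8
print(f"Raw Gen2 x4 bandwidth: {bytes_per_s / 1e6:.0f} MB/s")
```

That works out to 2000 MB/s raw, before TLP/DLLP protocol overhead and completion latency, so 1500 MB/s of sequential payload is an ambitious but not unreasonable ceiling, while 250 MB/s should be comfortable.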
06-08-2016 01:09 PM
I'm using a Samsung NVMe SSD on a ZC706, using the AXI PCIe bridge as a root port.
I wrote my own software to operate the SSD so that I can read data from the SSD and feed it directly to an accelerator in the programmable logic.
The NVMe device driver is in the Xilinx Linux kernel, so you should be able to use that.
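In case it helps, these are the kernel options involved. A sketch only; option names are from mainline Kconfig, and the root-port driver option can vary between kernel versions and PCIe bridge IP choices:

```
# Xilinx AXI PCIe host bridge (root port) driver
CONFIG_PCIE_XILINX=y
# Generic NVMe block driver (exposes /dev/nvme0n1 etc.)
CONFIG_BLK_DEV_NVME=y
```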
08-16-2016 08:12 AM
The NVMe driver is in the mainline kernel, from which the Zynq kernel is derived.
I wrote an NVMe core using the Xilinx AXI PCIe bridge to make it easy to connect accelerators to NVMe storage. I'm working on packaging it for reuse.
I'll do some performance measurements this week. In my last measurements, the best I was getting was 110 MB/s when requesting 32 512-byte blocks at a time. The test design is capable of accepting 2000 MB/s, and the card does deliver bursts at 2000 MB/s, but there are idle gaps between bursts that reduce the overall delivered bandwidth.
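To put those numbers in perspective, here's a back-of-the-envelope sketch using the figures above (the per-request period and idle gap are derived from them, not measured directly):

```python
# Estimate the idle gap implied by 110 MB/s sustained throughput when
# each request is 16 KiB and the data itself arrives at 2000 MB/s.

req_bytes = 32 * 512            # 32 blocks of 512 B = 16 KiB per request
overall = 110e6                 # observed sustained rate, bytes/s
burst = 2000e6                  # burst rate on the link, bytes/s

period = req_bytes / overall    # average time between request completions
busy = req_bytes / burst        # time the 16 KiB burst itself occupies
idle = period - busy

print(f"Per-request period: {period * 1e6:.0f} us")
print(f"Burst time: {busy * 1e6:.1f} us, idle gap: {idle * 1e6:.0f} us")
```

The link is busy only about 5% of the time (110/2000), so issuing larger requests or keeping more commands in flight in the submission queue should recover most of that gap.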
01-08-2020 03:54 PM
01-09-2020 05:37 AM
I would like to make sure I understand your question. Are you looking for IP for an NVMe controller or NVMe device?
My IP is a very simple NVMe controller. It is implemented in BSV (and the BSV compiler will be open-sourced at the end of January 2020). I think it is pretty simple to modify this controller to perform different actions.
There was another open source NVMe controller implemented in BSV at IIT Madras: