06-25-2019 10:54 AM
I'm looking at the NVMe Host Accelerator IP (PG328) that Xilinx just released in 2019.1, and I'm trying to understand exactly what role it fills and how to use it.
From what I can tell, it's designed to help manage the queues for me, ensuring I don't submit SQEs if I have run out of space in the SQ, and correctly cycling from one SQ to the next. That sounds relatively helpful and is something I would like to have, but actually using this IP is where I start to struggle.
The best piece of information for understanding this IP came from Figure 1. So please tell me if I understand the order of operations here correctly.
Which leaves some other software to do the following:
Am I correctly describing how this block is intended to work? Would it be fair to say this block is intended to pair with the AXI PCIe IP as a root port such that the SSD interfaces are the PCI connections and the SW/HW interfaces are intended for my FPGA application to manage?
Thank you for clarifying this IP in any way you can.
11-19-2019 06:41 AM
In the end my conclusion was to make an NVMe Host implementation myself. I found that it wasn't too hard.
From what I recall, all this IP does is manage the head/tail pointers and the command ID numbers. It's nice not to have to track that state, but you still need to fabricate 80% of the command and all of the data, so you have to ask whether it's worth it. For me it really wasn't, so I just made the IP myself.