pcross
Participant
929 Views
Registered: 11-14-2018

NVMe Host Accelerator Integration. How to use?

I'm looking at the NVMe Host Accelerator IP (PG328) that Xilinx just released in 2019.1, and I'm trying to understand exactly what role it fills and how I am supposed to use it.

From what I can tell, it's designed to help manage the queues for me, ensuring I don't submit SQEs if I have run out of space in the SQ and correctly cycling from one SQ to the next. That sounds relatively helpful and is something I would like to have, but actually using this IP is where I start to struggle.
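To be concrete about the bookkeeping I mean, here is a minimal sketch of generic NVMe submission-queue accounting (the names and structure are mine, based on the NVMe spec, not taken from PG328):

```c
#include <stdint.h>
#include <stdbool.h>

/* Generic NVMe SQ bookkeeping -- the part the IP appears to take over.
 * Illustrative only; none of these names come from PG328. */
struct sq_state {
    uint16_t tail;   /* next slot we will write an SQE into               */
    uint16_t head;   /* last head value the drive reported back (CQE DW2) */
    uint16_t depth;  /* number of entries the queue was created with      */
};

/* The queue is full when advancing the tail would make it equal the head. */
static bool sq_full(const struct sq_state *sq)
{
    return (uint16_t)((sq->tail + 1) % sq->depth) == sq->head;
}

/* Claim a slot; the caller copies in the 64-byte SQE and rings the doorbell. */
static int sq_next_slot(struct sq_state *sq)
{
    if (sq_full(sq))
        return -1;  /* out of space -- must wait for completions */
    uint16_t slot = sq->tail;
    sq->tail = (uint16_t)((sq->tail + 1) % sq->depth);
    return slot;
}
```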

The best piece of information for understanding this IP came from Figure 1, so please tell me if I have the order of operations right.

  1. Configure the IP for things like the PCIe address, SQ0 address, etc., via the AXI4-Lite interface.
  2. Insert SQEs from either SW or HW via the S_AXI and S_AXIS interfaces respectively. Specifically, place them in SQ0 if I disable QoS and want to use round robin, since the IP will steer them to the correct SQ internally.
  3. Write the flush command to the SSDn SQm Flush register (unless QoS is disabled, in which case it says to just flush all SQs). Looking back at Figure 1, I think this places SQ_Data onto the SSD_AXI4 slave (for reading by some outside IP?) and generates the SQ_Doorbell via the SSD_AXI4 master (which I guess some outside IP should relay to the NVMe drive at the specified address?).
  4. Wait for the flush done status to be set to know the flush is complete. At the same time, ha_interrupt will fire once the drive has rung the CQ doorbell. I need to parse the interrupt register to determine the cause, but this would be one of them, assuming a good transaction.
  5. At this point, via SW I would use S_AXI_RDATA to read the CQE, or use M_AXIS to have the CQE pushed into my HW application (parsing it roughly as in the sketch after this list). Based on Figure 1, I'm guessing the way this happens is that data is written into the IP via the SSD_AXI4 slave interface and makes its way back up to either the SW or HW interfaces. Then the SSD_AXI4 master is somehow told to go and ring the CQ doorbell, although based on the diagram it seems the CQ doorbell is written by the FIFOs inside the IP, which does not make sense to me: the IP doesn't decide when a completion occurs, the NVMe drive does. Although it could try to infer that based solely on the CQ data it's getting.
  6. At this point I am done with whatever goal I had.
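For step 5, whatever ends up parsing the CQE (the IP internally, or my application once it arrives over S_AXI_RDATA / M_AXIS) is working with the standard 16-byte completion entry. Here is a minimal sketch of that parsing, taken from the NVMe spec rather than from anything PG328-specific:

```c
#include <stdint.h>

/* Standard 16-byte NVMe completion queue entry, per the NVMe spec. */
struct nvme_cqe {
    uint32_t dw0;      /* command specific                              */
    uint32_t dw1;      /* reserved                                      */
    uint16_t sq_head;  /* DW2[15:0]  new SQ head consumed by the drive  */
    uint16_t sq_id;    /* DW2[31:16] which SQ this completion is for    */
    uint16_t cid;      /* DW3[15:0]  command identifier                 */
    uint16_t status;   /* DW3[31:16] phase tag (bit 0) + status field   */
};

/* A CQE slot holds a new entry when its phase tag matches the phase we
 * expect for this pass; the expected phase flips each time the CQ wraps. */
static int cqe_is_new(const struct nvme_cqe *cqe, uint8_t expected_phase)
{
    return (cqe->status & 0x1) == expected_phase;
}

/* 15-bit status field; 0 means the command completed successfully. */
static uint16_t cqe_status_code(const struct nvme_cqe *cqe)
{
    return (cqe->status >> 1) & 0x7FFF;
}
```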

Which leaves some other software to do the following:

  1. Via the admin queue, actually create the submission queues, since the drive doesn't magically have them (sketched below after this list).
  2. Create most of the SQE and parse most of the meaning from the CQE.
  3. Manage the memory so that the data referenced in the SQs is actually accessible at the specified address from the NVMe drive. 
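To make item 1 concrete, here is a minimal sketch of the Create I/O Submission Queue admin command that this other software would have to build and post; the field layout comes straight from the NVMe spec, and the struct and helper names are just mine:

```c
#include <stdint.h>
#include <string.h>

/* Standard 64-byte NVMe submission queue entry, per the NVMe spec. */
struct nvme_sqe {
    uint8_t  opcode;      /* CDW0[7:0]                      */
    uint8_t  flags;       /* CDW0[15:8]  fused / PSDT       */
    uint16_t cid;         /* CDW0[31:16] command identifier */
    uint32_t nsid;        /* CDW1                           */
    uint32_t cdw2, cdw3;  /* reserved for most commands     */
    uint64_t mptr;        /* metadata pointer               */
    uint64_t prp1;        /* data pointer, entry 1          */
    uint64_t prp2;        /* data pointer, entry 2          */
    uint32_t cdw10, cdw11, cdw12, cdw13, cdw14, cdw15;
};

#define NVME_ADMIN_CREATE_IO_SQ 0x01  /* admin opcode from the NVMe spec */

/* Build a Create I/O Submission Queue command. sq_phys must be the
 * PCIe-visible physical address of physically contiguous queue memory,
 * qsize is the number of entries (the spec encodes it 0-based), and
 * cqid is the completion queue this SQ will report into. */
static void build_create_io_sq(struct nvme_sqe *sqe, uint16_t cid,
                               uint64_t sq_phys, uint16_t qid,
                               uint16_t qsize, uint16_t cqid)
{
    memset(sqe, 0, sizeof(*sqe));
    sqe->opcode = NVME_ADMIN_CREATE_IO_SQ;
    sqe->cid    = cid;
    sqe->prp1   = sq_phys;
    sqe->cdw10  = ((uint32_t)(qsize - 1) << 16) | qid;  /* QSIZE | QID   */
    sqe->cdw11  = ((uint32_t)cqid << 16) | 0x1;         /* CQID | PC = 1 */
}
```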

Am I correctly describing how this block is intended to work? Would it be fair to say this block is intended to pair with the AXI PCIe IP as a root port, such that the SSD interfaces are the PCIe connections and the SW/HW interfaces are meant for my FPGA application to manage?

Thank you for clarifying this IP in any way you can. 

2 Replies
benedetto73
Adventurer
753 Views
Registered: 09-30-2019

I also would like to learn more about this one.

Is there an example somewhere?

0 Kudos
pcross
Participant
637 Views
Registered: 11-14-2018

In the end my conclusion was to make an NVMe host implementation myself. I found that it wasn't too hard.

 

From what I recall, all this IP does is manage the head/tail pointers and the command ID numbers. It's nice that you don't need to track that info, but you still need to fabricate 80% of the message and all of the data, so you have to ask how much that is worth. For me it really wasn't worth it, so I just made the IP myself.
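For anyone weighing the same trade-off, the state the IP tracks for you really is small. Here is a minimal sketch of the two pieces, using the doorbell layout from the NVMe spec and my own names (nothing here is taken from the IP itself):

```c
#include <stdint.h>

/* NVMe doorbell register offset inside the controller's BAR0, per the spec:
 * 0x1000 + ((2 * qid) + is_cq) * (4 << CAP.DSTRD).
 * This is the write that the SQ_Doorbell path ultimately has to produce. */
static uint32_t doorbell_offset(uint16_t qid, int is_cq, uint8_t dstrd)
{
    return 0x1000u + ((2u * qid) + (is_cq ? 1u : 0u)) * (4u << dstrd);
}

/* Trivial command-ID allocator -- the other piece of state the IP manages.
 * A free-running counter per queue is enough as long as you never have
 * more commands outstanding than distinct IDs in flight. */
static uint16_t next_cid(uint16_t *cid_counter)
{
    return (*cid_counter)++;
}
```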

0 Kudos