
Visitor jveleg
768 Views
Registered: ‎09-04-2018

PCIe DMA vs AXI DMA for Ethernet


Greetings,

 

I would like to ask a general question regarding the PCIe DMA 4.0:

Is it possible to develop a standard network driver using this DMA? From what I saw in the xdma driver source code and the manual, the PCIe DMA does not support cyclic BD rings or starting transmission by writing the next tail descriptor into the TDESC register, as the AXI DMA does.

Which DMA is more suitable as a component of an Ethernet network device?

 

3 Replies
Xilinx Employee
720 Views
Registered: ‎12-10-2013

Re: PCIe DMA vs AXI DMA for Ethernet


Hi @jveleg,

 

Which device family are you targeting? If you are looking at UltraScale+, I would highly recommend you take a look at the QDMA IP. It is definitely more applicable to a network application and is based on the concepts used by RNICs.

-------------------------------------------------------------------------
Don’t forget to reply, kudo, and accept as solution.
-------------------------------------------------------------------------
Visitor jveleg
686 Views
Registered: ‎09-04-2018

Re: PCIe DMA vs AXI DMA for Ethernet


Greetings,

 

Thanks for your response! I see that the QDMA has a similar concept (interface) to the PCIe DMA, in that it does not include a tail descriptor register like the AXI DMA. How can one submit new descriptors to the DMA while it is running if there is no TAIL register?

I suppose one must wait for the DMA to finish and then start it again with the new transfer?

Xilinx Employee
678 Views
Registered: ‎12-10-2013

Re: PCIe DMA vs AXI DMA for Ethernet


Hi @jveleg,

 

For QDMA, each descriptor ring buffer associated with a specific queue has a PIDX (Producer Index) and a CIDX (Consumer Index). The host/driver creates descriptors, loads them into the queue's ring buffer, and writes the updated PIDX to the FPGA registers. The FPGA engines then fetch the descriptors, DMA the data, and increment the CIDX. The driver can continue to submit new descriptors and advance the PIDX while the DMA engine is processing. As long as the PIDX does not wrap past the CIDX, the two can proceed in parallel. If you are running into a situation where the PIDX is advancing too quickly, the queue size can be increased.

 

Since descriptors are processed first in, first out, the PIDX indicates the last valid descriptor and the CIDX points at the last completed one. When the CIDX reaches the PIDX, the engine stops.

 

(For clarification, this general description is valid for H2C Memory Mapped, C2H Memory Mapped, and H2C Stream. C2H Stream is more complicated, with two sets of indices, but the conceptual description is the same.)
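To make the producer/consumer mechanics above concrete, here is a minimal software sketch of a PIDX/CIDX ring. The structure, names, and doorbell comments are illustrative assumptions, not the actual QDMA register map or driver code: the point is only that the driver can keep submitting while the engine consumes, full/empty is decided purely from the two indices, and the engine idles when CIDX catches up to PIDX.

```c
#include <stdint.h>

#define RING_SIZE 16u  /* power of two, so index arithmetic wraps cleanly */

/* Hypothetical descriptor layout; real QDMA descriptors differ. */
struct desc { uint64_t addr; uint32_t len; };

struct ring {
    struct desc descs[RING_SIZE];
    uint32_t pidx;  /* producer: driver bumps this (doorbell write to FPGA) */
    uint32_t cidx;  /* consumer: hardware bumps this as descriptors complete */
};

/* Free slots: the producer may never wrap past the consumer. */
static uint32_t ring_free(const struct ring *r)
{
    return RING_SIZE - (r->pidx - r->cidx);
}

/* Driver side: submit one descriptor while the engine is running. */
static int ring_submit(struct ring *r, uint64_t addr, uint32_t len)
{
    if (ring_free(r) == 0)
        return -1;                        /* full: PIDX would pass CIDX */
    struct desc *d = &r->descs[r->pidx % RING_SIZE];
    d->addr = addr;
    d->len  = len;
    r->pidx++;                            /* real HW: write PIDX doorbell */
    return 0;
}

/* Hardware side (simulated): fetch and complete one pending descriptor.
 * When CIDX reaches PIDX there is nothing to fetch and the engine stops. */
static int ring_consume(struct ring *r)
{
    if (r->cidx == r->pidx)
        return 0;                         /* empty: engine idle */
    r->cidx++;                            /* DMA of this descriptor done */
    return 1;
}
```

Note there is no "start" or tail register in this model: submission is just "fill a slot, advance PIDX", which is why the driver never has to wait for the engine to finish before queuing more work, in contrast to the AXI DMA's tail-descriptor handshake.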

-------------------------------------------------------------------------
Don’t forget to reply, kudo, and accept as solution.
-------------------------------------------------------------------------