08-06-2019 09:12 PM - edited 08-06-2019 09:13 PM
We're using the PG195 XDMA in a 2 H2C, 2 C2H configuration. We've successfully modified the reference driver to implement a true circular linked-list mode (many packets queued for transmit) in H2C operation, avoiding the reference driver's blocking approach for each packet and improving throughput to an acceptable level. This required using credit mode to pace the XDMA, as there was no other pacing mechanism available.
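For anyone trying the same thing, here is a minimal sketch of what such a circular descriptor chain looks like. It assumes the 32-byte descriptor layout and the 0xad4b control magic from the PG195 descriptor format; the struct field names and the `xdma_build_ring` helper are our own, and the bus address is passed in rather than taken from a real DMA allocation:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* One 32-byte XDMA descriptor, per the PG195 descriptor format.
 * Field names are ours; the 0xad4b magic in control[31:16] is from PG195. */
struct xdma_desc {
    uint32_t control;   /* magic, next_adj, stop/completed/eop bits */
    uint32_t len;       /* transfer length in bytes */
    uint32_t src_lo;    /* source address low/high */
    uint32_t src_hi;
    uint32_t dst_lo;    /* destination address low/high */
    uint32_t dst_hi;
    uint32_t nxt_lo;    /* bus address of the next descriptor, low/high */
    uint32_t nxt_hi;
};

#define XDMA_DESC_MAGIC 0xad4b0000u

/* Link n descriptors into a circular chain: each descriptor's next
 * pointer references the following one, and the last points back to
 * the first.  'bus_base' is the bus/DMA address of ring[0]; in a real
 * driver it comes from the DMA mapping, here it is a parameter. */
static void xdma_build_ring(struct xdma_desc *ring, size_t n, uint64_t bus_base)
{
    for (size_t i = 0; i < n; i++) {
        uint64_t nxt = bus_base + ((i + 1) % n) * sizeof(struct xdma_desc);
        ring[i].control = XDMA_DESC_MAGIC;  /* no stop bit: the chain never ends */
        ring[i].nxt_lo  = (uint32_t)nxt;
        ring[i].nxt_hi  = (uint32_t)(nxt >> 32);
    }
}
```

Because the last descriptor points back to the first and no descriptor carries the stop bit, the engine only halts when it runs out of credits, which is exactly why credit pacing becomes mandatory.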
The H2C approach involves giving the XDMA engine as many credits as possible (up to the amount of data ready to transfer) so it can continue to transmit without software intervention for a longer period of time. I'm now looking at C2H operation and have a concern about this approach. Credit mode is only lightly documented in PG195, but it appears that descriptors for all channels/engines share a common FIFO inside the XDMA engine. If that is the case, a slow C2H channel could effectively block/stall a fast one.
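The credit-feeding step itself is simple. This is a hedged sketch of the idea, not our actual driver code: per PG195, a write to an SGDMA channel's Descriptor Credit register *adds* that many credits to the channel. The register offset below is an assumption for illustration only, so verify it against the PG195 register map for your configuration; the MMIO is mocked by a plain buffer so the logic can be exercised without hardware:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed offset of the H2C SGDMA channel 0 Descriptor Credit
 * register -- illustrative only, check the PG195 register map. */
#define XDMA_H2C_SGDMA_CH0_CREDITS 0x408cu

static inline void xdma_write32(volatile uint8_t *bar, uint32_t off, uint32_t val)
{
    *(volatile uint32_t *)(bar + off) = val;
}

/* Grant the engine one credit per newly ready descriptor, tracking
 * how many we have already granted so credits never run past the
 * software write pointer.  Returns the updated grant count. */
static uint32_t xdma_feed_credits(volatile uint8_t *bar, uint32_t off,
                                  uint32_t ready, uint32_t already_granted)
{
    uint32_t grant = ready - already_granted;   /* new descriptors since last grant */
    if (grant)
        xdma_write32(bar, off, grant);          /* credit register is additive */
    return already_granted + grant;
}
```

Calling this from the TX path each time descriptors are queued keeps the engine running continuously as long as data is ready, which is the behavior described above.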
For example, if C2H channels 0 and 1 are both running at the same time, the XDMA engine's shared internal descriptor FIFO will contain descriptors for both channels. If channel 0 receives packets much faster than channel 1, or if channel 1 goes through a period of receiving no packets, PG195 suggests that channel 0 would be starved for descriptors until the channel 1 descriptors at the head of the FIFO are consumed. Is this correct? If so, it would effectively preclude high-throughput simultaneous multi-channel operation: staying ahead of the C2H engine by providing additional descriptors would only block the other engine(s) whenever one engine is slower or has a period of receiving no packets.
Can anyone at Xilinx with access to the XDMA details confirm how the internal descriptor FIFO works and if this is an issue for the PG195 XDMA?
10-10-2019 02:00 AM
We use a very similar design with 2 C2H and 2 H2C channels, circular BD chains, and credits to control the Rx and Tx operation of the channels. We are also very concerned that the channels can affect each other because of the shared FIFO of fetched BDs.
Have you received any clarification from Xilinx in the meantime that you could share with us?
10-10-2019 09:42 AM
Hi. I never received any response from Xilinx. We have since done enough software modification to test this, and we aren't seeing any issues. We ran a few tests specifically trying to provoke the problem and couldn't produce one.
It seems like it may just be poor documentation, but it would be nice to get official clarification.