Explorer
Registered: 05-03-2018

XDMA higher transfer sizes


Hello,
we are developing an application that transfers data via DMA over PCI Express from a Xilinx KC705 Evaluation Board to a PC running Windows 10.
On the PC we have installed the driver provided by Xilinx (XDMA), which manages the interaction with the PCI Express DMA IP on the FPGA.
We have noticed, however, that the driver released by Xilinx for Windows allows a maximum transfer of 1 MB per request,
while we need larger transfers (32 MB).
For this reason we have tried to modify the provided driver to suit our needs,
but we have not achieved the desired results, so we would appreciate guidance on how to do this.
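To give an idea of the intended use, here is a minimal user-space sketch of the kind of read we would like to issue in a single request (the device path \\.\XDMA0\c2h_0 is only an assumption for illustration; adapt it to however your installation exposes the XDMA channels):

    /* Minimal sketch of one large read from an XDMA card-to-host channel.
     * Assumption: the device path below is hypothetical; open the channel in
     * whatever way your installation of the driver exposes it. */
    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const size_t transfer_size = 32UL * 1024UL * 1024UL;   /* the 32 MB we need */
        unsigned char *buffer = malloc(transfer_size);
        if (buffer == NULL)
            return 1;

        /* Hypothetical path to the card-to-host stream channel. */
        HANDLE h = CreateFileA("\\\\.\\XDMA0\\c2h_0", GENERIC_READ, 0, NULL,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            free(buffer);
            return 1;
        }

        DWORD bytes_read = 0;
        /* With the stock driver a read like this is limited to about 1 MB per request. */
        BOOL ok = ReadFile(h, buffer, (DWORD)transfer_size, &bytes_read, NULL);
        printf("ReadFile %s, %lu bytes transferred\n", ok ? "succeeded" : "failed",
               (unsigned long)bytes_read);

        CloseHandle(h);
        free(buffer);
        return 0;
    }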

Below are the details of the development system we use:
• for the KC705 Evaluation Board: Vivado v2018.3
• PC motherboard: iBase MI992 Intel® 7th Gen. Core™ / Xeon® E3 / Celeron® Mini-ITX motherboard
• operating system (on the acquisition PC): Windows 10 Enterprise 2016 LTSB, Version 1607
• the KC705 board and the PC are connected via an 8-lane PCI Express link
• the DMA/Bridge Subsystem for PCI Express IP on the FPGA uses the AXI Stream interface

Do you have any documents or suggestions that could help us resolve this issue?

Thanks in advance for your help.
Best regards,
Andrea


5 Replies
Participant miguelcosta94
Registered: 11-14-2018

Re: XDMA higher transfer sizes


I'm facing a similar problem: I can't read more than 16 MB. Even to read that amount, I first have to update a macro in the "dma_engine.h" file of the XDMA driver (as shown in the screenshot below). I'm working with an Alveo U200 board on Windows 10 x64.

Have you already figured out how to solve the problem?

Screenshot_2.png
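As an illustration of the kind of macro meant (the name and value below are hypothetical; the actual define is the one shown in the attached screenshot of dma_engine.h), the change amounts to raising a compile-time transfer-size limit:

    /* Hypothetical example only: the real macro name and default value are the
     * ones in dma_engine.h of the Xilinx XDMA Windows driver sources. */
    #define XDMA_MAX_TRANSFER_SIZE   (16UL * 1024UL * 1024UL)   /* raised so a 16 MB read is accepted */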

Explorer
Registered: 05-03-2018

Re: XDMA higher transfer sizes


Hello,
we finally managed to work out how to modify the Xilinx XDMA driver for PCIe links so that it supports transfer sizes larger than the defaults.
In short, the key was to force the driver to allocate contiguous portions of physical memory on the host for the DMA transfer,
in order to avoid the address-translation problems between physical and virtual memory that were making the system unstable.

Best regards,
Andrea

Participant miguelcosta94
Registered: 11-14-2018

Re: XDMA higher transfer sizes


Hi,

Can you provide some more details? I replaced the call to "WdfCommonBufferGetAlignedVirtualAddress" in the code shown in the figure below with "MmAllocateContiguousMemory" (https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/content/wdm/nf-wdm-mmallocatecontiguousmemory), but the problem remains.
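For reference, the call being swapped in looks roughly like this (sketch only, kernel-mode WDM; the helper name is just for illustration, and as said above this replacement alone did not make larger transfers work):

    #include <wdm.h>

    /* Allocate a physically contiguous, non-paged buffer of the requested size;
     * free it later with MmFreeContiguousMemory(). */
    static PVOID AllocContiguousDmaBuffer(SIZE_T numberOfBytes)
    {
        PHYSICAL_ADDRESS highest;
        highest.QuadPart = 0xFFFFFFFFFFFFFFFF;   /* accept any physical address */
        return MmAllocateContiguousMemory(numberOfBytes, highest);
    }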

Best regards and thanks for your help,

Miguel Costa

Screenshot_2.png

Visitor rvmoreira
Registered: 06-26-2018

Re: XDMA higher transfer sizes


Hi!

Did you already solve this issue? I'm having exactly the same problem... I'm stuck... 
Some help would be great...

Thanks in advance, 
Ricardo

Explorer
Registered: 05-03-2018

Re: XDMA higher transfer sizes (Accepted Solution)


Hello to all,
Here is our resolution.

The solution is a simple modification of the Xilinx XDMA driver sources, in the file dma_engine.c, in the function EngineCreateRingBuffer.

The original code was:
 // create dma data buffer
    PHYSICAL_ADDRESS low, high, skip;
    low.QuadPart = 0;
    high.QuadPart = 0xFFFFFFFFFFFFFFFF;
    skip.QuadPart = PAGE_SIZE;
    PMDL mdl = MmAllocatePagesForMdlEx (low, high, skip, XDMA_RING_NUM_BLOCKS * XDMA_RING_BLOCK_SIZE, MmNonCached, NormalPagePriority);

Our change was:
    // create dma data buffer
    PHYSICAL_ADDRESS low, high, skip;
    low.QuadPart = 0;
    high.QuadPart = 0xFFFFFFFFFFFFFFFF;
    skip.QuadPart = XDMA_RING_BLOCK_SIZE;
    PMDL mdl = MmAllocatePagesForMdlEx(low, high, skip, XDMA_RING_NUM_BLOCKS * XDMA_RING_BLOCK_SIZE, MmNonCached, MM_ALLOCATE_REQUIRE_CONTIGUOUS_CHUNKS);

Here XDMA_RING_BLOCK_SIZE must be a power of 2, and it can now be a multiple of the system's PAGE_SIZE rather than a single page.
In this way the buffer allocated for the DMA transfer data is made up of contiguous portions of physical memory, each large enough to hold at least one block transferred via DMA, which overcomes the earlier problems of correspondence between physical and virtual memory.

Previously the problems appeared as soon as XDMA_RING_BLOCK_SIZE was set to a value larger than the system's PAGE_SIZE, which was the maximum guaranteed size of contiguous memory allocated for the DMA.
Each DMA burst, however, transferred XDMA_RING_BLOCK_SIZE bytes, i.e. more than PAGE_SIZE, thus overwriting memory that had not been allocated for it and generating system errors.
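As a concrete sizing sketch (the values below are only an example, not the defaults from the driver headers), the ring can be dimensioned so that it covers the whole 32 MB transfer, with each block a power of two and a multiple of PAGE_SIZE:

    /* Example sizing only -- the real defaults are in the XDMA driver headers.
     * The ring must cover the largest single transfer we want (here 32 MB) and
     * each block must be a power of two and a multiple of PAGE_SIZE (4 KB). */
    #define XDMA_RING_BLOCK_SIZE   (1UL * 1024UL * 1024UL)   /* 1 MB per block = 256 x PAGE_SIZE */
    #define XDMA_RING_NUM_BLOCKS   (32)                      /* 32 blocks -> 32 MB ring buffer */

With values like these, the modified call above asks MmAllocatePagesForMdlEx for XDMA_RING_NUM_BLOCKS * XDMA_RING_BLOCK_SIZE bytes in physically contiguous chunks of XDMA_RING_BLOCK_SIZE each, so every DMA block lands in a single contiguous region of physical memory.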

Best regards,

Andrea

