joshualant (Observer)

Offset of QDMA PCI2AXI Translation Between Different PFs' BARs

Hi,

I would like to know if it is possible to adjust the BAR offset value for the "PCI to AXI translation" between different Physical Functions for the DMA interface?

We use all 4 PFs, and we require around 4TB total mapping to the FPGA. I speculate that our system can handle up to 8TB physical mapping (although this is yet to be confirmed).

One way we could do this (if our system were capable, and setting security concerns aside) would be to set the DMA BAR to 4TB on each PF.

However, we only need 4TB in total spread over the 4 PFs. Some of the PFs only need to access a small space, but due to the alignment constraints of the PCI BARs (power-of-two sizing) and the AXI memory map we have in the FPGA, we cannot set some of the PFs to much smaller sizes starting from base 0x0.

The issue is that when the DMA BAR is set to 4TB, the OS/BIOS tries to map 16TB in total for the 4 PFs, so the system will not boot, as it attempts to map more physical address space than it is capable of (this phenomenon is documented under the "Warning" heading here: https://xilinx.github.io/XRT/master/html/p2p.html).

Is there a workaround so that I can properly set this parameter (as one can for the AXI Bridge interface) and reduce the overall memory being mapped into physical address space by the OS?

TL;DR: is there a workaround to stop this Tcl command from being ignored?

set_property -dict [list CONFIG.pf0_pciebar2axibar_0 {0x0000010000000000}] [get_bd_cells QDMABlock/qdma_0]

WARNING: [BD 41-721] Attempt to set value '0x0000010000000000' on disabled parameter 'pf0_pciebar2axibar_0' of cell '/QDMABlock/qdma_0' is ignored
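For reference, here is how I check which translation parameters are actually enabled on the cell from the Vivado Tcl console (a quick sketch; the glob pattern simply filters for the PCIe-to-AXI translation parameters):

set cell [get_bd_cells QDMABlock/qdma_0]
# Print every pfN_pciebar2axibar_N parameter the cell exposes, with its current value
foreach p [lsearch -all -inline -nocase [list_property $cell] CONFIG.pf*pciebar2axibar*] {
    puts [format "%-36s %s" $p [get_property $p $cell]]
}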

 

5 Replies
venkata (Moderator)

Hi @joshualant 

It seems your command is trying to set the address translation on BAR0, which is used as the DMA BAR. Based on your description, I guess you are not looking to set a 1TB offset on the DMA BAR but rather on a different BAR, such as the AXI Bridge Master. If so, update the command as below.

set_property -dict [list CONFIG.pf0_pciebar2axibar_4 {0x0000010000000000}] [get_bd_cells QDMABlock/qdma_0]

In this example command, I am assuming that the AXI Bridge Master BAR is mapped to BAR4.
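For example, assuming all four PFs map the AXI Bridge Master to BAR4 and that the pf1-pf3 parameter names follow the pf0 pattern (verify both against your IP configuration, e.g. with list_property), the per-PF translations could be set in one call:

# Assumed layout: PF0 at 0x0, PF1 at 64GB, PF2 at 128GB, PF3 at 1TB
set_property -dict [list \
    CONFIG.pf0_pciebar2axibar_4 {0x0000000000000000} \
    CONFIG.pf1_pciebar2axibar_4 {0x0000001000000000} \
    CONFIG.pf2_pciebar2axibar_4 {0x0000002000000000} \
    CONFIG.pf3_pciebar2axibar_4 {0x0000010000000000} \
] [get_bd_cells QDMABlock/qdma_0]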

joshualant (Observer)

Hi @venkata ,

Firstly, thank you for your reply.

I'm sorry if my initial description was unclear, but setting the DMA BAR offsets is precisely what I wish to achieve.

Basically, I wish to have the following mapped on the DMA BARs (see the sketch below):

PF0: 64GB
PF1: 64GB
PF2: 64MB
PF3: 1TB
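For concreteness, if per-PF translation were enabled on the DMA BAR, what I am trying to express would read something like this (hypothetical: the pf1-pf3 parameter names are assumed by analogy with pf0, and Vivado currently ignores all of them for BAR0, which is exactly my problem):

# Place each PF's DMA BAR in its own non-overlapping region of the AXI map:
# PF0 (64GB) at 0x0, PF1 (64GB) at 64GB, PF2 (64MB) at 128GB, PF3 (1TB) at 1TB
set_property -dict [list \
    CONFIG.pf0_pciebar2axibar_0 {0x0000000000000000} \
    CONFIG.pf1_pciebar2axibar_0 {0x0000001000000000} \
    CONFIG.pf2_pciebar2axibar_0 {0x0000002000000000} \
    CONFIG.pf3_pciebar2axibar_0 {0x0000010000000000} \
] [get_bd_cells QDMABlock/qdma_0]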

Obviously I could set all of them to a 2TB space and use the upper 1TB with PF3. However, the system will not allow this, as I can map 8TB at most; it is also a huge waste of physical mapping to have the driver enumerate 2TB BARs for each of the PFs.

This is why I require an offset for the DMA BARs: to give each of the PFs adequate memory, but simply in different regions. This option is available for the Bridge Master, so why not for the AXI DMA Master?

I could achieve this manually in the logic by simply adjusting the offsets of the AXI addresses, if I could see that a particular DMA operation is associated with a particular physical function. But I wish to know why this is not enabled in the IP wizard, and whether there is a better workaround.

If I were to do this manually, is there any way to know which PF is associated with each transfer? Would I have to use the descriptor bypass for this?

Many thanks,

Josh

venkata (Moderator)

Hi @joshualant ,

The DMA BAR is not used to perform DMA transfers; it maps the register space of the QDMA IP.

Are you looking to perform DMA transfers so large that they require 8TB of memory space on the host? I want to understand how you are trying to use the memory space you are allocating to the different BARs.

joshualant (Observer)

Hi @venkata,

I understand that the DMA BAR does not handle the transfers themselves, and that it relates to the memory mapping of the AXI Master interface out of the QDMA IP.

I wish to have this memory mapping set up as four distinct, non-overlapping regions, one region for each PF. This is not possible if all PFs must have their base DMA BAR at 0x0.

This functionality is possible with the AXI Bridge BAR and with the AXI-Lite BAR: I am able to set different base values for each of the physical functions on these two interfaces, so why not on the AXI DMA interface?

I do not necessarily require 8TB mapped to host memory, but what I do need is as much physical host memory as possible mapped to one of the PFs (while the other PFs still need to be able to use the DMA as well). I am aware of the physical addressing limits of our system and of the possible limits with kernel stability etc.

However, if all PFs include the DMA BAR (which my design requires), all must start from base 0x0 if I cannot change the base offset value for each PF.

If this is the case, then the other PFs which require some of the DMA-mapped area will eat into the lowest portion of the memory space that I wish to reserve for the large mapped PF.

I hope this is clearer?

Many thanks for your time in responding.

Josh

venkata (Moderator)

Hi @joshualant 

The offset address at which you will read/write on the M_AXI interface (the DMA interface) is programmed using the descriptors. Based on the amount of data you plan to transfer via DMA, you will need to set up the AXI address space that the M_AXI port can address, allocate the host memory space, and set up the descriptors to initiate the DMA transfer.
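As a sketch of the block-design side (the address segment name below is illustrative; take the real one from the Address Editor of your design, and note that range strings such as 64G or 1T follow the Address Editor's notation):

# Size the window that the QDMA M_AXI port can address; here a 1TB aperture based at 0x0,
# assuming a single mapped target segment on M_AXI
set seg [get_bd_addr_segs QDMABlock/qdma_0/M_AXI/SEG_*]
set_property offset 0x0000000000000000 $seg
set_property range 1T $seg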

 
