yanz
Xilinx Employee
766 Views
Registered: ‎07-23-2018

QDMA v4.0 PCIe Block Interface


I was trying to port my design using QDMA v3.0 in Vivado 2019.2 to QDMA v4.0 in Vivado 2020.2. It looks like QDMA 4.0 exposes the PCIe block interfaces, namely the RQ/RC and CQ/CC interfaces, even in QDMA (not AXI bridge) mode. What is the intended use case for these interfaces? And if I do not use them, should I just tie them off, e.g., driving tvalid to 0 and tready to 1?

 

Thanks,

Yan

(Attachment: qdma_v4.0.png)
1 Solution

Accepted Solutions
dsakjl
Explorer
709 Views
Registered: ‎07-20-2018

Hi @yanz ,

The PCIe ports you see appear because the split_dma option is enabled. To answer your questions:

1) What is the intended use case for these interfaces?

In my opinion, one of the reasons for splitting the PCIe block from the QDMA/DMA block is a possible timing improvement for SLR crossings, but an answer from a Xilinx engineer on this point would be appreciated.

2) And if I do not use them, should I just tie them off, e.g., driving tvalid to 0 and tready to 1?

No, you can't just tie them off unless you intend to use the QDMA without a PCIe interface (and I can't imagine a real use case for that).

So, you should either instantiate a PCIe block and connect it to the QDMA as needed, or simply disable the split_dma option in the QDMA IP with:

set_property CONFIG.split_dma false [get_bd_cells /qdma_0]
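
If it helps, a couple of follow-up commands from the Tcl console (just a sketch, using the same /qdma_0 cell path as in the command above) to confirm the change took effect:

# Confirm the new value of the split_dma option on the QDMA instance
get_property CONFIG.split_dma [get_bd_cells /qdma_0]

# Re-validate the block design after the configuration change
validate_bd_design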

Best regards.


8 Replies

dsakjl
Explorer
562 Views
Registered: ‎07-20-2018

Hi @yanz ,

Thinking about your question, I conclude that another reason to expose the PCIe block is to let you customize it as needed instead of using the default configuration.

Of course, it would be more helpful if someone from Xilinx could comment on this. I don't know if @garethc or @deepeshm can help us.

Thank you, best regards.

deepeshm
Xilinx Employee
556 Views
Registered: ‎08-06-2008

Thanks @dsakjl for following up on this.

There are a few things that need to be clarified. We will post detailed information in the next few days.

Thanks.

yanz
Xilinx Employee
497 Views
Registered: ‎07-23-2018

Hi @dsakjl,

It makes sense to support some sort of bypass mode for PCIe transactions. I guess that is probably the rationale behind the additional RQ/RC/CQ/CC interfaces. Let us wait for updated documentation on this.

Thanks for the help,

Yan

deepeshm
Xilinx Employee
465 Views
Registered: ‎08-06-2008

The CQ/CC/RQ/RC ports show up when split mode is enabled in the IP. This mode, where you connect to the PCIe hard block manually, is used for internal test purposes and hence is not documented. We do not encourage using this mode. To explain what split mode is: enabling the switch generates only the DMA portion of the IP. This is enabled by default. 'Run Block Automation' or 'Open example design' pulls in the other IPs (PCIe hard block, GT quads, PHY, etc.) and connects them together to give the full PCIe IP.

Thanks.

yanz
Xilinx Employee
459 Views
Registered: ‎07-23-2018

Hi @deepeshm,

Thanks for the explanation. It looks like split mode is enabled by default as of Vivado 2020.2. If it is mainly used for internal testing, would it be better to leave it disabled by default?

Regards,

Yan

 

 

dsakjl
Explorer
433 Views
Registered: ‎07-20-2018

Hi @yanz ,

Reading the content of the file xilinx/Vivado/2020.2/data/ip/xilinx/qdma_v4_0/xgui/qdma_v4_0.tcl, it seems that the split_dma property is enabled only for Versal, virtexuplus, kintexuplus, zynquplus, zynquplusrfsoc, virtexuplushbm, virtexuplus58g and XCU280 devices, and that its default value is true only for Versal.

In all other cases it is not enabled.
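
If you want to check which default applies to your own project, a quick check from the Vivado Tcl console (just a sketch; the device family the project targets determines the default):

# Report the part the current project targets; its family determines the split_dma default
get_property PART [current_project]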

Regards.

deepeshm
Xilinx Employee
380 Views
Registered: ‎08-06-2008

Just to clarify here, the IP generation flow is different in Versal than in US+. In Versal, the full PCIe IP is broken into pieces that are stitched together via block automation, or you could generate an example design that also stitches the necessary IPs together in addition to the example design logic. When you add the QDMA IP on the IPI canvas, it is the QDMA portion only; the PCIe hard block, PHY, and GT quads are not included. Technically, you could manually add the other components and connect them yourself, perhaps adding some logic of your own in between, but such an approach is not recommended. Users are strongly advised to follow the block automation flow or the example design generation flow. I would like to add a note here: we have discovered an issue with the block automation. We are working to fix the issue. In the meantime, you could generate the example design.
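
In case it helps while the block automation fix is pending, the example design can also be generated from the Tcl console; a minimal sketch, assuming the QDMA was added as a standalone IP instance named qdma_0 (the instance name will differ in your project):

# Generate and open the QDMA example project in a local directory
open_example_project -force -dir ./qdma_example [get_ips qdma_0]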

Hopefully, this clarifies.

Thanks.