Observer
Registered: 09-22-2020

[Vitis Embedded Application Acceleration for Zynq Ultrascale+][Interfacing PC x86 host via PCIe]

I am currently using Vitis 2020.1 and prototyping designs using Zynq Ultrascale+ ZCU106.

1.) Is it possible to perform the application acceleration development flow for Ultrascale+ devices in the same way as for Alveo devices, i.e., to use a PC x86 CPU as the host and interface with the ZCU106 via PCIe to control the generated kernels (instead of using the on-board ARM CPU as the host and controlling the kernels over an AXI interface)?

 

2.) If the answer to question 1.) is NO, is it possible to adapt the conventional Vitis acceleration development flow for Zynq Ultrascale+ devices to instantiate a PCIe IP in the PL of the ZCU106, which would exchange data with the PC (x86 CPU) using an XRT-like driver on the PC to read the data from the ZCU106?

To make things clearer, this is a simple dataflow diagram of my target design:

design.jpg

 

The main concern is: What is the simplest way to implement a PCIe communication link between the PC CPU and the ZCU106 board?

 

3 Replies
Moderator
Registered: 11-04-2010

Currently, custom boards with Ultrascale+ devices and edge boards cannot be used in the same way as Alveo boards, due to limitations of the shell and XRT.

If you intend to use PCIe in the PL, I think it's possible, but you cannot use the XRT driver directly.

Observer
Registered: 09-22-2020

Thank you very much for your answer.

Just a quick follow-up question to your answer:

Would it then be feasible to take the following approach:

Zynq Ultrascale+ design approach:

1.) Create a Vivado project, instantiate the QDMA Subsystem for PCIe IP, initialize the other (application-required) I/O interfaces (such as Ethernet), and wrap the design as an embedded platform, as required for the Vitis acceleration development flow (https://www.xilinx.com/html_docs/xilinx2020_1/vitis_doc/xmg1596051749618.html)

2.) After importing the platform into Vitis HLS 2020.1, design custom kernels (in C or using the OpenCL API) to process the data packets arriving at the board (through the Ethernet interface, for example)

Linux Host x86 CPU design approach (PC side):

1.) Set up the Xilinx DPDK driver (https://xilinx.github.io/dma_ip_drivers/master/DPDK/html/index.html) to control the QDMA IP instantiated in the PL of the Zynq Ultrascale+ and exchange data over the PCIe interface.

 

Would the approach described above work? More generally, is it possible to first introduce I/O interfaces in the PL of the board using a custom Vivado project, then import that project (wrapped in the platform format required by Vitis) into Vitis HLS and add further PL kernels on top of the already existing PL design? Would such a design still be able to communicate with an external host PC running the DPDK driver for the QDMA IP, for example?
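For illustration, a kernel for step 2.) of the Zynq approach might be sketched as below. This is a hypothetical example (the function and argument names are mine, not from any Xilinx design); the HLS INTERFACE pragmas are only meaningful to Vitis HLS and are ignored by an ordinary C++ compiler, so the function body can be verified as plain software first.

```cpp
#include <cstdint>

// Illustrative Vitis HLS kernel: byte-swap each 32-bit word of an
// incoming packet buffer (e.g. to reorder network byte order).
// When synthesized, the pragmas would map the pointer arguments to
// AXI master interfaces and the scalar/control to an AXI-Lite port.
extern "C" void packet_swap(const uint32_t *in, uint32_t *out, int n) {
#pragma HLS INTERFACE m_axi port=in  bundle=gmem0
#pragma HLS INTERFACE m_axi port=out bundle=gmem1
#pragma HLS INTERFACE s_axilite port=n
#pragma HLS INTERFACE s_axilite port=return
    for (int i = 0; i < n; ++i) {
#pragma HLS PIPELINE II=1
        uint32_t w = in[i];
        // Reverse the four bytes of each word.
        out[i] = (w >> 24) | ((w >> 8) & 0x0000FF00u)
               | ((w << 8) & 0x00FF0000u) | (w << 24);
    }
}
```

The same source is then compiled by v++ into a kernel that the platform's assigned AXI interfaces expose to the rest of the design.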

Moderator
Registered: 09-12-2007

Vitis acceleration is primarily used for accelerating SW that would otherwise be deployed on a CPU. The Vitis tool will create the RTL IP and connect it to your CPU over an assigned interface (set via PFM properties in your platform).

You can use just Vitis HLS if you want to translate your SW functions to RTL; however, you will need to handle the interface yourself.
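For reference, the PFM properties mentioned above are Tcl properties set on the block design when packaging an embedded platform in Vivado. A minimal sketch, assuming a Zynq Ultrascale+ PS cell; the platform name, cell path, and port names are placeholders for whatever your ZCU106 design actually uses:

```tcl
# Illustrative only: name the platform and declare a PS AXI master
# port as a platform interface, so that v++ can attach the generated
# kernels to it. Adjust names to match your block design.
set_property PFM_NAME "xilinx:zcu106:custom:1.0" [get_files *.bd]
set_property PFM.AXI_PORT {M_AXI_HPM0_FPD {memport "M_AXI_GP"}} \
    [get_bd_cells /zynq_ultra_ps_e_0]
```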
