cos24boc

Attach Alveo U250 to a KVM guest instance via PCI passthrough

I am trying to let a KVM guest instance use an Alveo U250 card via PCI passthrough. I followed the article below and attached the U250 to the instance using virsh.

https://developer.xilinx.com/en/articles/using-alveo-data-center-accelerator-cards-in-a-kvm-environment.html
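
A hostdev definition for this kind of passthrough looks roughly like the sketch below (the host addresses 3b:00.0 and 3b:00.1 are taken from the host lspci output further down; this is a sketch, not necessarily exactly what the article generates). Both PCI functions of the card (mgmt and user) are passed through here, and managed='yes' lets libvirt handle the vfio-pci binding:

<!-- Sketch only: add to the guest domain XML (virsh edit) or attach with virsh attach-device -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- management function of the U250 on the host (3b:00.0) -->
    <address domain='0x0000' bus='0x3b' slot='0x00' function='0x0'/>
  </source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- user function of the U250 on the host (3b:00.1) -->
    <address domain='0x0000' bus='0x3b' slot='0x00' function='0x1'/>
  </source>
</hostdev>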

The U250 is now visible from the guest instance with lspci. However, xbutil validate fails with the following error:

atsushi@ukvm0:~$ xbmgmt scan
*0000:00:0a.0  mgmt(inst=80)
atsushi@ukvm0:~$ xbutil validate
INFO: Found 1 cards

INFO: Validating card[0]: 
INFO: == Starting Kernel version check: 
INFO: == Kernel version check PASSED
INFO: == Starting AUX power connector check: 
AUX power connector not available. Skipping validation
INFO: == AUX power connector check SKIPPED
INFO: == Starting Power warning check: 
INFO: == Power warning check PASSED
INFO: == Starting PCIE link check: 
INFO: == PCIE link check PASSED
INFO: == Starting SC firmware version check: 
Failed to open /sys/bus/pci/devices/0000:00:0a.1/xmc.u.18874368/bmc_ver for reading: No such file or directory

ERROR: == SC firmware version check FAILED
INFO: Card[0] failed to validate.

ERROR: Some cards failed to validate.

I believe I properly set up the IOMMU and the other features required for PCI passthrough. However, some of the lspci info (e.g., the Vendor Specific Information capability) is missing on the guest instance even though it appears on the host machine.
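
A rough sketch of the usual prerequisites and checks (assuming an Intel host; on AMD the kernel parameter is amd_iommu=on instead):

# Kernel command line must enable the IOMMU (set in GRUB, then reboot):
#   intel_iommu=on
# Confirm the IOMMU is active on the host:
dmesg | grep -e DMAR -e IOMMU
# Confirm both functions of the U250 are bound to vfio-pci on the host:
lspci -nnk -s 3b:00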

lspci on the host:

3b:00.0 Processing accelerators: Xilinx Corporation Device 5004
	Subsystem: Xilinx Corporation Device 000e
	Flags: bus master, fast devsel, latency 0
	Memory at 38bff2000000 (64-bit, prefetchable) [size=32M]
	Memory at 38bff4040000 (64-bit, prefetchable) [size=256K]
	Capabilities: [40] Power Management version 3
	Capabilities: [60] MSI-X: Enable- Count=32 Masked-
	Capabilities: [70] Express Endpoint, MSI 00
	Capabilities: [100] Advanced Error Reporting
	Capabilities: [1c0] #19
	Capabilities: [400] Access Control Services
	Capabilities: [410] #15
	Capabilities: [480] Vendor Specific Information: ID=0020 Rev=0 Len=010 <?>
	Kernel driver in use: vfio-pci
	Kernel modules: xclmgmt

3b:00.1 Processing accelerators: Xilinx Corporation Device 5005
	Subsystem: Xilinx Corporation Device 000e
	Flags: bus master, fast devsel, latency 0, IRQ 144
	Memory at 38bff0000000 (64-bit, prefetchable) [size=32M]
	Memory at 38bff4000000 (64-bit, prefetchable) [size=256K]
	Memory at 38bfe0000000 (64-bit, prefetchable) [size=256M]
	Capabilities: [40] Power Management version 3
	Capabilities: [60] MSI-X: Enable- Count=32 Masked-
	Capabilities: [70] Express Endpoint, MSI 00
	Capabilities: [100] Advanced Error Reporting
	Capabilities: [400] Access Control Services
	Capabilities: [410] #15
	Capabilities: [480] Vendor Specific Information: ID=0020 Rev=0 Len=010 <?>
	Kernel driver in use: vfio-pci
	Kernel modules: xocl

lspci on the guest:

atsushi@ukvm0:~$ sudo lspci -vd 10ee:
00:0a.0 Processing accelerators: Xilinx Corporation Device 5004
	Subsystem: Xilinx Corporation Device 000e
	Physical Slot: 10
	Flags: bus master, fast devsel, latency 0
	Memory at e0000000 (64-bit, prefetchable) [size=32M]
	Memory at e4200000 (64-bit, prefetchable) [size=256K]
	Capabilities: [40] Power Management version 3
	Capabilities: [60] MSI-X: Enable- Count=32 Masked-
	Capabilities: [70] Express Endpoint, MSI 00
	Kernel driver in use: xclmgmt
	Kernel modules: xclmgmt

00:0a.1 Processing accelerators: Xilinx Corporation Device 5005
	Subsystem: Xilinx Corporation Device 000e
	Physical Slot: 10
	Flags: bus master, fast devsel, latency 0, IRQ 10
	Memory at e2000000 (64-bit, prefetchable) [size=32M]
	Memory at e4240000 (64-bit, prefetchable) [size=256K]
	Memory at d0000000 (64-bit, prefetchable) [size=256M]
	Capabilities: [40] Power Management version 3
	Capabilities: [60] MSI-X: Enable- Count=32 Masked-
	Capabilities: [70] Express Endpoint, MSI 00
	Kernel driver in use: xocl
	Kernel modules: xocl

The host OS is Ubuntu 16.04.6 LTS and the guest OS is Ubuntu 16.04.7 LTS. XRT 2.8.743 (from the 2020.2 git branch) is installed on both machines.

Does anyone know a solution or have experience enabling PCI passthrough for Alveo cards in a KVM environment?
Please let me know if you have any questions or need more detailed information.

Thanks, 
Atsushi

4 Replies
JohnFedakIV

Hi @cos24boc ,

Welcome to the Xilinx Forums!

The new U250 platform (a DFX-2RP platform, explained in AR 75975) uses the PCIe extended capabilities to communicate with XRT. The default KVM machine type doesn't support PCIe extended capabilities, but Q35 does. When creating the VM on the command line with virt-install (Step 2 in "Creating a Basic VM Using the virt-install Command" in the article you linked to), include the --machine q35 switch.
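
As an illustrative sketch only (the VM name, disk path, ISO, and sizes below are placeholders, not values from the article), the invocation would look something like:

# Create the guest with the Q35 machine type so PCIe extended capabilities are visible
virt-install \
  --name ukvm0q35 \
  --machine q35 \
  --vcpus 2 \
  --ram 4096 \
  --disk path=/var/lib/libvirt/images/ukvm0q35.qcow2,size=40 \
  --cdrom /path/to/ubuntu-16.04-server-amd64.iso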

Regards,
~John

cos24boc

Hi @JohnFedakIV,

Thank you so much! The --machine q35 option works well. Now the shell name is visible on the VM with xbmgmt. I also successfully reconfigured the user partition with my own bitstream file.

atsushi@ukvm0q35:~$ xbmgmt scan
0000:00:0a.0 xilinx_u250_gen3x16_xdma_shell_3_1 mgmt(inst=80) 
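
For reference, loading a user xclbin with the legacy xbutil in this XRT release looks roughly like this (the xclbin path is a placeholder):

# Program the user partition with an xclbin built for xilinx_u250_gen3x16_xdma_shell_3_1
sudo xbutil program -p /path/to/my_kernel.xclbin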

I'd like to report one more thing: I found that the DMA test in xbutil validate fails as follows. I don't need a solution for this right now, but let me post it here as a caution/reminder.

atsushi@ukvm0q35:~$ xbutil validate
INFO: Found 1 cards

INFO: Validating card[0]: xilinx_u250_gen3x16_xdma_shell_3_1
INFO: == Starting Kernel version check: 
INFO: == Kernel version check PASSED
INFO: == Starting AUX power connector check: 
INFO: == AUX power connector check PASSED
INFO: == Starting Power warning check: 
INFO: == Power warning check PASSED
INFO: == Starting PCIE link check: 
INFO: == PCIE link check PASSED
INFO: == Starting SC firmware version check: 
INFO: == SC firmware version check PASSED
INFO: == Starting verify kernel test: 
INFO: == verify kernel test PASSED
INFO: == Starting DMA test: 
/opt/xilinx/xrt/bin/unwrapped/loader: line 57:  3364 Killed                  "${XRT_PROG_UNWRAPPED}" "${XRT_LOADER_ARGS[@]}"
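
The "Killed" message suggests the process may have been terminated by the kernel's OOM killer (just a guess); one way to check on the guest:

# If the DMA test was killed by the OOM killer, the guest kernel log will show it:
dmesg | grep -i -e "out of memory" -e "oom"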

 

JohnFedakIV

Hi @cos24boc ,

Glad to hear that this is working well on your side.

For the DMA test, can you try increasing the number of vCPUs and the amount of RAM in the VM?
I believe the virt-install command in the article only assigns 1 vCPU and 1 GB of RAM; can you try with 4+ vCPUs and 8+ GB of RAM?

Note: This can be done by editing the VM's XML directly rather than re-running the install: virsh edit <kvm name>
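
As a sketch, the relevant elements in the domain XML opened by virsh edit would look something like this for 4 vCPUs and 8 GB of RAM:

<!-- 8 GB of RAM (value is in KiB) -->
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>8388608</currentMemory>
<!-- 4 virtual CPUs -->
<vcpu placement='static'>4</vcpu>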

Regards,
~John

cos24boc

Hi @JohnFedakIV , 

I assigned 4 vCPUs and 8 GB of RAM to my instance, and the DMA test now passes. Thank you for your help!

Best,
Atsushi