10-20-2020 11:31 PM - edited 10-21-2020 12:16 AM
I am using the Vivado, Vitis, and PetaLinux 2020.1 versions, with the xczu7ev-fbvb900-1-i (active) device. I have designed an XDMA (DMA/Bridge Subsystem for PCIe (4.1)) PCIe root port in the PL, interfaced with an M.2 NVMe M-key 128 GB SSD. I have tested the SSD in a Linux machine: I am able to mount it and perform reads and writes.
I have the following observations while trying to validate the M.2 interface.
1. Running the bare-metal application (xdapcie_rc_enumerate_example)
I am able to read the M.2 SSD's vendor ID and device ID.
2. Compiled PetaLinux with only Xilinx XDMA PL PCIe host bridge support enabled in the kernel
PetaLinux boots successfully. When I run the lspci command for a few iterations, it responds with the correct messages. But when I run lspci many more times, PetaLinux gets stuck, with no response and no kernel panic message.
I have added a VIO with link-up and M.2 reset (active low) controlled by the FPGA. Both signals are high the whole time.
PFA (lspci_command_response.txt) for the log.
PFA the ILA log for when the lspci command passes and when it fails (PetaLinux stuck).
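To make the hang easier to reproduce than typing lspci by hand, I used a simple stress loop like the one below (a sketch in plain POSIX shell; the iteration count is arbitrary, adjust as needed):

```shell
#!/bin/sh
# Hypothetical stress loop to reproduce the lspci hang on the target.
# If the bridge locks up, the loop stalls mid-run and never prints "done".
i=1
while [ "$i" -le 50 ]; do
    echo "iteration $i"
    # Config-space reads happen here; output is discarded to keep it fast
    lspci > /dev/null 2>&1 || true
    i=$((i + 1))
done
echo "done: lspci survived 50 iterations"
```

On my board the loop typically stalled well before the counter finished, which matches the ILA capture.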
3. Compiled PetaLinux with Xilinx XDMA PL PCIe host bridge support and NVMe Express block device enabled
When I boot PetaLinux, the boot gets stuck with the following messages.
[ 3.758466] sdhci-pltfm: SDHCI platform and OF driver helper
[ 24.600290] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[ 24.600562] rcu: 2-...0: (1 GPs behind) idle=58e/1/0x4000000000000000 softirq=35/36 fqs=2600
[ 24.609124] (detected by 1, t=5254 jiffies, g=-919, q=4)
[ 24.614485] Task dump for CPU 2:
[ 24.617686] kworker/u8:0 R running task 0 7 2 0x0000000a
[ 24.624703] Workqueue: nvme-reset-wq nvme_reset_work
[ 24.629624] Call trace:
[ 24.632050] __switch_to+0x1c4/0x288
[ 24.635597] wake_up_process+0x14/0x20
PFA (nvme_block_device_error.txt) for the detailed PetaLinux log.
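For anyone reproducing case 3, these are the two kernel options I enabled (a sketch from my config; the exact symbol names are my assumption from the Xilinx 2020.1 kernel tree, so please double-check them in menuconfig):

```
# Bus support -> PCI controller drivers
CONFIG_PCIE_XDMA_PL=y
# Device Drivers -> NVME Support -> NVM Express block device
CONFIG_BLK_DEV_NVME=y
```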
10-26-2020 10:33 PM
I was able to make progress by using the Vivado and PetaLinux 2019.2 versions with the same Vivado project (created with the Save As option from Vivado 2020.1) and the same PetaLinux settings.
The lspci command works consistently.
I can see the NVMe device under the /dev directory.
However, the fdisk command is still not working and I get the following messages in the console. I tried Link Partner TX preset setting = 5, but it did not help.
[ 429.069533] nvme nvme0: I/O 307 QID 3 timeout, completion polled
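As a quick sanity check before running fdisk, I first confirm the namespace node actually exists (a sketch; /dev/nvme0n1 is an assumption based on how the drive enumerated on my board, adjust to your /dev listing):

```shell
#!/bin/sh
# Hypothetical pre-check before partitioning the NVMe drive.
DEV=/dev/nvme0n1
if [ -b "$DEV" ]; then
    # List the partition table; an I/O timeout here points at the
    # link or the drive rather than at device enumeration
    fdisk -l "$DEV"
else
    echo "no block device at $DEV"
fi
```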
11-13-2020 05:32 AM - edited 11-13-2020 05:38 AM
We are using a similar FPGA with the same connection to an M.2 NVMe drive. It works with a custom root filesystem in 2018.3 (4.14 kernel). But we want to move to the newest Vivado (2020.1) and kernel (xilinx-v2020.1), and we have the same problem: the kernel reports a reset-queue issue with the NVMe drive and we don't understand why. The bitstream synthesized with Vivado 2020.1 works with all the software from 2018.3, so maybe it is an issue in the kernel? Or maybe the IP core needs a different connection for the newest kernel?
11-13-2020 05:41 AM
Based on my experiments, the issue is with the Vivado 2020.1 version.
For me, Vivado 2019.2 + PetaLinux 2020.1 worked.
The NVMe error was specific to the drive; after switching to a proper drive, I no longer observe the issue.
11-13-2020 05:45 AM
11-16-2020 01:58 AM
Today I tried this tactical patch (https://www.xilinx.com/support/answers/75334.html), which includes the previous one I mentioned, and it works perfectly with Vivado 2020.1! We are now working with full support for 2020.1.
11-18-2020 12:52 AM
The release notes for Vivado 2020.1.1 don't say anything about the tactical patch (release notes for Vivado 2020.1.1: https://www.xilinx.com/support/answers/75475.html). Also, that update is dated earlier than the tactical patch, so I suppose it doesn't include it. But I'm using Vivado 2020.1 with the patch; I haven't installed the 2020.1.1 update yet.
11-24-2020 05:10 AM
I took a look at the patch's internal details: it says the issue is resolved in the v2020.2 release, and since v2020.1.1 is a minor release, I expect the patch is still required for that release.
12-01-2020 12:51 AM
12-03-2020 05:45 AM
Thanks for the update; I hope that with v2020.2 this issue no longer appears.