rkvr

PetaLinux + Xilinx PCIe RP + M.2 SSD interface issue

Hello,

I am using the Vivado, Vitis and PetaLinux 2020.1 versions with an xczu7ev-fbvb900-1-i device. I have designed an XDMA (DMA/Bridge Subsystem for PCI Express (4.1)) PCIe root port in the PL, interfaced with an M.2 NVMe M-key 128 GB SSD. I have tested the SSD in a Linux machine: I am able to mount it and perform reads and writes.

I have the following observations when trying to validate the M.2 interface.

1. Running the bare-metal application (xdmapcie_rc_enumerate_example)

  I am able to read the M.2 SSD vendor ID and device ID.

2. Compiling PetaLinux with only Xilinx XDMA PL PCIe host bridge support enabled in the kernel

 PetaLinux boots successfully. When I run the lspci command for a few iterations, it responds with the correct output. But after running lspci several more times, PetaLinux hangs with no response and no kernel panic message (a stress loop like the one sketched below, after the attachments, can help reproduce this).

I have added a VIO monitoring Link Up and the M.2 reset (active low, controlled by the FPGA). Both signals are high all the time.

PFA (lspci_command_response.txt) for the log.

PFA the ILA log for when the lspci command passes and when it fails (PetaLinux hangs).

3. Compiling PetaLinux with both Xilinx XDMA PL PCIe host bridge support and NVM Express block device support enabled

 When I boot PetaLinux, the boot hangs with the following messages:

[ 3.758466] sdhci-pltfm: SDHCI platform and OF driver helper
[ 24.600290] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[ 24.600562] rcu: 2-...0: (1 GPs behind) idle=58e/1/0x4000000000000000 softirq=35/36 fqs=2600
[ 24.609124] (detected by 1, t=5254 jiffies, g=-919, q=4)
[ 24.614485] Task dump for CPU 2:
[ 24.617686] kworker/u8:0 R running task 0 7 2 0x0000000a
[ 24.624703] Workqueue: nvme-reset-wq nvme_reset_work
[ 24.629624] Call trace:
[ 24.632050] __switch_to+0x1c4/0x288
[ 24.635597] wake_up_process+0x14/0x20

PFA (nvme_block_device_error.txt) for the detailed PetaLinux log.
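To reproduce the intermittent lspci hang more systematically, a stress loop along the following lines can be run from the console. This is a minimal sketch (the iteration count and delay are arbitrary choices); lspci -n prints the numeric vendor:device IDs, which can be cross-checked against the values read by the bare-metal enumeration example:

#!/bin/sh
# Stress lspci to reproduce the intermittent hang; note the
# iteration at which the system stops responding.
i=0
while [ $i -lt 100 ]; do
    echo "iteration $i"
    lspci -n || echo "lspci failed at iteration $i"
    i=$((i + 1))
    sleep 1
done
# If the shell hangs, inspect the kernel log from a second
# console (SSH or another UART): dmesg | tail -n 50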

 

Thank you.

rkvr

I was able to make progress by using the Vivado and PetaLinux 2019.2 versions with the same Vivado project (created with the Save As option from Vivado 2020.1) and the same PetaLinux settings.

The lspci command now works consistently.

I can see the NVMe device under the /dev directory.

However, the fdisk command is still not working, and I get the following message in the console. I tried the Link Partner TX preset setting = 5, but it did not help.

[ 429.069533] nvme nvme0: I/O 307 QID 3 timeout, completion polled
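When fdisk stalls with timeouts like the one above, it can help to separate enumeration problems from data-path problems with a raw read before involving any partition tooling. A sketch, assuming the namespace shows up as /dev/nvme0n1:

# Confirm the controller and namespace nodes exist.
ls -l /dev/nvme0 /dev/nvme0n1

# Small raw read straight from the namespace; if this also times
# out, the problem is in the data path (link/DMA), not in fdisk
# or the partition table. Drop iflag=direct if your dd build
# does not support it.
dd if=/dev/nvme0n1 of=/dev/null bs=4k count=256 iflag=direct

# Check for further "QID ... timeout" messages.
dmesg | grep -i nvme | tail -n 20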

rkvr

I did a further experiment and observed that the issue is specific to one SSD vendor/drive; with another SSD vendor/drive the issue is not present.
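Since the failure tracks the drive rather than the design, it is worth recording the exact identity of both drives for comparison. A sketch, assuming the controller registers as nvme0:

# PCI-level identity (numeric vendor:device IDs) of the SSD.
lspci -nn | grep -i 'non-volatile\|nvme'

# Model, serial number and firmware revision reported by the
# NVMe driver through sysfs.
cat /sys/class/nvme/nvme0/model
cat /sys/class/nvme/nvme0/serial
cat /sys/class/nvme/nvme0/firmware_rev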

dcaruso_satlantis

Hi,

 

We are using a similar FPGA and making the same connection to an M.2 NVMe drive. It works with a custom root filesystem in 2018.3 (4.14 kernel). But we want to move to the newest Vivado (2020.1) and kernel (xilinx-v2020.1), and we see the same problem: the NVMe driver reports a reset-queue issue and we don't understand why. The bitstream synthesized with Vivado 2020.1 works with all the software from 2018.3, so maybe it is an issue in the kernel? Or maybe the IP core needs a different connection for the newest kernel?
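One way to narrow down kernel vs. IP is to confirm which host-bridge driver and kernel options each build actually uses. A sketch (reading /proc/config.gz requires CONFIG_IKCONFIG_PROC to be enabled in the kernel):

# Kernel version actually running.
uname -r

# Which kernel driver is bound to the root port and to the SSD.
lspci -k

# Compare the relevant config symbols between the 2018.3 and
# 2020.1 kernel builds.
zcat /proc/config.gz | grep -E 'XDMA|XILINX|NVME'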

 

Best!

rkvr

Based on my experiments, the issue is with the Vivado 2020.1 version.

For me, Vivado 2019.2 + PetaLinux 2020.1 worked.

The NVMe error was specific to the drive; after switching to a proper drive I am not observing the issue.

dcaruso_satlantis

Great to know!

Did you try the patch proposed for the IP in Vivado 2020.1? https://www.xilinx.com/support/answers/75304.html

Or has Xilinx not fixed this issue yet?

Best!

rkvr

I have not yet tried the patch. I will let you know after trying it.

dcaruso_satlantis

Hi rkvr!

 

Today I tried this tactical patch (https://www.xilinx.com/support/answers/75334.html), which includes the previous one I mentioned, and it works perfectly with Vivado 2020.1! Now we are working with full 2020.1 support.
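Note that Vivado tactical patches of that era were typically installed by unpacking the archive and pointing the MYVIVADO environment variable at it before launching the tools. The exact steps and archive layout are in the README shipped with the AR, so the paths below are only illustrative:

# Unpack the tactical patch (archive name and path are illustrative;
# follow the README inside the AR 75334 download for exact steps).
mkdir -p /opt/xilinx/patches/AR75334
unzip <AR75334_patch>.zip -d /opt/xilinx/patches/AR75334

# Point the tools at the patch before launching Vivado.
export MYVIVADO=/opt/xilinx/patches/AR75334
vivado &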

Best!

rkvr

Is the patch also required for the Vivado 2020.1.1 update?

Instead of applying the patch, will updating to version 2020.1.1 help?

dcaruso_satlantis

The release notes for Vivado 2020.1.1 don't say anything about the tactical patch (release notes: https://www.xilinx.com/support/answers/75475.html). Also, this update was released before the tactical patch, so I suppose it doesn't include it. But I'm using Vivado 2020.1 with the patch; I haven't installed the 2020.1.1 update yet.

garethc (Moderator)

Hi @dcaruso_satlantis @rkvr,

I took a look at the patch's internal details, and it says the issue is resolved in the v2020.2 release. As v2020.1.1 is a minor release, I expect the patch is still required for it.


Kind regards,
Gareth
dcaruso_satlantis

Hi @garethc!

That's right. Last Friday I tested Vivado v2020.2 on my hardware and the issue is solved there, so we are migrating all our software to v2020.2.

Best!

garethc (Moderator)

Hi @dcaruso_satlantis 

Thanks for the update, and I am glad that with v2020.2 this issue is now resolved.


Kind regards,
Gareth