02-21-2021 09:42 AM
I have a simple XDMA design in an Artix-7 device. It is minimally changed from the example design you get from running subsystem-level block automation in the block design. It works as expected when compiled in Vivado 2019.2.
When I open the same design in Vivado 2020.2, upgrade the IP and regenerate the bitstream, the device does NOT work. The design compiles, but it is not recognised by the AR65444 device driver.
Using dmesg with device driver debug enabled, I can see the same BARs being presented by the FPGA (BAR0 is used for AXI4-Lite and BAR2 is the DMA registers). They have the same sizes (BAR0 = 128K, BAR2 = 64K) as when generated with 2019.2. The device driver function is_config_bar() does two "trial reads" at offsets 0x2000 and 0x3000 to detect the DMA registers (it runs the test in both BARs). With Vivado 2020.2 it reads back 0 for each register; it should read something like 0x1FC20006 and 0x1FC30006 from the two DMA registers in BAR2, according to PG195 tables 2-78 and 2-95.
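For reference, my understanding of that check is roughly as follows (a simplified sketch, not the verbatim libxdma.c source; the offsets and expected ID values are taken from PG195, and the mask is my reading of what the driver compares):

#include <linux/io.h>
#include <linux/types.h>

/* Simplified sketch of the config BAR detection: read the identifier
 * registers of the interrupt block (offset 0x2000) and the config block
 * (offset 0x3000) and compare the ID fields against PG195. */
#define IRQ_BLOCK_ID     0x1fc20000UL
#define CONFIG_BLOCK_ID  0x1fc30000UL

static bool looks_like_config_bar(void __iomem *bar)
{
    u32 mask   = 0xffff0000;               /* ignore the version field */
    u32 irq_id = ioread32(bar + 0x2000);   /* expect 0x1FC2xxxx */
    u32 cfg_id = ioread32(bar + 0x3000);   /* expect 0x1FC3xxxx */

    return ((irq_id & mask) == IRQ_BLOCK_ID) &&
           ((cfg_id & mask) == CONFIG_BLOCK_ID);
}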
Are there any known issues with XDMA in Vivado 2020.2, please?
02-25-2021 11:18 PM
Are you seeing any clocking or LTSSM issues? Could you please check whether enumeration and link-up are successful?
02-28-2021 09:15 AM
OK, I've put an LED on the core's user_lnk_up pin. That goes to 1 after enumeration.
The enumeration occurs correctly. Here is the output from dmesg on the linux host:
[ 1.210190] pci 0000:01:00.0: [10ee:7024] type 00 class 0x070001
[ 1.210269] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x0001ffff 64bit pref]
[ 1.210322] pci 0000:01:00.0: reg 0x18: [mem 0x00000000-0x0000ffff 64bit pref]
[ 1.210421] pci 0000:01:00.0: enabling Extended Tags
[ 1.210602] pci 0000:01:00.0: PME# supported from D0 D1 D2 D3hot
[ 1.210683] pci 0000:01:00.0: 4.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x1 link at 0000:00:00.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link)
[ 1.213952] PCI: bus1: Fast back to back transfers disabled
[ 1.213974] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[ 1.214022] pci 0000:00:00.0: BAR 9: assigned [mem 0x600000000-0x6000fffff 64bit pref]
[ 1.214048] pci 0000:01:00.0: BAR 0: assigned [mem 0x600000000-0x60001ffff 64bit pref]
[ 1.214096] pci 0000:01:00.0: BAR 2: assigned [mem 0x600020000-0x60002ffff 64bit pref]
[ 1.214145] pci 0000:00:00.0: PCI bridge to [bus 01]
[ 1.214176] pci 0000:00:00.0: bridge window [mem 0x600000000-0x6000fffff 64bit pref]
(It has found the device with the correct identifiers, 10ee:7024, and the correctly sized BARs.)
But any read from BAR0 or BAR2 returns 0. This causes the AR65444 device driver to report "Failed to detect XDMA config BAR":
[ 4.100026] xdma:xdma_device_open: xdma device 0000:01:00.0, 0x(ptrval).
[ 4.100162] xdma:map_single_bar: BAR0 at 0x600000000 mapped at 0x(ptrval), length=131072(/131072)
[ 4.100193] xdma:map_single_bar: BAR2 at 0x600020000 mapped at 0x(ptrval), length=65536(/65536)
[ 4.100211] xdma:map_bars: Failed to detect XDMA config BAR
This is the Xilinx-provided example design. The exact same design works fine when compiled in 2019.2.
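For what it's worth, the zero reads can also be reproduced without the driver by mapping the BAR through sysfs from user space. A minimal sketch (assuming the card is at 0000:01:00.0 as in the dmesg above, and that the host allows mmap of PCI resource files; you may need to write 1 to the device's "enable" file first and run as root):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* BAR2 is 64K and holds the DMA registers; offsets 0x2000 and 0x3000
     * are the IRQ block and config block identifier registers (PG195). */
    const char *res = "/sys/bus/pci/devices/0000:01:00.0/resource2";
    int fd = open(res, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    volatile uint32_t *bar = mmap(NULL, 0x10000, PROT_READ, MAP_SHARED, fd, 0);
    if (bar == MAP_FAILED) { perror("mmap"); return 1; }

    printf("IRQ block identifier (0x2000): 0x%08X\n", (unsigned int)bar[0x2000 / 4]);
    printf("CFG block identifier (0x3000): 0x%08X\n", (unsigned int)bar[0x3000 / 4]);

    munmap((void *)bar, 0x10000);
    close(fd);
    return 0;
}

This gives a second, driver-independent way to see whether the identifier registers respond.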
02-28-2021 11:15 PM
Thank you for sharing the log capture.
You mention that the design works with the 2019.2 driver but NOT with the 2020.2 driver. Please correct me if I am wrong.
Could you please compare the libxdma.c files of the two drivers?
03-01-2021 12:20 AM
Not quite. I've tried the older driver. It is the FPGA build, not the driver, that is the problem.
Xilinx example design compiled with 2019.2 - works correctly with both old and new linux device driver
Xilinx example design compiled with 2020.2 - does not work with either device driver
I think something has changed in the XDMA IP.
03-01-2021 01:39 AM
Thank you for the confirmation.
Could you please read IRQ_ID and CFG_ID for both BARs? If this combination is not correct in a BAR region, the XDMA driver does not detect it as the config BAR.
Please add the following print statement in the libxdma.c file:
printk("BAR %d - IRQ ID: 0x%X (expecting 0x%X) CFG ID: 0x%X (expecting 0x%X)\n", idx, irq_id & mask, IRQ_BLOCK_ID, cfg_id & mask, CONFIG_BLOCK_ID);
03-01-2021 01:50 AM
I'd already done that...
With the IP compiled by Vivado 2020.2, it reads back 0 for each register in BAR2.
With the IP compiled by Vivado 2019.2, it reads 0x1FC20006 and 0x1FC30006 from the two DMA registers in BAR2. Those values agree with PG195 tables 2-78 and 2-95.
(In doing this, I've also found that the device driver makes the same reads from BAR0, the AXI4-Lite master interface. The FPGA needs to have something behind that BAR that responds at those addresses, otherwise the driver crashes with an exception. That ought ideally to be documented.)
03-08-2021 05:41 AM
We have the same problem: 2019.2 worked well, 2020.2 does not (the IP was updated via the Vivado tool). I also tried an empty project with only the XDMA, and it does not work either. The XDMA's internal registers cannot be read through any BAR. We also have a Windows driver, which likewise cannot read any BAR registers from the XDMA 2020.2 build, so the problem is not related to the Linux driver. It must be in the XDMA itself.
Does anybody know whether the bug is related to the Vivado version or the XDMA version? I have Vivado 2020.2 and XDMA v4.1 rev. 8. It worked well with Vivado 2019.2 and XDMA v4.1 rev. 4, but I did not test other combinations.
04-06-2021 03:46 AM
Could you please share the XDMA .xci file and log files showing the driver loading and other issues in 2020.2?
04-12-2021 01:12 AM
This issue has been reported and will be resolved ASAP.
05-11-2021 10:19 AM
Did you get your issue resolved? I thought I was going crazy until I saw your thread. I am going to try 2020.3; if that does not work, I will have to use 2019.2. This is so frustrating.
05-12-2021 05:25 AM
We have reported an issue seen in the XDMA Linux host driver v2020.2 when a PCIe BAR is enabled in XDMA.
Are you saying this is also seen with the Windows driver?
05-12-2021 06:15 AM
One more question:
Does it work for you with PCIe BARs disabled?
05-12-2021 08:27 AM
With 2019.2 I can actually read the internal registers in BAR0 (it is not working in 2020.2). That is required for the Linux xdma driver to work: it needs to read offsets 0x2000 and 0x3000 for the block IDs. I also have the bypass enabled, so I actually have three BARs; since I am running 64-bit, they are BAR0, BAR2 and BAR4. I had to hack the xdma driver to create /dev/xdma0_user and /dev/xdma0_bypass, but I am still not able to read/write my own logic. It is getting rather frustrating.
A few years back I was using the AXI DMA core and it worked well. We had to write our own driver at that time. I am tempted to drop this DMA/bridge thing and go back to the old design that I know works.
05-12-2021 08:41 AM
What processor are you running on? I had to edit the device driver for ARM to get code using /dev/xdma0_user working; there is a description of what I had to do in another thread on this forum (essentially resizing some variables to 64-bit, because 36-bit values were being written into them). After doing that, with Vivado 2019.2 I CAN access my FPGA registers etc. Mine is a 32-bit ARM, though!
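(The kind of change I mean, purely as an illustration with hypothetical names rather than the actual libxdma.c symbols: with BARs assigned above 4 GB, as in the 0x600000000 addresses earlier in this thread, storing the physical address in a 32-bit variable silently truncates it.)

#include <linux/types.h>

/* Hypothetical illustration only, not the real driver structures. */
struct bar_info_broken {
    u32 phys_addr;   /* 0x600000000 silently becomes 0x00000000 here */
    u32 length;
};

struct bar_info_fixed {
    u64 phys_addr;   /* wide enough for a 36-bit (or larger) address */
    u32 length;
};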
05-12-2021 08:42 AM
I am curious: did anyone get this core to work properly? I am on an Artix-7 and have my own registers that I need to read and write. Currently I am able to read/write the internal DMA registers but not the registers outside the core, even though they are mapped in the address editor.
I am worried that even if I do get it to work, other bugs may be lurking in the core!
05-12-2021 08:46 AM
I am on a development PC; it is a 64-bit machine. We will be transitioning to the Nvidia Jetson soon. Would you mind giving me the link to the thread with your changes?
05-12-2021 08:47 AM
Yes, with Vivado 2019.2 I have the AXI4-Lite bus and the AXI4 DMA bypass bus working in an Artix-7.
You do need to get the address "window" right, so that the (small) window opened via a BAR actually points to a register on the bus. Mine is a 128K window, and I've set the address map in the block design so that my registers begin at 0.
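Once that window is set up, reading a register from user space through the driver is just a pread() at the right offset on the user character device. A rough sketch (assuming the AR65444 driver's /dev/xdma0_user node and one of my registers sitting at offset 0 of the 128K window):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* /dev/xdma0_user is backed by the AXI4-Lite master behind BAR0;
     * the file offset is the byte address within the BAR window. */
    int fd = open("/dev/xdma0_user", O_RDWR);
    if (fd < 0) { perror("open /dev/xdma0_user"); return 1; }

    uint32_t value = 0;
    if (pread(fd, &value, sizeof(value), 0x0) != (ssize_t)sizeof(value)) {
        perror("pread");
        close(fd);
        return 1;
    }
    printf("register at offset 0x0 = 0x%08X\n", (unsigned int)value);

    close(fd);
    return 0;
}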
05-12-2021 09:05 AM
OK, that sounds good. I will give it another try. I think I will put my registers at 0 on the bypass BAR but leave the DMA at 0 on the slave. That should make the driver happy.