11-14-2018 10:30 AM - edited 11-14-2018 10:43 AM
Thanks in advance for your help.
I would like to know how to fix an XDMA driver hang.
I am testing a PCIe + MIG configuration (without any other peripherals) for the VCU1525 FPGA on a Supermicro server (Ubuntu 18.04).
I am using Vivado 2018.2.
I generated a bitstream using IPI without any errors and programmed it into the FPGA.
Then I loaded the driver from AR#65444 (I checked both the latest version and an older one) and ran run_test.sh.
I made sure that the accessed address range was within the accessible region.
However, the driver got stuck after it queued the transfer (“transfers_queue(): H2C0: engine->running=1”).
Only when I killed the process did the driver return from there, with “wait_event_interruptible=-512” (-512 is -ERESTARTSYS, the result of the signal interrupting the wait).
It seems the FPGA does not respond to the request, and the interrupt is never fired.
Following the guidance in “Xilinx Answer 71435: DMA Subsystem for PCI Express - Driver and IP Debug Guide”, I checked the status registers.
They showed that the busy bit is set (at 0x40) and that there were no completed requests (0x48).
As for the interrupt masks, only the Channel Interrupt Enable Mask (at 0x90) was set, to 0x00f83e1e; the other masks were all 0.
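For reference, the register reads above can be sanity-checked with a small decode helper. This is a minimal sketch, assuming (per PG195, the DMA/Bridge Subsystem for PCIe product guide) that bit 0 of the H2C channel status register at offset 0x40 is the Busy bit and that offset 0x48 holds the completed descriptor count:

```shell
#!/bin/sh
# Minimal sketch. Assumption (from PG195): bit 0 of the H2C channel status
# register (offset 0x40) is the Busy bit; offset 0x48 is the completed
# descriptor count. The value 0x1 below is the one observed in this thread.
decode_status() {
    status=$1
    if [ $(( status & 0x1 )) -ne 0 ]; then
        echo "busy"     # engine started a transfer that never completed
    else
        echo "idle"
    fi
}

decode_status 0x1    # prints "busy"
```

A busy bit stuck at 1 together with a completed descriptor count of 0 is consistent with the engine issuing a request on the AXI side and never receiving a response.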
If anyone has gotten this design working on a VCU1525, please let me know.
Any help fixing this issue would be greatly appreciated.
11-16-2018 08:59 AM
There are a few things that can be done to eliminate some setup issues.
1) Make sure to use Ubuntu 16.04.3 LTS with Vivado 2018.2. We have not tested Vivado 2018.2 or the DMA driver on Ubuntu 18.04 and do not plan to support it until Vivado 2018.3.
2) The VCU1525 was qualified using the SDx Tool Flow. UG1238 has information on installing SDx and UG1023 has information on how to design in SDx with a VCU1525.
3) Could you provide your testcase so I can look it over to verify your configuration?
11-17-2018 10:22 PM
First, my earlier post contained some errors and omissions, which I would like to correct:
1) My OS is Ubuntu 16.04 (not 18.04).
2) I connected a simple RTL wrapper that has 5 AXI master ports, 1 AXI slave port, and 1 AXI-Lite slave port, after verifying that the PCIe+DDR design works without the wrapper. The connections go through two AXI Interconnect IPs, one for the AXI interface and one for the AXI-Lite interface. (See the attached design, "mydesign.png".)
3) I ran several tests:
3-0) Without the RTL design, everything works perfectly.
3-1) Without connecting the AXI-Lite interface (simply removing it from the PCIe core) -> the driver detects the device, but a test cannot be run (as I described above).
3-2) Without connecting the AXI interface from the RTL design (connecting only AXI-Lite) -> the driver cannot detect the device, as it cannot find any config BAR; the BAR values are all -1.
3-3) With all interfaces connected -> the driver cannot detect the device, as it cannot find any config BAR; the BAR values are all -1.
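One thing that may be worth trying when the BARs read back as all -1: after reprogramming the FPGA, the host's view of the card's config space can be stale, and removing the PCIe function then rescanning the bus sometimes recovers it without a full reboot. A hedged sketch follows; the BDF 0000:03:00.0 is a placeholder for your card's bus/device/function, and the sysfs root is parameterized so the sequence can be exercised against a fake tree:

```shell
#!/bin/sh
# Hedged sketch: remove a PCIe function and rescan the bus via sysfs.
# Placeholder assumptions: BDF 0000:03:00.0 (find yours with: lspci -d 10ee:)
# and the standard /sys/bus/pci remove/rescan attributes. Requires root on
# real hardware.
rescan_device() {
    sysfs=$1   # normally /sys
    bdf=$2     # e.g. 0000:03:00.0
    echo 1 > "$sysfs/bus/pci/devices/$bdf/remove"
    echo 1 > "$sysfs/bus/pci/rescan"
}

# Usage on real hardware:
#   rescan_device /sys 0000:03:00.0
```

If the BARs still read -1 after a rescan (or a warm reboot with the FPGA already programmed), the endpoint likely failed link training or is not completing config reads.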
I am trying to use an open-source project that uses pure RTL sources (and also needs IPs from Vivado).
The test code I used is from AR#65444, under "tests/run_test.sh" (I have attached "tests/scripts/dma_memory_mapped_test.sh" for your information).
Therefore, I need to use Vivado instead of SDx.
11-21-2018 08:25 AM
When you add your custom RTL logic in IPI, are you making sure to adjust the address mapping in the Address Editor? Also, do you have the PCIe BAR to AXI translation set up within the PCIe DMA IP for the AXI-Lite interface?
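Alongside checking the address map, it may help to confirm whether the endpoint is responding to config reads at all, since the "BARs are all -1" symptom also shows up as an all-ones vendor ID. A hedged sketch, reading the first two bytes of the function's sysfs config space (paths and the Xilinx vendor ID 0x10ee are assumptions):

```shell
#!/bin/sh
# Hedged sketch: read the PCI vendor ID from a function's sysfs config file.
# An all-ones read (ffff) means the endpoint is not completing config reads,
# matching BARs that read back as -1. The config path is a placeholder,
# normally /sys/bus/pci/devices/<bdf>/config.
check_vendor_id() {
    cfg=$1
    b=$(od -An -v -tx1 -N2 "$cfg" | tr -d ' ')  # first two bytes as hex
    vid="${b#??}${b%??}"                        # byte-swap: little-endian
    if [ "$vid" = "ffff" ]; then
        echo "not responding"
    else
        echo "vendor id 0x$vid"
    fi
}

# Usage on real hardware:
#   check_vendor_id /sys/bus/pci/devices/0000:03:00.0/config
```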
11-25-2018 02:43 AM