01-22-2020 04:17 PM
I am using an Artix-7 75T device on a custom board. We have had the PCIe DMA working for quite a while now, at least in the card-to-host direction. I recently upgraded Vivado from 2017.4 to 2019.2, and that is when I started having problems.
I got a long list of complaints in the ip_upgrade.log, which are frankly not very helpful. I decided to just start from scratch and generate the PCIe DMA core from the IP Catalog with the settings I had used for the version of the core in Vivado 2017.4.
The issue I see is that our Jetson TX2 GPU never successfully completes boot. Here are the error messages I am getting:
[ 212.721550] xdma:request_regions: pci_request_regions()
[ 212.726815] xdma:map_single_bar: BAR0: 4194304 bytes to be mapped.
[ 212.733132] xdma:map_single_bar: BAR0 at 0x40400000 mapped at 0xffffff8015c00000, length=4194304(/4194304)
[ 212.742791] xdma:is_config_bar: Checking BAR 0 for XDMA config BAR
[ 212.845810] CPU3: SError detected, daif=1c0, spsr=0x800000c5, mpidr=80000101, esr=bf40c000
[ 212.845812] CPU4: SError detected, daif=1c0, spsr=0x800000c5, mpidr=80000102, esr=bf40c000
[ 212.845815] CPU5: SError detected, daif=1c0, spsr=0x800000c5, mpidr=80000103, esr=bf40c000
[ 212.847443] ROC:IOB Machine Check Error:
[ 212.847447] Address Type = Secure DRAM
[ 212.847459] Address = 0x0 (Unknown Device)
The previous version of the core had two BAR spaces defined: one for the PCIe DMA core and one for the AXI4 Master.
Do you have any suggestions? I moved to the newer version of Vivado since the timing closure methodology seems improved.
I would be happy to send you both versions of the core if you send me a link.
01-28-2020 06:01 AM
01-28-2020 06:27 AM
Our SW engineer had to significantly modify the driver when we started on this project in 2017.4, since we are using an ARM processor. Just to confirm: are you saying that the 4.1 version of the xdma core will not work with the 2017.4 version of the driver? Thanks for your help.
01-28-2020 06:43 AM
It would be very helpful to know which version of the Vivado tool correlates to which version of the xdma core and also which version of the driver correlates to which version of the xdma core. Thanks.
01-28-2020 08:09 AM
Here is a copy of the ip_upgrade.log that I get when I upgrade the xdma core. All the configuration settings appear to have been imported properly.
Our SW engineer is telling me that he saw the same issues when he used the newer version of the driver. I was not aware that he had tried this.
Finally, if I build a design with just the xdma core example design, I do not have issues. However, I do see issues when I try to use the new xdma core in my top-level design, which has an AXI Interconnect block connecting to some other cores via the AXI4-Lite Master we have implemented in the xdma. Thanks.
01-29-2020 01:07 PM
Regarding the SW driver version: theoretically there shouldn't be an issue, but we haven't tested an older version of the driver with the latest IP version. Our recommendation would be to use the latest driver version with the latest IP version.
Regarding the issue you are running into, would it be possible to regenerate the IP in 2019.2 in your design instead of upgrading it from 2017.4?
Please review the change log in the IP for the details on what has changed between 2017.4 and 2019.2.
01-29-2020 03:11 PM
I have already done what you suggested and just generated a new xdma core in 2019.2 using the same settings for all the tabs.
Through some testing on our custom board with an Artix-7 75T device, we have narrowed down the actual root cause: if you try to access an unmapped region of the AXI4-Lite Master bus, the bus hangs our ARM processor and you need a reset to recover. This was not an issue with the 2017.4 version of the core.
Any suggestions on how to fix this specific issue? Thanks.
01-30-2020 09:45 AM
We don't have any known issues like this. Not much has changed between 2017.4 and 2019.2.
Could you capture the signals on the AXI-Lite interface with a Vivado ILA and check the status of the signals?
01-30-2020 03:33 PM
I had already added an axi_firewall IP to my design between the AXI4-Lite Master in the xdma and the AXI Interconnect, which connects to four slaves in my design.
Now when I try to read a register not defined in the memory map for the four slaves, the processor hangs but eventually reboots itself.
Could you please tell me what behavior we should expect when we try to read an AXI address that does not have any slave hardware connected to it?
All of our slaves are Xilinx IP, so you would think they would all respond in the same way. It is just puzzling that the behavior changed when we upgraded.
I will try to implement your ILA suggestion soon. Thanks.
02-04-2020 07:10 AM
It shouldn't result in a hang; the code provides a default state. The IP source is not encrypted, so if you would like to investigate the root cause you could delve into the code base. I have put below the IP file hierarchy for AXI-Lite access. Having the Firewall IP is fine too. Could you let us know what happens with the 2017.4 IP version if you try to access a non-existent address?
02-04-2020 01:09 PM
In the 2017.4 version of the IP, it just returns 0x0000_0000 when you read an address outside the defined memory map for the AXI4-Lite slaves.
We lost more than a week diagnosing the problem and implementing a fix, so I do not have time now to debug the root cause. If I do in the future, I will get back to you.