08-20-2019 08:02 PM
We are debugging a remoteproc-loaded baremetal on the R5 RPUs (split mode) and have recently upgraded from 2018.3 to 2019.1. We typically have an elf running on each R5, but we get the following issue if we load only the first R5. If we load elfs on both R5s, or only on the second R5, then the memory accesses work normally.
We made the necessary 2019.1 device-tree changes and the baremetal apps run fine (https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/118358017/OpenAMP+2019.1). We can still read/write TCM memories from linux with devmem.
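For reference, this is roughly the devmem check we do from the Linux side (the global TCM addresses here are assumed from the UltraScale+ TRM memory map, not taken from our device tree, so adjust for your setup):

```
# Run as root on the APU. 0xFFE20000 is assumed to be the global base of
# R5 #0's BTCM (TCM bank 2) per the Zynq UltraScale+ TRM.
devmem 0xFFE20000 32               # read a word
devmem 0xFFE20000 32 0x12345678    # write a word
devmem 0xFFE20000 32               # read it back
```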
If only the first R5 is started, when we attach the JTAG debugger in SDK (System Debugger on Local) we can step through code, read CPU registers etc, but if we attempt to access the second TCM bank we get an exception. For example: "Exception: Cannot read target memory. Memory read error at 0x2002C. Blocked address, VA 0x2002C, PA 0xFFE2002C. TCM bank 2 powered down"
We get a similar result in the xsct console with the mrd command: "mrd 0xffe2002c" gives "Memory read error at 0xFFE2002C. Blocked address 0xFFE2002C. TCM bank 2 powered down", but it does work if we force it: "mrd -force 0xffe2002c" gives "FFE2002C: 00000000" as expected. Reads from the first TCM bank work (they don't require force).
Has the recent 2019.1 refactoring of the zynqmp_r5_remoteproc.c driver perhaps broken the powering up of the TCMs, or at least of the second one (as the first one appears to work fine)?
08-30-2019 08:07 AM
It seems that the debugger just thinks that the 2nd RPU is powered down, and hence that TCM bank 2 is powered down and not accessible, but in fact it is powered, since a forced access succeeds without issue. When you say that you cannot access TCM bank 2, do you mean purely from the debugger's point of view, or also from the application or Linux side?
09-01-2019 01:33 PM
Yes, agreed - it is purely from the debugger's point of view. The memory is definitely working: the application runs as expected, we can read the memory from elsewhere (e.g. Linux devmem), and a forced read with xsct works (mrd -force).
Is there a work-around or patch for SDK?
09-01-2019 11:54 PM
Are both RPU applications loaded through remoteproc? I'm wondering if the issue might be related to the fact that the debugger is not aware of your applications being loaded, so the memory protection feature is trying to prevent access to TCM bank 2. It would also be interesting to check from which target you access the memory (mrd command), as the access is context aware (accessing from the APU, A53 #0, or R5 #0 is not the same).
I might need to reproduce the issue on my side to report it internally, but meanwhile I think you could try using memmap to map your application on the RPU target, to make the debugger aware of your application's memory footprint.
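Something along these lines in an xsct session (a sketch only - the target names/indices and the TCM base/size are assumptions that vary with your board and tool version, so check "targets" and the TRM first):

```
connect
targets                                  # list targets; find the R5 core you debug
targets -set -filter {name =~ "*R5*#0*"}
# Declare the second TCM bank (BTCM, assumed global base 0xFFE20000, 64 KB)
# as a valid read/write region so the debugger stops blocking it:
memmap -addr 0xFFE20000 -size 0x10000 -flags 3
mrd 0xFFE2002C                           # should now read without -force
```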
09-02-2019 02:39 PM
Yes, both applications are loaded through remoteproc.
As mentioned in the original message, the behaviour differs depending on whether one or both R5s are running, and which one. The issue only happens when there's a single application running on R5 #0. If I load an application on the second R5, I don't get exceptions accessing the second R5's memories; and if I load applications on both R5s, I don't get exceptions accessing either's memories.
Also, there was no problem with 2018.3; this has only happened since migrating to 2019.1.
As I said, the application itself works fine, and I can access the memory externally (e.g. Linux devmem or mmap); it's only an inconvenience that I can't view/edit the TCM (local variables etc.) in the debugger.