
SDK 2019.1 cannot view R5 TCM bank 2 memory with JTAG debugger when only R5 #0 running split

We are debugging remoteproc-loaded bare-metal applications on the R5 RPUs (split mode) and have recently upgraded from 2018.3 to 2019.1. We typically have an ELF running on each R5, but we get the following issue if we load only the first R5. If we load ELFs on both R5s, or only on the second R5, then the memory accesses work normally.

We made the necessary 2019.1 device-tree changes and the bare-metal apps run fine (https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/118358017/OpenAMP+2019.1). We can still read/write the TCM memories from Linux with devmem.

If only the first R5 is started, when we attach the JTAG debugger in SDK (System Debugger on Local) we can step through code, read CPU registers, etc., but if we attempt to access the second TCM bank we get an exception. For example: "Exception: Cannot read target memory. Memory read error at 0x2002C. Blocked address, VA 0x2002C, PA 0xFFE2002C. TCM bank 2 powered down"

We get a similar result in the XSCT console with the mrd command: "mrd 0xffe2002c" gives "Memory read error at 0xFFE2002C. Blocked address 0xFFE2002C. TCM bank 2 powered down", but it does work if we force it: "mrd -force 0xffe2002c" gives "FFE2002C: 00000000" as expected. Reading addresses from the first TCM bank works (no force required).
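For reference, a minimal XSCT session reproducing this looks roughly as follows (the addresses come from the error messages above; the target-selection filter string is an assumption and may need adjusting to match the target names in your session):

    # Select the R5 #0 debug target (filter string is an assumption for a typical split setup)
    targets -set -filter {name =~ "*R5*#0*"}
    # First TCM bank reads normally
    mrd 0xFFE00000
    # Second TCM bank is reported as powered down...
    mrd 0xFFE2002C
    # ...but a forced read succeeds
    mrd -force 0xFFE2002C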

Has the recent 2019.1 refactoring of the zynqmp_r5_remoteproc.c driver perhaps broken the powering up of the TCMs, or at least of the second bank (the first one appears to work fine)?

Moderator

Re: SDK 2019.1 cannot view R5 TCM bank 2 memory with JTAG debugger when only R5 #0 running split

Hi robert.turner@nz.abb.com 

It seems that the debugger just thinks the second RPU is powered down, and hence that TCM bank 2 is powered down and not accessible, but in fact it is powered, since the forced access completes without issues. When you say that you cannot access TCM bank 2, do you mean only from the debugger's point of view, or also from the application or Linux side?

Regards


Ibai

Re: SDK 2019.1 cannot view R5 TCM bank 2 memory with JTAG debugger when only R5 #0 running split

Yes, agreed - it is purely from the debugger's point of view. The memory is definitely working: the application runs as expected, we can read the memory from elsewhere (e.g. Linux devmem), and a forced read with XSCT works (mrd -force).

Is there a work-around or patch for SDK?

Moderator

Re: SDK 2019.1 cannot view R5 TCM bank 2 memory with JTAG debugger when only R5 #0 running split

Hi robert.turner@nz.abb.com 

Are both RPU applications loaded through remoteproc? I'm wondering if the issue might be related to the fact that the debugger is not aware of your applications being loaded, so the memory protection feature is trying to prevent access to TCM bank 2. It would also be interesting to check from which target you access the memory (mrd command), as the access is context aware (it is not the same to access from the APU, A53 #0, or R5 #0).
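For example, something along these lines in XSCT would make the access context explicit (the filter strings are assumptions and depend on how the targets are named in your session):

    # List the available debug targets and note which one is currently selected
    targets
    # Read from the R5 #0 context
    targets -set -filter {name =~ "*R5*#0*"}
    mrd 0xFFE2002C
    # Compare with the same read from an APU context, e.g. A53 #0
    targets -set -filter {name =~ "*A53*#0*"}
    mrd 0xFFE2002C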

I might need to reproduce the issue on my side to report it internally, but meanwhile I think you could try to use memmap for your application on the RPU processor target to give the debugger awareness of your application's memory footprint.
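A rough sketch of what that could look like in XSCT (the ELF file name is hypothetical, and the exact memmap options may differ in your SDK version):

    # Select the R5 #0 target first (filter string is an assumption)
    targets -set -filter {name =~ "*R5*#0*"}
    # Tell the debugger about the application's memory footprint using its ELF (hypothetical file name)
    memmap -file r5_0_app.elf
    # Verify the memory map entries the debugger now knows about
    memmap -list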

Regards 


Ibai

Re: SDK 2019.1 cannot view R5 TCM bank 2 memory with JTAG debugger when only R5 #0 running split

Yes, the applications are loaded through remoteproc.

As mentioned in the original message, the behaviour differs depending on whether one or both R5s are running, and also which one. The issue only happens when a single application is running on R5 #0. If I load an application on the second R5 I don't get exceptions accessing the second R5's memories, and if I load applications on both R5s I don't get exceptions accessing either's memories.

Also, there was no problem in 2018.3; it has only happened since migrating to 2019.1.

As I said, the application itself works fine and I can access the memory externally (e.g. Linux devmem or mmap); it's only an inconvenience that I can't view/edit the TCM (local variables, etc.) in the debugger.
