jgummeson
Observer

Using OCM with remoteproc

Hello,

I have PetaLinux running on a Zynq UltraScale+ MPSoC and I'm using remoteproc to run some code on the RPU. Currently I have memory nodes set up in my device tree that let the RPU code run from either the TCM or a reserved region of DDR. Is there a way to use the OCM for code and data when using the remoteproc framework?

When I link my RPU code to run from the OCM, I get this error while bringing it up:

remoteproc remoteproc0: bad phdr da 0xfffc0000 mem 0xc6f0

Is there a way to add the OCM to my device tree? This is what my device tree looks like currently for the DDR and TCMs:

/{
        power-domains {
                pd_r5_0: pd_r5_0 {
                        #power-domain-cells = <0x0>;
                        pd-id = <0x7>;
                };
                pd_tcm_0_a: pd_tcm_0_a {
                        #power-domain-cells = <0x0>;
                        pd-id = <0xf>;
                };
                pd_tcm_0_b: pd_tcm_0_b {
                        #power-domain-cells = <0x0>;
                        pd-id = <0x10>;
                };
        };

        amba {
                /* firmware memory nodes */
                r5_0_tcm_a: tcm@ffe00000 {
                        compatible = "mmio-sram";
                        reg = <0x0 0xFFE00000 0x0 0x10000>;
                        pd-handle = <&pd_tcm_0_a>;
                };
                r5_0_tcm_b: tcm@ffe20000 {
                        compatible = "mmio-sram";
                        reg = <0x0 0xFFE20000 0x0 0x10000>;
                        pd-handle = <&pd_tcm_0_b>;
                };
                elf_ddr_0: ddr@3ed00000 {
                        compatible = "mmio-sram";
                        reg = <0x0 0x3ed00000 0x0 0x100000>;
                };
                test_r5_0: zynqmp_r5_rproc@0 {
                        compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
                        reg = <0x0 0xff9a0100 0x0 0x100>,
                              <0x0 0xff9a0000 0x0 0x100>;
                        reg-names = "rpu_base", "rpu_glbl_base";
                        dma-ranges;
                        core_conf = "split0";
                        srams = <&r5_0_tcm_a &r5_0_tcm_b &elf_ddr_0>;
                        pd-handle = <&pd_r5_0>;
                };
        };
};
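
Modeled on the TCM nodes above, my first guess for an OCM node would be something like this (untested; the 0x40000 size assumes all four 64 KB OCM banks, and I don't know yet whether a power domain also needs to be wired up):

        r5_0_ocm: ocm@fffc0000 {
                compatible = "mmio-sram";
                reg = <0x0 0xFFFC0000 0x0 0x40000>;
        };

with &r5_0_ocm then added to the srams list of the zynqmp_r5_rproc node.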

Appreciate any help.

 

3 Replies
jgummeson
Observer

OK, it works now after adding the OCM node to system-user.dtsi:

r5_0_ocm: ocm@fffc0000 {
        compatible = "mmio-sram";
        reg = <0x0 0xFFFC0000 0x0 0x40000>;
};

And then adding the OCM as an SRAM for the r5 remoteproc:

test_r5_0: zynqmp_r5_rproc@0 {
        compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
        reg = <0x0 0xff9a0100 0x0 0x100>,
              <0x0 0xff9a0000 0x0 0x100>;
        reg-names = "rpu_base", "rpu_glbl_base";
        dma-ranges;
        core_conf = "split0";
        srams = <&r5_0_tcm_a &r5_0_tcm_b &elf_ddr_0 &r5_0_ocm>;
        pd-handle = <&pd_r5_0>;
};

So this is working and I can load firmware onto the RPU. But do I also need to add a power domain for the OCM? Also, my code does not seem to start up properly if I put .vectors into the OCM, although it works with any/all of the other sections in OCM. Any reason for that? Could it be related to not having the power domain set up for the OCM in the device tree?
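
For reference, this is what I'd expect the power-domain wiring to look like if it is needed, mirroring the TCM entries. The pd_ocm_bank_0 label is my own, and the pd-ids are an assumption: 0xb through 0xe for OCM banks 0-3, following the Xilinx PM node-ID list (NODE_OCM_BANK_0 through NODE_OCM_BANK_3, one 64 KB bank each). I haven't tested this:

        pd_ocm_bank_0: pd_ocm_bank_0 {
                #power-domain-cells = <0x0>;
                pd-id = <0xb>;  /* assumed: NODE_OCM_BANK_0 */
        };
        /* ...and likewise pd-id = <0xc>, <0xd>, <0xe> for banks 1-3 */

with a matching pd-handle = <&pd_ocm_bank_0>; on the OCM sram node. Since pd-handle seems to take a single phandle in the TCM examples, this might require one sram node per 64 KB bank rather than one node for the whole 0x40000 region.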

mgarrick
Contributor

I got this working in 2018.3 (using the same fix you showed).

However, now I need to get it working in 2020.1, and the dtsi is different.

Anyone know how to configure this for 2020.1?

zynqmp-rpu {
        compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
        #address-cells = <0x00000002>;
        #size-cells = <0x00000002>;
        ranges;
        core_conf = "split";
        reg = <0x00000000 0xff9a0000 0x00000000 0x00010000>;

        r5@0 {
                #address-cells = <0x00000002>;
                #size-cells = <0x00000002>;
                ranges;
                memory-region = <0x00000039 0x0000003a 0x0000003b 0x0000003c>;
                pnode-id = <0x00000007>;
                mboxes = <0x0000003d 0x00000000 0x0000003d 0x00000001>;
                mbox-names = "tx", "rx";

                tcm_0@0 {
                        reg = <0x00000000 0xffe00000 0x00000000 0x00010000>;
                        pnode-id = <0x0000000f>;
                };

                tcm_0@1 {
                        reg = <0x00000000 0xffe20000 0x00000000 0x00010000>;
                        pnode-id = <0x00000010>;
                };
        };
};

 

I tried adding:

ocm_0@2 {
        reg = <0x00000000 0xfffc0000 0x00000000 0x00010000>;
        pnode-id = <0x0000000b>;
};

but it doesn't seem to help.
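
The next thing on my list is to describe all four 64 KB OCM banks as separate child nodes, mirroring the TCM pattern. The node names are mine, the pnode-ids 0xb-0xe are my assumption (NODE_OCM_BANK_0 through NODE_OCM_BANK_3 in the Xilinx PM node-ID list), and I haven't confirmed that the 2020.1 driver picks up non-TCM child nodes at all:

        ocm_0@0 {
                reg = <0x00000000 0xfffc0000 0x00000000 0x00010000>;
                pnode-id = <0x0000000b>;
        };
        ocm_0@1 {
                reg = <0x00000000 0xfffd0000 0x00000000 0x00010000>;
                pnode-id = <0x0000000c>;
        };
        ocm_0@2 {
                reg = <0x00000000 0xfffe0000 0x00000000 0x00010000>;
                pnode-id = <0x0000000d>;
        };
        ocm_0@3 {
                reg = <0x00000000 0xffff0000 0x00000000 0x00010000>;
                pnode-id = <0x0000000e>;
        };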

mgarrick
Contributor

When I get to loading the firmware and releasing the RPU from within Linux, I see the following:

remoteproc remoteproc0: bad phdr da 0xfffc0000 mem 0x228

Which I think means remoteproc has no memory mapping covering the OCM at 0xFFFC0000, so it can't load the segment there.

Anybody got any ideas on how to configure this? Once again, I need this in 2020.1.
