ZynqMP: R5 in lockstep mode with rpmsg communication in 2019.1


This is not really a question but rather documentation, in the hope that it helps others fighting with the same problem. I am quite disappointed that there doesn't seem to be a Xilinx example of how to operate the R5 on a Zynq UltraScale+ in lockstep mode; it was quite a struggle to get this working from the publicly available information.

What I wanted to do:

(Port an existing design from 2018.3 to 2019.1 with the following functionality:)

Program and start the R5 with a custom firmware on a Xilinx ZCU111 (it's the same issue on the ZCU102) from Linux running on the A53, using the Yocto tool flow (which is not strictly relevant for most of the points).

Accepted Solutions

1. Port all the old 2018.3 content to 2019.1 using the information and examples that are available, especially here:
https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/118358017/OpenAMP+2019.1#OpenAMP2019.1-ZynqMPLinuxMasterrunningonAPULinuxloadsOpenAMPRPUFirmware

 

2. In the device tree overlay (system-user.dtsi), change the core_conf = "split" entry to "lockstep".

Important: it's not really documented anywhere, but before 2019.1 this value had to be written "lock-step" with a hyphen; since 2019.1 the hyphenated form is no longer accepted! (See also this Xilinx forum post here.)

 

3. If you want to use the full TCM memory, besides extending the memory size you also have to specify the power node IDs of both banks (this was handled differently in the examples prior to 2019.1), e.g. for TCM A: pnode-id = <0xf>, <0x11>;

This was the resulting R5 part of my dtsi file:

        zynqmp-rpu {
                compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
                #address-cells = <2>;
                #size-cells = <2>;
                ranges;
                core_conf = "lockstep";
                r5_0: r5@0 {
                        #address-cells = <2>;
                        #size-cells = <2>;
                        ranges;
                        memory-region = <&rproc_0_reserved>, <&rproc_0_dma>;
                        pnode-id = <0x7>;
                        mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
                        mbox-names = "tx", "rx";
                        tcm_a: tcm@0 {
                                /* ATCM, 128 KB in lockstep: power nodes of banks TCM0A and TCM1A */
                                reg = <0x0 0xFFE00000 0x0 0x20000>;
                                pnode-id = <0xf>, <0x11>;
                        };
                        tcm_b: tcm@1 {
                                /* BTCM, 128 KB in lockstep: power nodes of banks TCM0B and TCM1B */
                                reg = <0x0 0xFFE20000 0x0 0x20000>;
                                pnode-id = <0x10>, <0x12>;
                        };
                };
        };

 

4. This might be specific to our use case, but we had to change the reserved-memory regions (we reserve a large memory region for the R5, nearly 500 MB), ending up with:

        reserved-memory {
                #address-cells = <2>;
                #size-cells = <2>;
                ranges;
                rproc_0_dma: rproc@20000000 {
                        no-map;
                        compatible = "shared-dma-pool";
                        reg = <0x0 0x20000000 0x0 0x100000>;
                };
                /* Memory for R5 firmware in DDR */
                rproc_0_reserved: rproc@20100000 {
                        no-map;
                        reg = <0x0 0x20100000 0x0 0x1ff00000>;
                };
        };

Of course, the lscript.ld of the R5 firmware also needs to be adapted accordingly. The important point in our case: previously we did not have a separate shared-dma-pool, just one reserved region with the functional equivalent of the DMA pool sitting somewhere in the middle of it. This no longer worked with 2019.1, so we had to move the shared-dma-pool in front of the reserved region, as shown above. The following two lines in src/platform_info.c of the firmware need to be adapted to match:

#define SHARED_MEM_PA  0x20000000UL
#define SHARED_MEM_SIZE 0x100000UL
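
Since the dtsi, the lscript.ld and platform_info.c now all have to agree on this memory layout, a small compile-time check in the firmware can save some debugging. This is only an illustrative sketch using the addresses from our dtsi above, not code from the Xilinx sources:

#include <assert.h>  /* static_assert (C11) */

#define SHARED_MEM_PA     0x20000000UL  /* must match the rproc_0_dma reg base */
#define SHARED_MEM_SIZE   0x100000UL    /* must match the rproc_0_dma reg size */
#define FW_RESERVED_BASE  0x20100000UL  /* base of rproc_0_reserved (firmware DDR) */

/* The shared-dma-pool has to sit in front of (and must not overlap) the
   reserved firmware region - exactly the layout constraint that bit us in 2019.1. */
static_assert(SHARED_MEM_PA + SHARED_MEM_SIZE <= FW_RESERVED_BASE,
              "shared DMA pool overlaps the R5 firmware region");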

 

5. In the src/rsc_table.c file it is important to use

#define RING_TX                     FW_RSC_U32_ADDR_ANY
#define RING_RX                     FW_RSC_U32_ADDR_ANY

(contrary to what is stated in UG1186; as far as I understand, the fixed vring addresses given there only apply when the firmware is loaded during the boot process)
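
For context, these two defines end up in the vring descriptors of the rpmsg vdev entry in the resource table; with FW_RSC_U32_ADDR_ANY the kernel, which loads the firmware, picks the vring addresses itself. The following is only a sketch from memory of how that part of rsc_table.c roughly looks; the macro names VRING_ALIGN and VRING_SIZE, their values and the notify IDs are assumptions taken from the OpenAMP demo sources and may differ in your project:

#include <stdint.h>

#define FW_RSC_U32_ADDR_ANY 0xFFFFFFFFUL  /* normally already provided by the OpenAMP headers */

#define RING_TX     FW_RSC_U32_ADDR_ANY   /* let the kernel choose the TX vring address */
#define RING_RX     FW_RSC_U32_ADDR_ANY   /* let the kernel choose the RX vring address */
#define VRING_ALIGN 0x1000                /* assumed alignment, check your rsc_table.c */
#define VRING_SIZE  256                   /* assumed descriptor count, check your rsc_table.c */

/* One vring descriptor as defined by the remoteproc resource table format. */
struct fw_rsc_vdev_vring {
	uint32_t da;        /* device address; ADDR_ANY means "allocated by the host" */
	uint32_t align;
	uint32_t num;
	uint32_t notifyid;
	uint32_t reserved;
};

/* The rpmsg vdev entry of the resource table then carries two such vrings: */
static const struct fw_rsc_vdev_vring rpmsg_vring0 = { RING_TX, VRING_ALIGN, VRING_SIZE, 1, 0 };
static const struct fw_rsc_vdev_vring rpmsg_vring1 = { RING_RX, VRING_ALIGN, VRING_SIZE, 2, 0 };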

 

6. Getting the RPMsg Character Device to work was also a mess.

It is not possible to build the kernel driver in (CONFIG_RPMSG_CHAR=y): with the built-in driver, the character device was not created when the R5 was started (only a virtio device appeared). So it is important to build it as a module ("m") in the kernel config. Our cfg file for this:

# Remoteproc & RPMsg Config for R5
CONFIG_VIRTIO=y
CONFIG_REMOTEPROC=y
CONFIG_ZYNQMP_R5_REMOTEPROC=y
CONFIG_RPMSG=y
CONFIG_RPMSG_VIRTIO=y
CONFIG_RPMSG_CHAR=m

Then, at the beginning, this was not picked up properly because the meta-openamp layer in Yocto also defines this option. So in our own meta layer we had to add the following to recipes-kernel/linux/linux-%.bbappend:

# Remove the config from meta-openamp
KERNEL_FEATURES_remove = "cfg/openamp.scc"

In the same bbappend we also added the additional .cfg file with the content above (via SRC_URI += "file://...").
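
For completeness, this is roughly how we then load and start the firmware from Linux on the A53 through the remoteproc sysfs interface (shown here as a small C helper; echoing into the same sysfs files from a shell works just as well). It assumes that the RPU shows up as remoteproc0 and that the ELF has been installed under /lib/firmware as r5_firmware.elf; both names are placeholders and may differ on your system:

#include <stdio.h>

/* Write a single string into a sysfs attribute file. */
static int sysfs_write(const char *path, const char *value)
{
	FILE *f = fopen(path, "w");
	if (!f) {
		perror(path);
		return -1;
	}
	fputs(value, f);
	return fclose(f);
}

int main(void)
{
	/* The firmware name is interpreted relative to /lib/firmware. */
	if (sysfs_write("/sys/class/remoteproc/remoteproc0/firmware", "r5_firmware.elf"))
		return 1;
	/* "start" loads the ELF into TCM/DDR and releases the R5 from reset. */
	if (sysfs_write("/sys/class/remoteproc/remoteproc0/state", "start"))
		return 1;
	printf("R5 firmware started\n");
	return 0;
}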

 

7. Character Device creation works differently than before 2019.1. I ended up copying most of the initialization code from here:

https://github.com/Xilinx/meta-openamp/blob/rel-v2019.1/recipes-openamp/rpmsg-examples/rpmsg-echo-test/echo_test.c

This works fine once everything else is fixed, with the exception that I had to replace "rpmsg-openamp-demo-channel" with "rpmsg-openamp-channel" everywhere (my virtio device did not have "demo" in its name).
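
Once the control-plane setup from echo_test.c (binding the rpmsg_chrdev driver and creating the endpoint via the RPMSG_CREATE_EPT_IOCTL) has produced a /dev/rpmsgN node, talking to the R5 is plain file I/O. A minimal sketch; the device path /dev/rpmsg0 and the payload are assumptions, the actual node number depends on your setup:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char tx[] = "hello r5";
	char rx[256];
	ssize_t n;

	/* Created by the rpmsg_char driver once the endpoint exists. */
	int fd = open("/dev/rpmsg0", O_RDWR);
	if (fd < 0) {
		perror("open /dev/rpmsg0");
		return 1;
	}

	/* Each write() becomes one rpmsg message towards the R5 endpoint... */
	if (write(fd, tx, sizeof(tx)) < 0) {
		perror("write");
		close(fd);
		return 1;
	}

	/* ...and the echo firmware sends the same payload back. */
	n = read(fd, rx, sizeof(rx));
	if (n < 0) {
		perror("read");
		close(fd);
		return 1;
	}
	printf("received %zd bytes: %.*s\n", n, (int)n, rx);

	close(fd);
	return 0;
}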

---

These were probably the main points, but it is very likely that I forgot something, as I have struggled with this issue over the last three months (not full time, but nonetheless it was definitely the most annoying and worst-documented issue I have had so far with Xilinx products).

I hope it might help somebody who faces similar issues.
