
"metal_uio_dev_open: No IRQ for device 3ed80000.shm“ . What's this info mean?

The libmetal demo runs to completion, and the printed output all looks fine except for one line: "metal: info: metal_uio_dev_open: No IRQ for device 3ed80000.shm".

What does this message mean?
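For context, as far as I can tell from the libmetal sources, this line is informational only: metal_uio_dev_open() prints it when the UIO device it is opening has no interrupt associated with it. In the dtsi in this post, the shm@0 node deliberately carries no `interrupts` property (a plain shared-memory window needs none), while the ipi@ff340000 node does, so only the shm device triggers the message. The relevant contrast (node names taken from the dtsi below):

```dts
/* The shm node describes a plain memory window: no interrupts property,
 * so its UIO device has no IRQ and libmetal logs the (harmless) info line. */
shm0: shm@0 {
	compatible = "shm_uio";
	reg = <0x0 0x3ed80000 0x0 0x1000000>;
	/* no interrupt-parent / interrupts here */
};

/* The IPI node does carry an interrupt, so no such message appears for it. */
ipi_amp: ipi@ff340000 {
	compatible = "ipi_uio";
	reg = <0x0 0xff340000 0x0 0x1000>;
	interrupt-parent = <&gic>;
	interrupts = <0 29 4>;
};
```

Consistent with this reading, all the demos in the log report Passed/finished despite the message.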

The full output is below:

CLIENT> ****** libmetal demo: shared memory ******
metal: info: metal_uio_dev_open: No IRQ for device 3ed80000.shm.

SERVER> Demo has started.
SERVER> Shared memory test finished
SERVER> ====== libmetal demo: atomic operation over shared memory ======
SERVER> Starting atomic add on shared memory demo.

CLIENT> Setting up shared memory demo.
CLIENT> Starting shared memory demo.
CLIENT> Sending message: Hello World - libmetal shared memory demo
CLIENT> Message Received: Hello World - libmetal shared memory demo
CLIENT> Shared memory demo: Passed.
CLIENT> ****** libmetal demo: atomic operation over shared memory ******
metal: info: metal_uio_dev_open: No IRQ for device 3ed80000.shm.
SERVER> Shared memory with atomics test finished
SERVER> ====== libmetal demo: IPI and shared memory ======

CLIENT> Starting atomic shared memory task.
CLIENT> shm atomic demo PASSED!

SERVER> Wait for echo test to start.
CLIENT> ****** libmetal demo: IPI and shared memory ******
metal: info: metal_uio_dev_open: No IRQ for device 3ed80000.shm.

CLIENT> Start echo flood testing....
CLIENT> Sending msgs to the remote.
CLIENT> Waiting for messages to echo back and verify.
SERVER> Received shutdown message

SERVER> IPI with shared memory demo finished with exit code: 0.

SERVER> ====== libmetal demo: IPI latency ======

SERVER> Starting IPI latency demo
CLIENT> Kick remote to notify shutdown message sent...
CLIENT> Total packages: 1024, time_avg = 0s, 4183ns
CLIENT> ****** libmetal demo: IPI latency ******
metal: info: metal_uio_dev_open: No IRQ for device 3ed80000.shm.
CLIENT> Starting IPI latency task

SERVER> ====== libmetal demo: shared memory latency ======
CLIENT> IPI latency result:

SERVER> Starting IPI latency demo
1000 iterations:
CLIENT> APU to RPU average latency: 17 ns
CLIENT> RPU to APU average latency: 9 ns
CLIENT> Finished IPI latency task
CLIENT> ****** libmetal demo: shared memory latency ******
metal: info: metal_uio_dev_open: No IRQ for device 3ed80000.shm.
CLIENT> Starting IPI latency task
CLIENT> package size 16 latency result:
CLIENT> APU to RPU average latency: 19 ns
CLIENT> RPU to APU average latency: 12 ns
CLIENT> package size 32 latency result:
CLIENT> APU to RPU average latency: 0 ns
CLIENT> RPU to APU average latency: 28 ns
CLIENT> package size 64 latency result:
CLIENT> APU to RPU average latency: 6 ns
CLIENT> RPU to APU average latency: 39 ns
CLIENT> package size 128 latency result:
CLIENT> APU to RPU average latency: 19 ns
CLIENT> RPU to APU average latency: 9 ns
CLIENT> package size 256 latency result:
CLIENT> APU to RPU average latency: 40 ns
CLIENT> RPU to APU average latency: 10 ns
CLIENT> package size 512 latency result:
CLIENT> APU to RPU average latency: 10 ns
CLIENT> RPU to APU average latency: 25 ns
CLIENT> package size 1024 latency result:
CLIENT> APU to RPU average latency: 24 ns
SERVER> ====== libmetal demo: shared memory throughput ======
CLIENT> RPU to APU average latency: 28 ns
CLIENT> Finished shared memory latency task
SERVER> Starting shared mem throughput demo
metal: info: metal_uio_dev_open: No IRQ for device 3ed80000.shm.
CLIENT> ****** libmetal demo: shared memory throughput ******
CLIENT> Starting shared mem throughput demo
CLIENT> Shared memory throughput of pkg size 16 :
CLIENT> APU send: 120635b, 0 MB/s
CLIENT> APU receive: 239a113, 0 MB/s
CLIENT> RPU send: 22d44ac, 0 MB/s
CLIENT> RPU receive: 2f7a7c1, 0 MB/s
CLIENT> Shared memory throughput of pkg size 32 :
CLIENT> APU send: 96388d, 1 MB/s
CLIENT> APU receive: 13e8fa4, 0 MB/s
CLIENT> RPU send: 11d9dc7, 0 MB/s
CLIENT> RPU receive: 1d504cc, 0 MB/s
CLIENT> Shared memory throughput of pkg size 64 :
CLIENT> APU send: 52213d, 4 MB/s
CLIENT> APU receive: eced80, 1 MB/s
CLIENT> RPU send: 967409, 2 MB/s
CLIENT> RPU receive: 14654f5, 1 MB/s
CLIENT> Shared memory throughput of pkg size 128 :
CLIENT> APU send: 2f9565, 16 MB/s
CLIENT> APU receive: e79ca7, 3 MB/s
CLIENT> RPU send: 546771, 9 MB/s
CLIENT> RPU receive: fc716c, 3 MB/s
CLIENT> Shared memory throughput of pkg size 256 :
CLIENT> APU send: 200a20, 47 MB/s
CLIENT> APU receive: d3b4c3, 7 MB/s
CLIENT> RPU send: 33d042, 29 MB/s
CLIENT> RPU receive: e29ff4, 6 MB/s
CLIENT> Shared memory throughput of pkg size 512 :
CLIENT> APU send: 15efa0, 139 MB/s
CLIENT> APU receive: b59ffb, 16 MB/s
CLIENT> RPU send: 24d8c2, 82 MB/s
CLIENT> RPU receive: ce14dd, 14 MB/s
CLIENT> Shared memory throughput of pkg size 1024 :
CLIENT> APU send: 11b83d, 344 MB/s
CLIENT> APU receive: b1e434, 34 MB/s
CLIENT> RPU send: 1e97f0, 199 MB/s
CLIENT> RPU receive: c1398e, 31 MB/s
CLIENT> Finished shared memory throughput

My user dtsi is below:

/include/ "system-conf.dtsi"
/ {
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;
	rproc_0_reserved: rproc@3ed000000 {
	 no-map;
	 reg = <0x0 0x3ed00000 0x0 0x2000000>;
	 };
	};
amba {
/* Shared memory */
	shm0: shm@0 {
 	compatible = "shm_uio";
	 reg = <0x0 0x3ed80000 0x0 0x1000000>;
	 };
/* IPI device */
	 ipi_amp: ipi@ff340000 {
	 compatible = "ipi_uio";
	 reg = <0x0 0xff340000 0x0 0x1000>;
	 interrupt-parent = <&gic>;
	 interrupts = <0 29 4>;
	 };
      };
};&ttc0 {
compatible = "ttc0";
status = "okay";
};
/{
power-domains {
pd_r5_0: pd_r5_0 {
#power-domain-cells = <0x0>;
pd-id = <0x7>;
};
pd_tcm_0_a: pd_tcm_0_a {
#power-domain-cells = <0x0>;
pd-id = <0xf>;
};
pd_tcm_0_b: pd_tcm_0_b {
#power-domain-cells = <0x0>;
pd-id = <0x10>;
};
};
 amba {
 /* firmware memory nodes */
 r5_0_tcm_a: tcm@ffe00000 {
 compatible = "mmio-sram";
 reg = <0x0 0xFFE00000 0x0 0x10000>;
 pd-handle = <&pd_tcm_0_a>;
 };
 r5_0_tcm_b: tcm@ffe20000 {
 compatible = "mmio-sram";
 reg = <0x0 0xFFE20000 0x0 0x10000>;
 pd-handle = <&pd_tcm_0_b>;
 };
 elf_ddr_0: ddr@3ed00000 {
 compatible = "mmio-sram";
 reg = <0x0 0x3ed00000 0x0 0x100000>;
 };

 test_r5_0: zynqmp_r5_rproc@0 {
	 compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
	 reg = <0x0 0xff9a0100 0x0 0x100>,
 	 <0x0 0xff9a0000 0x0 0x100>;
	 reg-names = "rpu_base", "rpu_glbl_base";
	 dma-ranges;
	 core_conf = "split0";
	 srams = <&r5_0_tcm_a &r5_0_tcm_b &elf_ddr_0>;
	 pd-handle = <&pd_r5_0>;
 };
 };

};
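If the missing IRQ ever mattered (i.e. if the shared-memory window really did have a PL interrupt wired to it), the usual way to silence the message would be to describe that interrupt in the node, the same way the ipi node above already does. A sketch only, assuming a hypothetical GIC SPI number 30 that this design does not actually have:

```dts
shm0: shm@0 {
	compatible = "shm_uio";
	reg = <0x0 0x3ed80000 0x0 0x1000000>;
	/* hypothetical: only add this if an interrupt is really wired
	 * to the shared-memory region; plain shared memory needs none */
	interrupt-parent = <&gic>;
	interrupts = <0 30 4>;
};
```

For the stock libmetal shared-memory demos, no interrupt is expected on this node, so the info line should be safe to ignore.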

 
