Problem with setting CPU affinity for PCI MSI interrupts

Hi,

We have developed a soft PCIe root-port IP, and we are using Vivado 2017.2 and PetaLinux 2017.2 on a Zynq UltraScale+ MPSoC on the Fidus Sidewinder-100 board.

 

It looks like the MSI interrupts are not being picked up by all the CPU cores even after setting the IRQ affinity hints for the individual processors; only one CPU seems to be processing all of the MSI interrupts.

 

I even tried running it with a recompiled irqbalance.

Could you please help us set the CPU affinity of the four available MSI interrupts so that each is handled by an individual core?

 

Interrupt section of the device tree file:
...

// Interrupts
interrupt-parent = <&gic>;
interrupts = <0 89 4>;
interrupt-controller;
#interrupt-cells = <1>;
device_type = "pci";

interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &mvbl_x4_pcie_rc_0 1>,
                <0 0 0 2 &mvbl_x4_pcie_rc_0 2>,
                <0 0 0 3 &mvbl_x4_pcie_rc_0 3>,
                <0 0 0 4 &mvbl_x4_pcie_rc_0 4>;
};
...
Relevant console logs:

root@os8:~# cat /proc/irq/default_smp_affinity
f


root@os8:~# cat /proc/irq/21[6789]/smp_affinity_list
0
1
2
3
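
For reference, the per-IRQ affinities shown above can be set from the shell roughly as follows (a minimal sketch; IRQ numbers 216-219 are the MSI vectors from the /proc/interrupts listing below):

# pin each MSI vector to its own core by writing to smp_affinity_list
echo 0 > /proc/irq/216/smp_affinity_list
echo 1 > /proc/irq/217/smp_affinity_list
echo 2 > /proc/irq/218/smp_affinity_list
echo 3 > /proc/irq/219/smp_affinity_list

Even with the affinities set as above, the /proc/interrupts output shows all four MSI vectors being serviced by CPU0: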

root@os8:~# cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3
2: 0 0 0 0 GICv2 29 Level arch_timer
3: 37699 130074 56775 55875 GICv2 30 Level arch_timer
10: 0 0 0 0 GICv2 67 Level zynqmp_pm
12: 0 0 0 0 GICv2 156 Level zynqmp-dma
13: 0 0 0 0 GICv2 157 Level zynqmp-dma
14: 0 0 0 0 GICv2 158 Level zynqmp-dma
15: 0 0 0 0 GICv2 159 Level zynqmp-dma
16: 0 0 0 0 GICv2 160 Level zynqmp-dma
17: 0 0 0 0 GICv2 161 Level zynqmp-dma
18: 0 0 0 0 GICv2 162 Level zynqmp-dma
19: 0 0 0 0 GICv2 163 Level zynqmp-dma
21: 0 0 0 0 GICv2 109 Level zynqmp-dma
22: 0 0 0 0 GICv2 110 Level zynqmp-dma
23: 0 0 0 0 GICv2 111 Level zynqmp-dma
24: 0 0 0 0 GICv2 112 Level zynqmp-dma
25: 0 0 0 0 GICv2 113 Level zynqmp-dma
26: 0 0 0 0 GICv2 114 Level zynqmp-dma
27: 0 0 0 0 GICv2 115 Level zynqmp-dma
28: 0 0 0 0 GICv2 116 Level zynqmp-dma
29: 1 0 0 0 GICv2 144 Level fd070000.memory-controller
31: 0 0 0 0 GICv2 49 Level cdns-i2c
32: 0 0 0 0 GICv2 50 Level cdns-i2c
33: 0 0 0 0 GICv2 42 Level ff960000.memory-controller
34: 0 0 0 0 GICv2 58 Level ffa60000.rtc
35: 0 0 0 0 GICv2 59 Level ffa60000.rtc
36: 416 0 0 0 GICv2 81 Level mmc0
37: 1141 0 0 0 GICv2 53 Level xuartps
39: 0 0 0 0 GICv2 88 Level ams-irq
216: 458370 0 0 0 Mobiveil PCIe MSI 524288 Edge nvme0q0, nvme0q1
217: 1083084 0 0 0 Mobiveil PCIe MSI 524289 Edge nvme0q2
218: 1042912 0 0 0 Mobiveil PCIe MSI 524290 Edge nvme0q3
219: 1083276 0 0 0 Mobiveil PCIe MSI 524291 Edge nvme0q4
IPI0: 1975 2347 1636 1545 Rescheduling interrupts
IPI1: 4203 676726 655010 676030 Function call interrupts
IPI2: 0 0 0 0 CPU stop interrupts
IPI3: 22201 0 16300 12508 Timer broadcast interrupts
IPI4: 0 0 0 0 IRQ work interrupts
IPI5: 0 0 0 0 CPU wake-up interrupts
Err:

 

Thanks.

Reply from a Xilinx Employee:

Reply from the original poster:

@minm,

Thanks for your response.

 

The link suggests the irqbalance package as the solution, but as I already mentioned in the original post, I have tried that option without any success.

 

Thanks.

Reply from a Xilinx Employee:

It looks like the Linux community has had some discussion on this topic. They are not going to support automatic IRQ balancing for the GIC in the kernel. According to that discussion, routing all IRQs to one CPU gives better performance than routing every IRQ to a random CPU.

 

http://lists.infradead.org/pipermail/linux-arm-kernel/2011-January/037945.html 
