Registered: 07-15-2014

zynq linux: emacps issue

Hi,

I believe the 2014.2 Zynq Linux driver for emacps has an issue. When a TX-complete interrupt is received, the interrupt status register is cleared and a tasklet is scheduled, but the TX status register is not cleared, so the interrupt triggers again as soon as interrupts are re-enabled. After this happens for a while, the Linux kernel notices and offloads the interrupt handler onto the ksoftirqd daemon, which finally allows the tasklet to run and clear the TX status register.

The symptom of the problem is that when the Zynq is put under a high TX load (for instance, writing to a TCP socket as fast as possible), top shows a lot of CPU time spent in ksoftirqd.

A possible fix is to clear the TX status register in the interrupt handler, but that still leaves another issue: one IRQ per TX completion.

So I propose another fix: disable the TX-completion IRQ in the interrupt handler, re-enable it at the end of the tx_poll tasklet, and have the tasklet re-read the TX status register before exiting, in case another TX completion arrived in the meantime.


Please find attached a patch implementing this idea.


Regards.

5 Replies

Registered: 07-15-2014

Hi,

I am not so sure anymore about this interpretation of ksoftirqd using a lot of CPU, especially since I have not received an answer.

The real problem seems to be the sheer number of TX-complete interrupts. To avoid them, I have found that disabling the interrupt when it fires and delaying the tx_poll tasklet by 1 ms lets the tasklet handle many packets at once, greatly reduces the number of interrupts, and seems to improve TX performance a bit. Please find attached the patch implementing this idea.

I am interested in any feedback on this issue.


Regards.

Xilinx Employee
Registered: 09-10-2008

I'm no expert, but NAPI mode is supposed to put the driver into a polling mode when data rates get high, to avoid too many interrupts. It's not clear whether what you describe is a real problem or not.

Thanks
John
Registered: 07-15-2014

The xilinx_emacps driver uses NAPI for RX only, not for TX.

Registered: 07-15-2014

Also, without this patch, when sending as fast as possible on a gigabit link, I see something like 14000 interrupts per second. Any task that happens to be running on the CPU handling them (CPU 0) is slowed down considerably.

Xilinx Employee
Registered: 09-10-2008

Thanks for clarifying about that.