
Contributor
Registered: ‎05-14-2018

Can't enable multiple MSI vectors on the host system with the AXI Memory Mapped to PCI Express core as endpoint


Hello,

I have a xc7z045ffg900-2 connected to an Intel board over PCIe, and I am having trouble getting multiple MSI interrupts to work with the AXI Memory Mapped to PCI Express core. I am using Vivado 2018.2 and Linux Kernel 4.14.

I have the core's MSI Vectors Requested parameter set to 5 (i.e. 2^5 = 32 MSI vectors), and lspci on the host Intel system shows the following:

 

        Capabilities: [48] MSI: Enable- Count=1/32 Maskable- 64bit+
                Address: 0000000000000000  Data: 0000

So the bits in the capability register are being set correctly to show that the vectors are available. However, I get a -ENOSPC return code when I call the following function during device driver initialisation in Linux on the host:

pci_alloc_irq_vectors(pdev, 8, 32, PCI_IRQ_MSI);

This means the requested minimum number of MSI vectors (8) could not be allocated. The call fails for any vector count I request except one: if I request a single vector, it works fine, MSI interrupts come in from the FPGA, and lspci then shows the correct vector configuration.
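For context, the call above sits in my driver's probe path, roughly like this (a sketch, not my full driver or a buildable module; my_irq_handler, my_dev and the device name are placeholders):

```c
#include <linux/pci.h>
#include <linux/interrupt.h>

/* Sketch of the MSI setup in the driver's probe callback. */
static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        int nvec, i, err;

        err = pci_enable_device(pdev);
        if (err)
                return err;

        /* Ask for between 8 and 32 MSI vectors; returns the number
         * actually allocated, or a negative errno (-ENOSPC here when
         * fewer than the minimum of 8 are available). */
        nvec = pci_alloc_irq_vectors(pdev, 8, 32, PCI_IRQ_MSI);
        if (nvec < 0)
                return nvec;

        /* Hook a handler on each allocated vector. */
        for (i = 0; i < nvec; i++) {
                err = request_irq(pci_irq_vector(pdev, i), my_irq_handler,
                                  0, "my-fpga-ep", &my_dev);
                if (err)
                        return err;
        }
        return 0;
}
```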

I've had a look through the "Xilinx Answer 58495 – PCI-Express Interrupt Debugging Guide" document, but I haven't found anything that could help me here.

Do you have any advice on how to proceed in getting multiple MSI vectors to work?

Thank you

 

2 Replies
Moderator
Registered: ‎02-16-2010

Re: Can't enable multiple MSI vectors on the host system with the AXI Memory Mapped to PCI Express core as endpoint


Hi @isaacjt ,

You will need to set the "Multiple Message Enable" bits in the Message Control register, along with the "MSI Enable" bit, in the endpoint's configuration space. Have you done this?

------------------------------------------------------------------------------
Don't forget to reply, give kudos, and accept as solution
------------------------------------------------------------------------------
Contributor (Accepted Solution)
Registered: ‎05-14-2018

Re: Can't enable multiple MSI vectors on the host system with the AXI Memory Mapped to PCI Express core as endpoint


Hi @venkata,

I have just discovered the problem: I had to enable CONFIG_IRQ_REMAP in the Linux kernel configuration. With this flag set, I can now allocate 32 MSI vectors.
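For anyone hitting the same thing: as far as I can tell, on x86 the kernel only hands out more than one MSI vector per device when interrupt remapping is available, so without it pci_alloc_irq_vectors() with a minimum above 1 fails with -ENOSPC. The relevant options in my host kernel config were (CONFIG_INTEL_IOMMU assumes an Intel host like mine):

```
CONFIG_INTEL_IOMMU=y   # VT-d driver; interrupt remapping builds on the IOMMU support
CONFIG_IRQ_REMAP=y     # x86 interrupt remapping, needed for multi-vector MSI
```

Interrupt remapping also has to be present and enabled in hardware/firmware (VT-d switched on in the BIOS, and no intremap=off on the kernel command line).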

Do you think this could be added to the documentation, or at least the AR 58495 debugging guide? I'm sure it would save other people a lot of time.

Thank you,

Isaac
