10-31-2011 04:04 PM - edited 10-31-2011 04:06 PM
I'm using the v6_pcie_v2_4 core in ISE 13.2 and simulating with Modelsim DE 10.0. I've implemented legacy interrupts according to the documentation in v6_pcie_ug_517.pdf (v5.1), page 180. Oddly, the generated core uses active-high signals instead of active-low as shown in this document, but that's easy enough to work with.
What I'm having trouble with is that the core generates an extra, unexpected pulse on cfg_interrupt_rdy_n after the initial pulse that indicates the core has accepted the interrupt request. Please see the attached screenshot, as it's the easiest way to describe what I'm seeing. This extra pulse triggers a fatal error that ends the simulation:
ERROR: cfg_interrupt_rdy_n asserted w/o cfg_interrupt_n.
The error is obviously accurate. I don't understand how to fix it though, since the documentation suggests different behavior from the core. Am I setting something up incorrectly?
Thanks in advance for any help!
10-31-2011 06:02 PM
It's hard to tell from the screenshot since there is no clock, but cfg_interrupt needs to be deasserted on the cycle immediately after cfg_interrupt_rdy asserts. It looks like it is being held one cycle too many, which could be the issue.
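A minimal sketch of that timing in user logic (assuming the active-high ports of the generated wrapper; irq_request, user_clk and user_reset are hypothetical user-side names, not core ports):

```vhdl
-- Sketch only: request an interrupt, then drop cfg_interrupt on the
-- clock cycle immediately after the core pulses cfg_interrupt_rdy.
process (user_clk)
begin
  if rising_edge(user_clk) then
    if user_reset = '1' then
      cfg_interrupt <= '0';
    elsif cfg_interrupt = '0' and irq_request = '1' then
      cfg_interrupt <= '1';           -- new assert/deassert request
    elsif cfg_interrupt_rdy = '1' then
      cfg_interrupt <= '0';           -- accepted: release on next cycle
    end if;
  end if;
end process;
```

Because cfg_interrupt is a registered output here, it falls exactly one cycle after cfg_interrupt_rdy asserts, which avoids holding the request an extra cycle.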
Also, if you are using v2.4 of the core, which is the AXI interface version, then UG671 is the appropriate user guide. See page 177 of UG671.
If that's not it, can you add the clock and include a wider view, and/or attach the .wlf file?
11-01-2011 04:38 PM
You were right! That fixed it, and everything is running beautifully now. I also grabbed the appropriate documentation you pointed me to. Thanks so much for your help!
11-27-2011 11:20 PM
Can you also confirm that you receive the write messages in the rx.dat file when the interrupt is asserted and again when it is de-asserted?
I'm using a Spartan-6 endpoint (EP) implementation on an SP605 board.
When I aligned cfg_interrupt_rdy and cfg_interrupt_assert, the core responds as documented in UG672, pages 106-109.
However, I don't see a memory write in the rx.dat file.
The strange thing is that I do receive them when I configure the EP to use "normal" mode (cfg_msi_enable is then high), both when I assert and when I deassert the IRQ.
11-28-2011 06:09 AM
I am not sure I am following your problem exactly, but maybe this will help...
If you are using legacy interrupts, then cfg_msienable should not be asserted and you will see "messages" in the rx.dat file: you should see an assert message and a deassert message.
If you are using MSI interrupts, then cfg_msienable would be asserted, and you would see a "memory write" TLP in the rx.dat file. Also, when using MSI, the bus master enable bit in the command register (bit 2 at offset 0x04 in config space) must be 1 to allow the memory write TLP to be sent.
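A rough sketch of how the two modes look from the endpoint's user logic (signal spellings follow this thread; irq_level and legacy_mode are hypothetical names introduced here):

```vhdl
-- The core chooses the TLP type from the MSI enable bit:
--   cfg_msienable = '0' -> Assert_INTx / Deassert_INTx messages
--   cfg_msienable = '1' -> Memory Write TLP (needs Bus Master Enable)
-- cfg_interrupt_assert matters only in legacy mode: it selects whether
-- the current request is an assert ('1') or a deassert ('0').
legacy_mode          <= not cfg_msienable;
cfg_interrupt_assert <= irq_level when legacy_mode = '1' else '0';
```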
11-28-2011 01:46 PM
I think I know what your issue is. If you want to see the legacy interrupts on the TRN RX interface of the Root Port model in simulation, and thus have them written to the rx.dat file, there is a parameter on the Root Port block that needs to be modified.
Change ENABLE_MSG_ROUTE from 11'h200 to 11'h208.
This will then cause the legacy interrupt to show up on the TRN RX interface.
Also, note that either way it will show up on the cfg_msg_* interface.
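If the Root Port model is instantiated from a VHDL testbench, the change is a generic override; a sketch only (entity and generic names as used elsewhere in this thread, other generics and ports elided):

```vhdl
-- Sketch: set bit 3 of ENABLE_MSG_ROUTE so Assert_INTx/Deassert_INTx
-- messages are routed to the TRN RX interface (and thus into rx.dat).
rport : pcie_2_0_rport
  generic map (
    ENABLE_MSG_ROUTE => X"208"   -- default X"200"; bit 3 added
    -- ... other generics left at their defaults ...
  )
  port map (
    -- ... ports as in the generated testbench ...
  );
```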
11-29-2011 05:21 AM
Where can I find the address map where the ENABLE_MSG_ROUTE register is defined?
And how do I read and write to the rootport in VHDL?
I assume with the PROC_READ_CFG_DW and PROC_WRITE_CFG_DW.
This is what I found in the testbench:
-- Direct Root Port to allow upstream traffic by enabling Mem, I/O and
-- BusMstr in the command register
writeNowToScreen(String'("Enable in Root Port bus master, mem- and io addr decode, by setting bits [2:0] in config addr X'001'."));
PROC_READ_CFG_DW ("0000000001", cfg_rdwr_int);
PROC_WRITE_CFG_DW ("0000000001", X"00000007", "1110", cfg_rdwr_int);
PROC_READ_CFG_DW ("0000000001", cfg_rdwr_int);
11-29-2011 11:34 PM - edited 11-29-2011 11:35 PM
I've changed the generic in the pcie_2_0_rport to X"208" (I'm using VHDL).
Still no message received in rx.dat.
See the picture below (zoom in with the browser to see more detail)
12-01-2011 01:26 AM
What I'm trying to say with the picture above is:
- I've changed ENABLE_MSG_ROUTE to X"208".
- In the log file you can see the last message around 75000 ns.
- In the large Modelsim view you can see the IRQ assert and deassert are generated after 80000 ns.
- The lower view shows more detail of the assert and deassert handshake, which is compliant with the user guide, as highlighted in the message John provided in this thread.