Contributor
792 Views
Registered: ‎08-25-2014

Disabled M_AXI_GP0/GP1 and interconnect issues

We have a design that uses M_AXI_GP0 along with some of the S_AXI_HP and S_AXI_ACP ports. We have M_AXI_GP1 disabled in the Zynq Processor instance. We ran into an issue where the S_AXI_ACP writes were blocked (S_AXI_ACP_WREADY failed to assert) after 17 write beats (a 16-beat burst write followed by a single beat of the next 16-beat burst). That is, S_AXI_ACP_WREADY was asserted for the first 17 write beats and refused to assert after that.

Now, the ARM accesses logic in the PL via the M_AXI_GP0 port, which is mapped to the 0x4000_0000 to 0x7FFF_FFFF range (similarly, GP1 is mapped to 0x8000_0000 to 0xBFFF_FFFF).

We think it is related to the fact that the ARM code was incorrectly trying to read via GP1. Looking at the generated HDL, with M_AXI_GP1 disabled, all of the GP1 AXI inputs are tied off to ground. So any attempt to generate a transaction on GP1 will never be acknowledged, and it appears this somehow locks up the interconnect inside the PS7. In fact, connecting with xmd or xsdb indicates it is unable to halt the core.

If the master GP0 or GP1 interfaces are disabled, is there nothing on the ARM side to abort transactions on the GP0/GP1 AXI interfaces? If we want to ensure this deadlock never occurs, do we need some sort of PL-side bus termination?

4 Replies
Xilinx Employee
767 Views
Registered: ‎02-01-2008

Re: Disabled M_AXI_GP0/GP1 and interconnect issues

HP ports only have access to OCM and DDR. But the ACP port has access to anything the SCU can access, which includes the GP1 address range.

So make sure the ACP and CPUs are not trying to access the GP1 addr range. I believe it is possible to prevent the ACP from accessing that addr range. The PS config wizard contains an option to enable fine-grain addressing. Once enabled, the address tab will list all of the PS internal peripherals and you can selectively exclude each peripheral addr range. And I would think that the internal peripherals listed would include the GP1 addr range. By excluding an address range, the AXI interconnect would include an address 'blocker' (called an MMU, not to be confused with the CPU's MMU).

But if the CPU is trying to access the GP1 addr range, you will have to adjust your software to prevent that access. Depending on what software you are using, the CPU's MMU can be configured to prevent accesses to the GP1 addr range, and the MMU would look after terminating the cycle. Otherwise, Zynq-7000 does not include bus timeout logic in the PS.
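
Something along these lines (an untested sketch, assuming the standalone BSP's Xil_SetTlbAttributes() helper from xil_mmu.h; a Linux or RTOS design would do the equivalent in its own page tables) would mark the whole GP1 window as faulting, so a stray CPU access takes a data abort instead of issuing an AXI transaction that is never acknowledged:

#include "xil_types.h"
#include "xil_mmu.h"

/*
 * Hypothetical helper (name and granularity chosen for illustration):
 * mark the M_AXI_GP1 window (0x8000_0000 - 0xBFFF_FFFF) as invalid in the
 * Cortex-A9 translation table, one 1 MB section at a time, so a stray
 * access faults in the MMU instead of going out on the bus.
 */
static void block_gp1_window(void)
{
    UINTPTR addr;

    for (addr = 0x80000000U; addr < 0xC0000000U; addr += 0x00100000U) {
        /* Attribute 0x0 leaves the section descriptor as a translation fault. */
        Xil_SetTlbAttributes(addr, 0x0U);
    }
}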

Now, as you mentioned, enabling GP1 and adding logic that terminates the AXI transaction with the appropriate AXI RESP value is another alternative.

I believe the ACP port supports up to 16 outstanding transactions. So that helps explain why you see the issue on the 17th access.

Contributor
749 Views
Registered: ‎08-25-2014

Re: Disabled M_AXI_GP0/GP1 and interconnect issues

@johnmcd wrote:

HP ports only have access to OCM and DDR. But the ACP port has access to anything the SCU can access, which includes the GP1 address range.

So make sure the ACP and CPUs are not trying to access the GP1 addr range. I believe it is possible to prevent the ACP from accessing that addr range. The PS config wizard contains an option to enable fine-grain addressing. Once enabled, the address tab will list all of the PS internal peripherals and you can selectively exclude each peripheral addr range. And I would think that the internal peripherals listed would include the GP1 addr range. By excluding an address range, the AXI interconnect would include an address 'blocker' (called an MMU, not to be confused with the CPU's MMU).

But if the CPU is trying to access the GP1 addr range, you will have to adjust your software to prevent that access. Depending on what software you are using, the CPU's MMU can be configured to prevent accesses to the GP1 addr range, and the MMU would look after terminating the cycle. Otherwise, Zynq-7000 does not include bus timeout logic in the PS.

Now, as you mentioned, enabling GP1 and adding logic that terminates the AXI transaction with the appropriate AXI RESP value is another alternative.

I believe the ACP port supports up to 16 outstanding transactions. So that helps explain why you see the issue on the 17th access.


Let me clarify a bit.  We are using the ACP and HP ports to access DDR.  We are not generating transactions on the ACP that loop back through to the GP0 or GP1 ports.

We found a defect in the firmware that was trying to access GP1 (which is disabled) instead of GP0 (which has our peripherals).

What concerns us is that it appears the entire AXI interconnect inside the PS7 "locks up" if the APU tries to access any disabled master port to the PL (e.g., GP1 in our case). This is a catastrophic failure for which there is no recovery. Is this accurate? So it seems that the only ways to prevent this condition are 1) use the CPU's MMU to block access to the GP1 range, or 2) include some sort of AXI termination logic on GP1 in the fabric.

Xilinx Employee
738 Views
Registered: ‎02-01-2008

Re: Disabled M_AXI_GP0/GP1 and interconnect issues

I understand your concern. The AMBA AXI spec does not specify bus timeouts. For you, a watchdog could help and provide protection against a hung bus.

Zynq UltraScale+ PS does include AXI Timeout blocks (ATB) to prevent these types of occurrences.
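
As a rough illustration of the watchdog idea, here is a sketch using the standalone XScuWdt driver for the Cortex-A9 private watchdog. The device-ID macro and timeout value are assumptions; check xparameters.h and the CPU_3x2x clock rate for your design. In watchdog mode the WDT asserts a reset request when it expires, so a CPU wedged on a non-terminating access leads to a reset rather than a permanent hang:

#include "xparameters.h"
#include "xstatus.h"
#include "xscuwdt.h"

static XScuWdt WdtInstance;

/*
 * Sketch: arm the Cortex-A9 private watchdog (SCU WDT) in watchdog mode.
 * If the CPU wedges on a non-terminating AXI access it stops kicking the
 * watchdog, and the WDT raises its reset request instead of hanging forever.
 * XPAR_SCUWDT_0_DEVICE_ID is assumed; the exact macro name is in xparameters.h.
 */
int arm_private_watchdog(u32 timeout_ticks)
{
    XScuWdt_Config *cfg = XScuWdt_LookupConfig(XPAR_SCUWDT_0_DEVICE_ID);
    if (cfg == NULL) {
        return XST_FAILURE;
    }
    if (XScuWdt_CfgInitialize(&WdtInstance, cfg, cfg->BaseAddr) != XST_SUCCESS) {
        return XST_FAILURE;
    }

    XScuWdt_SetWdMode(&WdtInstance);              /* watchdog (reset) mode, not timer mode */
    XScuWdt_LoadWdt(&WdtInstance, timeout_ticks); /* timeout in WDT clock ticks */
    XScuWdt_Start(&WdtInstance);
    return XST_SUCCESS;
}

/* Kick the watchdog periodically from code paths known to be healthy. */
void kick_private_watchdog(void)
{
    XScuWdt_RestartWdt(&WdtInstance);
}

Whether the resulting reset actually recovers the system depends on how the reset subsystem is configured, so treat this as a mitigation rather than a fix.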

Contributor
729 Views
Registered: ‎08-25-2014

Re: Disabled M_AXI_GP0/GP1 and interconnect issues

@johnmcd wrote:

I understand your concern. The AMBA AXI spec does not specify bus timeouts. For you, a watchdog could help and provide protection against a hung bus.

Zynq UltraScale+ PS does include AXI Timeout blocks (ATB) to prevent these types of occurrences.


Right, I understand the lack of timeouts on AXI. We just didn't expect a firmware access to GP1 to cause the ACP write port to lock up. I just wanted to confirm that the behavior we are seeing is actually the root cause of the problem. That is, when the APU issues an AXI transaction to an AXI interface that never terminates the transaction, it blocks other masters.

We are implementing the MMU changes in the firmware to prevent accesses to GP1. We are also using the AXI MMU to prevent transactions outside of the DDR region on the ACP interface.

A Zynq US+ would be nice, if only we could change parts at this point. Thanks for the info.
