jgutel (Observer)

PCS/PMA SGMII Errors when using Shared Logic in Example Design


I have a design that uses the PCS/PMA core in SGMII mode to talk between two Zynq-7045s on the same PCB. There is an additional PCS/PMA core in SGMII mode intended to go off-board to communicate with another system. While testing I am using the SGMII lanes between the two Zynqs, swapping between pma_0 (eth0 in Linux) and pma_1 (eth1 in Linux) by changing which core is connected to the pins in the constraints.

What I'm seeing are drastically different results between the two cores. eth0 (the core with the shared logic inside the IP) performs considerably better than eth1. With eth0 I get iperf3 performance with no bad transmissions at 10/Full and 100/Full, but start receiving some RX errors (overruns and frame) at 1000/Full (switching speeds with ethtool).

With eth1 I get errors at every setting; iperf3 essentially stops receiving valid data after the first few packets cross the interface and reports 0 Mbps bandwidth.

Each Zynq has the same firmware design. The gtrefclk is a 125 MHz, 50 ppm external clock. pma_0 and pma_1 are connected to the GMII_ETHERNET_0/1 and MDIO_ETHERNET_0/1 EMIO nets on the Zynq system block for control.

The PCS/PMA cores are attached as shown below and configured as follows:

(block design image: the two PCS/PMA cores connected to the Zynq PS GMII/MDIO EMIO nets)

  • Ethernet MAC
    • Zynq PS Gigabit Ethernet Controller
  • Standard
    • SGMII
    • Additional transceiver control and status ports
  • Core Functionality
    • Physical Interface : Device Specific Transceiver
    • Rx Gmii Clk Src : TXOUTCLK
    • Management Options : Autonegotiation
  • SGMII Capabilities
    • 10/100/1000 Mb/s (clk tolerance compliant with Ethernet specification)
  • SGMII Operation Mode
    • MAC Mode (SGMII PHY Mode left unchecked)
  • Shared Logic
    • pma_0 : Include Shared Logic in Core
    • pma_1 : Include Shared Logic in Example Design
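
For context, with this split pma_1 depends on clocks and resets generated inside pma_0. Roughly, the shared-logic wiring between the two cores looks like this (abridged sketch, not the exact netlist; port names per the PG047 support-level wrappers, and GMII/MDIO/serial pins omitted):

// pma_0: Shared Logic in Core - generates the shared clocks/resets once
gig_ethernet_pcs_pma_0 pcs_pma_0 (
    .gtrefclk_p        (gtrefclk_p),     // external 125 MHz MGT refclk pair
    .gtrefclk_n        (gtrefclk_n),
    .gtrefclk_out      (gtrefclk),       // shared outputs produced here
    .gtrefclk_bufg_out (gtrefclk_bufg),
    .userclk_out       (userclk),
    .userclk2_out      (userclk2),
    .rxuserclk_out     (rxuserclk),
    .rxuserclk2_out    (rxuserclk2),
    .mmcm_locked_out   (mmcm_locked),
    .pma_reset_out     (pma_reset)
    /* GMII, MDIO, SGMII serial pins, status vectors, etc. omitted */
);

// pma_1: Shared Logic in Example Design - consumes pma_0's shared signals
gig_ethernet_pcs_pma_1 pcs_pma_1 (
    .gtrefclk          (gtrefclk),
    .gtrefclk_bufg     (gtrefclk_bufg),
    .userclk           (userclk),
    .userclk2          (userclk2),
    .rxuserclk         (rxuserclk),
    .rxuserclk2        (rxuserclk2),
    .mmcm_locked       (mmcm_locked),
    .pma_reset         (pma_reset)
    /* GMII, MDIO, SGMII serial pins, status vectors, etc. omitted */
);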

In Linux running on the Zynqs, I see the following at boot:

[    5.532399] macb e000b000.ethernet eth0: Cadence GEM rev 0x00020118 at 0xe000b000 irq 29 (00:0a:35:00:01:22)
[    5.537315] Xilinx PCS/PMA PHY e000b000.etherne:00: attached PHY driver [Xilinx PCS/PMA PHY] (mii_bus:phy_addr=e000b000.etherne:00, irq=)
[    5.544410] macb e000c000.ethernet: invalid hw address, using random
[    5.552339] libphy: MACB_mii_bus: probed
[    5.962373] macb e000c000.ethernet eth1: Cadence GEM rev 0x00020118 at 0xe000c000 irq 30 (46:56:05:5e:e6:b6)
[    5.967293] Generic PHY e000c000.etherne:00: attached PHY driver [Generic PHY] (mii_bus:phy_addr=e000c000.etherne:00, irq=-1)

The device tree is configured as:

/* https://github.com/Xilinx/linux-xlnx/blob/master/Documentation/devicetree/bindings/net/xilinx-phy.txt */
/* https://forums.xilinx.com/t5/Embedded-Linux/DTS-for-MAC-PHY-for-PCS-PMA-SGMII/td-p/972772 */
&gem0 {
    phy-mode = "gmii";
    status = "okay";
    xlnx,ptp-enet-clock = <0x69f6bcb>;
    /* local-mac-address = [00 0a 35 00 00 00]; */
};

&gem1 {
    phy-mode = "gmii";
    status = "okay";
    xlnx,ptp-enet-clock = <0x69f6bcb>;
    /* local-mac-address = [00 0a 35 00 00 01]; */
};

I tried with and without the following attributes; it made no difference:

phy-mode = "sgmii";

xlnx,phy-type = <0x4>;
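
(For clarity, these were applied inside the existing &gem0/&gem1 nodes, e.g. for gem0; shown here only to make the placement explicit:)

&gem0 {
    phy-mode = "sgmii";    /* instead of "gmii" */
    status = "okay";
    xlnx,phy-type = <0x4>; /* 4 = SGMII per the xilinx-phy.txt binding */
};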

 

 

 

jgutel (Observer)

Testing Update.

I connected eth0 on one Zynq to eth1 on the other Zynq, configured for 1000/Full mode.

Running the iperf3 server on eth0 and the client on eth1: functionality is about the same as when I use eth0 for both sides, which is to say quite good. I saw about 0.06% RX packet errors on eth0.

Running the iperf3 server on eth1 and the client on eth0: terrible performance, many retries, many errors.

This suggests to me that transmit on eth1 is okay but receive is having serious problems.

All tests use the same PCB traces for the SGMII connections between the two Zynqs.

claytonr (Xilinx Employee) [Accepted Solution]

Hey @jgutel ,

Hmm, this is an interesting problem - thank you for the detailed information, let's see if we can get to the bottom of it.

 

The first thing I notice is that you don't have PHY nodes defined in your DT. Taking a peek at your boot log, we can see that the appropriate driver for eth1 is not getting loaded (but it is for eth0). Note how the second interface loads the generic PHY driver:

[    5.532399] macb e000b000.ethernet eth0: Cadence GEM rev 0x00020118 at 0xe000b000 irq 29 (00:0a:35:00:01:22)
[    5.537315] Xilinx PCS/PMA PHY e000b000.etherne:00: attached PHY driver [Xilinx PCS/PMA PHY] (mii_bus:phy_addr=e000b000.etherne:00, irq=)
[    5.544410] macb e000c000.ethernet: invalid hw address, using random
[    5.552339] libphy: MACB_mii_bus: probed
[    5.962373] macb e000c000.ethernet eth1: Cadence GEM rev 0x00020118 at 0xe000c000 irq 30 (46:56:05:5e:e6:b6)
[    5.967293] Generic PHY e000c000.etherne:00: attached PHY driver [Generic PHY] (mii_bus:phy_addr=e000c000.etherne:00, irq=-1)

 

Let's get this fixed first and see where that gets us. I'd suggest the following as your device tree addition in system-user.dtsi:

&gem0 {
    phy-mode = "gmii";
    status = "okay";
    phy-handle = <&phy9_0>;
    phy9_0: phy@9 {
        reg = <0x9>;
        xlnx,phy-type = <0x4>;
    };
};

&gem1 {
    phy-mode = "gmii";
    status = "okay";
    phy-handle = <&phy9_1>;
    /* The label (phy9_1 here) must be unique across the tree,
       but reg can repeat because each GEM has its own MDIO bus. */
    phy9_1: phy@9 {
        reg = <0x9>;
        xlnx,phy-type = <0x4>;
    };
};

However, you'll need to replace the PHY nodes with the address assigned to the PCS/PMA IP in Vivado (there is a dialog box with the MDIO address). I believe the default address for the PCS/PMA IP is 0x9, but it's worth double-checking in your design.

For example, if the address were 0xA instead:

phyA: phy@a {
    reg = <0xa>;
    xlnx,phy-type = <0x4>;
};

Hopefully this helps get things on the right track!

 

Thanks,

Clayton


jgutel (Observer)

Hi @claytonr,

We are using an older version of the tools (2017.4), and in that version it doesn't look like the MDIO address is configurable; I checked the PCS/PMA IP options as well as the Vivado system block. Without being able to set the PHY reg to something unique, the dtc tool fails to compile. If this is a must-change, then we can look at updating the build system.

However, we did make good progress. Previously only one interface could be connected at a time, eth0 or eth1. We modified our test cables so that both can be tested simultaneously, and as soon as we did that, both interfaces functioned identically, averaging ~500 Mbps with iperf on the Zynqs.

What concerns me is why that made a difference. One of those interfaces (eth1) is intended to go to a different unit, and each Zynq should be able to run independently. If one Zynq goes down (bringing down one of the Ethernet interfaces, eth0), will that impact the operation of the second interface (eth1)?

I was pretty surprised that this "fixed" it. Thanks for your time.

claytonr (Xilinx Employee)

Hey @jgutel,

It looks like you're right; I wasn't sure off the top of my head, so I had to go back and check 2017.4. We can still snag the PHY addresses from U-Boot using the mii tool, however.

 

Try running:

mii info

 

from the U-Boot command line. This will query every PHY address on the active device's MDIO bus and print out some basic info for every PHY that responds. In your case, it looks like the two PCS/PMA IPs are on separate buses. You can switch the active device by using:

mii device

to list the available devices, and 

mii device <device name>

to switch to the device given by <device name>.
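
For example, a session would look something like this (the device names are illustrative; use whatever mii device prints on your boards):

=> mii device                      # list the MDIO buses and the current device
=> mii device ethernet@e000c000    # hypothetical name: switch to the gem1 bus
=> mii info                        # print info for every PHY address that responds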

 

Once you have both of the addresses (they'll be the same, but it's good to confirm that MDIO is behaving as expected), I'd update your device tree to explicitly call out the PHY nodes as I listed above.

 

Let me know what you see with the mii tool results and how it goes with the device tree changes. Hopefully, once we are loading the PCS/PMA driver properly instead of the generic PHY driver, some of the strange behavior will go away.
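
Once you've rebooted with the updated device tree, something like this from Linux should confirm the right driver attached (standard commands, nothing board-specific):

# Both interfaces should now report the Xilinx PCS/PMA PHY driver:
dmesg | grep "attached PHY driver"
# And link state / negotiated speed can be checked per interface:
ethtool eth0
ethtool eth1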

 

Thanks,

Clayton