barrygmoss
Contributor
Registered: 03-20-2018

Rx AXIS clocking for 100G Ethernet Subsystem

I've run into a confusing clocking issue with the Rx AXIS interface on the 100G Ethernet Subsystem.

First, the background. I'm putting a cmac_usplus core into an XCVU9P using Vivado 2019.2 (I'm constrained to 2019.2 because later in the design I'll be using some third-party IP that has problems with the 2020 versions of Vivado). I've connected the Rx AXI-Stream bus to a minimal amount of logic that swaps the source and destination MAC addresses, buffers frames in an AXIS FIFO (Xilinx IP), and then sends them back out the Tx AXIS port. Externally, the Ethernet signals are connected to a local PC with a 100G Ethernet card, which lets me send packets and check the looped-back data.
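
In case it helps, here's roughly what the loopback stage looks like. This is a simplified sketch, not my exact RTL: the signal names follow the PG203 AXI-Stream naming, the module name is made up, and I've left out the FIFO hookup and the Tx-side tready handling.

// Simplified MAC-swap loopback stage. The Rx AXIS from the CMAC has no
// tready (the MAC can't be backpressured), so this just registers the bus.
module mac_swap_loopback (
    input  wire         clk,            // see the clocking question below
    input  wire         rst,
    // Rx AXIS from the CMAC
    input  wire [511:0] rx_axis_tdata,
    input  wire         rx_axis_tvalid,
    input  wire         rx_axis_tlast,
    input  wire [63:0]  rx_axis_tkeep,
    // Toward the AXIS FIFO / Tx AXIS (tready handling omitted)
    output reg  [511:0] tx_axis_tdata,
    output reg          tx_axis_tvalid,
    output reg          tx_axis_tlast,
    output reg  [63:0]  tx_axis_tkeep
);
    // Track start-of-frame so the swap only touches the first beat,
    // where the Ethernet header sits.
    reg first_beat = 1'b1;

    always @(posedge clk) begin
        if (rst) begin
            first_beat     <= 1'b1;
            tx_axis_tvalid <= 1'b0;
        end else begin
            tx_axis_tdata  <= rx_axis_tdata;
            tx_axis_tvalid <= rx_axis_tvalid;
            tx_axis_tlast  <= rx_axis_tlast;
            tx_axis_tkeep  <= rx_axis_tkeep;
            if (rx_axis_tvalid && first_beat) begin
                // Swap destination MAC (bytes 0-5) and source MAC (bytes 6-11).
                tx_axis_tdata[47:0]  <= rx_axis_tdata[95:48];
                tx_axis_tdata[95:48] <= rx_axis_tdata[47:0];
            end
            if (rx_axis_tvalid)
                first_beat <= rx_axis_tlast;
        end
    end
endmodule

The clk input here is the clock this whole question is about.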

When I tested the design, I found that sometimes we were getting garbage data coming back and other times good data. So I rebuilt the FPGA with some ILAs, and I noticed that my logic sampling the Rx AXIS interface seemed to be suffering from metastability, despite that logic being clocked directly by the gt_rxusrclk output, which PG203 describes as the RX User Clock.

I then generated an example design for the cmac core and noticed that it uses the gt_txusrclk2 output to clock the packet monitor. That seems contrary to both the naming convention of the cmac I/O and the brief pin description, but I changed my design to match. When I rebuilt, the metastability problem went away, so clearly this is the correct clock to use.

But I'm still concerned about why I need to use the TX clock to clock the receive logic and ignore the RX clock. Is there a good explanation for this? I hate having something in my design that works contrary to the documentation without a good explanation. I did search through PG203 and I'm not seeing a clear explanation for this anywhere.
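
For completeness, this is the hookup that now works, matching the example design. Port names are from PG203; the instance names and the mac_swap_loopback module are from my simplified sketch above, and most ports are elided.

wire gt_txusrclk2;
wire gt_rxusrclk;   // described as the RX User Clock, but unused below

cmac_usplus_0 u_cmac (
    .gt_txusrclk2 (gt_txusrclk2),
    .gt_rxusrclk  (gt_rxusrclk)
    // ... Rx/Tx AXIS, GT, and control ports elided ...
);

mac_swap_loopback u_loopback (
    .clk (gt_txusrclk2)    // Rx-side logic clocked by the TX user clock
    // ... AXIS ports elided ...
);

With clk driven from gt_rxusrclk instead, the ILA captures show the metastability; with gt_txusrclk2 they're clean.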
