
four lane PCS/PMA problem

Adventurer
Posts: 80
Registered: ‎08-26-2013

four lane PCS/PMA problem

Hi everybody,

I have successfully implemented a 10GbE interface linking my custom board and a PC. For a 40G link implementation, following the PG068 product guide, I instantiated the above-mentioned master core plus 3 slave cores (in synchronization) running simultaneously, with the required shared logic parts as described, and a hardware ILA shows the same results on all four lanes. The result on the PC is different, however: only the first lane receives data, just as in my 10G experiment, and there is no connection on the other lanes regardless of their properties, such as the MAC address. My questions: is this method of using 4 independent lanes correct and acceptable, and if yes, what is my problem? If this way is not correct, what is the right configuration for this link? And where can I find information about 40G and 100G PC interfaces and the related connections and block diagrams?
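
For reference, this is roughly how I check reception on the PC side: a minimal Python sketch, assuming a Linux host and hypothetical interface names eth0..eth3 (AF_PACKET raw sockets need root), that counts received frames per interface to show which lane is actually alive:

    import select
    import socket
    import time

    ETH_P_ALL = 0x0003  # capture frames of every protocol
    ifaces = ["eth0", "eth1", "eth2", "eth3"]  # hypothetical interface names

    socks = {}
    for name in ifaces:
        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
        s.bind((name, 0))  # capture only on this interface
        s.setblocking(False)
        socks[s] = name

    counts = dict.fromkeys(ifaces, 0)
    deadline = time.time() + 10  # capture window in seconds
    while time.time() < deadline:
        ready, _, _ = select.select(list(socks), [], [], 0.5)
        for s in ready:
            s.recv(65535)
            counts[socks[s]] += 1

    for name, n in counts.items():
        print(f"{name}: {n} frames received")

With this, only the interface mapped to the first lane ever shows a nonzero count.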

 


Xilinx Employee
Posts: 2,291
Registered: ‎02-16-2010

Re: four lane PCS/PMA problem

Are you aggregating data so that 4 independent 10G links operate as a 40G link?

If a single 40G design is required, you may choose to use the 40G Ethernet core.
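
If host-side aggregation of four independent 10G ports is what you are after, note that this is a bonding/LACP matter on the host, not something the PCS/PMA cores provide. A rough sketch, assuming a Linux host with iproute2 and hypothetical port names eth0..eth3 (this aggregates at the link layer; it does not create a true 40G PHY link):

    import subprocess

    def run(cmd):
        print("+", cmd)
        subprocess.run(cmd.split(), check=True)

    # Create an LACP (802.3ad) bond and enslave the four ports to it.
    run("ip link add bond0 type bond mode 802.3ad")
    for port in ["eth0", "eth1", "eth2", "eth3"]:  # hypothetical names
        run(f"ip link set {port} down")            # ports must be down to enslave
        run(f"ip link set {port} master bond0")
        run(f"ip link set {port} up")
    run("ip link set bond0 up")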
------------------------------------------------------------------------------
Don't forget to reply, give kudos, and accept as solution
------------------------------------------------------------------------------
Moderator
Posts: 3,220
Registered: ‎02-06-2013

Re: four lane PCS/PMA problem

Hi

 

How is the link connected to your PC?

 

Is this a 40G link or 4 10G links?

 

Refer to the links below for details about the 40G and 100G cores.

 

https://www.xilinx.com/support/documentation/ip_documentation/l_ethernet/v2_2/pg211-50g-ethernet.pdf

 

https://www.xilinx.com/support/documentation/ip_documentation/cmac/v2_2/pg165-cmac.pdf

Regards,

Satish

----------------------------------------------------------------------------------------------
Kindly note: please mark the answer as "Accept as solution" if the information provided is helpful.

Give kudos to a post which you think is helpful.
----------------------------------------------------------------------------------------------
Observer
Posts: 20
Registered: ‎09-08-2015

Re: four lane PCS/PMA problem

Hi,

 

As explained, by customizing one 10G PCS/PMA IP with shared logic in the core and the other three 10G PCS/PMA IPs with shared logic in the example design, you get 4x10G Ethernet lanes, so you need 4 individual Ethernet ports. The lanes are fully independent on the data path, and data will not be aggregated. For a single 40G Ethernet link, you need to use the 40G/50G Ethernet Subsystem.
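
To see this from the host side, here is a small sketch (assuming a Linux host; it only reads the standard /sys/class/net entries) that lists each network port with its own MAC address, so you can confirm the PC really exposes four independent ports:

    import os

    SYS_NET = "/sys/class/net"
    for name in sorted(os.listdir(SYS_NET)):
        with open(os.path.join(SYS_NET, name, "address")) as f:
            mac = f.read().strip()
        with open(os.path.join(SYS_NET, name, "operstate")) as f:
            state = f.read().strip()
        print(f"{name}: mac={mac} state={state}")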

 

Kind regards,

Pedram Kermani

Adventurer
Posts: 80
Registered: ‎08-26-2013

Re: four lane PCS/PMA problem

@yenigal,

My NIC link is a 40G link, verified with another NIC that shows a 40G link in the OS; and of course, when I bring up a single active 10G link on one lane of the QSFP connector, the result is a verified 10G link. So my setup consists of 4 10G links, not a 40G connection. Is this approach wrong altogether, or is there another method for this purpose?

 


Adventurer
Posts: 80
Registered: ‎08-26-2013

Re: four lane PCS/PMA problem

@pd.kermani,

So is there any way to aggregate the traffic of these 4 lanes at all? I have an unsolved problem with the XAUI core: it failed using the same hardware and the same data path, while the 10G PCS/PMA was successful. After this experience, may I conclude that my custom hardware or NIC cannot support 4 lanes managed in the XAUI structure or other similar structures? Of course, after reading your comment, I need to study this more, too. Can you propose some useful texts about the existing 10G ecosystem?

 

Best Regards

mhmontazeri61

Moderator
Posts: 3,220
Registered: ‎02-06-2013

Re: four lane PCS/PMA problem

Hi

 

If your NIC has 4 10G ports, then you can use 4 10G cores and should be able to link them.

 

However, if it is a 40G link, then you need to use the 40G core.

 

Can you give more details on the NIC or the link partner in use?

Regards,

Satish

----------------------------------------------------------------------------------------------
Kindly note: please mark the answer as "Accept as solution" if the information provided is helpful.

Give kudos to a post which you think is helpful.
----------------------------------------------------------------------------------------------
Adventurer
Posts: 80
Registered: ‎08-26-2013

Re: four lane PCS/PMA problem

@yenigal,

My NIC has a QSFP port that supports up to 56G, which implies that it can support 10G and 40G too. So may I conclude that with a 4-lane QSFP connector and four independent 10G links as described, I cannot have a 40G link, except by using a NIC with four 10G ports and a breakout cable? And if that is true, since the link is capable of carrying the same traffic as a 40G link, does this mean that if I reconstruct my traffic in the 40G format (which I know nothing about), I will then have a 40G link? If so, please help me with it if it is possible, or dissuade me from pursuing a 40G connection under the mentioned limitations!

 

Regards

mhmontazeri61

Xilinx Employee
Posts: 36
Registered: ‎05-01-2013

Re: four lane PCS/PMA problem

1. Try GT near-end loopback and make sure your 4x10G design works.

2. For the interconnection failure, check the IP core status signals, such as block_lock, status_vector, and the XGMII data, via ILA. What is the result?
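
For step 2, a small sketch of the kind of offline check I mean, assuming you export the ILA capture to CSV; the column names block_lock_0..block_lock_3 are hypothetical, so match them to your own probe names:

    import csv

    with open("ila_capture.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    for lane in range(4):
        col = f"block_lock_{lane}"  # hypothetical column name
        values = [int(row[col]) for row in rows]
        high = sum(values)
        status = "stable" if high == len(values) else "dropped"
        print(f"lane {lane}: block_lock {status} ({high}/{len(values)} samples high)")

If block_lock is stable on all four lanes but only one lane passes traffic to the PC, the problem is more likely on the link-partner or host side than in the transceivers.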