08-12-2017 05:43 AM
I have successfully brought up a 10GbE interface linking my custom board and a PC. For a 40G link, following the PG068 product guide, I instantiated the master core described there plus three slave cores (kept in synchronization), with the required shared-logic configuration; a hardware ILA shows identical results on all four lanes. On the PC side, however, the result is different: only the first lane receives data, exactly as in my 10G experiment, and there is no traffic on the other lanes regardless of properties such as the MAC address. My questions: is this method of using four independent lanes correct and acceptable, and if so, what is my problem? If it is not correct, what is the right configuration for this link? And where can I find information about 40G and 100G PC interfaces, the related connections, and block diagrams?
08-16-2017 08:55 PM
How is the link connected to your PC?
Is this a 40G link or 4x 10G links?
Refer to the links below for details about the 40G and 100G cores.
08-18-2017 07:43 AM
As explained, by customizing one 10G PCS/PMA IP with shared logic in the core and three more 10G PCS/PMA IPs with shared logic in the example design, you get 4x 10G Ethernet lanes, so you need four individual Ethernet ports. The lanes are fully independent on the data path, and the data will not be aggregated. For a single 40G Ethernet link, you need to use the 40G/50G Ethernet Subsystem.
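To see why four independent 10G MACs cannot form a 40G link, note that 40GBASE-R (IEEE 802.3ba) does not send separate frames per lane: a single 40G PCS stripes 66-bit blocks round-robin across the four lanes and inserts periodic alignment markers so the receiver can reassemble one stream. The toy Python sketch below illustrates only the round-robin distribution idea (alignment markers and 64b/66b encoding are omitted); it is a conceptual illustration, not the Xilinx implementation.

```python
# Toy illustration of 40GBASE-R multi-lane distribution (MLD):
# one block stream is striped round-robin over 4 lanes, and the
# receiver must interleave the lanes back in order to recover it.
# Real 40GBASE-R also inserts alignment markers and 64b/66b coding.

def mld_distribute(blocks, num_lanes=4):
    """Stripe a single sequence of blocks round-robin across lanes."""
    lanes = [[] for _ in range(num_lanes)]
    for i, blk in enumerate(blocks):
        lanes[i % num_lanes].append(blk)
    return lanes

def mld_reassemble(lanes):
    """Interleave aligned lanes back into the original block stream."""
    out = []
    for group in zip(*lanes):
        out.extend(group)
    # pick up any leftover blocks on the earlier lanes
    longest = max(len(l) for l in lanes)
    for i in range(len(lanes)):
        out.extend(lanes[i][longest - 1:] if len(lanes[i]) == longest
                   and longest > min(len(l) for l in lanes) else [])
    return out[:sum(len(l) for l in lanes)]

blocks = list(range(12))           # 12 blocks of one 40G stream
lanes = mld_distribute(blocks)     # lane 0 gets 0,4,8; lane 1 gets 1,5,9; ...
print(lanes)
print(mld_reassemble(lanes))       # original order recovered
```

The key point for this thread: four independent 10G PCS/PMA cores each carry whole, separate frames per lane, so a 40G link partner (which expects striped blocks plus alignment markers) will not reassemble them into one link.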
08-19-2017 03:42 AM
My NIC link is a 40G link, verified against another NIC that shows a 40G link in the OS; and when I bring up a single active 10G link on one lane of the QSFP connector, the result is a verified 10G link. So my current design consists of 4x 10G links, not a 40G connection. Is this approach wrong altogether, or is there another method for this purpose?
08-19-2017 03:53 AM
So is there any way to aggregate the traffic of these 4 lanes at all? I also have an unsolved problem with the XAUI core, which failed on the same hardware and same data path where the 10G PCS/PMA core succeeded. From that experience, can one conclude that my custom hardware or NIC cannot support 4 lanes managed in a XAUI structure or similar structures? Of course, after reading your comment, I see I need to study this further too. Can you recommend some useful texts about the existing 10G ecosystem?
08-20-2017 05:13 AM
If your NIC has 4x 10G ports, then you can use four 10G cores and should be able to link them.
However, if it is a 40G link, then you need to use the 40G core.
Can you give more details on the NIC or link partner in use?
08-20-2017 10:13 PM
My NIC has a QSFP port that supports 56G, which implies it can support 10G and 40G as well. So should I conclude that, with a 4-lane QSFP connector and four independent 10G links, I cannot have a 40G link except by using a NIC with 4x 10G ports and a breakout cable? And if that is true, the physical link is capable of carrying the same traffic as a 40G link, so does that mean I could restructure my traffic into 40G format (which I do not yet know how to do) and then have a 40G link? If it is possible, please help me with it; if not, please tell me plainly that a 40G connection is impossible with the limitations mentioned!
08-24-2017 11:58 PM
1. Try GT near-end loopback and make sure your 4x 10G design works in loopback.
2. On the interconnection failure, check the IP core status signals, such as block_lock, status_vector, and the XGMII data, via ILA. What is the result?
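The debugging steps above can be scripted once the ILA capture is exported. The sketch below summarizes per-lane lock status from exported status words. Note that the bit position used for block_lock is a hypothetical placeholder, as is the capture data: check the status_vector bit definitions in the 10G PCS/PMA product guide (PG068) for your core version before relying on any particular bit.

```python
# Sketch: summarize per-lane block_lock from ILA-exported status words.
# BLOCK_LOCK_BIT is a HYPOTHETICAL placeholder; verify the actual
# status_vector bit assignments against PG068 for your core version.

BLOCK_LOCK_BIT = 0  # assumed position of block_lock in the exported word

def lane_locked(samples, bit=BLOCK_LOCK_BIT, min_stable=16):
    """True if the status bit is continuously asserted over the last
    `min_stable` captured samples (i.e. lock is stable, not glitching)."""
    tail = samples[-min_stable:]
    return len(tail) == min_stable and all((s >> bit) & 1 for s in tail)

def summarize(lanes):
    """lanes: dict of lane_index -> list of captured status words."""
    return {idx: lane_locked(samples) for idx, samples in lanes.items()}

# Fabricated example capture: lane 1 loses lock near the end.
captures = {
    0: [0x1] * 32,
    1: [0x1] * 24 + [0x0] * 8,
    2: [0x1] * 32,
    3: [0x1] * 32,
}
print(summarize(captures))
```

A per-lane summary like this makes it easy to spot whether all four lanes achieve and hold lock, or whether one lane (as in the original post, where only the first lane passed traffic) behaves differently from the rest.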