02-01-2018 04:06 PM
I am getting very poor performance results for TCP and UDP, with a lot of CRC errors. Xilinx advertises the following results for a 1500-byte MTU.
|MTU|TCP Tx|TCP Rx|UDP Tx|UDP Rx|
|1500|2.6 Gbps|1.4 Gbps|2 Gbps|800 Mbps|
The results I get are as follows. The TCP results are bad because of a lot of CRC errors, but even the UDP results are poor. Has anyone else measured the actual performance? If any results are available for jumbo frames, I would appreciate it if you could share those too.
TCP TX: 161 Mbits/sec
TCP RX: 1.11 Gbits/sec
UDP TX: 1.62 Gbits/sec (with a lot of errors)
UDP RX: 355 Mbits/sec
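For context on the jumbo-frame question: at a 1500-byte MTU, fixed per-frame overhead already caps best-case TCP goodput below line rate, which is part of why larger MTUs help. A rough worked example, assuming standard Ethernet overheads and IPv4/TCP headers without options (these figures are not from the thread, just textbook values):

```shell
mtu=1500
ip_tcp_hdrs=40                  # IPv4 (20 B) + TCP (20 B), no options assumed
frame_overhead=38               # preamble+SFD 8 + Eth header 14 + FCS 4 + min IFG 12
payload=$((mtu - ip_tcp_hdrs))  # TCP payload bytes per frame
wire=$((mtu + frame_overhead))  # bytes on the wire per frame
eff=$((payload * 1000 / wire))  # best-case goodput efficiency, in tenths of a percent
echo "payload=$payload wire=$wire eff=$eff"
```

So even a perfect 1500-MTU link tops out around 94.9% of line rate for TCP goodput; the gap between that ceiling and the numbers above is due to the stack and the CRC errors, not framing.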
02-02-2018 06:57 AM
02-02-2018 09:20 AM
I tested both scenarios and the results were fairly similar. For the board-to-NIC test, I used a high-end Aberdeen server that we have used for years for 10G links. Using a third-party traffic generator, I verified that the Aberdeen server could sink 9.8 Gbps easily, so my test configurations were up to the mark.
BTW, I used iperf3 for making the measurements (I don't remember if I mentioned that before). I used a third-party inline tool to determine that massive CRC errors are occurring. I also tried both the Ready-To-Test solution and the image I generated myself to include iperf3.
02-05-2018 11:14 AM
Can you please clarify what you would like me to verify? I am just working with your Ready-To-Test solution as per Application Note 1305. The only thing I did was add iperf3 to the image so I could carry out the performance evaluation. Also, my FPGA engineer was able to reproduce bit errors using IBERT, which is what is making my TCP performance really terrible. But I think that even after the bit errors are fixed, I would not be able to get the 2.6 Gbps TCP performance that you claim in this link - http://www.wiki.xilinx.com/Zynq+mp+Ethernet+Performance+2016.4
BTW, I am using the ZCU102 board, not the ZC706 board.
02-05-2018 12:07 PM
Your project zip (10G_AXI_Ethernet.zip) file link in your PDF seems to be broken; can you re-create the link?
02-07-2018 11:50 AM
Thanks for the info.
I happen to have an active Xilinx EZMove account; can you send the file through EZMove?
my email: David.Zhang@teledyne.com
02-07-2018 11:20 PM
Are you using the XAPP1305 reference design ( http://www.wiki.xilinx.com/PS+and+PL+based+Ethernet+in+Zynq+MPSoC ), available for download through the direct link below?
02-08-2018 01:21 AM
We are using the attached netperf/netserver binaries and a NIC card for testing the 10G performance.
A NIC card usually achieves better performance, as mentioned.
You will have to load these files onto the SD card, mount it after booting using "mount /dev/mmcblk0p1 /mnt", move to the /mnt folder, and run the commands below:
On Tera Term: ./netserver -D -4
On the host PC: netperf -H 192.168.1.4 -c -C -t TCP_STREAM
For detailed info on performance, please refer to http://www.wiki.xilinx.com/Performance+tests+procedure+and+results+with+LWIP
This should give you results similar to ours.
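Putting the steps above together, a minimal sketch of the procedure. The board IP 192.168.1.4 and the mount point come from the instructions above; the board-side commands are shown as comments since they run on the target, not the host:

```shell
# Board IP as used in the instructions above; adjust for your network
BOARD_IP=192.168.1.4
# Board side (run on the target, e.g. via Tera Term):
#   mount /dev/mmcblk0p1 /mnt && cd /mnt
#   ./netserver -D -4
# Host side: the TCP stream test toward the board, with local/remote
# CPU utilization reporting (-c / -C)
CMD="netperf -H $BOARD_IP -c -C -t TCP_STREAM"
echo "$CMD"
```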
02-08-2018 10:27 AM
I am using the July 2017 solution, as seen in the image below. BTW, I used iperf3 to make my measurements, but I will redo them with netperf. If I read the data you referred to in your later post correctly, the TCP uplink result I should shoot for is about 3 Gbps. Please confirm.
BTW, I get a lot of bit errors with both my ZCU102 eval boards, even at bare metal. Any suggestions on how to deal with that?
[root@localhost xhdpssa]# netperf -H 192.168.1.4 -c -C -t TCP_STREAM
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.4 () port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
 87380  16384  16384    10.00      2956.76   1.62     27.39    0.359   3.036
02-08-2018 07:39 PM
I made netperf performance measurements board-to-board (I didn't have a Linux machine that could run netserver, which board-to-NIC measurements would require), and the results are as follows:
root@plnx_aarch64:/media# ./netperf -H 192.168.255.21 -c -C -D 5 -I 99 -l 300 -- -m 1472 -s 64K -S 64K
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.255.21 () port 0 AF_INET : +/-49.500% @ 99% conf.
Recv Send Send Utilization Service Demand
Socket Socket Message Elapsed Send Recv Send Recv
Size Size Size Time Throughput local remote local remote
bytes bytes bytes secs. 10^6bits/s % S % S us/KB us/KB
131072 16384 1472 300.03 148.51 9.11 11.29 20.098 24.900
root@plnx_aarch64:/media# ./netperf -H 192.168.255.21 -c -C -t UDP_STREAM -l 300
Socket Message Elapsed Messages CPU Service
Size Size Time Okay Errors Throughput Util Demand
bytes bytes secs # # 10^6bits/sec % SS us/KB
229376 65507 300.00 1225181 0 2140.2 25.69 3.934
229376 300.00 904119 1579.4 44.86 6.868
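As a quick sanity check on the UDP run above, comparing the send-side and receive-side message counts gives the drop rate. A small sketch using the two counts from that output:

```shell
sent=1225181    # send-side "Messages Okay" from the run above
recv=904119     # receive-side message count from the run above
lost=$((sent - recv))
loss_tenths=$((lost * 1000 / sent))   # loss rate, in tenths of a percent
echo "lost=$lost loss_tenths=$loss_tenths"
```

That works out to roughly 26% of datagrams dropped, which is consistent with the send-side 2140.2 vs. receive-side 1579.4 Mbps throughput figures.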
02-13-2018 03:31 AM - edited 02-13-2018 03:31 AM
I guess UDP here is 2.5 Gbps and TCP is 148 Mbps. Is that right?
02-14-2018 02:14 PM
UDP is roughly at 2.1 Gbps. I was able to push UDP as high as 4.2 Gbps by using several parallel streams. My first main issue is the large number of CRC errors in both TCP and UDP modes, which kills TCP completely.
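For reference, a parallel-stream UDP test of the kind described can be expressed with iperf3's -P option. A sketch with a hypothetical board IP, per-stream rate, and stream count (the exact values used for the 4.2 Gbps run are not given in the thread):

```shell
# Hypothetical values; adjust address, rate, and stream count for your setup
BOARD_IP=192.168.1.4
STREAMS=4
# -u selects UDP, -b sets the per-stream target rate, -P runs parallel streams
CMD="iperf3 -c $BOARD_IP -u -b 1G -P $STREAMS -t 60"
echo "$CMD"
```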
03-26-2018 10:06 PM
We have tested the performance at our end and achieved the results published on the wiki. I have also attached a detailed document on how to test this.
It is recommended that users use the hardware specified in the attached screenshots to achieve similar performance.
I hope we have provided all the relevant details needed to close the thread now.
09-28-2018 01:30 AM