Ethernet Tradeoff in Linux

Contributor

Hi

 

In a project running Xilinx Linux on a PPC405, I have only about 1000-1500 slices left for Ethernet, so I can't fit xps_lltemac. Is there any option between xps_lltemac and xps_ethernetlite, with performance and resource usage somewhere in between?

 

Xilinx Employee

Hi,

 

The EMAC Lite and the LL TEMAC are the two choices; the others are not recommended and are being obsoleted.

 

I have heard that EMAC Lite performance on PowerPC is good, but it does consume more of the CPU since it doesn't use DMA. You should also build it with the ping-pong buffers to get better performance. I have not tested the performance in Linux, but if I hear some numbers I'll get back to you on them.

 

-- John

Visitor

Hi John,

 

We had some problems with LL TEMAC performance: we were not able to get more than 50 Mbit/s with UDP. We are now considering developing or buying a hardware core for accelerating the network interface.


Nadav Rotem

 

 

 

Xilinx Employee

Hi Nadav,

 

Thanks for the feedback; that number concerns me. Can you give more details on the configuration of the system where you saw it? We believe it should be much better.

 

Is that on a PowerPC 405 or 440, and at what bus and processor frequencies? Are you using DMA? It is necessary for good throughput.

 

You should also be using checksum offload in the hardware, and the size of the FIFOs in the LL TEMAC might make a difference depending on your data.

 

You could just attach the EDK MHS file that describes your system; it would answer all of these questions.

 

We don't have (to my knowledge) an app note on doing this with open-source Linux (yet), but there is one for MontaVista Linux at the following location.

 

http://www.xilinx.com/support/documentation/application_notes/xapp1127.pdf

 

Thanks,

John

 

 

Participant

 Hello

 

If you want to increase UDP performance, move to gigabit Ethernet and increase the MTU. We have obtained values of up to 491 Mbit/s (megabits, not bytes, per second) with an MTU of 5500 bytes.

 

With an MTU of 1500 on a 100 Mbit/s link, the maximum speed we have obtained is 94 Mbit/s.

 

 

             Best regards

 

PS: our system is based on a Virtex-5 FX (PPC440) and ll_temac.

Visitor

Hello

 

I have some questions regarding throughput.

 

1. Is the 491 Mbit/s between two gigabit ports with MTU 5500?

2. How large are the FIFOs you are using?

3. Did you manage to build a 32K FIFO in XPS?

4. Which LL TEMAC driver are you using? Hardware or software? LLTEMAC (the old one) or LL TEMAC (the new one)?

 

We are trying to reach the performance you report, but without any luck.

We are stuck at 30 Mbit/s (measured in both directions) between two gigabit ports auto-negotiated to 100 Mbit/s, with 16K FIFOs, on both copper and fiber, with checksum offload enabled, using the old LLTEMAC driver and master from git.xilinx.com.

 

Any suggestions?

 

Regards, Tim

Observer

Hi

 

We have reached the following throughput values on these platforms:

 

1- ML403, PPC405, CPU running at 300 MHz and bus at 100 MHz: we reached a throughput of 350 Mbit/s.

2- AVNET Minimodule, again PPC405 (but on a small XC4VFX12 device): we reached a bandwidth of 150 Mbit/s.

 

The Linux kernel version was 2.6.24 (or older) and the EDK version was 10 (or older); you can find a complete log of our posts and messages on this topic in the PPC-dev mailing list.

 

I personally never had the chance to test performance on a PPC440 CPU, but I'm sure the 491 Mbit/s is normal.

 

My suggestions for improving performance:

1- Increase the MTU (enable jumbo frames).

You should do this on both sides of the link, the receiver and the transmitter. For example, use the following command to set your MTU value to 8192 (a fuller sketch follows after this list):

ifconfig eth0 mtu 8192

 

2- Update your gigabit Ethernet drivers (on the PC side, make sure you are using the latest driver).

 

3- Use a proper application for the bandwidth test; we used netperf in our case. Later we developed our own application, considering all the methods and optimizations required for an efficient network-based application.

Try TCP_SENDFILE (mainly to get zero copy) when using netperf; it may improve performance. Usually TCP_STREAM is OK. (Example invocations are sketched after this list.)

 

4- Increase your LL TEMAC buffer sizes as much as possible. As I remember, we used a buffer size of 32 KB for both TX and RX.

 

5- Make sure checksum offload is enabled. This is a simple check mark on the LL TEMAC configuration page. (A way to verify it from Linux is sketched after this list.)

 

6- In the LL TEMAC driver, place the buffer descriptors in local memory instead of DRAM. (I am not sure about this one; it has been more than a year since I last looked at the LL TEMAC driver's source code.)

 

7- Make sure your network equipment supports bandwidths over 300 Mbit/s (check the switches, etc.).

 

8- Make sure the PowerPC caches are enabled and working properly. (By default, the Linux kernel enables the caches itself.)

 

9- Increase the clock frequency of your CPU and buses as much as possible.
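
As promised in suggestion 1, here is a minimal sketch of raising the MTU on both ends of a link. The interface name eth0 and the value 8192 are just the ones from the example above; substitute your own:

# on the board (target side)
ifconfig eth0 mtu 8192

# on the PC (or whatever sits at the other end of the link)
ifconfig eth0 mtu 8192

# confirm the interface picked up the new value
ifconfig eth0 | grep -i mtu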

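For suggestion 3, the netperf invocations might look like the sketch below. The address 192.168.1.10 is only a placeholder for the machine running netserver, and the 30-second duration is arbitrary:

# bulk TCP transfer test for 30 seconds
netperf -H 192.168.1.10 -l 30 -t TCP_STREAM

# same bulk test using sendfile() for zero-copy transmit
# (some netperf builds want a fill file passed with -F)
netperf -H 192.168.1.10 -l 30 -t TCP_SENDFILE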
 
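For suggestion 5, if ethtool is included in your root filesystem, you can list the offload settings the kernel believes are active; whether the LL TEMAC driver reports them accurately will depend on the driver version:

# show checksum, scatter-gather and other offload settings for eth0
ethtool -k eth0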

Hope this helped,

Regards

Mohammad Sadegh Sadri.

 

 

Visitor

Thank you very much for the quick answer.

 

1. OK, this is a big hint. I'm only concerned about how the network devices (routers/PCs) will handle DIX frames with an MTU of e.g. 5500 (as you tested).

2/3. We are using JDSU SmartClass Ethernet devices to test our network bandwidth; with the latest firmware they can handle gigabit traffic.

4. Already at 32K, but performance is the same with 16K (max 30 Mbit/s). Maybe it will be better with an MTU above 1500; I will test that after trying 1.

5. It's done.

6. I'll check the LL TEMAC driver source code. I think you used the old driver, because the new one is available only in newer kernel versions (the old one is called LLTEMAC and the new one LL TEMAC; the difference is very small).

7. Checked: two gigabit switches with fiber. They can do 300 Mbit/s.

8. Both enabled.

9. Not possible; we are already at the maximum of 100 MHz, and we are not using a higher speed grade of the Virtex.

 

So I'll try suggestions 1 and 6 and come back later.

 

Our hardware has 16 MB of memory. Xilinx kernels (from git.xilinx.com) older than 2.6.34 use over 16 MB under heavy network traffic. Version 2.6.34 (the latest from master) uses only 8 MB, but the throughput is three times lower.

 

kernel 2.6.34 - memory used 8 MB, throughput 30 Mbit/s

kernel 2.6.29 - memory used > 16 MB, throughput 80 Mbit/s

 

So for now we are stuck with 2.6.34 because of the hardware limits. This kernel already has the new LL TEMAC driver.
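
In case it helps compare the kernels, a crude way to watch memory consumption during a test (a generic /proc sketch, nothing Xilinx-specific) is:

# print free memory once a second while traffic is running
while true; do grep MemFree /proc/meminfo; sleep 1; done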

 

How much memory are your boards equipped with?

 

Thank you again
