02-25-2011 07:33 AM
I have an application on an ML405 derivative that needs to save large buffers as fast as possible. We use the 1Gbps LLTEMAC GMII but the speed is abysmal (85Mbps max). I just profiled the system and the results are clear: the system spends most of its time in __copy_tofrom_user(), in other words copying my buffers to the network driver.
The buffers are saved with fwrite() to an NFS-mounted drive, with C-library buffering disabled (setvbuf).
Is there a way to tell the driver NOT to buffer the data? I'd rather wait for the call to return.
Is there another network protocol that doesn't use buffers? Or is there another, more network-oriented forum where I should ask this?
02-28-2011 04:21 PM
03-01-2011 03:09 AM
To avoid copying large amounts of data between kernel space and user space, the splice() system call might be of interest to you.
Here is an interesting video:
Also, map() would enhance performance compared to write() and read().
Hope that helps,
03-01-2011 07:27 AM
Yeah, I just implemented a socket call sequence only to discover that SO_SND_COPYAVOID is nowhere to be found in Linux...
I just took a look at vmsplice and it might have fit the job only that it writes to pipes instead of sockets...
Can't someone point me to a Linux method for sending data across the network in zero-copy mode?!
03-02-2011 02:56 AM
I'm trying to use pipe/vmsplice/splice but I can't even get it to compile: does that even work in a user-mode program? All the info I've read seems to indicate so, but splice.h is a kernel header. If it's a kernel-only function, it doesn't make sense to me as a way to speed up writing user buffers...
03-25-2011 08:03 AM
I played around with vmsplice and splice as well. I think the glibc on the Microblaze rootfs is too old and doesn't include the vmsplice syscall wrapper.
I did this:

#include <syscall.h>   /* remove when using newer glibc */
#include <sys/uio.h>   /* struct iovec */
#include <unistd.h>

#define SPLICE_F_GIFT 8  /* Pages passed in are a gift. */
#define SPLICE_F_MOVE 1  /* Move pages instead of copying. */

/* Our own calls to splice/vmsplice (no header currently). */
static inline int splice(int fdin, loff_t *off_in, int fdout, loff_t *off_out,
                         size_t len, unsigned int flags)
{
    return syscall(SYS_splice, fdin, off_in, fdout, off_out, len, flags);
}

static inline long vmsplice(int fd, const struct iovec *iov,
                            unsigned long nr_segs, unsigned int flags)
{
    return syscall(SYS_vmsplice, fd, iov, nr_segs, flags);
}
However, I am having problems with the way my driver implements the mmap method: vmsplice complains about a bad address (EFAULT).
I haven't found out why yet.
06-08-2011 07:34 AM
I'm getting back to this problem after months working on something else. I still need to improve the network speed... I looked at that video; unfortunately it contains precious few details. After contacting the author from Petalogix, I got a rather cryptic message:
[...] Basically I think the problem is because you are trying to vmsplice from a memory area which was mmap()ed from /dev/mem.
In our work we created a custom UIO driver, which marked the memory regions as iomem, and therefore they had an ioremap() done on them at the driver level. We are then able to mmap() this device, and vmsplice() from there.
Now I'd like some help decoding what this means. It seems that sending a block of memory to the network without a copy requires a lot of jumping through hoops. I understand that the TCP headers that need to be added basically force the data to be copied, but I don't know if that's the only copy or whether it can be avoided.
If I can save the data from my driver itself, then fine (but I thought you couldn't/shouldn't do that); currently I'm saving from the user-mode program that uses my driver.
Also, what is this map() function you mention? I know mmap(), but not map()...
05-22-2018 07:09 AM
trying to do the same thing...and endlessly frustrated about it...
the Linux man pages say "it works with everything that can be mmap'd"... /dev/mem can certainly be mmap'd... but vmsplice, splice and sendfile do NOT work with an mmap/socket/fd on /dev/mem...
did you try the approach with the ioremap'd UIO device?