Re: Another rfe: "cp" this time

From: Bruce Korb
Subject: Re: Another rfe: "cp" this time
Date: Tue, 1 May 2012 08:43:07 -0700

On Tue, May 1, 2012 at 1:17 AM, Pádraig Brady <address@hidden> wrote:
> On 05/01/2012 05:15 AM, H. Peter Anvin wrote:
>> On 04/27/2012 08:35 AM, Pádraig Brady wrote:
>>> 32KiB buffer which it serially reads to and writes from.
>> Why is that, though?  At least for file-to-file copy, it could
>> certainly do much better with mmap() + write().
>> 32K is so 1990.
> Well, 2009 is when we changed from 4K (blksize) :)
> Just yesterday we bumped to 64K to minimize system
> call overhead, which on modern machines gives another
> 10% speed increase when copying files from cache.
> Though it should be emphasised that the bottleneck is
> usually in the devices/network, and so optimizations at
> this level do not help in the general case, and your

My particular issue is the network bottleneck.  Doing
the copy as sequential reads of any size results in an empty
pipe while each piece gets acked.  I don't know what would
happen were the file mmapped.  Would the fs layer do sequential
readahead, or would it send out requests for more data before
earlier requests are satisfied?  It would likely take reading code to
find out, but I'm guessing it winds up serialized just like
the basic read-32K-at-a-time copy.  64K would help if it caused
several concurrent requests for pieces of the 64K, but
that isn't likely either.  The big deal (for me) is to get enough
concurrent requests going that the long, wide pipe can be kept
full of data, however that needs to happen.

Cheers - Bruce
