Re: [PATCH] copy, dd: simplify and optimize NUL bytes detection


From: Eric Blake
Subject: Re: [PATCH] copy, dd: simplify and optimize NUL bytes detection
Date: Thu, 22 Oct 2015 10:02:46 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.3.0

On 10/22/2015 09:31 AM, Paolo Bonzini wrote:

> Only if your machine cannot do unaligned loads.  If it can, you can
> align the length instead of the buffer.  memcmp will take care of
> aligning the buffer (with some luck it won't have to, e.g. if buf is
> 0x12340002 and length = 4094).  On x86 unaligned "unsigned long" loads
> are basically free as long as they don't cross a cache line.
> 
>> BTW Rusty has a benchmark framework for this as referenced from:
>> http://rusty.ozlabs.org/?p=560
> 
> I missed his benchmark framework so I wrote another one, here it is:
> https://gist.githubusercontent.com/bonzini/9a95b0e02d1ceb60af9e/raw/7bc42ddccdb6c42fea3db58e0539d0443d0e6dc6/memeqzero.c

I see a bug in there:

static __attribute__((noinline)) bool memeqzero4_paolo(const void *data,
                                                       size_t length)
{
    const unsigned char *p = data;
    unsigned long word;

    while (__builtin_expect(length & (sizeof(word) - 1), 0)) {
        if (*p)
            return false;
        p++;
        length--;
    }
    while (__builtin_expect(length & (16 - sizeof(word)), 0)) {
        memcpy(&word, p, sizeof(word));
        if (word)
            return false;
        p += sizeof(word);
        length -= sizeof(word);
    }

    /* Now we know that's zero, memcmp with self. */
    return length == 0 || memcmp(data, p, length) == 0;
}

If length is already a multiple of 16 on entry, both loops are skipped and
you end up calling memcmp(data, data, length), which is trivially 0 for
every input rather than an actual check for NUL bytes.  You MUST check at
least one byte manually before handing off to memcmp().  Keeping the
distance between data and p a multiple of a cache line (blindly picking 16,
as Rusty did, is a close approximation) will probably also let the libc
memcmp() run a lot faster than if it has to deal with unaligned pointers:
it can optimize for one of the two reads being aligned, but the other read
is not, and even if the two reads are close enough to hit the same cache
line you still suffer some slowdown.
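
For reference, the usual way to avoid that trap is to unconditionally check
a small fixed-size prefix by hand, so that p always ends up strictly ahead
of data before memcmp() is called.  A minimal sketch along those lines
(memeqzero_fixed is a name used here only for illustration, not something
from the patch; 16 is the same arbitrary prefix size Rusty picked):

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static bool memeqzero_fixed(const void *data, size_t length)
{
    const unsigned char *p = data;
    size_t i;

    /* Check up to the first 16 bytes manually; short buffers are
       handled entirely here. */
    for (i = 0; i < 16; i++) {
        if (length == 0)
            return true;
        if (*p)
            return false;
        p++;
        length--;
    }

    /* The first 16 bytes are known to be zero, so comparing the rest
       of the buffer against that verified prefix (p is now data + 16)
       really does detect any non-NUL byte. */
    return memcmp(data, p, length) == 0;
}

Because the prefix length is fixed, data and p stay exactly 16 bytes apart,
so the self-comparison can never degenerate into memcmp(data, data, length)
and the two operands keep the same alignment relative to each other.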

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org


