
Re: [Qemu-devel] [PATCH] Align file accesses with cache=off (O_DIRECT)

From: Laurent Vivier
Subject: Re: [Qemu-devel] [PATCH] Align file accesses with cache=off (O_DIRECT)
Date: Wed, 30 Apr 2008 11:59:34 +0200

On Wednesday, 30 April 2008 at 11:21 +0200, Kevin Wolf wrote:
> Laurent Vivier wrote:
> >> Hm, yes. We could call raw_pread in raw_aio_read when O_DIRECT is used 
> >> and the request is not properly aligned. Is this what you meant?
> > 
> > No, it was just a (stupid) comment. I think we must not convert
> > asynchronous I/O to synchronous I/O.
> Why not? If I'm not mistaken (but if you think it's wrong, probably I 
> am) the only possible drawback should be performance. And we're not 
> talking about normal guest IO, these accesses are aligned anyway.

I think there can be side effects if the caller of the function assumes
the I/O is asynchronous when it is in fact synchronous (but perhaps we
can call that a "bug"...).

> >> I think we agree that it's mostly item 3 why one would use O_DIRECT with 
> >> qemu. In terms of reliability, it is important that the data really is 
> >> written to the disk when the guest OS thinks so. But when for example 
> >> qemu crashes, I don't think it's too important if 40% or 50% of a 
> >> snapshot have already been written - it's unusable anyway. A sync 
> >> afterwards could be enough there.
> > 
> > I don't speak about "qemu crashes" but about "host crashes".
> Well, I've never seen a host crash where qemu survived. ;-)
> What I wanted to say: If the snapshot is incomplete and not usable 
> anyway, why bother if some bytes more or less have been written?

What I mean is that with O_DIRECT we increase reliability, because we
are sure the data are on disk when qemu has finished saving the
snapshot. Without O_DIRECT, qemu can have finished saving while the
snapshot is not actually on disk yet, but only in the host cache. If the
host crashes between these two points, that's bad: the user thinks the
snapshot has been saved when in fact it has not. But perhaps it's just a
detail...

> > I'm not in the spirit of "my patch is better than yours" (and I don't
> > think so); but could you try to test my patch? Because if I remember
> > correctly, it handles all the cases, and this could help you to find a
> > solution (or perhaps you can add to your patch the part of my patch
> > concerning block-qcow2.c).
> I really didn't want to say that your code is bad or it wouldn't work or 
> something like that. I just tried it and it works fine as well.
> But the approaches are quite different. Your patch makes every user of 
> the block driver functions aware that O_DIRECT might be in effect. My 
> patch tries to do things in one common place, even though possibly at 
> the cost of performance (however, I'm not sure anymore about the bad 
> performance now that I use your fcntl method).
> So what I like about my patch is that it is one change in one place 
> which should make everything work. Your patch could be still of use to 
> speed things up, e.g. by making qcow2 aware that there is something like 
> O_DIRECT (would have to do a real benchmark with both patches) and have 
> it align its buffers in the first place.

OK, I agree with you.

> I'll attach the current version of my patch which emulates AIO through 
> synchronous requests for unaligned buffer. In comparison, with your 
> patch the bootup of my test VM was slightly faster but loading/saving of 
> snapshots was much faster with mine. Perhaps I'll try to combine them to 
> get the best of both.

Just a comment on the patch: perhaps you could call your field
"open_flags" instead of "flags", and perhaps you could merge your field
with "fd_open_flags"?

Any comments from the QEMU maintainers?
Is someone ready to commit it to SVN?

I personally think this functionality should be included...

------------- address@hidden ---------------
"The best way to predict the future is to invent it."
- Alan Kay
