
Re: [Qemu-devel] [RFC] Disk integrity in QEMU


From: Avi Kivity
Subject: Re: [Qemu-devel] [RFC] Disk integrity in QEMU
Date: Fri, 10 Oct 2008 14:56:17 +0200
User-agent: Thunderbird 2.0.0.16 (X11/20080723)

Anthony Liguori wrote:
>>
>> For server partitioning, data integrity and performance are
>> critical.  The host page cache is significantly smaller than the
>> guest page cache; if you have spare memory, give it to your guests.
>
> I don't think this wisdom is bullet-proof.  In the case of server
> partitioning, if you're designing for the future then you can assume
> some form of host data deduplication, either through qcow
> deduplication, a proper content-addressable storage mechanism, or
> file-system-level deduplication.  It's becoming more common to see
> large amounts of homogeneous consolidation either because of cloud
> computing, virtual appliances, or just because most x86 virtualization
> involves Windows consolidation and there aren't that many versions of
> Windows.
>
> In this case, there is an awful lot of opportunity for increasing
> overall system throughput by caching common data access across virtual
> machines.

That's true.  But is the OS image a significant source of I/O in a
running system?

My guess is that it is not.

In any case, deduplication is far enough into the future that we
shouldn't try to solve for it now.  Caching may end up being part of the
deduplication mechanism itself; for example, it may choose to cache
shared data (which is read-only anyway) even with O_DIRECT.

>
>> O_DIRECT is practically mandated here; the host page cache does
>> nothing except impose an additional copy.
>>
>> Given the rather small difference between O_DSYNC and O_DIRECT, I
>> favor not adding O_DSYNC as it will add only marginal value.
>
> The difference isn't small.  Our fio runs are defeating the host page
> cache on write, so we're adjusting the working set size.  But the
> difference in read performance between O_DSYNC and O_DIRECT is
> many-fold when the data can be cached.
>

That's because you're leaving host memory idle.  That's not a realistic
scenario.  What happens if you assign free host memory to the guest?

>> Regarding choosing the default value, I think we should change the
>> default to be safe, that is O_DIRECT.  If that is regarded as too
>> radical, the default should be O_DSYNC with options to change it to
>> O_DIRECT or writeback.  Note that some disk formats, like qcow2, will
>> need updating if they are not to have abysmal performance.
>
> I think qcow2 will be okay because the only issue is image expansion
> and that is a relatively uncommon case that is amortized over the
> lifetime of the VM.  So far, while there is objection to using
> O_DIRECT by default, I haven't seen any objection to O_DSYNC by
> default so as long as no one objects in the next few days, I think
> that's what we'll end up doing.

I don't mind that as long as there is a way to request O_DIRECT (which I
think is cache=off under your proposal).
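As a hypothetical sketch of how such a cache option could map onto
open(2) flags: the function name and mode strings below are illustrative
assumptions, not QEMU's actual code; the thread only establishes that
cache=off would request O_DIRECT and that O_DSYNC is the proposed safe
default.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>

/* Hypothetical mapping from a cache mode string to extra open(2) flags. */
static int cache_mode_to_flags(const char *mode)
{
    if (strcmp(mode, "off") == 0)
        return O_DIRECT;   /* bypass the host page cache */
    if (strcmp(mode, "writethrough") == 0)
        return O_DSYNC;    /* cached reads, synchronous writes */
    return 0;              /* writeback: plain cached I/O */
}
```

The point of the mapping is that data-integrity policy is decided once,
at open time, rather than per request.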

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.




