From: Avi Kivity
Subject: Re: [Qemu-devel] [RFC] Disk integrity in QEMU
Date: Sun, 19 Oct 2008 22:16:43 +0200
User-agent: Thunderbird 2.0.0.16 (X11/20080723)

Jens Axboe wrote:
(it seems I can't turn off the write cache even without losing my data:
Use hdparm, it's an ATA drive even if Linux currently uses the scsi
layer for it. Or use sysfs, there's a "cache_type" attribute in the scsi
disk sysfs directory.

Ok.  It's moot anyway.
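
For illustration, a minimal C sketch of the sysfs route mentioned above; the 0:0:0:0 address is only a placeholder and the real path varies per system (hdparm -W0 is the other route):

/* Sketch: disable the drive's write cache via the "cache_type"
 * attribute that the sd driver exposes in sysfs.  The 0:0:0:0
 * address below is just an example. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/scsi_disk/0:0:0:0/cache_type";
    FILE *f = fopen(path, "w");

    if (!f) {
        perror(path);
        return 1;
    }
    /* The sd driver turns this into a MODE SELECT that clears WCE. */
    fputs("write through\n", f);
    if (fclose(f) != 0) {
        perror(path);
        return 1;
    }
    return 0;
}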

"Policy" doesn't mean you shouldn't choose good defaults.

Changing the hardware settings for this kind of behaviour IS most
certainly policy.

Leaving bad hardware settings in place is also policy. But in light of FUA, the SCSI write cache is not a bad thing, so we should definitely leave it on.

I guess this is the crux.  According to my understanding, you shouldn't
see such a horrible drop, unless the application does synchronous writes
explicitly, in which case it is probably worried about data safety.

Then you need to adjust your understanding, because you definitely will
see a big drop in performance.


Can you explain why?  This is interesting.

O_DIRECT should just use FUA writes, they are safe with write back
caching. I'm actually testing such a change just to gauge the
performance impact.
You mean, this is not in mainline yet?

It isn't.

What is the time frame for this? 2.6.29?

Some googling shows that Windows XP introduced FUA for O_DIRECT and
metadata writes as well.

There's a lot of other background information to understand to gauge the
impact of using e.g. FUA for O_DIRECT in Linux as well. MS basically wrote
the FUA for ATA proposal, and the original usage pattern (as far as I
remember) was indeed metadata. Hence it also imposes a priority boost
in most (all?) drive firmware, since it's deemed important. So just
using FUA vs non-FUA is likely to impact performance of other workloads
in fairly unknown ways. FUA on non-queuing drives will also likely suck
for performance, since you're basically going to be blowing a drive rev
for each IO. And that hurts.

Let's assume queueing drives, since these are fairly common these days. So qemu issuing O_DIRECT which turns into FUA writes is safe but suboptimal. Has there been talk about exposing the difference between FUA writes and cached writes to userspace? What about barriers?

With a rich enough userspace interface, qemu can communicate the intentions of the guest and not force the kernel to make a performance/correctness tradeoff.
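
To make the tradeoff concrete, here is a rough C sketch of what userspace can do today: plain O_DIRECT writes for ordinary guest I/O, plus an explicit fdatasync() only for writes the guest flagged as must-be-durable. The file name, block size and alignment are illustrative, not anything from the thread:

/* Sketch only: with today's interfaces the closest userspace gets to
 * "FUA when the guest asked for it" is O_DIRECT plus fdatasync() on
 * the writes the guest marked as must-be-durable.  The 512-byte
 * alignment is illustrative; O_DIRECT requires buffer, offset and
 * length to be suitably aligned. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static int write_block(int fd, const void *buf, size_t len, off_t off,
                       int guest_wants_fua)
{
    if (pwrite(fd, buf, len, off) != (ssize_t)len)
        return -1;
    /* Only pay the flush cost when the guest actually asked for it. */
    if (guest_wants_fua && fdatasync(fd) != 0)
        return -1;
    return 0;
}

int main(void)
{
    void *buf;
    int fd = open("disk.img", O_RDWR | O_DIRECT);

    if (fd < 0 || posix_memalign(&buf, 512, 512) != 0)
        return 1;
    memset(buf, 0, 512);
    return write_block(fd, buf, 512, 0, 1) ? 1 : 0;
}

Whether that fdatasync() actually reaches the platter still depends on the filesystem issuing a cache flush or barrier, which is exactly the gap under discussion.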


What about the users who aren't on qemu-devel?

It may be news to you, but it has been debated on lkml in the past as
well. Not even that long ago, and I'd be surprised if lwn didn't run
some article on it as well.

Let's postulate the existence of a user that doesn't read lkml or even lwn.

I agree it's important information, but
realize that until just recently most people didn't really consider it a
likely scenario in practice...

I wrote and committed the original barrier implementation in Linux in
2001, and just this year XFS made it a default mount option. After the
recent debacle on this on lkml, ext4 made it the default as well.

So let me turn it around a bit - if this issue really did hit lots of
people out there in real life, don't you think there would have been
more noise about this and we would have made this the default years ago?
So while we both agree it's a risk, it's not a huuuge risk...

I agree, not a huge risk. I guess compared to the rest of the suckiness involved (took a long while just to get journalling), this is really a minor issue. It's interesting though that Windows supported this in 2001, seven years ago, so at least they considered it important.

I guess I'm sensitive to this because in my filesystemy past QA would jerk out data and power cables while running various tests and act surprised whenever data was lost. So I'm allergic to data loss.

With qemu (at least when used with a hypervisor) we have to be extra safe since we have no idea what workload is running and how critical data safety is. Well, we have hints (whether FUA is set or not) when using SCSI, but right now we don't have a way of communicating these hints to the kernel.
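
As a concrete example of where that hint lives, a small sketch of how a SCSI emulation can pick the FUA flag out of a write CDB (bit 3 of byte 1 for WRITE(10)/(12)/(16)); this is only an illustration, not code from qemu:

/* Sketch: the guest's durability hint in SCSI write commands. */
#include <stdbool.h>
#include <stdint.h>

static bool scsi_write_wants_fua(const uint8_t *cdb)
{
    switch (cdb[0]) {
    case 0x2a:  /* WRITE(10) */
    case 0xaa:  /* WRITE(12) */
    case 0x8a:  /* WRITE(16) */
        return cdb[1] & 0x08;   /* FUA bit */
    default:
        return false;           /* WRITE(6) has no FUA flag */
    }
}

Today the only way to honour that bit on the host side is to flush everything behind it, which is the blunt performance/correctness tradeoff mentioned above.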

One important takeaway is to find out whether virtio-blk supports FUA, and if not, add it.

However, with your FUA change, they should be safe.

Yes, that would make O_DIRECT safe always. Except when it falls back to
buffered IO, woops...


Woops.

Any write latency is buffered by the kernel.  Write speed is main memory
speed.  Disk speed only bubbles up when memory is tight.

That's a nice theory, but in practice it is completely wrong. You end up
waiting on writes for LOTS of other reasons!


Journal commits?  Can you elaborate?

In the filesystem I worked on, one would never wait on a write to disk unless memory was full. Even synchronous writes were serviced immediately, since the system had a battery-backed replicated cache. I guess the situation with Linux filesystems is different.


--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.




