
Re: [Qemu-devel] [PATCH 3/8] block: Support to keep track of I/O status


From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH 3/8] block: Support to keep track of I/O status
Date: Tue, 12 Jul 2011 16:25:22 +0200
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.17) Gecko/20110428 Fedora/3.1.10-1.fc15 Thunderbird/3.1.10

On 05.07.2011 20:17, Luiz Capitulino wrote:
> This commit adds support to the BlockDriverState type to keep track
> of the last I/O status. That is, at every I/O operation we update
> a status field in the BlockDriverState instance. Valid statuses are:
> OK, FAILED and ENOSPC.
> 
> ENOSPC is distinguished from FAILED because a management application
> can use it to implement thin-provisioning.
> 
> This feature has to be explicitly enabled by buses/devices supporting it.
> 
> Signed-off-by: Luiz Capitulino <address@hidden>
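
As I read the patch, the mechanism is roughly the following. This is a
simplified sketch only: the enum values follow the OK/FAILED/ENOSPC
statuses from the commit message, but the field and function names are
made up here for illustration.

    #include <errno.h>
    #include <stdbool.h>

    typedef enum {
        BDRV_IOS_OK,
        BDRV_IOS_FAILED,
        BDRV_IOS_ENOSPC,
    } BlockIOStatus;

    typedef struct BlockDriverState {
        /* ... existing fields ... */
        bool iostatus_enabled;   /* bus/device opted in explicitly */
        BlockIOStatus iostatus;  /* updated on every request completion */
    } BlockDriverState;

    /* Called from every request completion with 0 or a positive errno. */
    static void bdrv_update_io_status(BlockDriverState *bs, int error)
    {
        if (!bs->iostatus_enabled) {
            return;
        }
        if (error == 0) {
            bs->iostatus = BDRV_IOS_OK;
        } else if (error == ENOSPC) {
            bs->iostatus = BDRV_IOS_ENOSPC;
        } else {
            bs->iostatus = BDRV_IOS_FAILED;
        }
    }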

I'm not sure how this is meant to work with devices that can have
multiple requests in flight. If a request fails, one of the things that
are done before sending a monitor event is qemu_aio_flush(), i.e.
waiting for all in-flight requests to complete. If the last one of them
is successful, your status will report BDRV_IOS_OK.

If you don't stop the VM on I/O errors, the status is useless anyway,
even if only one request is active at a time.

I think it would make more sense if we only stored the last error (that
is, don't clear the field on success). What is the use case? Would this
be enough for it?
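
Roughly like this, i.e. the field becomes "last error seen" rather than
"status of the last request" (same made-up names as in the sketch above;
something like resuming the VM would have to reset it explicitly):

    /* Only record failures; a successful request never clears the field. */
    static void bdrv_record_io_error(BlockDriverState *bs, int error)
    {
        if (!bs->iostatus_enabled || error == 0) {
            return;
        }
        bs->iostatus = (error == ENOSPC) ? BDRV_IOS_ENOSPC : BDRV_IOS_FAILED;
    }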

By the way, I'm not sure how it fits in, but I'd like to have a block
layer function that format drivers can use to tell qemu that the image
is corrupted. Maybe that's another case in which we should stop the VM
and have an appropriate status for it. It should probably have
precedence over an ENOSPC happening at the same time, so maybe we'll
also need a way to tell that some status is more important and may
overwrite a less important status, but not the other way round.
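
One way to express that would be a severity ordering, where a new status
only replaces a less severe one. BDRV_IOS_CORRUPTED is hypothetical and
would have to be added to the enum; the relative order of FAILED and
ENOSPC below is just a guess for the sake of the example:

    static int bdrv_ios_severity(BlockIOStatus s)
    {
        switch (s) {
        case BDRV_IOS_OK:        return 0;
        case BDRV_IOS_FAILED:    return 1;
        case BDRV_IOS_ENOSPC:    return 2;
        case BDRV_IOS_CORRUPTED: return 3;  /* hypothetical "image corrupted" */
        default:                 return 0;
        }
    }

    static void bdrv_set_io_status(BlockDriverState *bs, BlockIOStatus status)
    {
        /* More important statuses may overwrite less important ones,
         * but not the other way round. */
        if (bdrv_ios_severity(status) > bdrv_ios_severity(bs->iostatus)) {
            bs->iostatus = status;
        }
    }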

Kevin


