Re: [Qemu-devel] [PATCH 3/8] block: Support to keep track of I/O status


From: Luiz Capitulino
Subject: Re: [Qemu-devel] [PATCH 3/8] block: Support to keep track of I/O status
Date: Tue, 12 Jul 2011 11:56:05 -0300

On Tue, 12 Jul 2011 16:25:22 +0200
Kevin Wolf <address@hidden> wrote:

> Am 05.07.2011 20:17, schrieb Luiz Capitulino:
> > This commit adds support to the BlockDriverState type to keep track
> > of the last I/O status. That is, at every I/O operation we update
> > a status field in the BlockDriverState instance. Valid statuses are:
> > OK, FAILED and ENOSPC.
> > 
> > ENOSPC is distinguished from FAILED because a management application
> > can use it to implement thin provisioning.
> > 
> > This feature has to be explicitly enabled by buses/devices supporting it.
> > 
> > Signed-off-by: Luiz Capitulino <address@hidden>
> 
> I'm not sure how this is meant to work with devices that can have
> multiple requests in flight. If a request fails, one of the things that
> are done before sending a monitor event is qemu_aio_flush(), i.e.
> waiting for all in-flight requests to complete. If the last one of them
> is successful, your status will report BDRV_IOS_OK.

We're more interested in states that the device cannot recover from or
that are not temporary. So, if something really bad happens I'd expect
all in-flight requests to fail the same way. Am I wrong?
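
To make the scenario concrete, here is a minimal sketch of the update
path being discussed. Only BDRV_IOS_OK appears in the thread; the other
identifiers (BDRV_IOS_FAILED, BDRV_IOS_ENOSPC, the iostatus field,
bdrv_iostatus_update) are illustrative assumptions, not the patch's
actual code:

#include <errno.h>

/* Status values under discussion; only BDRV_IOS_OK is confirmed by the
 * thread, the other names are assumptions modeled on it. */
typedef enum {
    BDRV_IOS_OK,      /* last operation completed successfully */
    BDRV_IOS_FAILED,  /* last operation failed with a generic error */
    BDRV_IOS_ENOSPC   /* last operation failed with ENOSPC */
} BlockIOStatus;

/* Stand-in for the real structure, which has many more fields. */
typedef struct BlockDriverState {
    BlockIOStatus iostatus;
} BlockDriverState;

/* Hypothetical hook called from every request's completion path with
 * ret == 0 on success or a negative errno on failure. Each completion
 * overwrites the shared field, so with several requests in flight a
 * successful completion arriving after a failed one hides the error,
 * which is exactly the race described above. */
static void bdrv_iostatus_update(BlockDriverState *bs, int ret)
{
    if (ret == 0) {
        bs->iostatus = BDRV_IOS_OK;
    } else if (ret == -ENOSPC) {
        bs->iostatus = BDRV_IOS_ENOSPC;
    } else {
        bs->iostatus = BDRV_IOS_FAILED;
    }
}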

> If you don't stop the VM on I/O errors, the status is useless anyway,
> even if only one request is active at the same point.

Right, that's a good point. A management application can only trust that
the status won't change a moment later if the VM is stopped.

> I think it would make more sense if we only stored the last error (that
> is, don't clear the field on success). What is the use case? Would this
> be enough for it?

Yes, it would, but there's a problem. If the management application manages
to correct the error and resumes the VM, we need to clear the status;
otherwise the management application could get confused if it reads the
status at a later time.

The most effective way I found to do this was to let the device report its
own current status. But I see two other ways of doing this:

 1. We could only report the status if the VM is paused. This doesn't
    change the implementation much, though

 2. We could allow the management app to clear the status
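
For illustration, a sketch of how the clearing-on-resume could be wired
up, building on the types sketched earlier. qemu_add_vm_change_state_handler()
is a real QEMU facility, but the handler signature shown is the one from
this era and the body is an assumption about how the reset could work:

/* Hypothetical handler: clear the status when the guest resumes. */
static void bdrv_iostatus_reset_on_resume(void *opaque, int running, int reason)
{
    BlockDriverState *bs = opaque;

    if (running) {
        /* Management corrected the problem and resumed the guest, so
         * drop the stale error; a later status query then reflects
         * reality instead of a condition that was already handled. */
        bs->iostatus = BDRV_IOS_OK;
    }
}

/* Hypothetical opt-in point for buses/devices that support the feature. */
static void bdrv_iostatus_enable(BlockDriverState *bs)
{
    bs->iostatus = BDRV_IOS_OK;
    qemu_add_vm_change_state_handler(bdrv_iostatus_reset_on_resume, bs);
}

Option 2 would instead expose the same reset as a monitor command, leaving
the decision of when to forget an error entirely to the management
application.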

> By the way, I'm not sure how it fits in, but I'd like to have a block
> layer function that format drivers can use to tell qemu that the image
> is corrupted. Maybe that's another case in which we should stop the VM
> and have an appropriate status for it. It should probably have
> precedence over an ENOSPC happening at the same time, so maybe we'll
> also need a way to tell that some status is more important and may
> overwrite a less important status, but not the other way round.

Yes, seems to make sense.
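
A possible shape for that precedence rule, again only a sketch:
BDRV_IOS_CORRUPTED is hypothetical (it stands in for the corruption state
proposed above), and whether ENOSPC or FAILED should rank higher is an
open design choice:

/* The earlier enum reordered by severity, so the value itself encodes
 * precedence; BDRV_IOS_CORRUPTED is hypothetical. */
typedef enum {
    BDRV_IOS_OK = 0,
    BDRV_IOS_ENOSPC,     /* recoverable: management can grow the storage */
    BDRV_IOS_FAILED,
    BDRV_IOS_CORRUPTED   /* most severe: image metadata is damaged */
} BlockIOStatus;

/* A more severe status may overwrite a less severe one, never the
 * reverse, so a corruption report is not masked by a later ENOSPC. */
static void bdrv_iostatus_escalate(BlockDriverState *bs, BlockIOStatus new_status)
{
    if (new_status > bs->iostatus) {
        bs->iostatus = new_status;
    }
}

An explicit clear, as in the resume handler sketched earlier, would bypass
this rule, since resetting to OK is an intentional downgrade.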


