From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH 4/5] disk_deadlines: add control of requests time expiration
Date: Mon, 28 Sep 2015 14:55:44 +0100
User-agent: Mutt/1.5.24 (2015-08-30)

* Stefan Hajnoczi (address@hidden) wrote:
> On Fri, Sep 25, 2015 at 01:34:22PM +0100, Dr. David Alan Gilbert wrote:
> > * Stefan Hajnoczi (address@hidden) wrote:
> > > On Tue, Sep 08, 2015 at 04:48:24PM +0200, Kevin Wolf wrote:
> > > > Am 08.09.2015 um 16:23 hat Denis V. Lunev geschrieben:
> > > > > On 09/08/2015 04:05 PM, Kevin Wolf wrote:
> > > > > >Am 08.09.2015 um 13:27 hat Denis V. Lunev geschrieben:
> > > > > >>Interesting point. Yes, it flushes all requests and most likely
> > > > > >>hangs inside, waiting for requests to complete. But fortunately
> > > > > >>this happens after the switch to the paused state, so
> > > > > >>the guest becomes paused. That's why I missed this
> > > > > >>fact.
> > > > > >>
> > > > > >>This could be considered a problem, but I have no (good)
> > > > > >>solution at the moment. I should think a bit on it.
> > > > > >Let me suggest a radically different design. Note that I don't say
> > > > > >this is necessarily how things should be done, I'm just trying to
> > > > > >introduce some new ideas and broaden the discussion, so that we have
> > > > > >a larger set of ideas from which we can pick the right solution(s).
> > > > > >
> > > > > >The core of my idea would be a new filter block driver 'timeout' that
> > > > > >can be added on top of each BDS that could potentially fail, like a
> > > > > >raw-posix BDS pointing to a file on NFS. This way most pieces of the
> > > > > >solution are nicely modularised and don't touch the block layer core.
> > > > > >
> > > > > >During normal operation the driver would just be passing through
> > > > > >requests to the lower layer. When it detects a timeout, however, it
> > > > > >completes the request it received with -ETIMEDOUT. It also completes
> > > > > >any new request it receives with -ETIMEDOUT without passing the
> > > > > >request on until the request that originally timed out returns. This
> > > > > >is our safety measure against anyone seeing whether or how the timed
> > > > > >out request modified data.
> > > > > >
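A minimal sketch of the I/O path of such a 'timeout' filter driver, assuming
the 2015-era BlockDriver coroutine callbacks and the bs->file passthrough
pattern; timeout_co_readv() and BDRVTimeoutState are hypothetical names, and
the deadline timer that would set timed_out is not shown:

    /* Hypothetical per-BDS state for the proposed 'timeout' filter driver. */
    typedef struct BDRVTimeoutState {
        bool timed_out;   /* set by a (not shown) deadline timer */
    } BDRVTimeoutState;

    static int coroutine_fn timeout_co_readv(BlockDriverState *bs,
                                             int64_t sector_num,
                                             int nb_sectors,
                                             QEMUIOVector *qiov)
    {
        BDRVTimeoutState *s = bs->opaque;

        /* Once one request has timed out, fail every new request with
         * -ETIMEDOUT instead of passing it down, so nobody can observe
         * whether or how the stuck request modified data. */
        if (s->timed_out) {
            return -ETIMEDOUT;
        }

        /* Normal operation: pass the request through to the lower layer. */
        return bdrv_co_readv(bs->file, sector_num, nb_sectors, qiov);
    }

The write path would mirror this, and the timer callback would set
s->timed_out and complete the stuck request with -ETIMEDOUT.
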
> > > > > >We need to make sure that bdrv_drain() doesn't wait for this request.
> > > > > >Possibly we need to introduce a .bdrv_drain callback that replaces
> > > > > >the default handling, because bdrv_requests_pending() in the default
> > > > > >handling considers bs->file, which would still have the timed out
> > > > > >request. We don't want to see this; bdrv_drain_all() should complete
> > > > > >even though that request is still pending internally (externally, we
> > > > > >returned -ETIMEDOUT, so we can consider it completed). This way the
> > > > > >monitor stays responsive and background jobs can go on if they don't
> > > > > >use the failing block device.
> > > > > >
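If the .bdrv_drain callback were introduced as proposed, the filter's
implementation could plausibly be as simple as the sketch below; the callback
does not exist at this point in the thread, so its name and signature are
guesses taken from the proposal:

    /* Proposed callback: override the default drain handling so that
     * bdrv_drain()/bdrv_drain_all() do not descend into bs->file, where
     * the timed-out request is still pending.  Externally that request
     * has already completed with -ETIMEDOUT, so there is nothing left
     * for the caller to wait for. */
    static void timeout_drain(BlockDriverState *bs)
    {
        /* Intentionally empty: report "no pending requests" upward. */
    }
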
> > > > > >And then we essentially reuse the rerror/werror mechanism that we
> > > > > >already have to stop the VM. The device models would be extended to
> > > > > >always stop the VM on -ETIMEDOUT, regardless of the error policy. In
> > > > > >this state, the VM would even be migratable if you make sure that the
> > > > > >pending request can't modify the image on the destination host any
> > > > > >more.
> > > > > >
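On the device-model side, the extension described above might look roughly
like this sketch; handle_timeout_error() is a hypothetical helper, not an
existing QEMU function, and real code would hook into the existing
rerror/werror (BlockErrorAction) handling rather than bypass it:

    /* Hypothetical helper called from a device model's I/O completion
     * path with the positive errno of a failed request. */
    static bool handle_timeout_error(int error)
    {
        if (error == ETIMEDOUT) {
            /* Always pause the guest, regardless of the configured
             * rerror/werror policy; the request can be retried after
             * the VM is resumed (or migrated). */
            vm_stop(RUN_STATE_IO_ERROR);
            return true;
        }
        return false;   /* fall through to the normal error policy */
    }
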
> > > > > >Do you think this could work, or did I miss something important?
> > > > > >
> > > > > >Kevin
> > > > > Could I propose an even more radical solution then?
> > > > > 
> > > > > My original approach was based on the fact that
> > > > > this code should be maintainable out-of-tree.
> > > > > If the patch is merged, this boundary condition
> > > > > could be dropped.
> > > > > 
> > > > > Why not invent a 'terror' field in BdrvOptions
> > > > > and process things in the core block layer without
> > > > > a filter? The RB tree entry would simply not be created if
> > > > > the policy is set to 'ignore'.
> > > > 
> > > > 'terror' might not be the most fortunate name... ;-)
> > > > 
> > > > The reason why I would prefer a filter driver is so the code and the
> > > > associated data structures are cleanly modularised and we can keep the
> > > > actual block layer core small and clean. The same is true for some other
> > > > functions that I would rather move out of the core into filter drivers
> > > > than add new cases (e.g. I/O throttling, backup notifiers, etc.), but
> > > > which are a bit harder to actually move because we already have old
> > > > interfaces that we can't break (we'll probably do it anyway eventually,
> > > > even if it needs a bit more compatibility code).
> > > > 
> > > > However, it seems that you are mostly touching code that is maintained
> > > > by Stefan, and Stefan used to be a bit more open to adding functionality
> > > > to the core, so my opinion might not be the last word.
> > > 
> > > I've been thinking more about the correctness of this feature:
> > > 
> > > QEMU cannot cancel I/O because there is no Linux userspace API for doing
> > > so.  Linux AIO's io_cancel(2) syscall is a nop since file systems don't
> > > implement a kiocb_cancel_fn.  Sending a signal to a task blocked in
> > > O_DIRECT preadv(2)/pwritev(2) doesn't work either because the task is in
> > > uninterruptible sleep.
> > 
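The limitation Stefan describes can be demonstrated outside QEMU with a few
lines of libaio code; this is a standalone illustration (the file name is
arbitrary, build with -laio), not QEMU code:

    #define _GNU_SOURCE            /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <libaio.h>

    int main(void)
    {
        io_context_t ctx = 0;
        struct iocb cb, *cbs[1] = { &cb };
        struct io_event ev;
        void *buf = NULL;
        int fd, ret;

        fd = open("disk.img", O_RDONLY | O_DIRECT);  /* arbitrary test file */
        if (fd < 0 || io_setup(1, &ctx) < 0 || posix_memalign(&buf, 4096, 4096)) {
            perror("setup");
            return 1;
        }

        io_prep_pread(&cb, fd, buf, 4096, 0);
        if (io_submit(ctx, 1, cbs) != 1) {
            perror("io_submit");
            return 1;
        }

        /* Without a kiocb_cancel_fn in the file system, this does not
         * cancel anything; it just fails (typically -EINVAL or -EAGAIN)
         * and the request keeps running until it completes on its own. */
        ret = io_cancel(ctx, &cb, &ev);
        printf("io_cancel returned %d\n", ret);

        io_destroy(ctx);
        return 0;
    }
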
> > There are things that work on some devices, but nothing generic.
> > For NBD/iSCSI/(ceph?) you should be able to issue a shutdown(2) on the
> > socket that connects to the server, and that should cause all existing
> > IO to fail quickly.  Then you could do a drain and be done.  This would
> > be very useful for the fault-tolerant uses (e.g. Wen Congyang's block
> > replication).
> > 
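A sketch of the socket trick mentioned above, assuming you can get at the
file descriptor of the NBD/iSCSI connection (QEMU does not expose it like
this; the function name is made up):

    #include <sys/socket.h>

    /* Force all outstanding I/O on a socket-backed connection to fail
     * quickly: after shutdown(), reads see EOF and writes fail, so the
     * client's pending requests error out instead of hanging. */
    static void abort_connection(int sockfd)
    {
        shutdown(sockfd, SHUT_RDWR);
    }
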
> > There are even ways of killing hard NFS mounts; for example adding
> > an unreachable route to the NFS server (ip route add unreachable hostname),
> > and then umount -f seems to cause I/O errors to tasks.  (I can't find
> > a way to do a remount to change the hard flag.)  This isn't pretty, but
> > it's a reasonable way of getting your host back to usable if one NFS
> > server has died.
> 
> If you just throw away a socket, you don't know the state of the disk
> since some requests may have been handled by the server and others were
> not handled.
> 
> So I doubt these approaches work because cleanly closing a connection
> requires communication between the client and server to determine that
> the connection was closed and which pending requests were completed.
> 
> The trade-off is that the client no longer has DMA buffers that might
> get written to, but now you no longer know the state of the disk!

Right, you don't know what the last successful IOs really were, but if
you know that the NBD/iSCSI/NFS server is dead and is going to need to
be rebooted/replaced anyway, then your current state is that you have
some QEMUs that are running fine except for one disk, but are now very
delicate because anything that tries to do a drain will hang.  There's no
way to recover the knowledge of which IOs completed, but you can
recover all your guests that don't critically depend on that device.

Dave
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


