From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH v6 01/13] block: Add op blocker type "device IO"
Date: Thu, 28 May 2015 13:24:39 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On 28/05/2015 13:00, Paolo Bonzini wrote:
> On 28/05/2015 12:55, Fam Zheng wrote:
> > > Indeed. blk_pause/resume would handle everything in one central place
> > > in the block layer instead of spreading the logic across all the block
> > > layer users.
> >
> > Sorry, I'm confused. Do you mean there is a way to implement blk_pause
> > completely in the block layer, without the need for various notifier
> > handlers in the device models?
> 
> How would you do that? Do you have to keep a queue of pending requests
> in the BlockBackend? Since bdrv_drain_all may never return (e.g. a stuck
> NFS connection with nfs=hard), the guest can force unbounded allocation
> in the host, which is bad.

We already queue requests for things like I/O throttling or
serialisation. Why would this be any worse?
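
To make the queue-while-paused idea concrete, here is a minimal
standalone sketch. The names (blk_pause, blk_resume, blk_submit,
BlockBackend, Request) echo the discussion but are illustrative only,
not actual QEMU code; note that the queue grows without bound while
paused, which is exactly the allocation concern raised above.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-ins, not QEMU's real structures. */
typedef struct Request {
    int id;
    struct Request *next;
} Request;

typedef struct BlockBackend {
    bool paused;            /* set by blk_pause(), cleared by blk_resume() */
    Request *queued;        /* FIFO of requests held back while paused */
    Request **queued_tail;
} BlockBackend;

static void submit_to_bds(BlockBackend *blk, Request *req)
{
    (void)blk;
    /* In QEMU this would go down into the BDS; here we just log. */
    printf("request %d submitted\n", req->id);
    free(req);
}

/* Single entry point for all I/O through the backend. */
static void blk_submit(BlockBackend *blk, Request *req)
{
    if (blk->paused) {
        /* Caution: this queue is unbounded; a guest that keeps
         * submitting while the drain hangs makes the host allocate
         * without limit. */
        req->next = NULL;
        *blk->queued_tail = req;
        blk->queued_tail = &req->next;
        return;
    }
    submit_to_bds(blk, req);
}

static void blk_pause(BlockBackend *blk)
{
    blk->paused = true;
}

static void blk_resume(BlockBackend *blk)
{
    blk->paused = false;
    /* Flush everything that arrived while paused, in order. */
    Request *req = blk->queued;
    blk->queued = NULL;
    blk->queued_tail = &blk->queued;
    while (req) {
        Request *next = req->next;
        blk_submit(blk, req);
        req = next;
    }
}

int main(void)
{
    BlockBackend blk = { false, NULL, NULL };
    blk.queued_tail = &blk.queued;

    Request *r1 = malloc(sizeof(*r1));
    r1->id = 1;
    blk_submit(&blk, r1);       /* passes straight through */

    blk_pause(&blk);
    Request *r2 = malloc(sizeof(*r2));
    r2->id = 2;
    blk_submit(&blk, r2);       /* held in the queue */

    blk_resume(&blk);           /* flushes request 2 */
    return 0;
}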

> In addition, the BDS doesn't have a list of BlockBackends attached to
> it.  So you need the BlockBackends to register themselves for
> pause/resume in some way, for example with a notifier list.
> 
> Then it's irrelevant whether it's the device model or the BB that
> attaches itself to the notifier list. You can start by doing it in
> the device models (those that use ioeventfd), and later it can be moved
> to the BB.  The low-level implementation remains the same.
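
For the registration side, here is a standalone sketch of the
notifier-list idea, loosely modeled on QEMU's Notifier/NotifierList
pattern; BDSStub, the callback names and the device name are made up
for illustration.

#include <stdio.h>

typedef struct Notifier Notifier;
struct Notifier {
    void (*notify)(Notifier *n, void *data);
    void *opaque;           /* back-pointer for the registering object */
    Notifier *next;
};

typedef struct {
    Notifier *head;
} NotifierList;

static void notifier_list_add(NotifierList *list, Notifier *n)
{
    n->next = list->head;
    list->head = n;
}

static void notifier_list_notify(NotifierList *list, void *data)
{
    for (Notifier *n = list->head; n; n = n->next) {
        n->notify(n, data);
    }
}

/* Hypothetical BDS-side lists; whoever sits on top registers here. */
typedef struct {
    NotifierList pause_notifiers;
    NotifierList resume_notifiers;
} BDSStub;

static void device_pause_cb(Notifier *n, void *data)
{
    (void)data;
    printf("%s: drain starting, stop submitting I/O\n",
           (const char *)n->opaque);
}

static void device_resume_cb(Notifier *n, void *data)
{
    (void)data;
    printf("%s: drain finished, resume I/O\n", (const char *)n->opaque);
}

int main(void)
{
    BDSStub bds = { { NULL }, { NULL } };

    /* A device model (today) or the BlockBackend (later) registers
     * itself; the low-level mechanism is the same either way. */
    Notifier pause_n  = { device_pause_cb,  (void *)"virtio-blk-0", NULL };
    Notifier resume_n = { device_resume_cb, (void *)"virtio-blk-0", NULL };
    notifier_list_add(&bds.pause_notifiers, &pause_n);
    notifier_list_add(&bds.resume_notifiers, &resume_n);

    notifier_list_notify(&bds.pause_notifiers, NULL);   /* pause all users */
    notifier_list_notify(&bds.resume_notifiers, NULL);  /* resume them */
    return 0;
}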

The reason for doing it in the block layer is that it's in one place and
we can be sure that it's applied. We can still, in addition, modify
specific users to avoid even trying to send requests, but I think it's
good to have a single place that always ensures the drain works
correctly, instead of making that dependent on each user.

Kevin


