From: Ryan Harper
Subject: Re: [Qemu-devel] [PATCH 0/3] v4 Decouple block device removal from device removal
Date: Tue, 2 Nov 2010 09:22:01 -0500
User-agent: Mutt/1.5.6+20040907i

* Michael S. Tsirkin <address@hidden> [2010-11-02 08:59]:
> On Tue, Nov 02, 2010 at 08:46:22AM -0500, Ryan Harper wrote:
> > * Markus Armbruster <address@hidden> [2010-11-02 04:40]:

> > > >> >> I'd like to have some consistency among net, block and char device
> > > >> >> commands, i.e. a common set of operations that work the same for all of
> > > >> >> them.  Can we agree on such a set?
> > > >> >
> > > >> > Yeah; the current trouble (or at least what I perceive to be trouble) is
> > > >> > that in the case where the guest responds to a device_del-induced ACPI
> > > >> > removal event, the current qdev code already does the host-side device
> > > >> > tear down.  Not sure if it is OK to do a blockdev_del() immediately
> > > >> > after the device_del.  What happens when we do:
> > > >> >
> > > >> > device_del
> > > >> > ACPI to guest
> > > >> > blockdev_del /* removes host-side device */
> > > >> 
> > > >> Fails in my tree, because the blockdev's still in use.  See below.
> > > >> 
> > > >> > guest responds to ACPI
> > > >> > qdev calls pci device removal code
> > > >> > qemu attempts to destroy the associated host-side block
> > > >> >
> > > >> > That may just work today; and if not, it shouldn't be hard to fix up
> > > >> > the code to check for NULLs
> > > >> 
> > > >> I hate the automatic deletion of host part along with the guest part.
> > > >> device_del should undo device_add.  {block,net,char}dev_{add,del} should
> > > >> be similarly paired.
> > > >
> > > > Agreed.
> > > >> 
> > > >> In my blockdev branch, I keep the automatic delete only for backwards
> > > >> compatibility: if you create the drive with drive_add, it gets
> > > >> auto-deleted, but if you use blockdev_add, it stays around.
> > > >
> > > > But what to do about the case where we're doing drive_add and then a
> > > > device_del()?  That's the urgent situation that needs to be resolved.
> > > 
> > > What's the exact problem we need to solve urgently?
> > > 
> > > Is it "provide means to cut the connection to the host part immediately,
> > > even with an uncooperative guest"?
> > 
> > Yes, we need to ensure that once the mgmt layer (libvirt) has done what it
> > believes should have disassociated the host block device from the guest,
> > the host block device really is no longer accessible from the guest.
> > 
> > > 
> > > Does this need to be separate from device_del?
> > 
> > no, it doesn't have to be.  Honestly, I didn't see a clear way to do
> > something like unplug early in the device_del because that's all pci
> > device code which has no knowledge of host block devices; having it
> > disconnect seemed like a layering violation.
> 
> We invoke the cleanup callback, isn't that enough?

Won't that look a bit strange?  On device_del we'd call the cleanup callback
first, then notify the guest; when the guest responds, the callback gets
invoked again.  I suppose that would work as long as the cleanup callback can
handle being called a second time.
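
A rough C sketch of the "called a second time" idea (the names here are made
up for illustration; this is not actual qdev code):

    /* Rough sketch only -- all names are hypothetical.  The idea: the
     * cleanup callback records that it already ran, so calling it once
     * when management forces the teardown and again when the guest
     * finally acks the removal is harmless. */
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct MyBlockState {
        void *host_block;       /* handle to the host-side block device */
        bool  cleaned_up;       /* set after the first cleanup call */
    } MyBlockState;

    static void my_block_cleanup(MyBlockState *s)
    {
        if (s->cleaned_up) {
            return;             /* second invocation: nothing left to do */
        }
        if (s->host_block) {
            /* release the host-side block device here */
            s->host_block = NULL;   /* later code must tolerate NULL */
        }
        s->cleaned_up = true;
    }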

I like the idea of a disconnect: if part of the device_del method were to
invoke a disconnect method, we could implement that for block, net, etc.
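
A minimal sketch of what such a hook could look like, with hypothetical names
rather than anything from the qdev tree:

    /* Hypothetical sketch, not actual qdev code: each backend type
     * supplies a disconnect hook, so the generic hot-unplug path can cut
     * the guest's access to the host resource without knowing any
     * block/net/char details. */
    typedef struct HostBackendOps {
        void (*disconnect)(void *dev);  /* sever the link to the host backend */
    } HostBackendOps;

    /* what a block backend's hook might do (illustrative only) */
    static void block_backend_disconnect(void *dev)
    {
        /* detach the host block device from the emulated device; the
         * host-side object itself stays around for an explicit delete */
        (void)dev;
    }

    static const HostBackendOps block_backend_ops = {
        .disconnect = block_backend_disconnect,
    };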

I'd think we'd want to send the notification, then disconnect.  I'm
struggling with whether it's worth having some reasonable timeout between
notification and disconnect.
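
Purely as an illustration of that ordering (the grace period and every helper
name below are assumptions, not anything from the patches):

    /* Illustrative only: notify the guest first, then force the
     * disconnect if it has not acked the unplug within a grace period.
     * All names and the timeout value are made up; a real version would
     * use QEMU's timer infrastructure. */
    #include <stdbool.h>

    #define UNPLUG_GRACE_MS 5000        /* assumed grace period */

    typedef struct UnplugRequest {
        bool guest_acked;               /* set when the guest completes unplug */
    } UnplugRequest;

    static void force_disconnect_cb(void *opaque)
    {
        UnplugRequest *req = opaque;

        if (req->guest_acked) {
            return;                     /* guest already did the clean path */
        }
        /* call the backend disconnect hook here */
    }

    static void device_del_with_grace(UnplugRequest *req)
    {
        /* 1. raise the ACPI removal event toward the guest (not shown) */
        /* 2. arm a one-shot timer that runs force_disconnect_cb(req)
         *    after UNPLUG_GRACE_MS if the guest never responds (not shown) */
        (void)req;
        (void)force_disconnect_cb;      /* silence unused warning in sketch */
    }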




-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
address@hidden


