Re: [Qemu-devel] [PATCH v4 04/14] pc: prepare for multi stage hotplug handlers


From: Igor Mammedov
Subject: Re: [Qemu-devel] [PATCH v4 04/14] pc: prepare for multi stage hotplug handlers
Date: Thu, 14 Jun 2018 11:16:54 +0200

On Thu, 14 Jun 2018 08:14:05 +0200
David Hildenbrand <address@hidden> wrote:

> On 14.06.2018 00:05, Michael S. Tsirkin wrote:
> > On Wed, Jun 13, 2018 at 09:37:54PM +0200, David Hildenbrand wrote:  
> >>>>>       [ ... specific to machine_foo wiring ...]
> >>>>>
> >>>>>       virtio_mem_plug() {
> >>>>>          [... some machine specific wiring ...]
> >>>>>
> >>>>>          bus_hotplug_ctrl = qdev_get_bus_hotplug_handler()
> >>>>>          bus_hotplug_ctrl->plug(bus_hotplug_ctrl, dev)
> >>>>>
> >>>>>          [... some more machine specific wiring ...]
> >>>>>       }
> >>>>>
> >>>>>       [ ... specific to machine_foo wiring ...]
> >>>>>
> >>>>> i.e. the device itself doesn't participate in attaching to external
> >>>>> entities; those entities (the machine, or the bus controller the virtio
> >>>>> device is attached to) do the wiring on their own, within their own
> >>>>> domain.  
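
Spelled out a bit more concretely, the delegation above could look roughly
like this (just a sketch: the machine_foo_* names are placeholders, and it
assumes the qdev_get_bus_hotplug_handler()/hotplug_handler_plug() helpers
from this series):

    static void machine_foo_device_plug(HotplugHandler *hotplug_dev,
                                        DeviceState *dev, Error **errp)
    {
        HotplugHandler *bus_ctrl = qdev_get_bus_hotplug_handler(dev);
        Error *local_err = NULL;

        /* ... machine_foo specific wiring ... */

        if (bus_ctrl) {
            /* let the bus hotplug handler do the bus-level part */
            hotplug_handler_plug(bus_ctrl, dev, &local_err);
            if (local_err) {
                error_propagate(errp, local_err);
                return;
            }
        }

        /* ... some more machine_foo specific wiring ... */
    }
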
> >>>>
> >>>> I am fine with this, but Michael asked if this ("virtio_mem_plug()")
> >>>> could go via new DeviceClass functions. Michael, would that also work
> >>>> for your use case?  
> >>>
> >>> It's not virtio specifically; I'm interested in how this will work for
> >>> PCI generally.  Right now we call do_pci_register_device, which
> >>> immediately makes the device guest visible.  
> >>
> >> So you're telling me that a PCI device exposes itself to the system in
> >> pci_qdev_realize() instead of letting a hotplug handler handle that? My
> >> assumption is that the PCI bridge hotplug handler handles this.  
> > 
> > Well, given how things work in QEMU, that's not exactly
> > the case. See below.
> >   
> >> What am
> >> I missing?
> >>
> >> I can see that e.g. for a virtio device the realize call chain is
> >>
> >> pci_qdev_realize() -> virtio_pci_realize() -> virtio_XXX_pci_realize() ->
> >> virtio_XXX_realize()
> >>
> >> If any realization in pci_qdev_realize() fails, we do a
> >> do_pci_unregister_device().
> >>
> >> So if what you're saying is true, then we're already exposing partially
> >> realized (and possibly unrealized again) devices via PCI. I *guess* that
> >> because we're holding the iothread mutex this is okay and not actually
> >> visible.  
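
For context, that path currently looks roughly like this (heavily simplified,
from memory; see pci_qdev_realize() in hw/pci/pci.c for the real code):

    static void pci_qdev_realize(DeviceState *qdev, Error **errp)
    {
        PCIDevice *pci_dev = PCI_DEVICE(qdev);
        PCIDeviceClass *pc = PCI_DEVICE_GET_CLASS(pci_dev);
        Error *local_err = NULL;

        /* registers the device with the bus, i.e. it becomes visible to
         * config cycles before its realize chain has even started */
        pci_dev = do_pci_register_device(pci_dev, /* ... */);
        if (!pci_dev) {
            return;
        }

        if (pc->realize) {
            pc->realize(pci_dev, &local_err);  /* e.g. virtio_pci_realize() */
            if (local_err) {
                error_propagate(errp, local_err);
                /* on failure, take it off the bus again */
                do_pci_unregister_device(pci_dev);
                return;
            }
        }
    }
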
> > 
> > For now, but we need the ability to have separate new commands for
> > realize and plug, so we will drop the mutex.  
> 
> If you want to actually drop the mutex, I assume that quite some rework
> will be necessary (not only for this specific PCI hotplug handler case),
> most probably also in other code parts (too) - all of the hotplug/realize
> code seems to rely on the mutex being locked and not being dropped
> temporarily.
yep, all monitor actions and reactions from the guest via MMIO are currently
protected by the global lock, which guarantees that no parallel actions can
be executed at the same time. So it's safe for now, and dropping the global
lock would require some refactoring (probably a lot).

> >   
> >> And we only seem to be sending events in the PCI
> >> bridge hotplug handlers, so my assumption is that this is fine.  
> > 
> > For core PCI, it's mostly just this line:
> > 
> >     bus->devices[devfn] = pci_dev;  
> 
> This should go into the hotplug handler if I am not wrong. From what I
> learned from Igor, this is a layering violation. Resource assignment should
> happen during pre_plug / plug. But then you might need a different way to
> "reserve" a function (since there could then be races with the lock
> temporarily dropped).
I agree, but that's a separate refactoring and it isn't a prerequisite for
the virtio-mem work, so it shouldn't hold it up.
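
Something along these lines, perhaps (purely hypothetical sketch, nothing
like this exists today; the pre_plug/plug split and the names are made up
for illustration, and devfn auto-assignment is ignored for brevity):

    static void pci_bus_pre_plug(HotplugHandler *hotplug_dev,
                                 DeviceState *dev, Error **errp)
    {
        PCIDevice *pci_dev = PCI_DEVICE(dev);
        PCIBus *bus = pci_get_bus(pci_dev);

        /* check/"reserve" the function early; with the BQL held this check
         * is sufficient, once the lock can be dropped a real reservation
         * mechanism would be needed here (the open question above) */
        if (bus->devices[pci_dev->devfn]) {
            error_setg(errp, "PCI function %d is already in use",
                       pci_dev->devfn);
        }
    }

    static void pci_bus_plug(HotplugHandler *hotplug_dev,
                             DeviceState *dev, Error **errp)
    {
        PCIDevice *pci_dev = PCI_DEVICE(dev);
        PCIBus *bus = pci_get_bus(pci_dev);

        /* only now make the device visible to config cycles */
        bus->devices[pci_dev->devfn] = pci_dev;
    }
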

> > which makes it accessible to pci config cycles.
> > 
> > But failover also cares about vfio, which seems to set up
> > e.g. irqfds on realize.  
Do we have a thread for failover somewhere on the list that discusses the
ideas and requirements for it, where we could continue this?
Otherwise the discussion will get buried in this unrelated thread.



