
From: Avi Kivity
Subject: Re: [Qemu-devel] [PATCH 10/10] qdev: fix create in place obj's life cycle problem
Date: Mon, 27 Aug 2012 11:52:30 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:14.0) Gecko/20120717 Thunderbird/14.0

On 08/27/2012 11:39 AM, Jan Kiszka wrote:
> On 2012-08-27 20:20, Avi Kivity wrote:
> > On 08/27/2012 11:17 AM, Jan Kiszka wrote:
> >> On 2012-08-27 20:09, Avi Kivity wrote:
> >>> On 08/27/2012 10:14 AM, Jan Kiszka wrote:
> >>>>>
> >>>>> Deregistration is fine, the problem is destruction.
> >>>>>
> >>>>
> >>>> It isn't, as you access memory region state that can change after
> >>>> deregistration. Devices can remove memory regions from the mapping,
> >>>> alter them, and then reinsert them. The last two steps must not happen
> >>>> while anyone is still using a reference to that region.
> >>>>
> >>>
> >>> Why not?  If the guest is accessing an mmio region while it is being
> >>> reconfigured in a way that changes its meaning, either the previous or
> >>> the next meaning is valid.
> >>
> >> If the memory region owner sets the content to zero or even releases it
> >> (nothing says a memory region can only live inside a device structure),
> >> we will crash. Restricting how a memory region can be created and
> >> handled once it has been registered somewhere is an unnatural
> >> interface, waiting to cause subtle bugs.
> > 
> > Using an Object * allows the simple case to be really simple (object ==
> > device) and the hard cases to be doable.
> > 
> > What would you suggest as a better interface?
>
> To protect the life cycle of the object we manage in the memory layer:
> regions. We don't manage devices there. If there is any implementation
> benefit in having a full QOM object, then make memory regions objects.

I see and sympathise with this point of view.

The idea behind using opaque, and converting it to Object *, is that a
device often has multiple regions but no interest in managing them
separately.  Making regions the unit of management would force device
model authors to write boilerplate code just to push region management
up to device management.

I had this experience with the first iterations of the region interface,
where instead of opaque we had MemoryRegion *.  Device code would use
container_of() in the simple case, but the non-simple case caused lots
of pain.  I believe it is the natural interface, but in practice it
turned out to cause too much churn.
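
For illustration, a minimal sketch of both cases with a hypothetical
MyDevice.  The first callback uses the early MemoryRegion *-based
signature described above; the second matches the opaque-based
MemoryRegionOps read signature of that era (target_phys_addr_t and all
names here are illustrative, not code from the thread):

    /* Hypothetical device with one embedded region and an array of them. */
    typedef struct MyDevice {
        MemoryRegion mmio;       /* simple case: one embedded region */
        MemoryRegion bars[4];    /* harder case: several regions     */
        uint32_t state;
    } MyDevice;

    /* Early MemoryRegion *-based form: the simple case is trivial,
     * container_of() recovers the device from the region pointer. */
    static uint64_t my_read_mr(MemoryRegion *mr, target_phys_addr_t addr,
                               unsigned size)
    {
        MyDevice *d = container_of(mr, MyDevice, mmio);
        return d->state;
    }
    /* But for bars[] there is no single container_of() expression, so
     * each region needs its own trampoline: the boilerplate mentioned
     * above. */

    /* With opaque (and eventually Object *), one callback serves every
     * region of the device, because the same pointer is passed back
     * regardless of which region was hit. */
    static uint64_t my_read(void *opaque, target_phys_addr_t addr,
                            unsigned size)
    {
        MyDevice *d = opaque;
        return d->state;
    }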

> I simply don't like this indirection, having the memory layer pick up
> the opaque value of the region and interpret it. 

That's just an intermediate step.  The idea is that eventually opaque
will be changed to Object *.
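
A speculative sketch of where that leads in the dispatch path:
object_ref()/object_unref() and OBJECT() are real QOM helpers, but the
dispatch function below, and the assumption that opaque holds an
Object, only illustrate the plan, not current code:

    static uint64_t dispatch_mmio_read(MemoryRegion *mr,
                                       target_phys_addr_t addr,
                                       unsigned size)
    {
        /* The memory core interprets the region's opaque as the owning
         * Object and pins it for the duration of the access, so the
         * owner cannot be destroyed mid-dispatch. */
        Object *owner = OBJECT(mr->opaque);
        uint64_t val;

        object_ref(owner);
        val = mr->ops->read(mr->opaque, addr, size);
        object_unref(owner);
        return val;
    }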

> Even worse, apply
> restrictions on how the dispatched objects, the regions, have to be
> treated because of this.

Please elaborate.

> Also, using memory regions to control the locking behaviour allows for
> more fine-grained control. A device may expose certain regions with
> self-managed locking while others, less time critical ones, can still be
> handled under BQL for simplicity reasons.

You can still acquire the BQL in one callback and your local lock in
another if you choose (but that would be most confusing, IMO).
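
Concretely, something like this, reusing the hypothetical MyDevice from
above (assumed to carry a QemuMutex; qemu_mutex_lock()/unlock() and
qemu_mutex_lock_iothread() are the real primitives of that era):

    /* Time-critical region: device-local lock, no BQL. */
    static uint64_t fast_read(void *opaque, target_phys_addr_t addr,
                              unsigned size)
    {
        MyDevice *d = opaque;
        uint64_t val;

        qemu_mutex_lock(&d->lock);      /* hypothetical QemuMutex field */
        val = d->counter;
        qemu_mutex_unlock(&d->lock);
        return val;
    }

    /* Slow-path region: fall back to the global lock. */
    static uint64_t slow_read(void *opaque, target_phys_addr_t addr,
                              unsigned size)
    {
        MyDevice *d = opaque;
        uint64_t val;

        qemu_mutex_lock_iothread();
        val = d->config;
        qemu_mutex_unlock_iothread();
        return val;
    }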

> Example: regions that translate MMIO to PIO (alpha-pci.c, every user of
> isa_mmio.c, ...). If PIO dispatching runs outside of the BQL, these
> regions must not be protected by the BQL anymore. At the same time, we
> may not want to convert the device exposing the region to its own
> locking scheme (yet). And we surely don't want to take the per-device
> lock for this state-less PIO dispatching.

We're diverging, but eventually PIO dispatch will share the same code as
mmio dispatch.  PIO-to-MMIO bridges will simply be containers or
aliases; they will not call cpu_inb() and relatives.
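
In that model, a bridge like the ones above reduces to an alias into
the system I/O region.  A sketch with the signatures of that era
(map_pio_window() is made up; memory_region_init_alias(),
get_system_io() and memory_region_add_subregion() are the real calls):

    static void map_pio_window(MemoryRegion *parent,
                               target_phys_addr_t base)
    {
        MemoryRegion *alias = g_malloc0(sizeof(*alias));

        /* Expose I/O ports 0x0000-0xffff as MMIO at 'base'.  Accesses
         * are dispatched by the normal memory core; no cpu_inb()
         * trampoline, no extra lock. */
        memory_region_init_alias(alias, "pio-window", get_system_io(),
                                 0, 0x10000);
        memory_region_add_subregion(parent, base, alias);
    }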

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



