From: liu ping fan
Subject: Re: [Qemu-devel] [PATCH 10/10] qdev: fix create in place obj's life cycle problem
Date: Mon, 3 Sep 2012 15:44:47 +0800

On Sat, Sep 1, 2012 at 4:46 PM, Avi Kivity <address@hidden> wrote:
> On 08/30/2012 12:08 AM, Jan Kiszka wrote:
>> >>>
>> >>> We are dispatching according to the memory region (parameters, op
>> >>> handlers, opaques). Whether we end up in a device object is not decided
>> >>> at this level. A memory region describes a dispatchable area - not to be
>> >>> confused with a device that may only partially be able to receive such
>> >>> requests.
>> >>
>> > But I think the purpose of a memory region is dispatching. If no
>> > dispatching is associated with an mr, why does it need to exist in the system?
>>
>> Where did I say that memory regions should no longer be used for
>> dispatching? The point is to keep the clean layer separation between
>> memory regions and device objects instead of merging them together.
>
> Believe me, that's exactly how I feel.  I guess no author of a module
> wants to see it swallowed by a larger framework and see its design
> completely changed.  Modules should be decoupled as much as possible.
> But I don't see a better way yet.
>
>>
>> Given
>>
>>        Object
>>        ^    ^
>>        |    |
>> Region 1    Region 2
>>
>> you protect the object during dispatch, implicitly (and that is bad)
>> requiring that no region may change in that period.
>
> We only protect the object against removal.  After object_ref() has run,
> the region may still be disabled.
>
>>  I say what rather
>> needs protection is the regions, so that Region 2 can pass away and
>> maybe reappear independently of Region 1.
>
> Region 2 can go away until the device-supplied dispatch code takes the
> device lock.  If the device chooses to use finer-grained locking, it can
> allow region 2 (or even region 1) to change during dispatch.  The only
> requirement is that region 1 is not destroyed.
>
>>  And: in this model I won't need to know the
>> type of the object the regions are referring to. That's
>> the difference.
>
> Sorry, I lost the reference.  What model?  Are you referring to my
> broken MemoryRegionImpl split?
>
>> >  And
>> > could you elaborate on who will be the ref holder of the mr?
>>
>> The memory subsystem, while running a memory region handler. Whether that
>> will be a reference counter or a per-region lock, as Avi suggested, we
>> still need to find out.
>>
>
> If we make the refcount/lock internal to the region, we must remove the
> opaque, since the region won't protect it.
>
> Replacing the opaque with container_of(mr) doesn't help, since we can't
> refcount mr, only mr->impl.
>
I think you mean that if we use MemoryRegionImpl, then at this level we
had better not touch the refcount of container_of(mr) and should instead
work with mr->impl->refcnt. Right?
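Something like the following is how I picture that split (just a rough
sketch to check my understanding -- MemoryRegionImpl does not exist
yet, so all the field names here are invented):

    /* Rough sketch of the proposed split; all names are invented. */
    typedef struct MemoryRegionOps MemoryRegionOps;

    typedef struct MemoryRegionImpl {
        const MemoryRegionOps *ops;
        void *opaque;   /* no longer protected by the region itself */
        int refcnt;     /* lifetime lives here, not in the device */
    } MemoryRegionImpl;

    typedef struct MemoryRegion {
        MemoryRegionImpl *impl;  /* stays alive while dispatch holds a ref */
        /* geometry: offset, size, priority, ... */
    } MemoryRegion;

    static void memory_region_impl_ref(MemoryRegion *mr)
    {
        /* dispatch would pin mr->impl only; container_of(mr), i.e.
         * the device, is deliberately left alone at this level */
        __sync_fetch_and_add(&mr->impl->refcnt, 1);
    }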

> We could externalize the refcounting and push it into device code.  This
> means:
>
>    memory_region_init_io(&s->mem, dev)
>
>    ...
>
>    object_ref(dev)
>    memory_region_add_subregion(..., &s->mem)
>
>    ...
>
>    memory_region_del_subregion(..., &s->mem)  // implied flush
>    object_unref(dev)
>
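If I read that right, fleshed out at the device level it would look
roughly like this (MyDeviceState, the 0x1000 offset and the map/unmap
helpers are all made up for illustration):

    /* Sketch of the externalized refcounting, as I understand the
     * proposal above; MyDeviceState is an invented example device. */
    typedef struct MyDeviceState {
        DeviceState parent;
        MemoryRegion mem;
    } MyDeviceState;

    static void my_device_map(MyDeviceState *s, MemoryRegion *parent_mr)
    {
        /* pin the device for as long as its region is visible */
        object_ref(OBJECT(s));
        memory_region_add_subregion(parent_mr, 0x1000, &s->mem);
    }

    static void my_device_unmap(MyDeviceState *s, MemoryRegion *parent_mr)
    {
        /* implied flush: after this returns, no dispatcher may still
         * be running inside s->mem */
        memory_region_del_subregion(parent_mr, &s->mem);
        object_unref(OBJECT(s));
    }
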
I think "object_ref(dev)" just another style to push
MemoryRegionOps::object() to device level.  And I think we can bypass
it.   The caller (unplug, pci-reconfig ) of
memory_region_del_subregion() ensure the @dev is valid.
If the implied flush is implemented in synchronize,  _del_subregion()
will guarantee no reader for @dev->mr and @dev exist any longer. So I
think we can save both object_ref/unref(dev) for memory system.
The only problem is that whether we can implement it as synchronous or
not,  is it possible that we can launch a  _del_subregion(mr-X) in
mr-X's dispatcher?
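Concretely, the case I worry about looks like this (a hypothetical
handler just to show the shape of the problem; s->parent_mr is an
invented field):

    /* Hypothetical MMIO write handler: if the implied flush in
     * _del_subregion() waits for all dispatchers of the region to
     * drain, and this handler *is* such a dispatcher, a synchronous
     * flush would wait for itself -> deadlock. */
    static void my_reconfig_write(void *opaque, target_phys_addr_t addr,
                                  uint64_t val, unsigned size)
    {
        MyDeviceState *s = opaque;

        /* we are inside a dispatch on s->mem right now... */
        memory_region_del_subregion(s->parent_mr, &s->mem);
        /* ...so a synchronous flush here could never complete */
    }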

Regards,
pingfan

> er, this must be wrong, since it's so simple
>
> --
> I have a truly marvellous patch that fixes the bug which this
> signature is too narrow to contain.
>


