

From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH v2 0/6] proposal to make hostmem listener RAM unplug safe
Date: Mon, 06 May 2013 08:17:55 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130311 Thunderbird/17.0.4

On 06/05/2013 03:42, liu ping fan wrote:
> On Sat, May 4, 2013 at 5:53 PM, Paolo Bonzini <address@hidden> wrote:
>> On 03/05/2013 04:45, Liu Ping Fan wrote:
>>> v1->v2:
>>>   1. Split the RCU-prepared-style update and the monitoring of the
>>> RAM-Device refcnt into two patches (patches 2 and 4)
>>>   2. Introduce AddrSpaceMem, which is similar to HostMem but is based
>>> on an address space, whereas the original HostMem only serves the
>>> system memory address space
>> This looks suspiciously similar to FlatView, doesn't it?
> FlatView is used by all the listeners, including MMIO dispatch, which
> maps from hwaddr to DeviceState in order to dispatch accesses.
> Here, by contrast, we map from hwaddr to hva.
>> Perhaps the right thing to do is to add the appropriate locking and
>> RCU-style updating to address_space_update_topology and
> The RCU implementation is tied to the data structure, and each listener
> has its own local table, so I think it is more reasonable to implement
> them separately.

I mentioned address_space_update_topology simply because it is where the
FlatView is replaced.

>> memory_region_find.   (And replacing flatview_destroy with ref/unref
>> similar to HostMem in your patch 2).  Then just switch dataplane to use
>> memory_region_find...
> In fact, I think the HostMem listener could be a substitute for
> cpu_physical_memory_map(); the main issue would be migration support.
> But before attempting big patches, I would like to get this smaller
> and simpler one in first.

I think replacing HostMem with FlatView is a smaller patch than these
ones.  I'll try to make a prototype.

