From: Lai Jiangshan
Subject: Re: [Qemu-devel] [PATCH] add migration capability to bypass the shared memory
Date: Fri, 13 Jan 2017 12:44:38 +0800

On Fri, Jan 13, 2017 at 3:19 AM, Jianjun Duan <address@hidden> wrote:
> I have a question about the interplay between bypassing shared memory
> during migration and memory hotplug. If a big chunk of memory is
> hotplugged into the source guest, will the shared memory still be mapped
> the same way in the guest? I.e., will the mapping from guest physical
> address to host virtual address stay the same?

I don't understand the question. The patch changes neither memory
hotplug nor the way pages are mapped into guest physical address space;
it only changes which pages the migration dirty bitmap tracks.

>
> Thanks,
> Jianjun
>
>
> On 08/29/2016 09:11 PM, Lai Jiangshan wrote:
>> On Wed, Aug 10, 2016 at 5:03 PM, Juan Quintela <address@hidden> wrote:
>>> Lai Jiangshan <address@hidden> wrote:
>>>
>>> Hi
>>>
>>> First of all, I like the patchset a lot, but I would prefer to split it
>>> up, to make it easier to find "possible" bugs along the way, especially
>>> in postcopy, but not only there.
>>
>> Hello, thanks for the review and comments.
>>
>> I tried to make the patch sane and tight.
>> I don't see any strong reason to split it; doing so would only
>> complicate the patch.
>>
>>>
>>> [very nice description of the patch]
>>>
>>> Nothing to say about the QMP and shared memory detection; it looks
>>> correct to me.
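>>>
>>> For the archive: if I read the patch right, the detection boils down
>>> to a flag test along these lines (a sketch from memory, not the exact
>>> hunk; the helper would live next to the other RAMBlock accessors):
>>>
>>>     /* Blocks created with share=on (e.g. memory-backend-file,share=on)
>>>      * carry the RAM_SHARED flag, so "is this block shared?" is just a
>>>      * flag test on the RAMBlock. */
>>>     bool qemu_ram_is_shared(RAMBlock *rb)
>>>     {
>>>         return rb->flags & RAM_SHARED;
>>>     }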
>>>
>>>> diff --git a/migration/ram.c b/migration/ram.c
>>>> index 815bc0e..880972d 100644
>>>> --- a/migration/ram.c
>>>> +++ b/migration/ram.c
>>>> @@ -605,6 +605,28 @@ static void migration_bitmap_sync_init(void)
>>>>      num_dirty_pages_period = 0;
>>>>      xbzrle_cache_miss_prev = 0;
>>>>      iterations_prev = 0;
>>>> +    migration_dirty_pages = 0;
>>>> +}
>>>> +
>>>> +static void migration_bitmap_init(unsigned long *bitmap)
>>>> +{
>>>> +    RAMBlock *block;
>>>> +
>>>> +    bitmap_clear(bitmap, 0, last_ram_offset() >> TARGET_PAGE_BITS);
>>>> +    rcu_read_lock();
>>>> +    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>>>> +        if (!migrate_bypass_shared_memory() || !qemu_ram_is_shared(block)) {
>>>> +            bitmap_set(bitmap, block->offset >> TARGET_PAGE_BITS,
>>>> +                       block->used_length >> TARGET_PAGE_BITS);
>>>> +
>>>> +            /*
>>>> +             * Count the total number of pages used by ram blocks not including
>>>> +             * any gaps due to alignment or unplugs.
>>>> +             */
>>>> +            migration_dirty_pages += block->used_length >> TARGET_PAGE_BITS;
>>>> +        }
>>>> +    }
>>>> +    rcu_read_unlock();
>>>>  }
>>>
>>> We can split this function in a different patch.
>>
>> It calls the new function migrate_bypass_shared_memory(),
>> so it is not a good idea to split it out.
>>
>>> I haven't fully searched
>>> whether we care about taking the rcu lock here.  The thing I am more
>>> interested in is knowing what happens when we don't set
>>> migration_dirty_pages to the full "possible" number of memory pages.
>>
>> I haven't tested it with postcopy; I don't know how to use postcopy.
>> From my review I can't find any obvious bugs in that area.
>>
>> I don't think there is any good reason to use migrate_bypass
>> and postcopy together. I can disable migrate_bypass
>> when postcopy==true if you want.
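>>
>> A guard in the capability-setting path would be enough, something like
>> this (illustrative sketch; the bypass capability name here is made up
>> and would be whatever the patch ends up defining):
>>
>>     /* Reject enabling both capabilities at once. */
>>     if (s->enabled_capabilities[MIGRATION_CAPABILITY_BYPASS_SHARED_MEMORY] &&
>>         s->enabled_capabilities[MIGRATION_CAPABILITY_POSTCOPY_RAM]) {
>>         error_setg(errp, "bypass-shared-memory is not compatible with "
>>                    "postcopy-ram");
>>         return;
>>     }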
>>
>>>
>>> Once here, should we check for ROM regions?
>>>
>>> BTW, couldn't we use:
>>>
>>> int qemu_ram_foreach_block(RAMBlockIterFunc func, void *opaque)
>>> {
>>>     RAMBlock *block;
>>>     int ret = 0;
>>>
>>>     rcu_read_lock();
>>>     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>>>         ret = func(block->idstr, block->host, block->offset,
>>>                    block->used_length, opaque);
>>>         if (ret) {
>>>             break;
>>>         }
>>>     }
>>>     rcu_read_unlock();
>>>     return ret;
>>> }
>>>
>>
>> The patch introduces only one new "QLIST_FOREACH_RCU(ram_list.blocks)",
>> but:
>> # git grep 'QLIST_FOREACH_RCU.*ram_list'  | wc -l
>> #       16
>>
>> I don't want to rework the iteration around qemu_ram_foreach_block()
>> and touch the other 15 places in this patch.
>> I hope someone does that cleanup after this is merged.
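>>
>> Also, as quoted above, the callback only gets idstr/host/offset/length,
>> not the RAMBlock itself, so there is no clean place for the
>> qemu_ram_is_shared() test in the existing iterator anyway. For
>> illustration, the callback half would look like this (untested sketch):
>>
>>     static int bitmap_init_func(const char *idstr, void *host,
>>                                 ram_addr_t offset, ram_addr_t length,
>>                                 void *opaque)
>>     {
>>         unsigned long *bitmap = opaque;
>>
>>         /* Mark the block's pages dirty and account for them. */
>>         bitmap_set(bitmap, offset >> TARGET_PAGE_BITS,
>>                    length >> TARGET_PAGE_BITS);
>>         migration_dirty_pages += length >> TARGET_PAGE_BITS;
>>         return 0;
>>     }
>>
>> and migration_bitmap_init() would reduce to a single
>> qemu_ram_foreach_block(bitmap_init_func, bitmap) call, with no way to
>> skip the shared blocks.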
>>
>>
>>>
>>>
>>>>
>>>>  static void migration_bitmap_sync(void)
>>>> @@ -631,7 +653,9 @@ static void migration_bitmap_sync(void)
>>>>      qemu_mutex_lock(&migration_bitmap_mutex);
>>>>      rcu_read_lock();
>>>>      QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>>>> -        migration_bitmap_sync_range(block->offset, block->used_length);
>>>> +        if (!migrate_bypass_shared_memory() || !qemu_ram_is_shared(block)) {
>>>> +            migration_bitmap_sync_range(block->offset, block->used_length);
>>>> +        }
>>>>      }
>>>>      rcu_read_unlock();
>>>>      qemu_mutex_unlock(&migration_bitmap_mutex);
>>>
>>> Oops, another place where we were not using qemu_ram_foreach_block :p
>>>
>>>
>>>> @@ -1926,19 +1950,14 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>>>>      ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
>>>>      migration_bitmap_rcu = g_new0(struct BitmapRcu, 1);
>>>>      migration_bitmap_rcu->bmap = bitmap_new(ram_bitmap_pages);
>>>> -    bitmap_set(migration_bitmap_rcu->bmap, 0, ram_bitmap_pages);
>>>> +    migration_bitmap_init(migration_bitmap_rcu->bmap);
>>>>
>>>>      if (migrate_postcopy_ram()) {
>>>>          migration_bitmap_rcu->unsentmap = bitmap_new(ram_bitmap_pages);
>>>> -        bitmap_set(migration_bitmap_rcu->unsentmap, 0, ram_bitmap_pages);
>>>> +        bitmap_copy(migration_bitmap_rcu->unsentmap,
>>>> +                    migration_bitmap_rcu->bmap, ram_bitmap_pages);
>>>>      }
>>>
>>> I think that if we go this route, we should move the whole if block
>>> inside migration_bitmap_init?
>>
>> Good! I will do it when I update the patch.
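>>
>> Roughly like this, I think (an untested sketch of the folded-in
>> version, with the unsentmap allocation moved in as well):
>>
>>     static void migration_bitmap_init(struct BitmapRcu *bmap_rcu,
>>                                       unsigned long ram_bitmap_pages)
>>     {
>>         RAMBlock *block;
>>
>>         bitmap_clear(bmap_rcu->bmap, 0, ram_bitmap_pages);
>>         rcu_read_lock();
>>         QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
>>             if (!migrate_bypass_shared_memory() ||
>>                 !qemu_ram_is_shared(block)) {
>>                 bitmap_set(bmap_rcu->bmap,
>>                            block->offset >> TARGET_PAGE_BITS,
>>                            block->used_length >> TARGET_PAGE_BITS);
>>                 migration_dirty_pages +=
>>                     block->used_length >> TARGET_PAGE_BITS;
>>             }
>>         }
>>         rcu_read_unlock();
>>
>>         /* Postcopy starts with every tracked page unsent, so the
>>          * unsent map is just a copy of the dirty map at this point. */
>>         if (migrate_postcopy_ram()) {
>>             bmap_rcu->unsentmap = bitmap_new(ram_bitmap_pages);
>>             bitmap_copy(bmap_rcu->unsentmap, bmap_rcu->bmap,
>>                         ram_bitmap_pages);
>>         }
>>     }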
>>
>> Thanks,
>> Lai
>>
>>>
>>>>
>>>> -    /*
>>>> -     * Count the total number of pages used by ram blocks not including any
>>>> -     * gaps due to alignment or unplugs.
>>>> -     */
>>>> -    migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
>>>> -
>>>>      memory_global_dirty_log_start();
>>>>      migration_bitmap_sync();
>>>>      qemu_mutex_unlock_ramlist();
>>>
>>>
>>> As said, I am very happy with the patch.  And it got much simpler than I
>>> would have expected.
>>>
>>> Thanks, Juan.
>>
>


