Re: [Qemu-devel] [PATCH 1/1] Count used RAMBlock pages for migration_dirty_pages


From: Juan Quintela
Subject: Re: [Qemu-devel] [PATCH 1/1] Count used RAMBlock pages for migration_dirty_pages
Date: Fri, 21 Mar 2014 14:11:54 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.3 (gnu/linux)

"Dr. David Alan Gilbert (git)" <address@hidden> wrote:
> From: "Dr. David Alan Gilbert" <address@hidden>
>
> This is a fix for a bug* triggered by a migration after hot unplugging
> a few virtio-net NICs, that caused migration never to converge, because
> 'migration_dirty_pages' is incorrectly initialised.

Good catch.

> 'migration_dirty_pages' is used as a tally of the number of outstanding
> dirty pages, to give the migration code an idea of how much more data
> will need to be transferred, and thus whether it can end the iterative
> phase.
>
> It was initialised to the total size of the RAMBlock address space,
> however hotunplug can leave this space sparse, and hence
> migration_dirty_pages ended up too large.
>
> Note that the code tries to be careful when counting to deal with
> RAMBlocks that share the same end/start page - I don't know
> if this is actually possible and it does complicate the code,
> but since there was other code that dealt with unaligned RAMBlocks
> it seemed possible.

Couldn't we just check at block addition that they don't overlap?

What code do you mean?

My understanding is that the "normal" way of creating new RAMBlocks is
with qemu_ram_alloc_from_ptr(), and my reading is that blocks never
overlap.  (Important words of the sentence: "my reading".)

>
> Signed-off-by: Dr. David Alan Gilbert <address@hidden>
>
> (* https://bugzilla.redhat.com/show_bug.cgi?id=1074913 )
> ---
>  arch_init.c | 41 +++++++++++++++++++++++++++++++++++++----
>  1 file changed, 37 insertions(+), 4 deletions(-)
>
> diff --git a/arch_init.c b/arch_init.c
> index f18f42e..ef0e98d 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -727,11 +727,8 @@ static void reset_ram_globals(void)
>  static int ram_save_setup(QEMUFile *f, void *opaque)
>  {
>      RAMBlock *block;
> -    int64_t ram_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> +    int64_t ram_bitmap_pages;
>  
> -    migration_bitmap = bitmap_new(ram_pages);
> -    bitmap_set(migration_bitmap, 0, ram_pages);
> -    migration_dirty_pages = ram_pages;
>      mig_throttle_on = false;
>      dirty_rate_high_cnt = 0;
>  
> @@ -770,6 +767,42 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>      bytes_transferred = 0;
>      reset_ram_globals();
>  
> +    ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> +    migration_bitmap = bitmap_new(ram_bitmap_pages);
> +    bitmap_set(migration_bitmap, 0, ram_bitmap_pages);
> +    /*
> +     * Count the total number of pages used by ram blocks. We clear the dirty
> +     * bit for the start/end of each ramblock as we go so that we don't double
> +     * count ramblocks that have overlapping pages - at entry the whole dirty
> +     * bitmap is set.
> +     */
> +    migration_dirty_pages = 0;
> +    QTAILQ_FOREACH(block, &ram_list.blocks, next) {
> +        uint64_t block_pages = 0;
> +        ram_addr_t saddr, eaddr;
> +
> +        saddr = block->mr->ram_addr;
> +        eaddr = saddr + block->length - 1;

If my assumption is true: block->length-1 / TARGET_PAGE_SIZE (rounded
up) should be enough, no?

Reason for this is that migration bitmap handling is already slow, and
we are adding two whole passes here?

Later, Juan.


