qemu-devel

Re: [Qemu-devel] [PATCH 1/1] Count used RAMBlock pages for migration_dirty_pages


From: 陈梁
Subject: Re: [Qemu-devel] [PATCH 1/1] Count used RAMBlock pages for migration_dirty_pages
Date: Fri, 21 Mar 2014 21:44:57 +0800

> From: "Dr. David Alan Gilbert" <address@hidden>
> 
> This is a fix for a bug* triggered by a migration after hot unplugging
> a few virtio-net NICs, that caused migration never to converge, because
> 'migration_dirty_pages' is incorrectly initialised.
> 
> 'migration_dirty_pages' is used as a tally of the number of outstanding
> dirty pages, to give the migration code an idea of how much more data
> will need to be transferred, and thus whether it can end the iterative
> phase.
> 
> It was initialised to the total size of the RAMBlock address space,
> however hotunplug can leave this space sparse, and hence
> migration_dirty_pages ended up too large.
> 
> Note that the code tries to be careful when counting to deal with
> RAMBlocks that share the same end/start page - I don't know
> if this is actually possible and it does complicate the code,
> but since there was other code that dealt with unaligned RAMBlocks
> it seemed possible.
> 
> Signed-off-by: Dr. David Alan Gilbert <address@hidden>
> 
> (* https://bugzilla.redhat.com/show_bug.cgi?id=1074913 )
> ---
> arch_init.c | 41 +++++++++++++++++++++++++++++++++++++----
> 1 file changed, 37 insertions(+), 4 deletions(-)
> 
> diff --git a/arch_init.c b/arch_init.c
> index f18f42e..ef0e98d 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -727,11 +727,8 @@ static void reset_ram_globals(void)
> static int ram_save_setup(QEMUFile *f, void *opaque)
> {
>     RAMBlock *block;
> -    int64_t ram_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> +    int64_t ram_bitmap_pages;
> 
> -    migration_bitmap = bitmap_new(ram_pages);
> -    bitmap_set(migration_bitmap, 0, ram_pages);
> -    migration_dirty_pages = ram_pages;
>     mig_throttle_on = false;
>     dirty_rate_high_cnt = 0;
> 
> @@ -770,6 +767,42 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>     bytes_transferred = 0;
>     reset_ram_globals();
> 
> +    ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> +    migration_bitmap = bitmap_new(ram_bitmap_pages);
> +    bitmap_set(migration_bitmap, 0, ram_bitmap_pages);
> +    /*
> +     * Count the total number of pages used by ram blocks. We clear the dirty
> +     * bit for the start/end of each ramblock as we go so that we don't double
> +     * count ramblocks that have overlapping pages - at entry the whole dirty
> +     * bitmap is set.
> +     */
> +    migration_dirty_pages = 0;
> +    QTAILQ_FOREACH(block, &ram_list.blocks, next) {
> +        uint64_t block_pages = 0;
> +        ram_addr_t saddr, eaddr;
> +
> +        saddr = block->mr->ram_addr;
> +        eaddr = saddr + block->length - 1;
> +        saddr /= TARGET_PAGE_SIZE;
> +        eaddr /= TARGET_PAGE_SIZE;
> +
> +        /* Do the end pages of the range, that might be shared with others */
> +        block_pages += test_and_clear_bit(saddr, migration_bitmap);
> +        block_pages += test_and_clear_bit(eaddr, migration_bitmap);
> +
> +        if ((saddr + 1) < eaddr) {
> +            block_pages += eaddr - (saddr + 1);
> +        }
> +        migration_dirty_pages += block_pages;
> +        /*fprintf(stderr, "ram_save_setup: %s s/e=%zx/%zx page s/e=%zx/%zx"
> +                " bp=%zu\n",
> +                block->idstr, block->mr->ram_addr,
> +                block->mr->ram_addr+block->length-1,
> +                saddr, eaddr, block_pages); */
> +    }
> +    /* and set the bits again that got used for our overlap check */
> +    bitmap_set(migration_bitmap, 0, ram_bitmap_pages);

Maybe it should be like this:
bitmap_set(migration_bitmap, 0, last_ram_offset() >> TARGET_PAGE_BITS);
because the RAMBlock address space can be sparse.
> +
>     memory_global_dirty_log_start();
>     migration_bitmap_sync();
>     qemu_mutex_unlock_iothread();
> -- 
> 1.8.5.3
> 
> 



