From: Marcelo Tosatti
Subject: Re: [Qemu-devel] [RFC PATCH v2 2/3] fine grained qemu_mutex locking for migration
Date: Tue, 2 Aug 2011 13:30:49 -0300
User-agent: Mutt/1.5.21 (2010-09-15)

On Fri, Jul 29, 2011 at 04:57:25PM -0400, Umesh Deshpande wrote:
> In the migration thread, qemu_mutex is released during the most time-consuming
> parts, i.e. during is_dup_page, which identifies uniform data pages, and during
> put_buffer. qemu_mutex is also released while blocking on select to wait for
> the descriptor to become ready for writes.
> 
> Signed-off-by: Umesh Deshpande <address@hidden>
> ---
>  arch_init.c |   14 +++++++++++---
>  migration.c |   11 +++++++----
>  2 files changed, 18 insertions(+), 7 deletions(-)
> 
> diff --git a/arch_init.c b/arch_init.c
> index 484b39d..cd545bc 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -110,7 +110,7 @@ static int is_dup_page(uint8_t *page, uint8_t ch)
>  static RAMBlock *last_block;
>  static ram_addr_t last_offset;
>  
> -static int ram_save_block(QEMUFile *f)
> +static int ram_save_block(QEMUFile *f, int stage)
>  {
>      RAMBlock *block = last_block;
>      ram_addr_t offset = last_offset;
> @@ -131,6 +131,10 @@ static int ram_save_block(QEMUFile *f)
>                                              current_addr + TARGET_PAGE_SIZE,
>                                              MIGRATION_DIRTY_FLAG);
>  
> +            if (stage != 3) {
> +                qemu_mutex_unlock_iothread();
> +            }
> +
>              p = block->host + offset;
>  
>              if (is_dup_page(p, *p)) {
> @@ -153,6 +157,10 @@ static int ram_save_block(QEMUFile *f)
>                  bytes_sent = TARGET_PAGE_SIZE;
>              }
>  
> +            if (stage != 3) {
> +                qemu_mutex_lock_iothread();
> +            }
> +

Batching multiple pages (instead of a single page per lock/unlock cycle)
is probably worthwhile.
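
For illustration only, a rough sketch of what such batching could look like. The
helper name, the BATCH_PAGES constant, and the assumption that ram_save_block()
itself no longer takes and drops the lock per page are hypothetical, not part of
the posted patch:

/* Sketch: drop the iothread lock once per batch of pages instead of once
 * per page.  Assumes the per-page unlock/lock added inside ram_save_block()
 * by this patch has been removed, and that the dirty-bitmap accesses it
 * performs are safe without the lock (which would need to be verified). */
#define BATCH_PAGES 64

static int ram_save_batch(QEMUFile *f, int stage)
{
    int bytes_sent = 0;
    int i;

    if (stage != 3) {
        qemu_mutex_unlock_iothread();   /* one unlock for the whole batch */
    }

    for (i = 0; i < BATCH_PAGES; i++) {
        int sent = ram_save_block(f, stage);  /* existing per-page logic */
        if (sent <= 0) {
            break;                      /* no more dirty pages this pass */
        }
        bytes_sent += sent;
    }

    if (stage != 3) {
        qemu_mutex_lock_iothread();     /* reacquire before touching VM state */
    }

    return bytes_sent;
}

The trade-off is latency: the iothread is blocked from taking the mutex for up
to BATCH_PAGES pages at a time, so the batch size would have to be tuned against
guest responsiveness.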
