From: Amit Shah
Subject: Re: [Qemu-devel] [PATCH v2 2/9] migration: Fix a potential issue
Date: Fri, 10 Jun 2016 19:09:24 +0530

On (Thu) 05 May 2016 [15:32:52], Liang Li wrote:
> At the end of live migration and before vm_start() on the destination
> side, we should make sure all the decompression tasks are finished; if
> this cannot be guaranteed, the VM may see incorrect memory data, or the
> updated memory may be overwritten by the decompression thread.
> Add the code to fix this potential issue.
> 
> Suggested-by: David Alan Gilbert <address@hidden>
> Suggested-by: Juan Quintela <address@hidden>
> Signed-off-by: Liang Li <address@hidden>
> ---
>  migration/ram.c | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 7ab6ab5..8a59a08 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2220,6 +2220,24 @@ static void *do_data_decompress(void *opaque)
>      return NULL;
>  }
>  
> +static void wait_for_decompress_done(void)
> +{
> +    int idx, thread_count;
> +
> +    if (!migrate_use_compression()) {
> +        return;
> +    }
> +
> +    thread_count = migrate_decompress_threads();
> +    qemu_mutex_lock(&decomp_done_lock);
> +    for (idx = 0; idx < thread_count; idx++) {
> +        while (!decomp_param[idx].done) {
> +            qemu_cond_wait(&decomp_done_cond, &decomp_done_lock);
> +        }
> +    }
> +    qemu_mutex_unlock(&decomp_done_lock);

Not sure how this works: in the previous patch, done is set to false
under decomp_done_lock when work is queued, and the decompression
thread sets it back to true under the same lock.  Here, we take the
lock, and wait for done to turn true.  That can't happen because this
thread holds the lock.  My reading is this is going to lead to a
deadlock.  What am I missing?
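
For reference, a minimal standalone sketch of the handshake in question,
with plain pthreads standing in for qemu_mutex_*/qemu_cond_* (which are
thin wrappers around them).  The names mirror the patch, but this is an
illustration under those assumptions, not the QEMU code itself:

/*
 * Hypothetical sketch: one worker flips 'done' under decomp_done_lock,
 * the waiter blocks on decomp_done_cond until 'done' becomes true.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t decomp_done_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  decomp_done_cond = PTHREAD_COND_INITIALIZER;
static bool done;

/* Worker: does its work, then sets 'done' under the lock and signals. */
static void *decompress_worker(void *opaque)
{
    sleep(1);                      /* stand-in for the actual decompression */
    pthread_mutex_lock(&decomp_done_lock);
    done = true;
    pthread_cond_signal(&decomp_done_cond);
    pthread_mutex_unlock(&decomp_done_lock);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, decompress_worker, NULL);

    /* Waiter: same shape as wait_for_decompress_done(). */
    pthread_mutex_lock(&decomp_done_lock);
    while (!done) {
        /*
         * pthread_cond_wait() atomically releases the mutex while it
         * sleeps and re-acquires it before returning, so the worker can
         * still take the lock and set done = true.
         */
        pthread_cond_wait(&decomp_done_cond, &decomp_done_lock);
    }
    pthread_mutex_unlock(&decomp_done_lock);

    printf("decompression finished\n");
    pthread_join(tid, NULL);
    return 0;
}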


                Amit


