From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH v12 17/21] migration: Create ram_multifd_page
Date: Thu, 26 Apr 2018 16:18:15 +0800
User-agent: Mutt/1.9.1 (2017-09-22)

On Wed, Apr 25, 2018 at 01:27:19PM +0200, Juan Quintela wrote:
> The function still doesn't use multifd, but we have simplified
> ram_save_page: the xbzrle and RDMA stuff is gone.  We have added a new
> counter.
> 
> Signed-off-by: Juan Quintela <address@hidden>
> 
> --
> Add last_page parameter
> Add comments for done and address
> Remove multifd field, it is the same as normal pages
> Merge next patch, now we send multiple pages at a time
> Remove counter for multifd pages, it is identical to normal pages
> Use iovecs instead of creating the equivalent.
> Clear memory used by pages (dave)
> Use g_new0() (danp)
> define MULTIFD_CONTINUE
> now pages member is a pointer
> Fix off-by-one in number of pages in one packet
> Remove RAM_SAVE_FLAG_MULTIFD_PAGE
> s/multifd_pages_t/MultiFDPages_t/
> ---
>  migration/ram.c | 93 +++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 93 insertions(+)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 398cb0af3b..862ec53d32 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -54,6 +54,7 @@
>  #include "migration/block.h"
>  #include "sysemu/sysemu.h"
>  #include "qemu/uuid.h"
> +#include "qemu/iov.h"
>  
>  /***********************************************************/
>  /* ram save/restore */
> @@ -692,8 +693,65 @@ struct {
>      QemuSemaphore sem_sync;
>      /* global number of generated multifd packets */
>      uint32_t seq;
> +    /* send channels ready */
> +    QemuSemaphore channels_ready;
>  } *multifd_send_state;
>  
> +static void multifd_send_pages(void)
> +{
> +    int i;
> +    static int next_channel;
> +    MultiFDSendParams *p = NULL; /* make happy gcc */
> +    MultiFDPages_t *pages = multifd_send_state->pages;
> +
> +    qemu_sem_wait(&multifd_send_state->channels_ready);

This sem is posted when a thread has finished its work.  However, this
wait happens in the main migration thread.  With this line, are the
threads really sending things in parallel?  It looks to me that this
function (and hence the main thread) won't send the 2nd page array until
the 1st one has finished, won't send the 3rd until the 2nd has, and so
on...

Maybe I misunderstood something.  Please feel free to correct me.
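
To make the concern concrete, here is a standalone toy sketch of the
pacing I would expect from a channels_ready counting semaphore (plain
pthreads, sem_t standing in for QemuSemaphore, all names made up, so not
the real QEMU code; build with -pthread): every worker posts the
semaphore when it goes idle, so the producer only blocks when all
channels are busy rather than waiting for one particular send to finish.

#include <pthread.h>
#include <semaphore.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_CHANNELS 2
#define NUM_JOBS     6

typedef struct {
    pthread_mutex_t mutex;
    sem_t sem;                 /* producer -> worker: job (or quit) pending */
    bool pending_job;
    bool quit;
    int id, job;
} Channel;

static sem_t channels_ready;   /* counts idle channels */
static Channel ch[NUM_CHANNELS];

static void *worker(void *arg)
{
    Channel *c = arg;

    for (;;) {
        sem_wait(&c->sem);
        pthread_mutex_lock(&c->mutex);
        if (c->quit) {
            pthread_mutex_unlock(&c->mutex);
            return NULL;
        }
        printf("channel %d sending job %d\n", c->id, c->job);
        usleep(10 * 1000);              /* pretend to push a page array */
        c->pending_job = false;
        pthread_mutex_unlock(&c->mutex);
        sem_post(&channels_ready);      /* advertise: idle again */
    }
}

/* Producer side, shaped like multifd_send_pages(). */
static void send_one(int job)
{
    static int next_channel;
    int i;

    sem_wait(&channels_ready);        /* block only if all channels are busy */
    for (i = next_channel;; i = (i + 1) % NUM_CHANNELS) {
        Channel *c = &ch[i];

        pthread_mutex_lock(&c->mutex);
        if (!c->pending_job) {
            c->pending_job = true;
            c->job = job;
            next_channel = (i + 1) % NUM_CHANNELS;
            pthread_mutex_unlock(&c->mutex);
            sem_post(&c->sem);
            return;
        }
        pthread_mutex_unlock(&c->mutex);
    }
}

int main(void)
{
    pthread_t th[NUM_CHANNELS];
    int i;

    sem_init(&channels_ready, 0, NUM_CHANNELS); /* every channel starts idle */
    for (i = 0; i < NUM_CHANNELS; i++) {
        ch[i].id = i;
        pthread_mutex_init(&ch[i].mutex, NULL);
        sem_init(&ch[i].sem, 0, 0);
        pthread_create(&th[i], NULL, worker, &ch[i]);
    }
    for (i = 0; i < NUM_JOBS; i++) {
        send_one(i);
    }
    /* These extra waits can only all succeed once every job has finished. */
    for (i = 0; i < NUM_CHANNELS; i++) {
        sem_wait(&channels_ready);
    }
    for (i = 0; i < NUM_CHANNELS; i++) {
        ch[i].quit = true;              /* ordered by the sem_post below */
        sem_post(&ch[i].sem);
        pthread_join(th[i], NULL);
    }
    return 0;
}

If the real channels_ready behaves like this counting semaphore (one
post per finished job, per channel), the sends should overlap; if it
effectively acts as a binary flag, they would serialize in the way I
described above.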

> +    for (i = next_channel;; i = (i + 1) % migrate_multifd_channels()) {
> +        p = &multifd_send_state->params[i];
> +
> +        qemu_mutex_lock(&p->mutex);
> +        if (!p->pending_job) {
> +            p->pending_job++;
> +            next_channel = (i + 1) % migrate_multifd_channels();
> +            break;
> +        }
> +        qemu_mutex_unlock(&p->mutex);
> +    }
> +    p->pages->used = 0;
> +    multifd_send_state->seq++;
> +    p->seq = multifd_send_state->seq;
> +    p->pages->block = NULL;
> +    multifd_send_state->pages = p->pages;
> +    p->pages = pages;

Here we directly replace MultiFDSendParams.pages with
multifd_send_state->pages.  Are we then always using a single
MultiFDPages_t struct?  And if so, will all of the initial
MultiFDSendParams.pages memory be leaked without ever being freed?

> +    qemu_mutex_unlock(&p->mutex);
> +    qemu_sem_post(&p->sem);
> +}
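
To make sure we are reading the hunk the same way, here is a trimmed-down
sketch of the pointer exchange I see in it (hypothetical names and types,
not the real structs): the producer hands over the array it has just
filled and takes the channel's previous array back as the next staging
buffer.

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    void *block;               /* stands in for the RAMBlock pointer */
    int used;                  /* pages queued in this array */
} PagesArray;

typedef struct {
    PagesArray *pages;         /* array currently owned by this channel */
} SendChannel;

static struct {
    PagesArray *pages;         /* staging array the producer is filling */
} send_state;

/* The exchange done at the end of the hunk above, stripped to the bone. */
static void hand_over(SendChannel *p)
{
    PagesArray *filled = send_state.pages;

    p->pages->used = 0;            /* recycle the channel's previous array */
    p->pages->block = NULL;
    send_state.pages = p->pages;   /* producer will fill this one next */
    p->pages = filled;             /* channel now owns the filled array */
}

int main(void)
{
    SendChannel chan = { .pages = calloc(1, sizeof(PagesArray)) };
    PagesArray *a, *b;

    send_state.pages = calloc(1, sizeof(PagesArray));
    a = send_state.pages;
    b = chan.pages;

    send_state.pages->used = 3;    /* pretend three pages were queued */
    hand_over(&chan);

    /* The two arrays traded owners; both are still reachable here. */
    assert(chan.pages == a && send_state.pages == b);
    printf("channel sends %d pages, producer refills the other array\n",
           chan.pages->used);

    free(a);
    free(b);
    return 0;
}

In this toy version the two arrays only trade owners; whether the same
holds for every MultiFDSendParams.pages array allocated at setup time is
exactly what I am asking above.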

Thanks,

-- 
Peter Xu


