Re: [Qemu-devel] [PATCH v9 11/12] migration: Flush receive queue
From: Juan Quintela
Subject: Re: [Qemu-devel] [PATCH v9 11/12] migration: Flush receive queue
Date: Mon, 11 Dec 2017 10:40:49 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.3 (gnu/linux)
"Dr. David Alan Gilbert" <address@hidden> wrote:
> * Juan Quintela (address@hidden) wrote:
>> +/* We are getting low on pages flags, so we start using combinations
>> + When we need to flush a page, we sent it as
>> + RAM_SAVE_FLAG_MULTIFD_PAGE | RAM_SAVE_FLAG_COMPRESS_PAGE
>> + We don't allow that combination
>> +*/
>> +#define RAM_SAVE_FLAG_MULTIFD_SYNC \
>> + (RAM_SAVE_FLAG_MULTIFD_PAGE | RAM_SAVE_FLAG_ZERO)
>
> Good, that's better than last time; note you're using FLAG_ZERO where
> the comment says COMPRESS_PAGE and the commit message says COMPRESSED.
Fixed.
>
>> +
>> static inline bool is_zero_range(uint8_t *p, uint64_t size)
>> {
>> return buffer_is_zero(p, size);
>> @@ -194,6 +202,9 @@ struct RAMState {
>> uint64_t iterations_prev;
>> /* Iterations since start */
>> uint64_t iterations;
>> + /* Indicates that we have synced the bitmap and need to ensure
>> + that the target has processed all previous pages */
>> + bool multifd_needs_flush;
>> /* number of dirty bits in the bitmap */
>> uint64_t migration_dirty_pages;
>> /* protects modification of the bitmap */
>> @@ -614,9 +625,11 @@ struct MultiFDRecvParams {
>> QIOChannel *c;
>> QemuSemaphore ready;
>> QemuSemaphore sem;
>> + QemuCond cond_sync;
>> QemuMutex mutex;
>> + /* protected by param mutex */
>> bool quit;
>> + bool sync;
>> multifd_pages_t pages;
>> bool done;
>> };
>> @@ -669,6 +682,7 @@ int multifd_load_cleanup(Error **errp)
>> qemu_thread_join(&p->thread);
>> qemu_mutex_destroy(&p->mutex);
>> qemu_sem_destroy(&p->sem);
>> + qemu_cond_destroy(&p->cond_sync);
>> socket_recv_channel_destroy(p->c);
>> g_free(p->name);
>> p->name = NULL;
>> @@ -707,6 +721,10 @@ static void *multifd_recv_thread(void *opaque)
>> return NULL;
>> }
>> p->done = true;
>> + if (p->sync) {
>> + qemu_cond_signal(&p->cond_sync);
>> + p->sync = false;
>> + }
>> qemu_mutex_unlock(&p->mutex);
>> qemu_sem_post(&p->ready);
>> continue;
>> @@ -752,9 +770,11 @@ void multifd_new_channel(QIOChannel *ioc)
>> qemu_mutex_init(&p->mutex);
>> qemu_sem_init(&p->sem, 0);
>> qemu_sem_init(&p->ready, 0);
>> + qemu_cond_init(&p->cond_sync);
>> p->quit = false;
>> p->id = msg.id;
>> p->done = false;
>> + p->sync = false;
>> multifd_init_pages(&p->pages);
>> p->c = ioc;
>> multifd_recv_state->count++;
>> @@ -819,6 +839,27 @@ static void multifd_recv_page(uint8_t *address,
>> uint16_t fd_num)
>> qemu_sem_post(&p->sem);
>> }
>>
>> +static int multifd_flush(void)
>> +{
>> + int i, thread_count;
>> +
>> + if (!migrate_use_multifd()) {
>> + return 0;
>> + }
>> + thread_count = migrate_multifd_channels();
>> + for (i = 0; i < thread_count; i++) {
>> + MultiFDRecvParams *p = &multifd_recv_state->params[i];
>> +
>> + qemu_mutex_lock(&p->mutex);
>> + while (!p->done) {
>> + p->sync = true;
>> + qemu_cond_wait(&p->cond_sync, &p->mutex);
>> + }
>> + qemu_mutex_unlock(&p->mutex);
>> + }
>> + return 0;
>> +}
>
> I wonder if we need some way of terminating this on error
> (e.g. if terminate_multifd_recev_threads is called for an error
> case).
It could be; I have to think about this.
Later, Juan.