
From: Denis V. Lunev
Subject: [Qemu-devel] long unresponsiveness or hang in the migration code
Date: Mon, 25 Apr 2016 16:48:57 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.5.1

Hello, Amit!

We have faced a very interesting issue with the QEMU migration code.
The migration thread performs the following operation:

#0  0x00007f61abe9978d in sendmsg () at ../sysdeps/unix/syscall-template.S:81
#1  0x00007f61b2942055 in do_send_recv (address@hidden, address@hidden,
    iov_cnt=<optimized out>, address@hidden) at util/iov.c:104
#2  0x00007f61b2942528 in iov_send_recv (sockfd=104, address@hidden, 
    offset=27532, address@hidden, bytes=5236, address@hidden, address@hidden) 
at util/iov.c:181
#3  0x00007f61b287724a in socket_writev_buffer (opaque=0x7f61b6ec8070, 
iov=0x7f61b71a8030, iovcnt=1,
    pos=<optimized out>) at migration/qemu-file-unix.c:43
#4  0x00007f61b2875caa in qemu_fflush (address@hidden) at 
#5  0x00007f61b2875e1a in qemu_put_buffer (f=0x7f61b71a0000, address@hidden "", 
    at migration/qemu-file.c:323
#6  0x00007f61b287674f in qemu_put_buffer (size=842, buf=0x7f61b662e030 "", 
f=0x7f61b2875caa <qemu_fflush+74>)
    at migration/qemu-file.c:589
#7  qemu_put_qemu_file (address@hidden, f_src=0x7f61b662e000) at 
#8  0x00007f61b26fab01 in compress_page_with_multi_thread 
(bytes_transferred=0x7f61b2dfe578 <bytes_transferred>,
    offset=2138677280, block=0x7f61b51e9b80, f=0x7f61b71a0000) at 
#9  ram_save_compressed_page (bytes_transferred=0x7f61b2dfe578 
<bytes_transferred>, last_stage=true,
    offset=2138677280, block=0x7f61b51e9b80, f=0x7f61b71a0000) at 
#10 ram_find_and_save_block (address@hidden, address@hidden,
    bytes_transferred=0x7f61b2dfe578 <bytes_transferred>) at 
#11 0x00007f61b26faed5 in ram_save_complete (f=0x7f61b71a0000, opaque=<optimized 
    at /usr/src/debug/qemu-2.3.0/migration/ram.c:1280
#12 0x00007f61b26ff241 in qemu_savevm_state_complete_precopy (f=0x7f61b71a0000,
    address@hidden) at /usr/src/debug/qemu-2.3.0/migration/savevm.c:976
#13 0x00007f61b2872ecb in migration_completion (start_time=<synthetic pointer>, 
old_vm_running=<synthetic pointer>,
    current_active_state=<optimized out>, s=0x7f61b2d8bfc0 
<current_migration.37181>) at migration/migration.c:1212
#14 migration_thread (opaque=0x7f61b2d8bfc0 <current_migration.37181>) at 
#15 0x00007f61abe92dc5 in start_thread (arg=0x7f6117ff8700) at 
#16 0x00007f61abbc028d in clone () at 

which can block for a really long time.

The problem is that at this point we are holding qemu_global_mutex:

static void migration_completion(MigrationState *s, int current_active_state,
                                 bool *old_vm_running,
                                 int64_t *start_time)
{
    int ret;

    if (s->state == MIGRATION_STATUS_ACTIVE) {
        *start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
        *old_vm_running = runstate_is_running();
        ret = global_state_store();

        if (!ret) {
            ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
            if (ret >= 0) {
                qemu_file_set_rate_limit(s->file, INT64_MAX);
                qemu_savevm_state_complete_precopy(s->file, false);

and thus the QEMU process is unresponsive to any management requests.
In our case there is some misconfiguration and the other side does not
read the data, but the same stall could happen in other cases too.

From my point of view we should drop the lock (via
qemu_mutex_unlock_iothread()) before any socket operation, but doing
this in a straight way (just dropping the lock unconditionally) seems
improper.
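Just to make the "straight way" concrete, it would look roughly like
this against migration/qemu-file-unix.c of qemu 2.3. This is a sketch
only, not a tested patch, and it shows why the blunt approach seems
improper: socket_writev_buffer cannot know which invariants its
callers keep across the lock, or whether the lock is even held.

```c
/* SKETCH ONLY, untested: drop the big lock around the blocking send,
 * retake it afterwards.  Every caller of qemu_fflush() would need
 * auditing first, since some paths may run without the lock held. */
static ssize_t socket_writev_buffer(void *opaque, struct iovec *iov,
                                    int iovcnt, int64_t pos)
{
    QEMUFileSocket *s = opaque;
    ssize_t len;
    ssize_t size = iov_size(iov, iovcnt);

    qemu_mutex_unlock_iothread();   /* let the monitor make progress */
    len = iov_send(s->fd, iov, iovcnt, 0, size);
    qemu_mutex_lock_iothread();     /* restore the caller's invariant */

    if (len < size) {
        len = -socket_error();
    }
    return len;
}
```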

Do you have any opinion on the problem?

