
Re: [Qemu-devel] [PATCH 0/4] QEMUFile improvements and simplifications


From: Liuji (Jeremy)
Subject: Re: [Qemu-devel] [PATCH 0/4] QEMUFile improvements and simplifications
Date: Thu, 11 Apr 2013 12:38:27 +0000

Hi, Juan

Thanks for your reply.

Yesterday my disk ran out of space, so the core-dump file was not saved completely.
Here is the info from the core-dump file:
#0  0x00007f7a0dbff341 in migration_thread (opaque=0x7f7a0e16cbc0) at migration.c:545
545                 double bandwidth = transferred_bytes / time_spent;
(gdb) bt
#0  0x00007f7a0dbff341 in migration_thread (opaque=0x7f7a0e16cbc0) at migration.c:545
#1  0x00007f7a0becad14 in start_thread (arg=0x7f7957fff700) at pthread_create.c:309
#2  0x00007f7a07cf267d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
(gdb) l
540             }
541             current_time = qemu_get_clock_ms(rt_clock);
542             if (current_time >= initial_time + BUFFER_DELAY) {
543                 uint64_t transferred_bytes = qemu_ftell(s->file) - initial_bytes;
544                 uint64_t time_spent = current_time - initial_time - sleep_time;
545                 double bandwidth = transferred_bytes / time_spent;
546                 max_size = bandwidth * migrate_max_downtime() / 1000000;
547
548                 DPRINTF("transferred %" PRIu64 " time_spent %" PRIu64
549                         " bandwidth %g max_size %" PRId64 "\n",
(gdb) p time_spent
$1 = 0
(gdb) p current_time
$2 = 23945934
(gdb) p initial_time
$3 = 23945833
(gdb) p sleep_time
$4 = 101
(gdb) p s->file->last_error
$5 = 0

I tested three times, and the values of sleep_time were 101, 100, and 101.

I think the transfer may be so fast (it takes very little time before bytes_xfer >
xfer_limit, so the thread spends almost the whole iteration sleeping), and the
"g_usleep" function may not be very accurate. So the value of sleep_time can be
100 (BUFFER_DELAY) or just a bit more than 100 (BUFFER_DELAY). In that case
time_spent becomes 0 and the integer division faults.
Is my understanding correct?

Below is my simple patch to avoid this problem. Is it correct?
But I still don't know why your patches trigger the problem in the first place.


diff --git a/migration.c b/migration.c
index 3b4b467..58d69fb 100644
--- a/migration.c
+++ b/migration.c
@@ -503,6 +503,7 @@ static void *migration_thread(void *opaque)
     int64_t max_size = 0;
     int64_t start_time = initial_time;
     bool old_vm_running = false;
+    double bandwidth = 0;
 
     DPRINTF("beginning savevm\n");
     qemu_savevm_state_begin(s->file, &s->params);
@@ -542,7 +543,12 @@
         if (current_time >= initial_time + BUFFER_DELAY) {
             uint64_t transferred_bytes = qemu_ftell(s->file) - initial_bytes;
             uint64_t time_spent = current_time - initial_time - sleep_time;
-            double bandwidth = transferred_bytes / time_spent;
+            if (time_spent > 0) {
+                bandwidth = transferred_bytes / time_spent;
+            } else {
+                /* When time_spent is 0, keep the previous bandwidth. */
+                DPRINTF("time_spent=%" PRIu64 " is too small.\n", time_spent);
+            }
             max_size = bandwidth * migrate_max_downtime() / 1000000;
 
             DPRINTF("transferred %" PRIu64 " time_spent %" PRIu64
@@ -550,7 +557,7 @@ static void *migration_thread(void *opaque)
                     transferred_bytes, time_spent, bandwidth, max_size);
             /* if we haven't sent anything, we don't want to recalculate
                10000 is a small enough number for our purposes */
-            if (s->dirty_bytes_rate && transferred_bytes > 10000) {
+            if (s->dirty_bytes_rate && transferred_bytes > 10000 && bandwidth > 0) {
                 s->expected_downtime = s->dirty_bytes_rate / bandwidth;
             }



>  Re: [PATCH 0/4] QEMUFile improvements and simplifications
> 
> "Liuji (Jeremy)" <address@hidden> wrote:
> > Hi, Paolo
> >
> > I tested your 4 patches in the latest version of qemu.git/master(commit:
> > 93b48c201eb6c0404d15550a0eaa3c0f7937e35e,2013-04-09).
> > These patches resolve the "savevm hanging" problem, which is described
> > in detail in my preceding mail: "After executing "savevm", the QEMU
> > process is hanging".
> >
> > But, I found two other problem:
> > 1. My VM's OS is WinXP. After the execution of "savevm" completed, I
> > executed "loadvm". But WinXP showed a "blue screen" and then restarted.
> > I tested 3 times, and the results were the same.
> >
> > 2. The block migration is not OK. The qemu-system-x86_64 process of the
> > source host core-dumps. In the latest version of
> > qemu.git/master (commit: 93b48c201eb6c0404d15550a0eaa3c0f7937e35e, 2013-04-09),
> > the block migration is OK.
> >
> >
> > The info of core-dump file:
> > #0 0x00007f8a44cec341 in migration_thread (opaque=0x7f8a45259bc0) at migration.c:545
> > 545             double bandwidth = transferred_bytes / time_spent;
> > (gdb) bt
> > #0 0x00007f8a44cec341 in migration_thread (opaque=0x7f8a45259bc0) at migration.c:545
> > #1  0x00007f8a42fb7d14 in ?? ()
> > #2  0x0000000000000000 in ?? ()
> >
> 
> Could you recompile with -g to see what is going on?
> This really makes no sense :p  It looks like the source file and the
> compiled version don't agree.
> 
> Paolo,  any clue?
> 
> /me re-reads: block-migration,  ok,  testing goes.
> 
> Later,  Juan.
