Re: [Qemu-devel] [PATCH] Fix off-by-1 error in RAM migration code


From: Juan Quintela
Subject: Re: [Qemu-devel] [PATCH] Fix off-by-1 error in RAM migration code
Date: Sun, 04 Nov 2012 20:17:29 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.1 (gnu/linux)

David Gibson <address@hidden> wrote:
> On Fri, Nov 02, 2012 at 11:58:32AM +0100, Juan Quintela wrote:
>> David Gibson <address@hidden> wrote:
>> > On Wed, Oct 31, 2012 at 01:08:16PM +0200, Orit Wasserman wrote:
>> >> On 10/31/2012 05:43 AM, David Gibson wrote:
>> 
>> Reviewed-by: Juan Quintela <address@hidden> 
>> 
>> Good catch, I misunderstood the function when fixing a different bug,
>> and never understood why it fixed it.
>
> Actually.. it just occurred to me that I think there has to be another
> bug here somewhere..

I am at KVM Forum and LinuxCon for this week, so I can't test anything.

For some reason, I misunderstood bitmap_set() and thought this was the
value that we "initialized" the bitmap with.  So, I changed it from 0
to 1, and ... I was sending half the pages over the wire.  Yes, that is
right, just half of them.  So clearly we have some bug somewhere else :-(
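
For reference, bitmap_set(map, start, nr) sets "nr" bits starting at
bit "start"; the middle argument is an offset, not a fill value.  A
minimal sketch of the setup, with names as in arch_init.c of this era
and call-site details approximate:

    /* Allocate one bit per RAM page, all bits initially clear. */
    migration_bitmap = bitmap_new(ram_pages);

    /* Buggy: starting at bit 1 leaves bit 0 (page 0) clear and sets
     * one bit past the end of the allocation. */
    bitmap_set(migration_bitmap, 1, ram_pages);

    /* Fixed: marks exactly bits 0..ram_pages-1 as dirty. */
    bitmap_set(migration_bitmap, 0, ram_pages);

    migration_dirty_pages = ram_pages;

With the buggy call, migration_dirty_pages is one more than the number
of in-range dirty bits, which is exactly the mismatch described below.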

>
> I haven't actually observed any effects from the memory corruption -
> though it's certainly a real bug.  I found this because another effect
> of this bug is that migration_dirty_pages count was set to 1 more than
> the actual number of dirty bits in the bitmap.  That meant the dirty
> pages count was never reaching zero and so the migration/savevm never
> terminated.

I wonder what is on page 0 on an x86, probably some BIOS data that
never changes?  No clue about pseries.

> Except.. that every so often the migration *did* terminate (maybe 1
> time in 5).  Also I kind of hope somebody would have noticed this
> earlier if migrations never terminated on x86 too.  But as far as I
> can tell, if initially mismatched like this it ought to be impossible
> for the dirty page count to ever reach zero.  Which suggests there is
> another bug with the dirty count tracking :(.

We use the dirty bitmap count to know how many pages are dirty, but once
that number is low enough, we just send "the rest" of the pages.  So it
would always converge (or not) independently of that off-by-one bug.
We never test for zero dirty pages; we test whether we are able to send
this many pages within "max_downtime".  So this explains why it works
for you sometimes.
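
The check in question, roughly as it looked in arch_init.c at the time
(names approximate; a sketch rather than the exact code):

    /* Estimate how long sending the remaining dirty pages would take
     * at the measured bandwidth; end the live phase once that fits
     * within the allowed downtime. */
    expected_time = ram_save_remaining() * TARGET_PAGE_SIZE / bwidth;
    return expected_time <= migrate_max_downtime();

An off-by-one in migration_dirty_pages only overestimates expected_time
by one page, so the estimate can still drop below max_downtime and the
migration can finish anyway.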

>
> It's possible the memory corruption could account for this, of course
> - since that in theory at least, could have almost any strange effect
> on the program's behavior.  But that doesn't seem particularly likely
> to me.

This depends on _what_ is on page zero.  If that is different from
whatever we put there during boot, and if we ever wrote to that page
again, we would mark that page dirty anyway, so I would rate that
"corruption" problem as highly improbable.  Not that we shouldn't fix
the bug, but I doubt that you are getting memory corruption due to this
bug.

The only way that you can get memory corruption is if you write to that
page just before you do migration, and then never write to it again.
What is on hardware page zero on pseries?  Or is it just a normal page?

Later, Juan.


