Re: [Qemu-devel] [PATCH] migration: Introduce migration_in_completion()


From: Juan Quintela
Subject: Re: [Qemu-devel] [PATCH] migration: Introduce migration_in_completion()
Date: Thu, 29 Oct 2015 15:37:51 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.5 (gnu/linux)

Pavel Fedin <address@hidden> wrote:
>  Hello!
>
>> OK, your problem here is that you modify RAM.  Could you take a look
>> at how vhost manages this?  It is done at migration_bitmap_sync(),
>> and it just marks the pages that are dirty.
>
>  Hm, interesting... I see it hooks into
> memory_region_sync_dirty_bitmap(). Sorry if this is a lame question;
> I do not know the whole code, and it will be much faster for you to
> explain it to me than for me to dig into it myself. At what point
> during migration is it called?

When we call migration_bitmap_sync() we end up calling all the
registered MemoryListeners, and they can do whatever they want:
modify RAM, synchronize tables, whatever.
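
Concretely, the hook is the listener's log_sync callback.  A minimal
sketch of a device registering one (its_log_sync and its_listener are
invented names, just for illustration; vhost wires up something
similar for its own dirty log):

#include "exec/memory.h"          /* MemoryListener API */
#include "exec/address-spaces.h"  /* address_space_memory */

/* Called for each MemoryRegionSection whenever migration syncs the
 * dirty bitmap, i.e. from migration_bitmap_sync(). */
static void its_log_sync(MemoryListener *listener,
                         MemoryRegionSection *section)
{
    /* Write out (or mark dirty) whatever device state lives in RAM;
     * pages touched here are picked up by the normal RAM migration
     * code on the next pass. */
}

static MemoryListener its_listener = {
    .log_sync = its_log_sync,
};

static void its_register_listener(void)
{
    memory_listener_register(&its_listener, &address_space_memory);
}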

>
>  To help you better understand what is necessary: the ITS is a
> thing that can be implemented in-kernel by KVM, and that is exactly
> what I am working on. In my implementation I add an ioctl which is
> called after the CPUs are stopped. It flushes the internal caches of
> the vITS to RAM, and this happens inside the kernel. I guess dirty
> state tracking works correctly in this case, because the memory gets
> modified by the kernel, and from QEMU's point of view that is the
> same as the memory being modified by the guest. Therefore I do not
> need to touch the memory state bitmaps; I only need to tell the
> kernel to actually write out the data.
>  If we talk about making this thing iterative, we need this ioctl
> anyway. It could be modified inside the kernel to update only those
> parts of RAM whose data have changed since the last flush. The
> semantics would stay the same: it is just an ioctl telling the
> virtual device to store its data in RAM.
>  Ah, and again, these memory listeners are not prioritized either.
> I guess I could use them, but I would need a guarantee that my
> listener is called before the KVMMemoryListener, which picks up the
> changes.
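
For concreteness, from QEMU's side the flush you describe would be a
device ioctl along these lines (a sketch only; the group/attribute
values are placeholders, not any real kernel ABI):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Both values below are placeholders, not a real kernel interface. */
#define HYPOTHETICAL_ITS_GRP_CTRL     0
#define HYPOTHETICAL_ITS_FLUSH_TABLES 1

/* Ask the in-kernel vITS to flush its internal caches into guest RAM.
 * its_dev_fd is the fd of the KVM device created for the vITS. */
static int its_flush_tables(int its_dev_fd)
{
    struct kvm_device_attr attr = {
        .group = HYPOTHETICAL_ITS_GRP_CTRL,
        .attr  = HYPOTHETICAL_ITS_FLUSH_TABLES,
    };

    /* The kernel writes the tables into guest RAM itself, so KVM's
     * dirty tracking sees the writes just as it would guest writes,
     * and migration picks the pages up without user space touching
     * any bitmaps. */
    return ioctl(its_dev_fd, KVM_SET_DEVICE_ATTR, &attr);
}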

Wait a bit to see if Michael answers with how it is done for vhost;
I don't remember the details O:-)


What you really need is a call before we do the completion stage for
RAM.  I am thinking about how to do that, and whether there are other
users.
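
One possible shape would be a notifier list that migration fires just
before the completion stage; a rough sketch (none of these names
exist yet, they are invented here for illustration):

#include "qemu/notify.h"   /* Notifier, NotifierList */

/* Devices that need to flush state into RAM before the final
 * stop-and-copy pass would register here. */
static NotifierList precompletion_notifiers =
    NOTIFIER_LIST_INITIALIZER(precompletion_notifiers);

void migration_add_precompletion_notifier(Notifier *notifier)
{
    notifier_list_add(&precompletion_notifiers, notifier);
}

/* Would be called from the migration thread right before the
 * completion stage for RAM. */
static void migration_notify_precompletion(void)
{
    notifier_list_notify(&precompletion_notifiers, NULL);
}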

Thanks, Juan.


