From: Peter Xu
Subject: Re: [RFC PATCH v3 00/30] migration: File based migration with multifd and fixed-ram
Date: Mon, 15 Jan 2024 14:22:47 +0800

On Thu, Jan 11, 2024 at 03:38:31PM -0300, Fabiano Rosas wrote:
> Peter Xu <peterx@redhat.com> writes:
>
> > On Mon, Nov 27, 2023 at 05:25:42PM -0300, Fabiano Rosas wrote:
> >> Hi,
> >>
> >> In this v3:
> >>
> >> Added support for the "file:/dev/fdset/" syntax to receive multiple
> >> file descriptors. This allows the management layer to open the
> >> migration file beforehand and pass the file descriptors to QEMU. We
> >> need more than one fd to be able to use O_DIRECT concurrently with
> >> unaligned writes.
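> >>
> >> As an illustration only (a sketch - the fdset id and offset are made
> >> up, and the fds themselves are passed over the QMP socket via
> >> SCM_RIGHTS):
> >>
> >>   {"execute": "add-fd", "arguments": {"fdset-id": 1}}
> >>   {"execute": "add-fd", "arguments": {"fdset-id": 1}}
> >>   {"execute": "migrate",
> >>    "arguments": {"uri": "file:/dev/fdset/1,offset=0"}}
> >>
> >> The point of having a second fd is that one descriptor can be opened
> >> with O_DIRECT for the aligned page data while the other handles the
> >> unaligned writes.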
> >>
> >> Dropped the auto-pause capability. That discussion was kind of
> >> stuck. We can revisit optimizations for non-live scenarios once the
> >> series is more mature/merged.
> >>
> >> Changed the multifd incoming side to use a more generic data structure
> >> instead of MultiFDPages_t. This allows multifd to restore the RAM
> >> using larger chunks.
> >>
> >> The rest are minor changes; I have noted them in the patches
> >> themselves.
> >
> > Fabiano,
> >
> > Could you always keep a section around in the cover letter (and also in the
> > upcoming doc file fixed-ram.rst) on the benefits of this feature?
> >
> > Please bear with me - I may start to ask silly questions.
> >
>
> That's fine. Ask away!
>
> > I thought it was about "keeping the snapshot file small". But then when I
> > was thinking about the use case, IIUC fixed-ram migration should always
> > suggest that the user stop the VM before migration starts, and if the VM
> > is stopped the resulting image shouldn't be large either.
> >
> > Or is it about performance only? What did I miss?
>
> Performance is the main benefit, because fixed-ram enables the use of
> multifd for file migration, which would otherwise not be
> parallelizable. Using multifd has been the direction for a while, as you
> know, so it makes sense.
>
> A fast file migration is desirable because it could be used for
> snapshots with a stopped VM and also to replace the "exec:cat" hack
> (I only found out about this last one recently; Juan mentioned it in this
> thread: https://lore.kernel.org/r/87cyx5ty26.fsf@secure.mitica).
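>
> (To make the contrast concrete - a sketch with a made-up file path; the
> exec variant funnels the stream through an external process, while file:
> writes it directly and is what fixed-ram/multifd can parallelize:
>
>   {"execute": "migrate", "arguments": {"uri": "exec:cat > /tmp/vm.img"}}
>   {"execute": "migrate", "arguments": {"uri": "file:/tmp/vm.img"}}
> )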

I dug through the history again, and started to remember the "live"
migration case for fixed-ram. IIUC that is what Dan mentioned in the email
below regarding the "virDomainSnapshotXXX" use case:

https://lore.kernel.org/all/ZD7MRGQ+4QsDBtKR@redhat.com/

So IIUC "stopped VM" is not always the use case?
If you agree with this, we need to document these two use cases clearly in
the doc update:

- "Migrate a VM to file, then destroy the VM"

  The documentation should suggest stopping the VM before triggering such
  a migration in this use case.

- "Take a live snapshot of the VM" (see the QMP sketch below)

  It would be ideal if there were a portable interface to synchronously
  track dirtying of guest pages, but we don't have one...

  So fixed-ram seems to be a portable solution for taking live snapshots
  across platforms, as long as async dirty tracking is supported on that
  OS (aka KVM_GET_DIRTY_LOG). If async tracking is not supported, the
  snapshot cannot be taken live on that OS, and one needs to use
  "snapshot-save" instead.

  For this one, IMHO it would be good to mention (from the QEMU
  perspective) the existence of background-snapshot, even though libvirt
  didn't support it for some reason. Currently background-snapshot lacks
  multi-threading (and O_DIRECT) support, though, so it may be less
  performant than fixed-ram. However, with all those features in place I
  believe it would be even more performant. Please consider mentioning
  this in some detail.
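
(A minimal QMP sketch of both flows, assuming the capability keeps the
name "fixed-ram" proposed in this series, and with made-up paths/values:

  Use case 1 - stop first, then migrate to file:

    {"execute": "stop"}
    {"execute": "migrate-set-capabilities",
     "arguments": {"capabilities": [
       {"capability": "fixed-ram", "state": true},
       {"capability": "multifd",   "state": true}]}}
    {"execute": "migrate-set-parameters",
     "arguments": {"multifd-channels": 4}}
    {"execute": "migrate",
     "arguments": {"uri": "file:/tmp/vm-snapshot.img"}}

  Use case 2 - the same commands, but without the "stop"; async dirty
  tracking (e.g. KVM_GET_DIRTY_LOG) then keeps the image consistent, at
  the cost of rewriting pages that get dirtied while saving.

The file can later be restored with "migrate-incoming" on the same URI.)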
>
> The size aspect is just an interesting property, not necessarily a
> reason.
See above on the 2nd "live" use case of fixed-ram. I think in that case
size still matters, because that one cannot stop the VM vcpus.
> It's about having the file bounded by the RAM size, so a running
> guest would not produce a continuously growing file. This is in contrast
> with previous experiments (in libvirt code) that used a proxy to put
> multifd-produced data into a file.
>
> I'll add this^ information in a more organized manner to the docs and
> cover letter. Let me know what else I need to clarify.
Thanks.
>
> Some notes about fixed-ram by itself:
>
> This series also enables fixed-ram without multifd, which only takes
> advantage of the size property. That is not part of our end goal, which
> is to have multifd + fixed-ram, but I kept it nonetheless because it
> helps to debug/reason about the fixed-ram format without conflating
> matters with multifd.
Yes, makes sense.
>
> Fixed-ram without multifd also allows the file migration to benefit
> from direct I/O, because the data portion of the file (the pages) is
> written with alignment. This version of the series does not yet support
> it, but I have a simple patch for the next version.
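>
> (Just a sketch of how that might look - assuming the knob ends up as a
> migration parameter; the name "direct-io" here is a guess, not
> necessarily what the patch will use:
>
>   {"execute": "migrate-set-capabilities",
>    "arguments": {"capabilities": [
>      {"capability": "fixed-ram", "state": true}]}}
>   {"execute": "migrate-set-parameters", "arguments": {"direct-io": true}}
>   {"execute": "migrate", "arguments": {"uri": "file:/tmp/vm.img"}}
>
> The pages land at fixed, aligned offsets in the file, which is what
> makes O_DIRECT usable in the first place.)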
>
> I also had a - perhaps naive - idea that we could merge the I/O code +
> fixed-ram first, to expedite things, and later bring in the multifd and
> direct-io enhancements, but the review process ended up not being that
> modular.
What's the review process issue you're talking about?
If you can split the series, that will certainly help merging, as far as
I'm concerned. IIRC there's some complexity in passing the O_DIRECT fds
around, and I'm not sure whether that chunk can be put last, and similarly
whether the multifd bits can be split out.
One thing I just noticed is that fixed-ram always seems to be preferred
for "file:" migrations. Can we then already imply fixed-ram for "file:"
URIs?

I'm even wondering whether we can make it the default and drop the
fixed-ram capability: fixed-ram won't work with anything besides file, and
file doesn't make much sense without offsets / fixed-ram. There's at least
one problem: we have already released 8.2 with "file:", so changing the
default could break users who already rely on "file:" there. I'm wondering
whether that would be worthwhile, considering we could then drop the
(seemingly redundant) capability. What do you think?
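
(To make the suggestion concrete - hypothetical behaviour, not what the
series currently implements; today the capability must be set explicitly:

  {"execute": "migrate-set-capabilities",
   "arguments": {"capabilities": [
     {"capability": "fixed-ram", "state": true}]}}
  {"execute": "migrate", "arguments": {"uri": "file:/tmp/vm.img"}}

whereas with the capability implied, the first command would go away and
"file:" alone would select the fixed-ram format.)
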
--
Peter Xu