Subject: Re: [PATCH v2 21/29] migration/multifd: Add pages to the receiving side
From: Fabiano Rosas
Date: Tue, 31 Oct 2023 20:18:06 -0300

Peter Xu <peterx@redhat.com> writes:
> On Mon, Oct 23, 2023 at 05:36:00PM -0300, Fabiano Rosas wrote:
>> Currently multifd does not need to have knowledge of pages on the
>> receiving side because all the information needed is within the
>> packets that come in the stream.
>>
>> We're about to add support for fixed-ram migration, which cannot use
>> packets because it expects the ramblock section in the migration file
>> to contain only the guest pages data.
>>
>> Add a pointer to MultiFDPages in the multifd_recv_state and use the
>> pages similarly to what we already do on the sending side. The pages
>> are used to transfer data between the ram migration code in the main
>> migration thread and the multifd receiving threads.
>>
>> Signed-off-by: Fabiano Rosas <farosas@suse.de>
>
> If it'll be new code to maintain anyway, I think we don't necessarily
> need to always use the multifd structs, right?
>
For the sending side, unrelated to this series, I'm experimenting with
defining a generic structure to be passed into multifd:
struct MultiFDData_t {
    void *opaque;                /* client-owned payload (e.g. a page list) */
    size_t size;                 /* payload size in bytes */
    bool ready;                  /* payload is filled and ready for I/O */
    void (*cleanup_fn)(void *);  /* releases the payload after use */
};
The client code (ram.c) would use the opaque field to put whatever it
wants in it. Maybe we could have a similar concept on the receiving
side?
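To illustrate, filling it in on the sending side could look something
like this (hypothetical sketch; MultiFDPages_t is the existing multifd
page list, the helper names are made up):

static void ram_data_cleanup(void *opaque)
{
    /* hypothetical: release whatever the client stored in opaque */
    g_free(opaque);
}

static void ram_fill_data(struct MultiFDData_t *d, MultiFDPages_t *pages)
{
    d->opaque = pages;
    d->size = pages->num * qemu_target_page_size();
    d->cleanup_fn = ram_data_cleanup;
    d->ready = true;
}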
Here's a PoC I'm writing, if you're interested:
https://github.com/farosas/qemu/commits/multifd-packet-cleanups
(I'm delaying sending this to the list because we already have a
reasonable backlog of features and refactorings to merge.)
> Rather than introducing MultiFDPages_t into recv side, can we allow pages
> to be distributed in chunks of (ramblock, start_offset, end_offset) tuples?
> That'll be much more efficient than per-page. We don't need page granularity
> here on the recv side; we want to load chunks of mem fast.
>
> We don't even need page granularity on the sender side, but so far only I
> cared about perf.. and since the plan is to eventually drop auto-pause, the
> VM can still be running there, so the sender must work per-page for now.
> On the recv side, though, the VM must be stopped before all the ram is
> loaded, so there's no such problem. And since we'll introduce new code
> anyway, IMHO we can decide how to do that even if we want to reuse multifd.
>
> Main thread can assign these (ramblock, start_offset, end_offset) jobs to
> recv threads. If a ramblock is too small (e.g. 1M), assign it to one
> thread anyway. If a ramblock is >512MB, cut it into slices and feed them
> to the multifd threads one by one. All the rest can stay the same.
>
> Would that be better? I would expect a measurable loading speed difference
> with much larger chunks and those range-based tuples.
I need to check how that would interact with the existing recv_thread
code. Hopefully there's nothing there preventing us from using a
different data structure.
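Something along these lines, maybe (hypothetical sketch; the job struct
and multifd_recv_queue_job() are made up, RAMBlock/MiB/MIN are the
usual QEMU ones):

typedef struct {
    RAMBlock *block;
    uint64_t start_offset;  /* inclusive */
    uint64_t end_offset;    /* exclusive */
} MultiFDRecvJob;

#define RECV_SLICE_SIZE (512 * MiB)

static void assign_ramblock(RAMBlock *block, uint64_t used_length)
{
    /* small blocks become a single job; large ones are cut into
     * <=512MB slices, one job per slice */
    for (uint64_t off = 0; off < used_length; off += RECV_SLICE_SIZE) {
        MultiFDRecvJob job = {
            .block = block,
            .start_offset = off,
            .end_offset = MIN(off + RECV_SLICE_SIZE, used_length),
        };
        multifd_recv_queue_job(&job); /* made up: hand to a recv thread */
    }
}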