Re: [Qemu-devel] [PATCH v12 00/21] Multifd
From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH v12 00/21] Multifd
Date: Thu, 26 Apr 2018 16:28:52 +0800
User-agent: Mutt/1.9.1 (2017-09-22)
On Wed, Apr 25, 2018 at 01:27:02PM +0200, Juan Quintela wrote:
>
> Hi
>
>
> [v12]
>
> Big news, it is not RFC anymore, it works reliably for me.
>
> Changes:
> - Locking changed completely (several times)
> - We now send all pages through the channels. In a 2GB guest with 1 disk
>   and a network card, the amount of data sent for RAM was 80KB.
> - This is not optimized yet, but it shows clear improvements over precopy.
>   Testing over localhost networking I can get:
> - 2 VCPUs guest
> - 2GB RAM
> - run stress --vm 4 --vm-bytes 500M (i.e. dirtying 2GB of RAM each second)
>
> - Total time: precopy ~50seconds, multifd around 11seconds
> - Bandwidth usage is around 273MB/s vs 71MB/s on the same hardware
>
> This is very preliminary testing; I will send more numbers when I get them.
> But it looks promising.
>
> Things that will be improved later:
> - Initial synchronization is too slow (around 1s)
> - We synchronize all threads after each RAM section; we can move to only
>   synchronizing them after we have done a bitmap synchronization
> - We can improve bitmap walking (but that is independent of multifd)
Hi, Juan,
I got some high level review comments and notes:
- This series may need to rebase after Guangrong's cleanup series.
- Looks like we now allow multifd and compression to be enabled
  together.  Should we restrict that?
- Is multifd only for TCP? If so, do we check against that? E.g.,
should we fail the unix/fd/exec migrations when multifd is enabled?
- Why is the initial sync slow (~1s)?  Is there any clue to that problem?
- Currently the sync between threads is still very complicated to
  me... we have these on the sender side (I didn't dig into the recv side):
- two global semaphores in multifd_send_state,
- one mutex and two semaphores in each of the send thread,
So in total we'll have 2+3*N such locks/sems.
I'm thinking whether we can further simplify the sync logic a bit...
Thanks,
--
Peter Xu