Re: [Qemu-devel] [PULL 16/16] migration: fix crash in when incoming client channel setup fails
Fri, 29 Jun 2018 14:41:26 +0530
On Thu, Jun 28, 2018 at 01:06:25PM +0200, Juan Quintela wrote:
> Balamuruhan S <address@hidden> wrote:
> > On Wed, Jun 27, 2018 at 02:56:04PM +0200, Juan Quintela wrote:
> >> From: Daniel P. Berrangé <address@hidden>
> > Hi Juan,
> > I tried to perform multifd enabled migration and from the qemu monitor
> > enabled the multifd capability on source and target,
> > (qemu) migrate_set_capability x-multifd on
> > (qemu) migrate -d tcp:127.0.0.1:4444
> > The migration succeeds and it's cool to have the feature :)
> > (qemu) info migrate
> > globals:
> > store-global-state: on
> > only-migratable: off
> > send-configuration: on
> > send-section-footer: on
> > decompress-error-check: on
> > capabilities: xbzrle: off rdma-pin-all: off auto-converge: off
> > zero-blocks: off compress: off events: off postcopy-ram: off x-colo:
> > off release-ram: off block: off return-path: off
> > pause-before-switchover: off x-multifd: on dirty-bitmaps: off
> > postcopy-blocktime: off late-block-activate: off
> > Migration status: completed
> > total time: 1051 milliseconds
> > downtime: 260 milliseconds
> > setup: 17 milliseconds
> > transferred ram: 8270 kbytes
> What is your setup? This value looks really small. I can see that you
I have applied this patchset to upstream qemu to test multifd migration,
qemu commandline is as below,
/home/bala/qemu/ppc64-softmmu/qemu-system-ppc64 --enable-kvm --nographic \
-vga none -machine pseries -m 4G,slots=32,maxmem=32G -smp 16,maxcpus=32 \
-device virtio-blk-pci,drive=rootdisk \
-drive if=none,cache=none,format=qcow2,id=rootdisk \
-monitor telnet:127.0.0.1:1234,server,nowait \
-net nic,model=virtio -net user -redir tcp:2000::22
> have 4GB of RAM, it should be a bit higher. And setup time is also
> quite low from my experience.
Sure, I will try with 32G mem. I am not sure what the expected setup time value should be.
> > throughput: 143.91 mbps
> I don't know what networking you are using, but my experience is that
> increasing packet_count to 64 or so helps a lot to increase bandwidth.
How do I configure packet_count to 64?
> What is your networking, page_count and number of channels?
I tried localhost migration, but still need to try multi-host migration.
page_count and the number of channels are at their default values.
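For reference, in QEMU releases of this era the multifd tunables were exposed as experimental migration parameters; assuming the parameter names x-multifd-page-count and x-multifd-channels (as used in the 2.12/3.0 timeframe), they could be raised from the HMP monitor on both source and destination before starting the migration, roughly like:

```
(qemu) migrate_set_parameter x-multifd-page-count 64
(qemu) migrate_set_parameter x-multifd-channels 4
(qemu) info migrate_parameters
```

Both sides should use matching values, and the capability and parameters must be set before issuing the migrate command.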
> > remaining ram: 0 kbytes
> > total ram: 4194560 kbytes
> > duplicate: 940989 pages
> > skipped: 0 pages
> > normal: 109635 pages
> > normal bytes: 438540 kbytes
> > dirty sync count: 3
> > page size: 4 kbytes
> > But when I enable multifd only in the source and not in the target
> > source:
> > x-multifd: on
> > target:
> > x-multifd: off
> > when migration is triggered with,
> > migrate -d tcp:127.0.0.1:4444 (port I used)
> > The VM is lost in source with Segmentation fault.
> > I think the correct way is to enable multifd on both source and target,
> > similar to postcopy, but in this negative scenario we should handle it
> > so as not to lose the VM, and instead error out appropriately.
> It is necessary to enable it on both sides. And it "used" to be that it
> detected correctly when it was not enabled on one of the sides. The check
> must have been lost in some rebase, or some other change.
> Will take a look.
> > Please correct me if I miss something.
> Sure, thanks for the report.
> Later, Juan.