
Re: "guest-reset" and "invalid runstate transition" in COLO SVM


From: Jing-Wei Su
Subject: Re: "guest-reset" and "invalid runstate transition" in COLO SVM
Date: Wed, 18 Mar 2020 01:27:01 +0800

Hello,

I'm not sure whether the commit
(https://github.com/qemu/qemu/commit/f51d0b4178738bba87d796eba7444f6cdb3aa0fd)
can be applied to qemu-4.1.0 or qemu-4.2.0 directly.
After going through the COLO flow, the commit seems to be a standalone
patch that resolves the double-allocated colo_cache issue, right?
Or does it depend on other commits?

In my experiment, qemu-4.1.0/4.2.0 with the patch applied still
crashes the SVM with high probability.

Thanks.
Sincerely,
JW

Jing-Wei Su <address@hidden> wrote on Tue, Mar 17, 2020 at 5:19 PM:
>
> Hello,
>
> I'm testing  COLO in qemu-4.2.0 with the commit
> https://github.com/qemu/qemu/commit/f51d0b4178738bba87d796eba7444f6cdb3aa0fd.
>
> The QMP of the SVM sometimes shows the following errors ("guest-reset"
> and/or "invalid runstate transition").
> Does anyone have an idea about this?
>
> {"timestamp": {"seconds": 1584435907, "microseconds": 610964}, "event": "RESUME"}
> {"timestamp": {"seconds": 1584435927, "microseconds": 553683}, "event": "STOP"}
> {"timestamp": {"seconds": 1584435980, "microseconds": 533344}, "event": "RESUME"}
> {"timestamp": {"seconds": 1584435980, "microseconds": 579256}, "event": "RESET", "data": {"guest": true, "reason": "guest-reset"}}
> {"timestamp": {"seconds": 1584435980, "microseconds": 588350}, "event": "STOP"}
> {"timestamp": {"seconds": 1584435980, "microseconds": 801483}, "event": "RESUME"}
> {"timestamp": {"seconds": 1584435980, "microseconds": 802061}, "event": "STOP"}
> {"timestamp": {"seconds": 1584435980, "microseconds": 803988}, "event": "RESET", "data": {"guest": true, "reason": "guest-reset"}}
> qemu-system-x86_64: invalid runstate transition: 'colo' -> 'prelaunch'
> secondary-nonshared.sh: line 25: 23457 Aborted                 (core dumped) qemu-system-x86_64 -name secondary -enable-kvm -cpu qemu64,+kvmclock -m 2048 -global kvm-apic.vapic=false -netdev tap,id=hn0,vhost=off,helper=/usr/local/libexec/qemu-bridge-helper -device e1000,id=e0,netdev=hn0 -chardev socket,id=red0,host=$primary_ip,port=9003,reconnect=1 -chardev socket,id=red1,host=$primary_ip,port=9004,reconnect=1 -object filter-redirector,id=f1,netdev=hn0,queue=tx,indev=red0 -object filter-redirector,id=f2,netdev=hn0,queue=rx,outdev=red1 -object filter-rewriter,id=rew0,netdev=hn0,queue=all -drive if=none,id=parent0,file.filename=$imagefolder/secondary.qcow2,driver=qcow2 -drive if=none,id=childs0,driver=replication,mode=secondary,file.driver=qcow2,top-id=colo-disk0,file.file.filename=$imagefolder/secondary-active.qcow2,file.backing.driver=qcow2,file.backing.file.filename=$imagefolder/secondary-hidden.qcow2,file.backing.backing=parent0 -drive if=ide,id=colo-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,children.0=childs0 -qmp unix:/tmp/qmp-svm-sock,server,nowait -qmp stdio -vnc :5 -incoming tcp:0.0.0.0:9998
>
> My PVM and SVM are on the same PC.
> Here are the steps to set up my test:
> (1) Start PVM
> qemu-system-x86_64 -name primary -enable-kvm -cpu qemu64,+kvmclock -m
> 2048 -global kvm-apic.vapic=false \
> -netdev tap,id=hn0,vhost=off,helper=/usr/local/libexec/qemu-bridge-helper \
> -device e1000,id=e0,netdev=hn0 \
> -drive 
> if=ide,id=colo-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,children.0.file.filename=$imagefolder/primary.qcow2,children.0.driver=qcow2
> \
> -qmp stdio -vnc :4
>
> (2) Add chardevs to PVM via qmp
> {'execute': 'qmp_capabilities'}
> {'execute': 'chardev-add', 'arguments':{ 'id': 'mirror0', 'backend':
> {'type': 'socket', 'data': {'addr': { 'type': 'inet', 'data': {
> 'host': '0.0.0.0', 'port': '9003' } }, 'server': true } } } }
> {'execute': 'chardev-add', 'arguments':{ 'id': 'compare1', 'backend':
> {'type': 'socket', 'data': {'addr': { 'type': 'inet', 'data': {
> 'host': '0.0.0.0', 'port': '9004' } }, 'server': true } } } }
> {'execute': 'chardev-add', 'arguments':{ 'id': 'compare0', 'backend':
> {'type': 'socket', 'data': {'addr': { 'type': 'inet', 'data': {
> 'host': '127.0.0.1', 'port': '9001' } }, 'server': true } } } }
> {'execute': 'chardev-add', 'arguments':{ 'id': 'compare0-0',
> 'backend': {'type': 'socket', 'data': {'addr': { 'type': 'inet',
> 'data': { 'host': '127.0.0.1', 'port': '9001' } }, 'server': false } }
> } }
> {'execute': 'chardev-add', 'arguments':{ 'id': 'compare_out',
> 'backend': {'type': 'socket', 'data': {'addr': { 'type': 'inet',
> 'data': { 'host': '127.0.0.1', 'port': '9005' } }, 'server': true } }
> } }
> {'execute': 'chardev-add', 'arguments':{ 'id': 'compare_out0',
> 'backend': {'type': 'socket', 'data': {'addr': { 'type': 'inet',
> 'data': { 'host': '127.0.0.1', 'port': '9005' } }, 'server': false } }
> } }
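The chardev-add commands above can also be scripted instead of typed by hand. A minimal sketch, assuming the PVM is started with a QMP unix socket (e.g. -qmp unix:/tmp/qmp-pvm-sock,server,nowait in place of -qmp stdio; that socket path is hypothetical):

```python
import json
import socket

def qmp_cmd(execute, **arguments):
    """Encode one QMP command as a newline-terminated JSON line."""
    cmd = {"execute": execute}
    if arguments:
        cmd["arguments"] = arguments
    return (json.dumps(cmd) + "\n").encode()

def qmp_session(path, commands):
    """Connect to a QMP unix socket, negotiate capabilities,
    send each pre-encoded command, and return the parsed replies."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(path)
    f = sock.makefile("rwb")
    f.readline()                                  # greeting banner
    f.write(qmp_cmd("qmp_capabilities")); f.flush()
    f.readline()                                  # {"return": {}}
    replies = []
    for cmd in commands:
        f.write(cmd); f.flush()
        replies.append(json.loads(f.readline()))
    return replies
```

For example, the first chardev from step (2) would be sent as qmp_session("/tmp/qmp-pvm-sock", [qmp_cmd("chardev-add", id="mirror0", backend={...})]).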
>
> (3) Start SVM
> primary_ip=127.0.0.1
> qemu-system-x86_64 -name secondary -enable-kvm -cpu qemu64,+kvmclock
> -m 2048 -global kvm-apic.vapic=false \
> -netdev tap,id=hn0,vhost=off,helper=/usr/local/libexec/qemu-bridge-helper \
> -device e1000,id=e0,netdev=hn0 \
> -chardev socket,id=red0,host=$primary_ip,port=9003,reconnect=1 \
> -chardev socket,id=red1,host=$primary_ip,port=9004,reconnect=1 \
> -object filter-redirector,id=f1,netdev=hn0,queue=tx,indev=red0 \
> -object filter-redirector,id=f2,netdev=hn0,queue=rx,outdev=red1 \
> -object filter-rewriter,id=rew0,netdev=hn0,queue=all \
> -drive 
> if=none,id=parent0,file.filename=$imagefolder/secondary.qcow2,driver=qcow2
> \
> -drive 
> if=none,id=childs0,driver=replication,mode=secondary,file.driver=qcow2,top-id=colo-disk0,file.file.filename=$imagefolder/secondary-active.qcow2,file.backing.driver=qcow2,file.backing.file.filename=$imagefolder/secondary-hidden.qcow2,file.backing.backing=parent0
> \
> -drive 
> if=ide,id=colo-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,children.0=childs0
> \
> -qmp stdio -vnc :5 -incoming tcp:0.0.0.0:9998
>
> (4) Start NBD server of SVM
> {'execute':'qmp_capabilities'}
> {'execute': 'nbd-server-start', 'arguments': {'addr': {'type': 'inet',
> 'data': {'host': '0.0.0.0', 'port': '9999'} } } }
> {'execute': 'nbd-server-add', 'arguments': {'device': 'parent0',
> 'writable': true } }
>
> (5) Invoke drive-mirror on the PVM side
> {'execute': 'drive-mirror', 'arguments':{ 'device': 'colo-disk0',
> 'job-id': 'resync', 'target': 'nbd://127.0.0.2:9999/parent0', 'mode':
> 'existing', 'format': 'raw', 'sync': 'full'} }
>
> Wait until disk is synced, then:
> {'execute': 'stop'}
> {'execute': 'block-job-cancel', 'arguments':{ 'device': 'resync'} }
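"Wait until disk is synced" can be automated by watching the PVM's QMP event stream for BLOCK_JOB_READY on the 'resync' job before issuing stop/block-job-cancel. A sketch of the event matching only; the lines argument is any iterable of QMP JSON lines (e.g. read from the -qmp stdio stream):

```python
import json

def wait_for_event(lines, name, device=None):
    """Scan a stream of QMP JSON lines until the named event
    (optionally for a specific device/job id) appears."""
    for line in lines:
        msg = json.loads(line)
        if msg.get("event") != name:
            continue
        data = msg.get("data", {})
        if device is None or data.get("device") == device:
            return msg
    raise RuntimeError(f"stream ended before {name}")
```

Here one would call wait_for_event(stream, "BLOCK_JOB_READY", device="resync") and only then send stop and block-job-cancel.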
>
> (6) Add Filters and Start COLO Migrate
> {'execute': 'human-monitor-command', 'arguments':{ 'command-line':
> 'drive_add -n buddy
> driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.1,file.port=9999,file.export=parent0,node-name=replication0'}}
> {'execute': 'x-blockdev-change', 'arguments':{ 'parent': 'colo-disk0',
> 'node': 'replication0' } }
> {'execute': 'object-add', 'arguments':{ 'qom-type': 'filter-mirror',
> 'id': 'm0', 'props': { 'netdev': 'hn0', 'queue': 'tx', 'outdev':
> 'mirror0' } } }
> {'execute': 'object-add', 'arguments':{ 'qom-type':
> 'filter-redirector', 'id': 'redire0', 'props': { 'netdev': 'hn0',
> 'queue': 'rx', 'indev': 'compare_out' } } }
> {'execute': 'object-add', 'arguments':{ 'qom-type':
> 'filter-redirector', 'id': 'redire1', 'props': { 'netdev': 'hn0',
> 'queue': 'rx', 'outdev': 'compare0' } } }
> {'execute': 'object-add', 'arguments':{ 'qom-type': 'iothread', 'id':
> 'iothread1' } }
> {'execute': 'object-add', 'arguments':{ 'qom-type': 'colo-compare',
> 'id': 'comp0', 'props': { 'primary_in': 'compare0-0', 'secondary_in':
> 'compare1', 'outdev': 'compare_out0', 'iothread': 'iothread1' } } }
> {'execute': 'migrate-set-capabilities', 'arguments':{ 'capabilities':
> [ {'capability': 'x-colo', 'state': true } ] } }
> {'execute': 'migrate', 'arguments':{ 'uri': 'tcp:127.0.0.1:9998' } }
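After the migrate command, whether COLO was actually established can be checked by polling query-migrate on the PVM until the status reaches 'colo'. A small sketch of just the reply inspection (the reply argument is the parsed JSON dict returned for query-migrate):

```python
def colo_ready(reply):
    """True once a query-migrate reply reports the COLO state;
    raises if the migration has failed outright."""
    status = reply.get("return", {}).get("status")
    if status == "failed":
        raise RuntimeError("migration failed")
    return status == "colo"
```

One would issue {'execute': 'query-migrate'} in a loop and stop polling once colo_ready(reply) is true.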
>
> Thanks!
> Sincerely,
> Jing-Wei


