From: Alexey Kardashevskiy
Subject: Re: [Qemu-ppc] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
Date: Fri, 31 May 2013 15:58:48 +1000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130514 Thunderbird/17.0.6

On 05/27/2013 05:03 PM, Paolo Bonzini wrote:
> Il 27/05/2013 08:48, Alexey Kardashevskiy ha scritto:
>>>> This is only true when the rerror and werror options have the values
>>>> "ignore" or "report".  See virtio-scsi for an example of how to save the
>>>> requests using the save_request and load_request callbacks in SCSIBusInfo.
>> Sigh.
> ?

I thought the series was ready to go, but I was wrong. Furthermore, when I
got to the point where I could actually test save/restore for vscsi_req,
migration turned out to be totally broken on PPC, and it took some time to
fix :-/

>> How do you test that requests are saved/restored correctly? What does
>> happen to requests which were already sent to the real hardware (real block
>> device, etc) but have not completed at the moment of the end of migration?
> They aren't saved, there is a bdrv_drain_all() in the migration code.
> This is only used for rerror=stop or werror=stop.  To test it you can
> use blkdebug (also a bit underdocumented) or hack block/raw-posix.c with
> code that makes it fail the 100th write or something like that.  Start
> the VM and migrate it while paused to a QEMU that doesn't have the hack.
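(For anyone finding this thread later: a minimal blkdebug rule for the kind
of error injection Paolo describes might look like the following. The event
name and errno here are illustrative; check the blkdebug documentation in
the QEMU source tree for the events your version supports.)

```
# blkdebug.conf -- inject EIO (errno 5) on one write request
[inject-error]
event = "write_aio"
errno = "5"
once = "on"
```

The config is then attached to a drive with something like
-drive file=blkdebug:blkdebug.conf:disk.img,werror=stop,rerror=stop
so the failed write stops the VM instead of being reported to the guest.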

I run QEMU like this (this is the destination; the source just does not have
the -incoming option):
./qemu-system-ppc64 \
 -L "qemu-ppc64-bios/" \
 -device "spapr-vscsi,id=ibmvscsi0" \
 -incoming "tcp:localhost:4000" \
 -m "1024" \
 -machine "pseries" \
 -nographic \
 -vga "none"

Am I using werror/rerror correctly?

I did not really understand how to use blkdebug or what exactly to hack in
raw-posix, but the point is that I cannot get QEMU into a state with at
least one vscsi_req.active==1 - the requests are always inactive no matter
what I do. I run 10 instances of "dd if=/dev/sda of=/dev/null bs=4K" (on an
8GB image with FC18) and increase the migration speed to 500MB/s, with no
effect.
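For reference, the guest-side load is essentially the loop below (sketched
with /dev/zero so it can run anywhere; in the guest the input is the real
disk, /dev/sda):

```shell
# run 10 parallel dd readers to keep I/O requests in flight
# (in the guest: if=/dev/sda; /dev/zero here so the sketch runs anywhere)
for i in $(seq 1 10); do
    dd if=/dev/zero of=/dev/null bs=4K count=1000 2>/dev/null &
done
wait
echo "all readers finished"
```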

How do you trigger the situation where there are still active requests which
have to be migrated?

And another question (sorry, I am not very familiar with the terminology,
but cc:Ben is :) ): what happens to indirect requests if migration happens
in the middle of handling such a request? virtio-scsi does not seem to
handle this situation specially; it just reconstructs the whole request and
that's it.

