On 06/06/2012 05:10 PM, Yonit Halperin wrote:
I would like to add some more points to Gerd's explanation:
On 06/05/2012 04:15 PM, Gerd Hoffmann wrote:
Absolutely not. This is hideously ugly and affects a bunch of code.
Spice is *not* getting a hook in migration where it gets to add
arbitrary amounts of downtime to the migration traffic. That's a
non-starter.
I'd like to be more constructive in my response, but you aren't
explaining the problem well enough for me to offer an alternative
solution. You need to find another way to solve this problem.
Actually, this is not the first time we have raised these issues with
you. The first part of the above discussion is not directly related to
this patch series.
I'll try to explain in more detail:
As Gerd mentioned, migrating the spice connection smoothly requires the
src server to keep running, and to send/receive data to/from the
client, after migration has already completed, till the client
completely transfers to the target. The suggested patch series only
delays the migration state change from ACTIVE to
COMPLETED/ERROR/CANCELED, till spice signals it has completed its part
in the migration process.
As I see it, if a spice connection exists, its migration should be an
inseparable part of the whole migration process, and thus the migration
state shouldn't change from ACTIVE till spice has completed its part.
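The ordering being proposed here can be sketched as a toy state machine
(illustrative only, not QEMU code; all names are invented): the
migration state leaves ACTIVE only once both the guest-state transfer
and the spice client handover have finished.

```python
# Toy model of the proposed ordering: COMPLETED is reported only
# after BOTH the guest transfer and the spice handover are done.
ACTIVE, COMPLETED = "active", "completed"

class Migration:
    def __init__(self):
        self.state = ACTIVE
        self.guest_done = False
        self.spice_done = False

    def _maybe_complete(self):
        if self.guest_done and self.spice_done:
            self.state = COMPLETED

    def guest_transfer_finished(self):
        self.guest_done = True
        self._maybe_complete()

    def spice_handover_finished(self):
        self.spice_done = True
        self._maybe_complete()

m = Migration()
m.guest_transfer_finished()
assert m.state == ACTIVE      # guest data sent, spice still migrating
m.spice_handover_finished()
assert m.state == COMPLETED   # only now does libvirt see COMPLETED
```

The cost of this ordering is exactly Anthony's objection: spice can
extend the window during which migration reports ACTIVE.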
don't think we should have a qmp event for signaling libvirt about
Spice client migration has nothing to do with guest migration. Trying to
abuse QEMU to support it is not acceptable.
The second challenge we are facing, which I addressed in the "plans"
part of the cover-letter, and on which I think you (anthony) actually
replied, is how to tackle migrating spice data from the src server to
the target server. This can be usb/smartcard packets that were sent
from a device connected on the client side to the server, and that
haven't reached the guest device yet. Or it can be partial data that
has been read from a guest character device and that hasn't been sent
to the client yet. Other data can be internal server-client state that
we would wish to keep on the target server, in order to avoid
establishing the connection to the target from scratch, and possibly
also suffering from slower responsiveness at start.
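The three kinds of in-flight state described above can be summarized as
a single hand-over record (a sketch only; the names are invented here
and are not spice-server API):

```python
from dataclasses import dataclass, field

@dataclass
class SpiceMigrationData:
    """State the src would hand over, via vmstate or via the client."""
    # usb/smartcard packets received from the client-side device
    # but not yet delivered to the guest device
    pending_device_packets: list = field(default_factory=list)
    # bytes read from a guest character device but not yet
    # sent to the client
    pending_chardev_output: bytes = b""
    # server-client session state kept so the target connection
    # does not have to be rebuilt from scratch
    session_state: dict = field(default_factory=dict)

data = SpiceMigrationData(pending_chardev_output=b"partial output")
assert data.pending_device_packets == []
```

Whichever transport is chosen, it is this record that must survive the
switch from src to target.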
In the cover-letter I suggested transferring the spice migration data
via qemu's vmstate infrastructure. The other alternative, which we also
discussed, is to transfer the data via the client. The latter also
requires holding the src process alive after migration completion, in
order to manage to complete transferring the data from the src to the
client.
To summarize: since we can still use the client to transfer data from
the src to the target (instead of using vmstate), the major requirement
of spice is to keep the src process running after migration has
completed.
So send a QMP event and call it a day.
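For illustration, the QMP-event approach could look like this on the
wire (the event name SPICE_MIGRATE_COMPLETED and the exact payload are
assumptions here, following the usual QMP asynchronous-event shape, not
an established interface in this thread): the src QEMU emits the event
once the spice client has fully switched to the target, and a
libvirt-style management client keeps the src process alive until it
arrives.

```python
import json

# Hypothetical QMP event as it would appear on the monitor socket;
# the name and timestamp layout are assumptions for illustration.
raw_event = json.dumps({
    "event": "SPICE_MIGRATE_COMPLETED",
    "timestamp": {"seconds": 1338994210, "microseconds": 0},
})

def spice_handover_done(line):
    """True once the src may be torn down: the spice client has
    completely moved over to the target server."""
    msg = json.loads(line)
    return msg.get("event") == "SPICE_MIGRATE_COMPLETED"

assert spice_handover_done(raw_event)
assert not spice_handover_done('{"event": "STOP"}')
```

This keeps the guest-migration state machine untouched: migration
reports COMPLETED as before, and the extra spice lifetime becomes the
management layer's decision rather than added migration downtime.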