
Re: [Qemu-devel] [PATCH] Vmchannel PCI device.


From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH] Vmchannel PCI device.
Date: Sun, 14 Dec 2008 20:03:39 -0600
User-agent: Thunderbird 2.0.0.17 (X11/20080925)

Daniel P. Berrange wrote:
On Sun, Dec 14, 2008 at 04:56:49PM -0600, Anthony Liguori wrote:
Daniel P. Berrange wrote:
On Sun, Dec 14, 2008 at 01:15:42PM -0600, Anthony Liguori wrote:
One non-QEMU backend I can see being implemented is a DBus daemon,
providing a simple bus for RPC calls between guests & host.
The main problem with "external" backends is that they cannot easily participate in save/restore or live migration. If you want to have an RPC mechanism, I would suggest implementing the backend in QEMU and hooking QEMU up to dbus. Then you can implement proper save/restore.

DBus is a general purpose RPC service, which has little-to-no knowledge
of the semantics of application services running over it. Simply pushing
a backend into QEMU can't magically make sure all the application level
state is preserved across save/restore/migrate. For some protocols the
only viable option may be to explicitly give the equivalent of -EPIPE / POLLHUP to the guest and have it re-establish connectivity with the host backend and re-initialize any necessary state if desired.
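
(To make that concrete, the guest side of such an -EPIPE scheme would look roughly like the sketch below; the device node name and the helper are invented purely for illustration, not part of any real driver.)

/* Hypothetical guest-side handling of -EPIPE from a vmchannel device.
 * If the transport were socket-like, SIGPIPE would also need to be
 * ignored before relying on the EPIPE return value. */
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

static int open_channel(void)
{
    /* "/dev/vmchannel0" is a placeholder name, for illustration only */
    return open("/dev/vmchannel0", O_RDWR);
}

int send_with_reconnect(int *fd, const void *buf, size_t len)
{
    ssize_t n = write(*fd, buf, len);

    if (n < 0 && errno == EPIPE) {
        /* Backend went away across save/restore or migration: reopen
         * the channel; the application decides how much protocol
         * state it has to re-establish before retrying. */
        close(*fd);
        *fd = open_channel();
        if (*fd < 0)
            return -1;
        n = write(*fd, buf, len);
    }
    return n < 0 ? -1 : 0;
}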

In the case of dbus, you actually have a shot at making save/restore transparent. If you route the RPCs through QEMU, you can parse the messages and know when you have a complete buffer. You can then dispatch the RPC from QEMU (and BTW, this is a perfect example of the security point: you want the RPCs to originate from the QEMU process). When you get the RPC response, you can marshal it and make it available to the guest.
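
(For illustration, the framing check is cheap: a D-Bus message starts with a fixed 16-byte header carrying the body and header-field lengths, so a backend can tell when it has buffered one complete message. A rough, non-QEMU sketch:)

/* Sketch: how many bytes does a complete D-Bus message occupy?
 * Returns 0 if more data is still needed. */
#include <stddef.h>
#include <stdint.h>

#define DBUS_FIXED_HEADER 16
#define ALIGN8(x) (((x) + 7u) & ~7u)

static uint32_t get_u32(const uint8_t *p, int little_endian)
{
    return little_endian
        ? (uint32_t)p[0] | (uint32_t)p[1] << 8 | (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24
        : (uint32_t)p[3] | (uint32_t)p[2] << 8 | (uint32_t)p[1] << 16 | (uint32_t)p[0] << 24;
}

size_t dbus_message_size(const uint8_t *buf, size_t len)
{
    if (len < DBUS_FIXED_HEADER)
        return 0;

    int le = (buf[0] == 'l');                    /* 'l' little endian, 'B' big */
    uint32_t body_len   = get_u32(buf + 4, le);  /* body length */
    uint32_t fields_len = get_u32(buf + 12, le); /* header-field array length */
    size_t total = DBUS_FIXED_HEADER + ALIGN8(fields_len) + body_len;

    return len >= total ? total : 0;
}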

If you ever have a partially transferred request or response, you should save the partial results as part of save/restore. You could use the live feature of savevm to try to wait until there are no pending RPCs. In fact, you have to do this, because otherwise save/restore would be broken.
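
(A sketch of what checkpointing a partially received RPC could look like, assuming the current register_savevm()/qemu_put_*() interfaces and a made-up DBusChannel structure:)

/* Sketch only: save/restore the bytes of an in-flight RPC so a
 * partially received request survives savevm/loadvm. */
#include <stdint.h>
#include "hw/hw.h"             /* QEMUFile, register_savevm(), qemu_put_*() */

typedef struct DBusChannel {
    uint8_t  req_buf[65536];   /* bytes of the in-flight request so far */
    uint32_t req_len;
} DBusChannel;

static void dbus_chan_save(QEMUFile *f, void *opaque)
{
    DBusChannel *c = opaque;

    qemu_put_be32(f, c->req_len);
    qemu_put_buffer(f, c->req_buf, c->req_len);
}

static int dbus_chan_load(QEMUFile *f, void *opaque, int version_id)
{
    DBusChannel *c = opaque;

    c->req_len = qemu_get_be32(f);
    if (c->req_len > sizeof(c->req_buf))
        return -1;
    qemu_get_buffer(f, c->req_buf, c->req_len);
    return 0;
}

/* At device init (names hypothetical):
 *   register_savevm("dbus-channel", 0, 1, dbus_chan_save, dbus_chan_load, chan);
 */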

This example is particularly bad for the EPIPE approach. If the guest sends an RPC and then gets EPIPE, has the RPC completed or not? That would make this model very difficult to program against.

EPIPE is the model Xen used for guest save/restore and it's been a huge hassle. You don't want guests involved in save/restore because it adds a combinatorial factor to your test matrix. You have to now test every host combination with every supported guest combination to ensure that save/restore has not regressed. It's a huge burden and IMHO is never truly necessary.

It imposes a configuration & authentication burden on the guest to
use networking. When a virtual fence device is provided directly from
the host OS, you can get zero-config deployment of clustering with
the need to configure any authentication credentials in the guest.
This is a big plus over over the traditional setup for real machines.

If you just want to use vmchannel for networking without the "configuration" burden, then someone heavily involved with a distro (say, Fedora) should just preconfigure it to create a private network on a dedicated network interface as soon as the system starts. Then you have a dedicated, never-disappearing network interface you can use for all of this stuff. And it requires no application modification, to boot.
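
(Purely as an illustration of what such preconfiguration could look like; the interface name and addresses below are made up, not anything a distro actually ships:)

# /etc/sysconfig/network-scripts/ifcfg-eth1   (hypothetical)
DEVICE=eth1            # dedicated guest<->host interface
BOOTPROTO=static
IPADDR=192.168.122.2   # example address on the private host-only network
NETMASK=255.255.255.0
ONBOOT=yes             # bring it up as soon as the system starts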

This really depends on what you define the semantics of the vmchannel
protocol to be - specifically whether you want save/restore/migrate to
be totally opaque to the guest or not. I could imagine one option is to
have the guest end of the device be given -EPIPE when the backend is
restarted for restore/migrate, and choose to re-establish its connection
if so desired. This would not require QEMU to maintain any backend state.
For stateless datagram (UDP-like) application protocols, there's no special support required for save/restore at all.

It's a losing proposition because it explodes the test matrix to build anything that's even remotely robust.

What's the argument to do these things external to QEMU?

There are many potential use cases for VMchannel; not all are going
to be general purpose things that everyone wants to use. Forcing a lot
of application-specific backend code into QEMU is not a good way to approach this from a maintenance point of view. Some backends may well
be well suited to living inside QEMU, while others may be better suited
as external services.

I think VMchannel is a useful concept but not for the same reasons you do :-)

Regards,

Anthony Liguori

Daniel




