From: Daniel P. Berrange
Subject: Re: [Qemu-devel] [PATCH] Vmchannel PCI device.
Date: Sun, 14 Dec 2008 23:33:05 +0000
User-agent: Mutt/1.4.1i

On Sun, Dec 14, 2008 at 04:56:49PM -0600, Anthony Liguori wrote:
> Daniel P. Berrange wrote:
> >On Sun, Dec 14, 2008 at 01:15:42PM -0600, Anthony Liguori wrote:
> >  
> >One non-QEMU backend I can see being implemented is a DBus daemon,
> >providing a simple bus for RPC calls between guests & host.
> 
> The main problem with "external" backends is that they cannot easily 
> participate in save/restore or live migration.  If you want to have an 
> RPC mechanism, I would suggest implementing the backend in QEMU and 
> hooking QEMU up to dbus.  Then you can implement proper save/restore.

DBus is a general purpose RPC service, which has little-to-no knowledge
of the semantics of application services running over it. Simply pushing
a backend into QEMU can't magically make sure all the application level
state is preserved across save/restore/migrate. For some protocols the
only viable option may be to explicitly give the equivalent of -EPIPE 
/ POLLHUP to the guest and have it explicitly re-establish connectivity 
with the host backend and re-initialize necessary state if desired.
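
To make that concrete, the guest-side recovery path could look roughly
like the sketch below. This is only illustrative - the /dev/vmchannel0
node name, and the assumption that the driver reports the backend going
away as POLLHUP, are mine, not something the posted patch defines:

#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

static int vmchannel_open(void)
{
    /* Hypothetical device node; the real name depends on the driver. */
    return open("/dev/vmchannel0", O_RDWR);
}

int main(void)
{
    int fd = vmchannel_open();

    while (fd >= 0) {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };

        if (poll(&pfd, 1, -1) < 0) {
            if (errno == EINTR)
                continue;
            break;
        }

        if (pfd.revents & (POLLHUP | POLLERR)) {
            /* Backend restarted for save/restore/migrate: reopen and
             * replay whatever application-level handshake is needed. */
            close(fd);
            fd = vmchannel_open();
            /* ... re-initialize protocol state here ... */
            continue;
        }

        if (pfd.revents & POLLIN) {
            char buf[256];
            if (read(fd, buf, sizeof(buf)) <= 0)
                continue;
            /* ... hand the data to the application protocol ... */
        }
    }

    return 0;
}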

> > Or on
> >a similar theme, perhaps a QPid message broker in the host OS. Yet
> >another backend is a clustering service providing a virtual fence
> >device to VMs.
> 
> Why not use virtual networking for a clustering service (as you would in 
> real machines)?

It imposes a configuration & authentication burden on the guest to
use networking. When a virtual fence device is provided directly from
the host OS, you can get zero-config deployment of clustering without
the need to configure any authentication credentials in the guest.
This is a big plus over the traditional setup for real machines.

> > All of these would live outside QEMU, and as such
> >exposing the backend using the character device infrastructure 
> >is a natural fit.
> 
> If you don't have QEMU as a broker, it makes it very hard for QEMU to 
> virtualize all of the resources exposed to the guest.  This 
> complicates things like save/restore and complicates security policies 
> since you now have things being done on behalf of a guest originating 
> from another process.  It generally breaks the model of guest-as-a-process.

This really depends on what you define the semantics of the vmchannel
protocol to be - specifically whether you want save/restore/migrate to
be totally opaque to the guest or not. I could imagine one option is to
have the guest end of the device be given -EPIPE when the backend is
restarted for restore/migrate, and choose to re-establish its connection
if so desired. This would not require QEMU to maintain any backend state.
For stateless datagram (UDP-like) application protocols there's no
special support required for save/restore at all.
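
As a sketch of what that could mean for the guest (again assuming a
hypothetical /dev/vmchannel0 node, and assuming the driver surfaces the
backend restart as -EPIPE from write()):

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

static int fd = -1;

/* Send with transparent reconnect on backend restart. A stateful
 * protocol would also rerun its handshake after the reopen; a
 * stateless datagram protocol can simply retry the send. */
static ssize_t vmchannel_send(const void *buf, size_t len)
{
    for (;;) {
        ssize_t n = write(fd, buf, len);
        if (n >= 0)
            return n;
        if (errno == EPIPE) {
            close(fd);
            fd = open("/dev/vmchannel0", O_RDWR);
            if (fd < 0)
                return -1;
            continue;
        }
        if (errno != EINTR)
            return -1;
    }
}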

> What's the argument to do these things external to QEMU?

There are many potential use cases for VMchannel, and not all are going
to be general purpose things that everyone wants to use. Forcing a lot
of application-specific backend code into QEMU is not a good way to
approach this from a maintenance point of view. Some backends may be
well suited to living inside QEMU, while others may be better suited
to running as external services.

Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



