Re: [Qemu-devel] Re: Spice project is now open


From: Daniel P. Berrange
Subject: Re: [Qemu-devel] Re: Spice project is now open
Date: Sun, 13 Dec 2009 00:23:52 +0000
User-agent: Mutt/1.4.1i

On Sat, Dec 12, 2009 at 05:46:08PM -0600, Anthony Liguori wrote:
> Dor Laor wrote:
> >On 12/12/2009 07:40 PM, Anthony Liguori wrote:
> >>If Spice can crash a guest, that indicates to me that Spice is
> >>maintaining guest visible state.  That is difficult architecturally
> >>because if we want to do something like introduce a secure sandbox for
> >>running guest visible emulation, libspice would have to be part of that
> >>sandbox which would seem to be difficult.
> >>
> >>The VNC server cannot crash a guest by comparison.
> >
> >That's not accurate:
> 
> Cannot crash the *guest*.  It can crash qemu but it's not guest 
> visible.  IOW, the guest never interacts directly with the VNC server.  
> The difference matters when it comes to security sandboxing and live 
> migration.
> 
> >If we break Spice into components we have the following (and I'm not 
> >a Spice expert):
> >1. QXL device/driver pair
> >   Does anyone dispute that we should have it in qemu?
> >   We should attach it to the SDL and VNC backends too anyway.
> >2. VDI (Virtual Desktop Interface)
> >   http://www.spice-space.org/vdi.html
> 
> FYI, www.spice-space.org is not responding for me.

There is a planned outage for a physical relocation of the server that
hosts spice-space.org, virt-manager.org, ovirt.org, etc & a lot of other
sites. It should be back online before Monday if all has gone to plan.

> Where #3 lives is purely a function of what level of integration it 
> needs with qemu.  There may be advantages to having it external to 
> qemu.  I actually think we should move the VNC server out of qemu...
> 
> Dan Berrange and I have been talking about being able to move VNC server 
> into a central process such that all of the VMs can have a single VNC 
> port that can be connected to.  This greatly simplifies the firewalling 
> logic that an administrator has to deal with.   That's a problem I've 
> already had to deal with for our management tools.  We use a private 
> network for management and we bridge the VNC traffic into the customers 
> network so they can see the VGA session.  But since that traffic can be 
> a large range of ports and we have to tunnel the traffic through a 
> central server to get into the customer network, it's very difficult to 
> set up without opening up a mess of ports.  I think we're currently 
> opening a few thousand just for VNC.

Actually my plan was to have a VNC proxy server that sat between the
end user & the real VNC server in QEMU. Specifically I wanted to allow
for a model where the VNC server that end users connect to for console
access runs on a physically separate host from the VMs. I had a handful
of use cases, mostly to deal with an oVirt deployment where console
users could be coming from the internet, rather than an intranet:

 - Avoiding the need to open up many ports on firewalls
 - Allowing on-the-fly switching between any VMs the currently
   authenticated user is authorized to view, without opening more
   connections (avoids needing to re-authenticate for each VM)
 - Avoiding the need to expose virtualization hosts to console users,
   since console users may be coming in from an untrusted network, or
   even the internet itself
 - Allowing seamless migration, where the proxy server simply
   re-connects to the VM on the new host without the end user's VNC
   connection ever noticing
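To make the idea concrete, here's a rough sketch of such a proxy in
Python (purely illustrative, not oVirt or QEMU code; the routing table,
the one-line handshake and all names/ports are invented). All VMs sit
behind one well-known port, and the proxy picks the real backend after
the management layer has authenticated the user:

```python
import asyncio

# Hypothetical routing table: VM name -> (host, port) of the real
# QEMU VNC server. In reality this would come from the management
# layer, after authenticating & authorizing the user.
BACKENDS = {
    "vm-web01": ("node1.example.org", 5901),
    "vm-db01":  ("node2.example.org", 5902),
}

def resolve_backend(vm_name, table):
    """Map a requested VM to its backend VNC endpoint, or None if
    the VM is unknown / the user may not view it."""
    return table.get(vm_name)

async def pipe(reader, writer):
    # Copy bytes one way until EOF; two of these make a full proxy.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(reader, writer):
    # First line from the client names the VM it wants (an invented,
    # trivially simple handshake, just for illustration).
    vm_name = (await reader.readline()).decode().strip()
    backend = resolve_backend(vm_name, BACKENDS)
    if backend is None:
        writer.close()
        return
    br, bw = await asyncio.open_connection(*backend)
    await asyncio.gather(pipe(reader, bw), pipe(br, writer))

async def main():
    # One well-known port for every VM, instead of one port per VM.
    server = await asyncio.start_server(handle_client, "0.0.0.0", 5900)
    async with server:
        await server.serve_forever()
```

On migration the proxy would only need to update the routing table and
re-run resolve_backend(); the end user's connection stays up.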

> For VNC, to make this efficient we just need a shared memory transport 
> that we can use locally.  I doubt the added latency will matter as long 
> as we're not copying data.

That would preclude running it as an off-node service, but since latency
is important that's probably inevitable. In any case there'd be nothing
to stop someone adding an off-node proxy in front of that should
requirements truly demand it. Just getting away from the
one-TCP-port-per-VM model is a worthwhile goal all of its own.
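For what it's worth, the zero-copy local transport Anthony describes
could look roughly like this (a minimal sketch using Python's
multiprocessing.shared_memory purely to illustrate the idea; the
framebuffer size and pixel format are invented, and a real design would
of course be C with a proper update protocol on top):

```python
from multiprocessing import shared_memory

FB_SIZE = 800 * 600 * 4  # hypothetical 800x600 32bpp framebuffer

# "QEMU" side: create the segment and publish a framebuffer update
# into it; no bytes are copied over a socket.
seg = shared_memory.SharedMemory(create=True, size=FB_SIZE)
frame = memoryview(seg.buf)
frame[0:4] = b"\xff\x00\x00\xff"  # write one red pixel (RGBA)

# "VNC server" side: attach to the same segment by name and read the
# pixel data in place -- the framebuffer itself is never copied.
peer = shared_memory.SharedMemory(name=seg.name)
pixel = bytes(peer.buf[0:4])

# Cleanup (release exported views before closing the segments).
del frame
peer.close()
seg.close()
seg.unlink()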

> Of course, Spice is a different thing altogether.  I have no idea 
> whether it makes sense for Spice like it would for VNC.  But I'd like to 
> understand if the option is available.

I believe Spice shares the same needs as VNC in this regard, since when
spawning a VM with Spice, each VM must be given a pair of unique ports
(one running cleartext, one with TLS/SSL).
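The firewall cost of that model is easy to see with a toy sketch (the
base port and allocation scheme below are invented for illustration):

```python
def allocate_spice_ports(num_vms, base=5930):
    """Hand each VM a unique (cleartext, tls) port pair, as a
    management tool must do today; hypothetical scheme."""
    return {f"vm{i}": (base + 2 * i, base + 2 * i + 1)
            for i in range(num_vms)}

pairs = allocate_spice_ports(3)
# Every VM needs two firewall holes; a fleet of 1000 VMs needs 2000.
firewall_ports = sorted(p for pair in pairs.values() for p in pair)
```

A proxy front-end would collapse all of those down to one (or two, with
TLS) well-known ports, exactly as in the VNC case.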

Regards,
Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



