Re: vnc clipboard support

From: Daniel P. Berrangé
Subject: Re: vnc clipboard support
Date: Fri, 29 Jan 2021 12:21:37 +0000
User-agent: Mutt/1.14.6 (2020-07-11)

On Fri, Jan 29, 2021 at 03:58:00PM +0400, Marc-André Lureau wrote:
> Hi
> On Fri, Jan 29, 2021 at 3:24 PM Daniel P. Berrangé <berrange@redhat.com> 
> wrote:
> >
> > On Fri, Jan 29, 2021 at 12:18:19AM +0400, Marc-André Lureau wrote:
> > can have QEMU open the vsock device internally, it feels a bit silly to
> > have one part of QEMU writing to a vsock device, and then another part
> > of QEMU reading back from the very same device. Especially because you
> > have now introduced the extra problem of having to allocate unique
> > vsock addresses for each instance and deal with possibility of external
> > apps maliciously trying to interact with your vsock backend.
> >
> > As above though, I think the way spice used virtio-serial was suboptimal
> > and it should have had one extra virtio-serial device per seat.
> And per service? And libvirt to hotplug stuff? Sounds insane to me.
> And what about services that need to handle several connections in the
> guest. For example, the way spice-webdavd works is really a pain, it
> has to multiplex guest connections over virtio-serial... All of this
> would be so much simpler with a single vsock connection and some kind
> of bus.

I wasn't really suggesting it for something like spice-webdavd, just
the spice agent functionality, which is not really connection-oriented.
I.e. it's a simple RPC-like service where there's only ever one client
and one server endpoint.  spice-webdavd would clearly be better based
on vsock, since it is inherently a multi-connection architecture.
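The multiplexing pain mentioned above can be sketched with a toy framing
scheme (a hypothetical illustration, not the actual spice-webdavd wire
format): when many guest connections share a single virtio-serial channel,
every chunk of data has to be tagged with a connection id and length, and
the peer has to reassemble the frames, whereas with vsock each guest
connection simply gets its own socket.

```python
import struct

# Toy frame header (illustrative only, NOT the real spice-webdavd
# protocol): 4-byte connection id + 4-byte payload length, big-endian.
HEADER = struct.Struct("!II")

def mux(conn_id: int, payload: bytes) -> bytes:
    """Wrap one connection's data in a frame for the shared stream."""
    return HEADER.pack(conn_id, len(payload)) + payload

def demux(stream: bytes):
    """Split the shared stream back into (conn_id, payload) pairs."""
    frames = []
    offset = 0
    while offset < len(stream):
        conn_id, length = HEADER.unpack_from(stream, offset)
        offset += HEADER.size
        frames.append((conn_id, stream[offset:offset + length]))
        offset += length
    return frames

# Two guest connections interleaved on one virtio-serial channel:
stream = mux(1, b"GET /a") + mux(2, b"GET /b") + mux(1, b"more")
```

All of this bookkeeping (plus flow control and error recovery, which the
sketch omits) disappears when the transport itself is connection-oriented,
which is the argument for vsock here.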

|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
