From: Daniel P. Berrange
Subject: Re: [Qemu-devel] [PATCH v8 1/2] block/vxhs.c: Add support for a new block device type called "vxhs"
Date: Wed, 8 Mar 2017 18:11:58 +0000
User-agent: Mutt/1.7.1 (2016-10-04)

On Wed, Mar 08, 2017 at 09:59:32AM -0800, ashish mittal wrote:
> On Wed, Mar 8, 2017 at 5:04 AM, Ketan Nilangekar
> <address@hidden> wrote:
> >
> >
> >> On Mar 8, 2017, at 1:13 AM, Daniel P. Berrange <address@hidden> wrote:
> >>
> >>> On Tue, Mar 07, 2017 at 05:27:55PM -0800, ashish mittal wrote:
> >>> Thanks! There is one more input I need some help with!
> >>>
> >>> The VxHS network library opens a fixed number of connection channels
> >>> to a given host, and all the vdisks that connect to the same host
> >>> share these connection channels.
> >>>
> >>> Therefore, we need to open secure channels to a specific target host
> >>> only once for the first vdisk that connects to that host. All the
> >>> other vdisks that connect to the same target host will share the same
> >>> set of secure communication channels.
> >>>
> >>> I hope the above scheme is acceptable?
> >>
> >> I don't think I'm in favour of such an approach, as it forces a single
> >> QEMU process to use the same privileges for all disks it uses.
> >>
> >> Consider an example where a QEMU process has two disks, one shared
> >> readonly disk and one exclusive writable disk, both on the same host.
> >>
> >
> > This is not a use case for VxHS as a solution. We do not support
> > sharing of vdisks across QEMU instances.
> >
> > The VxHS library was thus not designed to open different connections
> > for individual vdisks within a QEMU instance.
> >
> > Implementing this will involve a rewrite of significant parts of the
> > libvxhs client and server. Is this a new requirement for acceptance
> > into QEMU?
> >
> >
> >> It is reasonable as an administrator to want to use different credentials
> >> for each of these. ie, they might have a set of well known credentials to
> >> authenticate to get access to the read-only disk, but have a different set
> >> of strictly limited access credentials to get access to the writable disk
> >>
> >> Trying to re-use the same connection for multiple disks prevents QEMU
> >> from authenticating with different credentials per disk, so I don't
> >> think that is a good approach - each disk should have totally
> >> independent state.
> >>
> 
> libvxhs does not make any claim to fit all general-purpose use-cases.
> It was purpose-built to be the communication channel for our block
> device service. As such, we do not need/support all the general
> use-cases. For the same reason we changed the name of the library
> from libqnio to libvxhs (v8 changelog, #2).

I raise these kinds of points because they are relevant to apps like
OpenStack, against which you've proposed VxHS support. OpenStack
intends to allow a single volume to be shared by multiple guests, so
by declaring that out of scope you're crippling certain use cases
within OpenStack. Of course you're free to make such a decision, but
it makes VxHS a less compelling technology to use IMHO.

> Having dedicated communication channels for each device, or sharing
> the channels between multiple devices, should both be acceptable
> choices. The latter, IO multiplexing, is also a widely adopted IO
> model. It just happens to fit our use-cases better.
> 
> Binding access control to a communication channel would prevent
> anybody from using the latter approach. Having a separate way to
> handle access control would permit the use of the latter as well.

Sharing access control across multiple disks does not fit effectively
with the model used by apps that manage QEMU. Libvirt, and apps above
libvirt such as OpenStack, oVirt, and Kubernetes, all represent the
information required to connect to a network block device on a
per-disk basis - there's no sense of having some information that is
shared across all disks associated with a VM.
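
For example (illustrative syntax only - the exact vxhs option names may
differ from what's in this series; I'm borrowing the tls-creds-x509
object and the server.host/server.port/vdisk-id style used by other
network block drivers), two disks on the same host with independent
credentials would look something like:

  $ qemu-system-x86_64 \
      -object tls-creds-x509,id=tls-ro,endpoint=client,dir=/etc/pki/ro-creds \
      -object tls-creds-x509,id=tls-rw,endpoint=client,dir=/etc/pki/rw-creds \
      -drive driver=vxhs,vdisk-id=<RO-DISK-UUID>,server.host=hs1,\
server.port=9999,tls-creds=tls-ro,read-only=on \
      -drive driver=vxhs,vdisk-id=<RW-DISK-UUID>,server.host=hs1,\
server.port=9999,tls-creds=tls-rw

Each -drive carries everything needed to reach its own disk, which is
exactly the shape libvirt and the apps above it expect to generate.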

So from the POV of modelling this in QEMU, all information needs to be
specified against the individual -drive / -blockdev. If you really
must, you can simply reject configurations which imply talking to the
same host with incompatible parameters. Better would be to dynamically
determine whether you can re-use an existing connection, versus
spawning a new one, based on the config.
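
As a rough illustration of that last point (this is not the libvxhs
API - every name below is made up), the connection cache could be keyed
on every parameter that affects the session, so two disks share a
channel only when host, port and credentials all match, and get a
fresh connection otherwise:

  #include <stdbool.h>
  #include <stdlib.h>
  #include <string.h>

  typedef struct VXHSConn {
      char *host;
      char *port;
      char *tls_creds_id;        /* NULL for a plaintext channel */
      int refcount;
      struct VXHSConn *next;
      /* ... socket / channel state would live here ... */
  } VXHSConn;

  static VXHSConn *conn_cache;   /* simplistic global list for illustration */

  static bool conn_matches(const VXHSConn *c, const char *host,
                           const char *port, const char *creds)
  {
      return strcmp(c->host, host) == 0 &&
             strcmp(c->port, port) == 0 &&
             ((c->tls_creds_id == NULL && creds == NULL) ||
              (c->tls_creds_id && creds &&
               strcmp(c->tls_creds_id, creds) == 0));
  }

  /* Re-use an existing connection only if every parameter matches;
   * otherwise open a new one rather than rejecting the config. */
  VXHSConn *vxhs_conn_acquire(const char *host, const char *port,
                              const char *creds)
  {
      VXHSConn *c;

      for (c = conn_cache; c; c = c->next) {
          if (conn_matches(c, host, port, creds)) {
              c->refcount++;
              return c;
          }
      }

      c = calloc(1, sizeof(*c));
      c->host = strdup(host);
      c->port = strdup(port);
      c->tls_creds_id = creds ? strdup(creds) : NULL;
      c->refcount = 1;
      c->next = conn_cache;
      conn_cache = c;
      /* ... perform the TCP/TLS handshake with 'creds' here ... */
      return c;
  }

  void vxhs_conn_release(VXHSConn *c)
  {
      if (--c->refcount == 0) {
          /* unlink from conn_cache and close the channel */
      }
  }

That gives you the multiplexing you want in the common case, without
ever forcing two disks with different credentials onto one channel.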

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|


