Re: [PULL 18/19] virtio: add vhost-user-fs base device
From: Michael S. Tsirkin
Subject: Re: [PULL 18/19] virtio: add vhost-user-fs base device
Date: Mon, 7 Oct 2019 03:28:09 -0400
On Mon, Oct 07, 2019 at 07:57:04AM +0200, Igor Mammedov wrote:
> On Sat, 5 Oct 2019 18:00:09 -0400
> "Michael S. Tsirkin" <address@hidden> wrote:
>
> > From: "Dr. David Alan Gilbert" <address@hidden>
> >
> > The virtio-fs virtio device provides shared file system access using
> > the FUSE protocol carried over virtio.
> > The actual file server is implemented in an external vhost-user-fs device
> > backend process.
> >
> > Signed-off-by: Stefan Hajnoczi <address@hidden>
> > Signed-off-by: Sebastien Boeuf <address@hidden>
> > Signed-off-by: Dr. David Alan Gilbert <address@hidden>
> > Message-Id: <address@hidden>
> > Acked-by: Peter Maydell <address@hidden>
> > Reviewed-by: Michael S. Tsirkin <address@hidden>
> > Signed-off-by: Michael S. Tsirkin <address@hidden>
>
> Michael,
>
> it looks like your script for sending pull requests is still broken:
>
> this patch in the pull req has the same
> Message-ID: <address@hidden>
> as the original email
> [PATCH v4 2/3] virtio: add vhost-user-fs base device
> even though the body differs from the original,
> which breaks threading on the email client side.
>
> Could you fix it, please?
Finally figured it out. The script was picking up the last
Message-ID in the file, ignoring the mail header boundaries.
Fixed, thanks for your patience!
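For reference, a minimal sketch of the pitfall (hypothetical illustration only, not the actual pull-request script): a naive scan over the whole mail picks up a Message-ID quoted in the body, while restricting the search to the header section (everything before the first blank line) finds the right one.

```python
import re

# Toy mail: the real Message-ID is in the headers; a second one,
# quoted from the original patch email, appears in the body.
RAW_MAIL = """\
Message-ID: <pull-18@example.org>
Subject: [PULL 18/19] virtio: add vhost-user-fs base device

Body quoting the original patch:
Message-ID: <original-v4-2@example.org>
"""

def last_message_id_anywhere(mail):
    # Buggy approach: scan the entire mail, so a Message-ID quoted
    # in the body shadows the one in the headers.
    ids = re.findall(r"^Message-ID:\s*(<[^>]+>)", mail,
                     re.MULTILINE | re.IGNORECASE)
    return ids[-1] if ids else None

def header_message_id(mail):
    # Fixed approach: the header section ends at the first blank
    # line, so only look there.
    headers = mail.split("\n\n", 1)[0]
    m = re.search(r"^Message-ID:\s*(<[^>]+>)", headers,
                  re.MULTILINE | re.IGNORECASE)
    return m.group(1) if m else None
```

The buggy variant returns the quoted body ID, which is exactly what makes the mail thread as a duplicate of the original patch on the receiving client.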
>
>
> > ---
> > configure | 13 ++
> > hw/virtio/Makefile.objs | 1 +
> > hw/virtio/vhost-user-fs.c | 299 ++++++++++++++++++++++++++++++
> > include/hw/virtio/vhost-user-fs.h | 45 +++++
> > 4 files changed, 358 insertions(+)
> > create mode 100644 hw/virtio/vhost-user-fs.c
> > create mode 100644 include/hw/virtio/vhost-user-fs.h
> >
> > diff --git a/configure b/configure
> > index 8f8446f52b..7bd01b201c 100755
> > --- a/configure
> > +++ b/configure
> > @@ -381,6 +381,7 @@ vhost_crypto=""
> > vhost_scsi=""
> > vhost_vsock=""
> > vhost_user=""
> > +vhost_user_fs=""
> > kvm="no"
> > hax="no"
> > hvf="no"
> > @@ -1293,6 +1294,10 @@ for opt do
> > ;;
> > --enable-vhost-vsock) vhost_vsock="yes"
> > ;;
> > + --disable-vhost-user-fs) vhost_user_fs="no"
> > + ;;
> > + --enable-vhost-user-fs) vhost_user_fs="yes"
> > + ;;
> > --disable-opengl) opengl="no"
> > ;;
> > --enable-opengl) opengl="yes"
> > @@ -2236,6 +2241,10 @@ test "$vhost_crypto" = "" && vhost_crypto=$vhost_user
> > if test "$vhost_crypto" = "yes" && test "$vhost_user" = "no"; then
> > error_exit "--enable-vhost-crypto requires --enable-vhost-user"
> > fi
> > +test "$vhost_user_fs" = "" && vhost_user_fs=$vhost_user
> > +if test "$vhost_user_fs" = "yes" && test "$vhost_user" = "no"; then
> > + error_exit "--enable-vhost-user-fs requires --enable-vhost-user"
> > +fi
> >
> > # OR the vhost-kernel and vhost-user values for simplicity
> > if test "$vhost_net" = ""; then
> > @@ -6377,6 +6386,7 @@ echo "vhost-crypto support $vhost_crypto"
> > echo "vhost-scsi support $vhost_scsi"
> > echo "vhost-vsock support $vhost_vsock"
> > echo "vhost-user support $vhost_user"
> > +echo "vhost-user-fs support $vhost_user_fs"
> > echo "Trace backends $trace_backends"
> > if have_backend "simple"; then
> > echo "Trace output file $trace_file-<pid>"
> > @@ -6873,6 +6883,9 @@ fi
> > if test "$vhost_user" = "yes" ; then
> > echo "CONFIG_VHOST_USER=y" >> $config_host_mak
> > fi
> > +if test "$vhost_user_fs" = "yes" ; then
> > + echo "CONFIG_VHOST_USER_FS=y" >> $config_host_mak
> > +fi
> > if test "$blobs" = "yes" ; then
> > echo "INSTALL_BLOBS=yes" >> $config_host_mak
> > fi
> > diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
> > index 964ce78607..47ffbf22c4 100644
> > --- a/hw/virtio/Makefile.objs
> > +++ b/hw/virtio/Makefile.objs
> > @@ -11,6 +11,7 @@ common-obj-$(CONFIG_VIRTIO_PCI) += virtio-pci.o
> > common-obj-$(CONFIG_VIRTIO_MMIO) += virtio-mmio.o
> > obj-$(CONFIG_VIRTIO_BALLOON) += virtio-balloon.o
> > obj-$(CONFIG_VIRTIO_CRYPTO) += virtio-crypto.o
> > +obj-$(CONFIG_VHOST_USER_FS) += vhost-user-fs.o
> > obj-$(call land,$(CONFIG_VIRTIO_CRYPTO),$(CONFIG_VIRTIO_PCI)) += virtio-crypto-pci.o
> > obj-$(CONFIG_VIRTIO_PMEM) += virtio-pmem.o
> > common-obj-$(call land,$(CONFIG_VIRTIO_PMEM),$(CONFIG_VIRTIO_PCI)) += virtio-pmem-pci.o
> > diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
> > new file mode 100644
> > index 0000000000..f0df7f4746
> > --- /dev/null
> > +++ b/hw/virtio/vhost-user-fs.c
> > @@ -0,0 +1,299 @@
> > +/*
> > + * Vhost-user filesystem virtio device
> > + *
> > + * Copyright 2018-2019 Red Hat, Inc.
> > + *
> > + * Authors:
> > + * Stefan Hajnoczi <address@hidden>
> > + *
> > + * This work is licensed under the terms of the GNU GPL, version 2 or
> > + * (at your option) any later version. See the COPYING file in the
> > + * top-level directory.
> > + */
> > +
> > +#include "qemu/osdep.h"
> > +#include <sys/ioctl.h>
> > +#include "standard-headers/linux/virtio_fs.h"
> > +#include "qapi/error.h"
> > +#include "hw/qdev-properties.h"
> > +#include "hw/virtio/virtio-bus.h"
> > +#include "hw/virtio/virtio-access.h"
> > +#include "qemu/error-report.h"
> > +#include "hw/virtio/vhost-user-fs.h"
> > +#include "monitor/monitor.h"
> > +
> > +static void vuf_get_config(VirtIODevice *vdev, uint8_t *config)
> > +{
> > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > + struct virtio_fs_config fscfg = {};
> > +
> > + memcpy((char *)fscfg.tag, fs->conf.tag,
> > + MIN(strlen(fs->conf.tag) + 1, sizeof(fscfg.tag)));
> > +
> > + virtio_stl_p(vdev, &fscfg.num_request_queues, fs->conf.num_request_queues);
> > +
> > + memcpy(config, &fscfg, sizeof(fscfg));
> > +}
> > +
> > +static void vuf_start(VirtIODevice *vdev)
> > +{
> > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > + BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
> > + VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
> > + int ret;
> > + int i;
> > +
> > + if (!k->set_guest_notifiers) {
> > + error_report("binding does not support guest notifiers");
> > + return;
> > + }
> > +
> > + ret = vhost_dev_enable_notifiers(&fs->vhost_dev, vdev);
> > + if (ret < 0) {
> > + error_report("Error enabling host notifiers: %d", -ret);
> > + return;
> > + }
> > +
> > + ret = k->set_guest_notifiers(qbus->parent, fs->vhost_dev.nvqs, true);
> > + if (ret < 0) {
> > + error_report("Error binding guest notifier: %d", -ret);
> > + goto err_host_notifiers;
> > + }
> > +
> > + fs->vhost_dev.acked_features = vdev->guest_features;
> > + ret = vhost_dev_start(&fs->vhost_dev, vdev);
> > + if (ret < 0) {
> > + error_report("Error starting vhost: %d", -ret);
> > + goto err_guest_notifiers;
> > + }
> > +
> > + /*
> > + * guest_notifier_mask/pending not used yet, so just unmask
> > + * everything here. virtio-pci will do the right thing by
> > + * enabling/disabling irqfd.
> > + */
> > + for (i = 0; i < fs->vhost_dev.nvqs; i++) {
> > + vhost_virtqueue_mask(&fs->vhost_dev, vdev, i, false);
> > + }
> > +
> > + return;
> > +
> > +err_guest_notifiers:
> > + k->set_guest_notifiers(qbus->parent, fs->vhost_dev.nvqs, false);
> > +err_host_notifiers:
> > + vhost_dev_disable_notifiers(&fs->vhost_dev, vdev);
> > +}
> > +
> > +static void vuf_stop(VirtIODevice *vdev)
> > +{
> > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > + BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(vdev)));
> > + VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
> > + int ret;
> > +
> > + if (!k->set_guest_notifiers) {
> > + return;
> > + }
> > +
> > + vhost_dev_stop(&fs->vhost_dev, vdev);
> > +
> > + ret = k->set_guest_notifiers(qbus->parent, fs->vhost_dev.nvqs, false);
> > + if (ret < 0) {
> > + error_report("vhost guest notifier cleanup failed: %d", ret);
> > + return;
> > + }
> > +
> > + vhost_dev_disable_notifiers(&fs->vhost_dev, vdev);
> > +}
> > +
> > +static void vuf_set_status(VirtIODevice *vdev, uint8_t status)
> > +{
> > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > + bool should_start = status & VIRTIO_CONFIG_S_DRIVER_OK;
> > +
> > + if (!vdev->vm_running) {
> > + should_start = false;
> > + }
> > +
> > + if (fs->vhost_dev.started == should_start) {
> > + return;
> > + }
> > +
> > + if (should_start) {
> > + vuf_start(vdev);
> > + } else {
> > + vuf_stop(vdev);
> > + }
> > +}
> > +
> > +static uint64_t vuf_get_features(VirtIODevice *vdev,
> > + uint64_t requested_features,
> > + Error **errp)
> > +{
> > + /* No feature bits used yet */
> > + return requested_features;
> > +}
> > +
> > +static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
> > +{
> > + /*
> > + * Not normally called; it's the daemon that handles the queue;
> > + * however virtio's cleanup path can call this.
> > + */
> > +}
> > +
> > +static void vuf_guest_notifier_mask(VirtIODevice *vdev, int idx,
> > + bool mask)
> > +{
> > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > +
> > + vhost_virtqueue_mask(&fs->vhost_dev, vdev, idx, mask);
> > +}
> > +
> > +static bool vuf_guest_notifier_pending(VirtIODevice *vdev, int idx)
> > +{
> > + VHostUserFS *fs = VHOST_USER_FS(vdev);
> > +
> > + return vhost_virtqueue_pending(&fs->vhost_dev, idx);
> > +}
> > +
> > +static void vuf_device_realize(DeviceState *dev, Error **errp)
> > +{
> > + VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > + VHostUserFS *fs = VHOST_USER_FS(dev);
> > + unsigned int i;
> > + size_t len;
> > + int ret;
> > +
> > + if (!fs->conf.chardev.chr) {
> > + error_setg(errp, "missing chardev");
> > + return;
> > + }
> > +
> > + if (!fs->conf.tag) {
> > + error_setg(errp, "missing tag property");
> > + return;
> > + }
> > + len = strlen(fs->conf.tag);
> > + if (len == 0) {
> > + error_setg(errp, "tag property cannot be empty");
> > + return;
> > + }
> > + if (len > sizeof_field(struct virtio_fs_config, tag)) {
> > + error_setg(errp, "tag property must be %zu bytes or less",
> > + sizeof_field(struct virtio_fs_config, tag));
> > + return;
> > + }
> > +
> > + if (fs->conf.num_request_queues == 0) {
> > + error_setg(errp, "num-request-queues property must be larger than 0");
> > + return;
> > + }
> > +
> > + if (!is_power_of_2(fs->conf.queue_size)) {
> > + error_setg(errp, "queue-size property must be a power of 2");
> > + return;
> > + }
> > +
> > + if (fs->conf.queue_size > VIRTQUEUE_MAX_SIZE) {
> > + error_setg(errp, "queue-size property must be %u or smaller",
> > + VIRTQUEUE_MAX_SIZE);
> > + return;
> > + }
> > +
> > + if (!vhost_user_init(&fs->vhost_user, &fs->conf.chardev, errp)) {
> > + return;
> > + }
> > +
> > + virtio_init(vdev, "vhost-user-fs", VIRTIO_ID_FS,
> > + sizeof(struct virtio_fs_config));
> > +
> > + /* Hiprio queue */
> > + virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
> > +
> > + /* Request queues */
> > + for (i = 0; i < fs->conf.num_request_queues; i++) {
> > + virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
> > + }
> > +
> > + /* 1 high prio queue, plus the number configured */
> > + fs->vhost_dev.nvqs = 1 + fs->conf.num_request_queues;
> > + fs->vhost_dev.vqs = g_new0(struct vhost_virtqueue, fs->vhost_dev.nvqs);
> > + ret = vhost_dev_init(&fs->vhost_dev, &fs->vhost_user,
> > + VHOST_BACKEND_TYPE_USER, 0);
> > + if (ret < 0) {
> > + error_setg_errno(errp, -ret, "vhost_dev_init failed");
> > + goto err_virtio;
> > + }
> > +
> > + return;
> > +
> > +err_virtio:
> > + vhost_user_cleanup(&fs->vhost_user);
> > + virtio_cleanup(vdev);
> > + g_free(fs->vhost_dev.vqs);
> > + return;
> > +}
> > +
> > +static void vuf_device_unrealize(DeviceState *dev, Error **errp)
> > +{
> > + VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > + VHostUserFS *fs = VHOST_USER_FS(dev);
> > +
> > + /* This will stop vhost backend if appropriate. */
> > + vuf_set_status(vdev, 0);
> > +
> > + vhost_dev_cleanup(&fs->vhost_dev);
> > +
> > + vhost_user_cleanup(&fs->vhost_user);
> > +
> > + virtio_cleanup(vdev);
> > + g_free(fs->vhost_dev.vqs);
> > + fs->vhost_dev.vqs = NULL;
> > +}
> > +
> > +static const VMStateDescription vuf_vmstate = {
> > + .name = "vhost-user-fs",
> > + .unmigratable = 1,
> > +};
> > +
> > +static Property vuf_properties[] = {
> > + DEFINE_PROP_CHR("chardev", VHostUserFS, conf.chardev),
> > + DEFINE_PROP_STRING("tag", VHostUserFS, conf.tag),
> > + DEFINE_PROP_UINT16("num-request-queues", VHostUserFS,
> > + conf.num_request_queues, 1),
> > + DEFINE_PROP_UINT16("queue-size", VHostUserFS, conf.queue_size, 128),
> > + DEFINE_PROP_STRING("vhostfd", VHostUserFS, conf.vhostfd),
> > + DEFINE_PROP_END_OF_LIST(),
> > +};
> > +
> > +static void vuf_class_init(ObjectClass *klass, void *data)
> > +{
> > + DeviceClass *dc = DEVICE_CLASS(klass);
> > + VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
> > +
> > + dc->props = vuf_properties;
> > + dc->vmsd = &vuf_vmstate;
> > + set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
> > + vdc->realize = vuf_device_realize;
> > + vdc->unrealize = vuf_device_unrealize;
> > + vdc->get_features = vuf_get_features;
> > + vdc->get_config = vuf_get_config;
> > + vdc->set_status = vuf_set_status;
> > + vdc->guest_notifier_mask = vuf_guest_notifier_mask;
> > + vdc->guest_notifier_pending = vuf_guest_notifier_pending;
> > +}
> > +
> > +static const TypeInfo vuf_info = {
> > + .name = TYPE_VHOST_USER_FS,
> > + .parent = TYPE_VIRTIO_DEVICE,
> > + .instance_size = sizeof(VHostUserFS),
> > + .class_init = vuf_class_init,
> > +};
> > +
> > +static void vuf_register_types(void)
> > +{
> > + type_register_static(&vuf_info);
> > +}
> > +
> > +type_init(vuf_register_types)
> > diff --git a/include/hw/virtio/vhost-user-fs.h b/include/hw/virtio/vhost-user-fs.h
> > new file mode 100644
> > index 0000000000..539885b458
> > --- /dev/null
> > +++ b/include/hw/virtio/vhost-user-fs.h
> > @@ -0,0 +1,45 @@
> > +/*
> > + * Vhost-user filesystem virtio device
> > + *
> > + * Copyright 2018-2019 Red Hat, Inc.
> > + *
> > + * Authors:
> > + * Stefan Hajnoczi <address@hidden>
> > + *
> > + * This work is licensed under the terms of the GNU GPL, version 2 or
> > + * (at your option) any later version. See the COPYING file in the
> > + * top-level directory.
> > + */
> > +
> > +#ifndef _QEMU_VHOST_USER_FS_H
> > +#define _QEMU_VHOST_USER_FS_H
> > +
> > +#include "hw/virtio/virtio.h"
> > +#include "hw/virtio/vhost.h"
> > +#include "hw/virtio/vhost-user.h"
> > +#include "chardev/char-fe.h"
> > +
> > +#define TYPE_VHOST_USER_FS "vhost-user-fs-device"
> > +#define VHOST_USER_FS(obj) \
> > + OBJECT_CHECK(VHostUserFS, (obj), TYPE_VHOST_USER_FS)
> > +
> > +typedef struct {
> > + CharBackend chardev;
> > + char *tag;
> > + uint16_t num_request_queues;
> > + uint16_t queue_size;
> > + char *vhostfd;
> > +} VHostUserFSConf;
> > +
> > +typedef struct {
> > + /*< private >*/
> > + VirtIODevice parent;
> > + VHostUserFSConf conf;
> > + struct vhost_virtqueue *vhost_vqs;
> > + struct vhost_dev vhost_dev;
> > + VhostUserState vhost_user;
> > +
> > + /*< public >*/
> > +} VHostUserFS;
> > +
> > +#endif /* _QEMU_VHOST_USER_FS_H */
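To picture how the device added by this patch is meant to be used: an external daemon serving FUSE over a UNIX socket, with QEMU's chardev pointed at it. This is a hypothetical sketch, not from the patch; the socket path, tag, and daemon name are made-up examples, the guest-facing `vhost-user-fs-pci` transport is added by the following patch (19/19), and vhost-user additionally requires guest RAM to be shareable with the daemon (e.g. a file-backed shared memory object), which is omitted here. The property names (`chardev`, `tag`, `queue-size`, `num-request-queues`) are the ones defined in `vuf_properties[]` above.

```shell
# Start the external vhost-user-fs daemon first (e.g. virtiofsd),
# listening on a UNIX socket, then connect QEMU's chardev to it.
qemu-system-x86_64 \
    -chardev socket,id=char0,path=/tmp/vhost-fs.sock \
    -device vhost-user-fs-pci,chardev=char0,tag=myfs,queue-size=512,num-request-queues=1
```

Note that `queue-size` must satisfy the realize-time checks above: a power of 2, no larger than VIRTQUEUE_MAX_SIZE; the guest then mounts the share by the given `tag`.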