From: Alex Williamson
Subject: Re: [PATCH v8 09/13] vfio/migration: Implement VFIO migration protocol v2
Date: Wed, 25 Jan 2023 17:00:13 -0700

On Sun, 22 Jan 2023 12:31:33 +0200
Avihai Horon <avihaih@nvidia.com> wrote:

> On 21/01/2023 1:07, Alex Williamson wrote:
> >
> > On Mon, 16 Jan 2023 16:11:31 +0200
> > Avihai Horon <avihaih@nvidia.com> wrote:
> >  
> >> Implement the basic mandatory part of VFIO migration protocol v2.
> >> This includes all functionality that is necessary to support
> >> VFIO_MIGRATION_STOP_COPY part of the v2 protocol.
> >>
> >> The two protocols, v1 and v2, will co-exist and in the following patches
> >> v1 protocol code will be removed.
> >>
> >> There are several main differences between v1 and v2 protocols:
> >> - VFIO device state is now represented as a finite state machine instead
> >>    of a bitmap.
> >>
> >> - Migration interface with kernel is now done using VFIO_DEVICE_FEATURE
> >>    ioctl and normal read() and write() instead of the migration region.
> >>
> >> - Pre-copy is made optional in v2 protocol. Support for pre-copy will be
> >>    added later on.
> >>
> >> Detailed information about VFIO migration protocol v2 and its difference
> >> compared to v1 protocol can be found here [1].
> >>
> >> [1]
> >> https://lore.kernel.org/all/20220224142024.147653-10-yishaih@nvidia.com/
> >>
> >> Signed-off-by: Avihai Horon <avihaih@nvidia.com>
> >> Reviewed-by: Cédric Le Goater <clg@redhat.com>
> >> ---
> >>   include/hw/vfio/vfio-common.h |   5 +
> >>   hw/vfio/common.c              |  19 +-
> >>   hw/vfio/migration.c           | 455 +++++++++++++++++++++++++++++++---
> >>   hw/vfio/trace-events          |   7 +
> >>   4 files changed, 447 insertions(+), 39 deletions(-)
> >>
> >> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
> >> index bbaf72ba00..6d7d850bfe 100644
> >> --- a/include/hw/vfio/vfio-common.h
> >> +++ b/include/hw/vfio/vfio-common.h
> >> @@ -66,6 +66,11 @@ typedef struct VFIOMigration {
> >>       int vm_running;
> >>       Notifier migration_state;
> >>       uint64_t pending_bytes;
> >> +    uint32_t device_state;
> >> +    int data_fd;
> >> +    void *data_buffer;
> >> +    size_t data_buffer_size;
> >> +    bool v2;
> >>   } VFIOMigration;
> >>
> >>   typedef struct VFIOAddressSpace {
> >> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> >> index 550b2d7ded..dcaa77d2a8 100644
> >> --- a/hw/vfio/common.c
> >> +++ b/hw/vfio/common.c
> >> @@ -355,10 +355,18 @@ static bool vfio_devices_all_dirty_tracking(VFIOContainer *container)
> >>                   return false;
> >>               }
> >>
> >> -            if ((vbasedev->pre_copy_dirty_page_tracking == ON_OFF_AUTO_OFF) &&
> >> +            if (!migration->v2 &&
> >> +                (vbasedev->pre_copy_dirty_page_tracking == ON_OFF_AUTO_OFF) &&
> >>                   (migration->device_state_v1 & VFIO_DEVICE_STATE_V1_RUNNING)) {
> >>                   return false;
> >>               }
> >> +
> >> +            if (migration->v2 &&
> >> +                (vbasedev->pre_copy_dirty_page_tracking == ON_OFF_AUTO_OFF) &&
> >> +                (migration->device_state == VFIO_DEVICE_STATE_RUNNING ||
> >> +                 migration->device_state == VFIO_DEVICE_STATE_RUNNING_P2P)) {
> >> +                return false;
> >> +            }
> >>           }
> >>       }
> >>       return true;
> >> @@ -385,7 +393,14 @@ static bool vfio_devices_all_running_and_mig_active(VFIOContainer *container)
> >>                   return false;
> >>               }
> >>
> >> -            if (migration->device_state_v1 & VFIO_DEVICE_STATE_V1_RUNNING) {
> >> +            if (!migration->v2 &&
> >> +                migration->device_state_v1 & VFIO_DEVICE_STATE_V1_RUNNING) {
> >> +                continue;
> >> +            }
> >> +
> >> +            if (migration->v2 &&
> >> +                (migration->device_state == VFIO_DEVICE_STATE_RUNNING ||
> >> +                 migration->device_state == VFIO_DEVICE_STATE_RUNNING_P2P)) {
> >>                   continue;
> >>               } else {
> >>                   return false;
> >> diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
> >> index 9df859f4d3..f19ada0f4f 100644
> >> --- a/hw/vfio/migration.c
> >> +++ b/hw/vfio/migration.c
> >> @@ -10,6 +10,7 @@
> >>   #include "qemu/osdep.h"
> >>   #include "qemu/main-loop.h"
> >>   #include "qemu/cutils.h"
> >> +#include "qemu/units.h"
> >>   #include <linux/vfio.h>
> >>   #include <sys/ioctl.h>
> >>
> >> @@ -44,8 +45,103 @@
> >>   #define VFIO_MIG_FLAG_DEV_SETUP_STATE   (0xffffffffef100003ULL)
> >>   #define VFIO_MIG_FLAG_DEV_DATA_STATE    (0xffffffffef100004ULL)
> >>
> >> +/*
> >> + * This is an arbitrary size based on migration of mlx5 devices, where typically
> >> + * total device migration size is on the order of 100s of MB. Testing with
> >> + * larger values, e.g. 128MB and 1GB, did not show a performance improvement.
> >> + */
> >> +#define VFIO_MIG_DEFAULT_DATA_BUFFER_SIZE (1 * MiB)
> >> +
> >>   static int64_t bytes_transferred;
> >>
> >> +static const char *mig_state_to_str(enum vfio_device_mig_state state)
> >> +{
> >> +    switch (state) {
> >> +    case VFIO_DEVICE_STATE_ERROR:
> >> +        return "ERROR";
> >> +    case VFIO_DEVICE_STATE_STOP:
> >> +        return "STOP";
> >> +    case VFIO_DEVICE_STATE_RUNNING:
> >> +        return "RUNNING";
> >> +    case VFIO_DEVICE_STATE_STOP_COPY:
> >> +        return "STOP_COPY";
> >> +    case VFIO_DEVICE_STATE_RESUMING:
> >> +        return "RESUMING";
> >> +    case VFIO_DEVICE_STATE_RUNNING_P2P:
> >> +        return "RUNNING_P2P";
> >> +    default:
> >> +        return "UNKNOWN STATE";
> >> +    }
> >> +}
> >> +
> >> +static int vfio_migration_set_state(VFIODevice *vbasedev,
> >> +                                    enum vfio_device_mig_state new_state,
> >> +                                    enum vfio_device_mig_state recover_state)
> >> +{
> >> +    VFIOMigration *migration = vbasedev->migration;
> >> +    uint64_t buf[DIV_ROUND_UP(sizeof(struct vfio_device_feature) +
> >> +                              sizeof(struct vfio_device_feature_mig_state),
> >> +                              sizeof(uint64_t))] = {};
> >> +    struct vfio_device_feature *feature = (struct vfio_device_feature *)buf;
> >> +    struct vfio_device_feature_mig_state *mig_state =
> >> +        (struct vfio_device_feature_mig_state *)feature->data;
> >> +
> >> +    feature->argsz = sizeof(buf);
> >> +    feature->flags =
> >> +        VFIO_DEVICE_FEATURE_SET | VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE;
> >> +    mig_state->device_state = new_state;
> >> +    if (ioctl(vbasedev->fd, VFIO_DEVICE_FEATURE, feature)) {
> >> +        /* Try to set the device in some good state */
> >> +        int err = -errno;
> >> +
> >> +        error_report(
> >> +            "%s: Failed setting device state to %s, err: %s. Setting 
> >> device in recover state %s",
> >> +                     vbasedev->name, mig_state_to_str(new_state),
> >> +                     strerror(errno), mig_state_to_str(recover_state));
> >> +
> >> +        mig_state->device_state = recover_state;
> >> +        if (ioctl(vbasedev->fd, VFIO_DEVICE_FEATURE, feature)) {
> >> +            err = -errno;
> >> +            error_report(
> >> +                "%s: Failed setting device in recover state, err: %s. 
> >> Resetting device",
> >> +                         vbasedev->name, strerror(errno));
> >> +
> >> +            if (ioctl(vbasedev->fd, VFIO_DEVICE_RESET)) {
> >> +                hw_error("%s: Failed resetting device, err: %s", 
> >> vbasedev->name,
> >> +                         strerror(errno));
> >> +            }
> >> +
> >> +            migration->device_state = VFIO_DEVICE_STATE_RUNNING;
> >> +
> >> +            return err;
> >> +        }
> >> +
> >> +        migration->device_state = recover_state;
> >> +
> >> +        return err;
> >> +    }
> >> +
> >> +    migration->device_state = new_state;
> >> +    if (mig_state->data_fd != -1) {
> >> +        if (migration->data_fd != -1) {
> >> +            /*
> >> +             * This can happen if the device is asynchronously reset and
> >> +             * terminates a data transfer.
> >> +             */
> >> +            error_report("%s: data_fd out of sync", vbasedev->name);
> >> +            close(mig_state->data_fd);
> >> +
> >> +            return -EBADF;
> >> +        }
> >> +
> >> +        migration->data_fd = mig_state->data_fd;
> >> +    }
> >> +
> >> +    trace_vfio_migration_set_state(vbasedev->name, mig_state_to_str(new_state));
> >> +
> >> +    return 0;
> >> +}
> >> +
> >>   static inline int vfio_mig_access(VFIODevice *vbasedev, void *val, int count,
> >>                                     off_t off, bool iswrite)
> >>   {
> >> @@ -260,6 +356,18 @@ static int vfio_save_buffer(QEMUFile *f, VFIODevice *vbasedev, uint64_t *size)
> >>       return ret;
> >>   }
> >>
> >> +static int vfio_load_buffer(QEMUFile *f, VFIODevice *vbasedev,
> >> +                            uint64_t data_size)
> >> +{
> >> +    VFIOMigration *migration = vbasedev->migration;
> >> +    int ret;
> >> +
> >> +    ret = qemu_file_get_to_fd(f, migration->data_fd, data_size);
> >> +    trace_vfio_load_state_device_data(vbasedev->name, data_size, ret);
> >> +
> >> +    return ret;
> >> +}
> >> +
> >>   static int vfio_v1_load_buffer(QEMUFile *f, VFIODevice *vbasedev,
> >>                                  uint64_t data_size)
> >>   {
> >> @@ -394,6 +502,14 @@ static int vfio_load_device_config_state(QEMUFile *f, void *opaque)
> >>       return qemu_file_get_error(f);
> >>   }
> >>
> >> +static void vfio_migration_cleanup(VFIODevice *vbasedev)
> >> +{
> >> +    VFIOMigration *migration = vbasedev->migration;
> >> +
> >> +    close(migration->data_fd);
> >> +    migration->data_fd = -1;
> >> +}
> >> +
> >>   static void vfio_migration_v1_cleanup(VFIODevice *vbasedev)
> >>   {
> >>       VFIOMigration *migration = vbasedev->migration;
> >> @@ -403,8 +519,80 @@ static void vfio_migration_v1_cleanup(VFIODevice *vbasedev)
> >>       }
> >>   }
> >>
> >> +static int vfio_query_stop_copy_size(VFIODevice *vbasedev,
> >> +                                     uint64_t *stop_copy_size)
> >> +{
> >> +    uint64_t buf[DIV_ROUND_UP(sizeof(struct vfio_device_feature) +
> >> +                              sizeof(struct vfio_device_feature_mig_data_size),
> >> +                              sizeof(uint64_t))] = {};
> >> +    struct vfio_device_feature *feature = (struct vfio_device_feature *)buf;
> >> +    struct vfio_device_feature_mig_data_size *mig_data_size =
> >> +        (struct vfio_device_feature_mig_data_size *)feature->data;
> >> +
> >> +    feature->argsz = sizeof(buf);
> >> +    feature->flags =
> >> +        VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_MIG_DATA_SIZE;
> >> +
> >> +    if (ioctl(vbasedev->fd, VFIO_DEVICE_FEATURE, feature)) {
> >> +        return -errno;
> >> +    }
> >> +
> >> +    *stop_copy_size = mig_data_size->stop_copy_length;
> >> +
> >> +    return 0;
> >> +}
> >> +
> >> +/* Returns 1 if end-of-stream is reached, 0 if more data and -1 if error */
> >
> > Nit, actually returns -errno on error.  
> 
> Oh, right. Will change.
> 
> >  
> >> +static int vfio_save_block(QEMUFile *f, VFIOMigration *migration)
> >> +{
> >> +    ssize_t data_size;
> >> +
> >> +    data_size = read(migration->data_fd, migration->data_buffer,
> >> +                     migration->data_buffer_size);
> >> +    if (data_size < 0) {
> >> +        return -errno;
> >> +    }
> >> +    if (data_size == 0) {
> >> +        return 1;
> >> +    }
> >> +
> >> +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
> >> +    qemu_put_be64(f, data_size);
> >> +    qemu_put_buffer(f, migration->data_buffer, data_size);
> >> +    bytes_transferred += data_size;
> >> +
> >> +    trace_vfio_save_block(migration->vbasedev->name, data_size);
> >> +
> >> +    return qemu_file_get_error(f);
> >> +}
> >> +
> >>   /* ---------------------------------------------------------------------- */
> >>
> >> +static int vfio_save_setup(QEMUFile *f, void *opaque)
> >> +{
> >> +    VFIODevice *vbasedev = opaque;
> >> +    VFIOMigration *migration = vbasedev->migration;
> >> +    uint64_t stop_copy_size = VFIO_MIG_DEFAULT_DATA_BUFFER_SIZE;
> >> +
> >> +    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_SETUP_STATE);
> >> +
> >> +    vfio_query_stop_copy_size(vbasedev, &stop_copy_size);
> >> +    migration->data_buffer_size = MIN(VFIO_MIG_DEFAULT_DATA_BUFFER_SIZE,
> >> +                                      stop_copy_size);
> >> +    migration->data_buffer = g_try_malloc0(migration->data_buffer_size);
> >> +    if (!migration->data_buffer) {
> >> +        error_report("%s: Failed to allocate migration data buffer",
> >> +                     vbasedev->name);
> >> +        return -ENOMEM;
> >> +    }
> >> +
> >> +    trace_vfio_save_setup(vbasedev->name, migration->data_buffer_size);
> >> +
> >> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);  
> >
> > What's the purpose of including this empty
> > VFIO_MIG_FLAG_DEV_SETUP_STATE delimiter?  Is this just an opportunity
> > to insert a fault in the migration data stream?  But if so, isn't it
> > sufficient that the caller does qemu_file_set_error() on an error
> > return?  
> 
> The upper migration layer wraps the vfio_save_setup() call with
> QEMU_VM_SECTION_START and QEMU_VM_SECTION_FOOTER tags.
> These tags are sent to the target, which consumes them and calls
> vfio_load_state().
> Therefore, at the very least the source must send VFIO_MIG_FLAG_END_OF_STATE
> in vfio_save_setup() so the target's vfio_load_state() can consume it and
> move on.
> 
> Sending only VFIO_MIG_FLAG_END_OF_STATE in vfio_save_setup() seems kind
> of odd and incomplete to me, so I tend to leave it as is.
> What do you think?

No, I don't think we should send an end-of-state without a start, a
spurious end could get us out of sync.  The question was more whether
we need to send the pair at all since the loading side does nothing
with it.  I think you're suggesting that would trigger errors elsewhere
or maybe reduce our error handling capabilities if our load function
got an empty buffer?

Also, should it be concerning that we're implementing the load_setup
callback, which is called from a code block noted as "Old style".  I'd
certainly prefer if this received an API review from the QEMU migration
experts.

> >> +
> >> +    return qemu_file_get_error(f);
> >> +}
> >> +
> >>   static int vfio_v1_save_setup(QEMUFile *f, void *opaque)
> >>   {
> >>       VFIODevice *vbasedev = opaque;
> >> @@ -448,6 +636,17 @@ static int vfio_v1_save_setup(QEMUFile *f, void *opaque)
> >>       return 0;
> >>   }
> >>
> >> +static void vfio_save_cleanup(void *opaque)
> >> +{
> >> +    VFIODevice *vbasedev = opaque;
> >> +    VFIOMigration *migration = vbasedev->migration;
> >> +
> >> +    g_free(migration->data_buffer);
> >> +    migration->data_buffer = NULL;
> >> +    vfio_migration_cleanup(vbasedev);
> >> +    trace_vfio_save_cleanup(vbasedev->name);
> >> +}
> >> +
> >>   static void vfio_v1_save_cleanup(void *opaque)
> >>   {
> >>       VFIODevice *vbasedev = opaque;
> >> @@ -456,6 +655,31 @@ static void vfio_v1_save_cleanup(void *opaque)
> >>       trace_vfio_save_cleanup(vbasedev->name);
> >>   }
> >>
> >> +/*
> >> + * Migration size of VFIO devices can be as little as a few KBs or as big as
> >> + * many GBs. This value should be big enough to cover the worst case.
> >> + */
> >> +#define VFIO_MIG_STOP_COPY_SIZE (100 * GiB)
> >> +static void vfio_save_pending(void *opaque, uint64_t threshold_size,
> >> +                              uint64_t *res_precopy_only,
> >> +                              uint64_t *res_compatible,
> >> +                              uint64_t *res_postcopy_only)
> >> +{
> >> +    VFIODevice *vbasedev = opaque;
> >> +    uint64_t stop_copy_size = VFIO_MIG_STOP_COPY_SIZE;
> >> +
> >> +    /*
> >> +     * If getting pending migration size fails, VFIO_MIG_STOP_COPY_SIZE is
> >> +     * reported so downtime limit won't be violated.
> >> +     */
> >> +    vfio_query_stop_copy_size(vbasedev, &stop_copy_size);
> >> +    *res_precopy_only += stop_copy_size;
> >> +
> >> +    trace_vfio_save_pending(vbasedev->name, *res_precopy_only,
> >> +                            *res_postcopy_only, *res_compatible,
> >> +                            stop_copy_size);
> >> +}
> >> +
> >>   static void vfio_v1_save_pending(void *opaque, uint64_t threshold_size,
> >>                                    uint64_t *res_precopy_only,
> >>                                    uint64_t *res_compatible,
> >> @@ -523,6 +747,41 @@ static int vfio_save_iterate(QEMUFile *f, void *opaque)
> >>       return 0;
> >>   }
> >>
> >> +static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
> >> +{
> >> +    VFIODevice *vbasedev = opaque;
> >> +    enum vfio_device_mig_state recover_state;
> >> +    int ret;
> >> +
> >> +    /* We reach here with device state STOP only */
> >> +    recover_state = VFIO_DEVICE_STATE_STOP;
> >> +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_STOP_COPY,
> >> +                                   recover_state);
> >> +    if (ret) {
> >> +        return ret;
> >> +    }
> >> +
> >> +    do {
> >> +        ret = vfio_save_block(f, vbasedev->migration);
> >> +        if (ret < 0) {
> >> +            return ret;
> >> +        }
> >> +    } while (!ret);
> >> +
> >> +    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
> >> +    ret = qemu_file_get_error(f);
> >> +    if (ret) {
> >> +        return ret;
> >> +    }
> >> +
> >> +    recover_state = VFIO_DEVICE_STATE_ERROR;
> >> +    ret = vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_STOP,
> >> +                                   recover_state);
> >> +    trace_vfio_save_complete_precopy(vbasedev->name, ret);
> >> +
> >> +    return ret;
> >> +}
> >> +
> >>   static int vfio_v1_save_complete_precopy(QEMUFile *f, void *opaque)
> >>   {
> >>       VFIODevice *vbasedev = opaque;
> >> @@ -592,6 +851,14 @@ static void vfio_save_state(QEMUFile *f, void *opaque)
> >>       }
> >>   }
> >>
> >> +static int vfio_load_setup(QEMUFile *f, void *opaque)
> >> +{
> >> +    VFIODevice *vbasedev = opaque;
> >> +
> >> +    return vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_RESUMING,
> >> +                                   vbasedev->migration->device_state);
> >> +}
> >> +
> >>   static int vfio_v1_load_setup(QEMUFile *f, void *opaque)
> >>   {
> >>       VFIODevice *vbasedev = opaque;
> >> @@ -619,6 +886,16 @@ static int vfio_v1_load_setup(QEMUFile *f, void *opaque)
> >>       return ret;
> >>   }
> >>
> >> +static int vfio_load_cleanup(void *opaque)
> >> +{
> >> +    VFIODevice *vbasedev = opaque;
> >> +
> >> +    vfio_migration_cleanup(vbasedev);
> >> +    trace_vfio_load_cleanup(vbasedev->name);
> >> +
> >> +    return 0;
> >> +}
> >> +
> >>   static int vfio_v1_load_cleanup(void *opaque)
> >>   {
> >>       VFIODevice *vbasedev = opaque;
> >> @@ -661,7 +938,11 @@ static int vfio_load_state(QEMUFile *f, void *opaque, int version_id)
> >>               uint64_t data_size = qemu_get_be64(f);
> >>
> >>               if (data_size) {
> >> -                ret = vfio_v1_load_buffer(f, vbasedev, data_size);
> >> +                if (vbasedev->migration->v2) {
> >> +                    ret = vfio_load_buffer(f, vbasedev, data_size);
> >> +                } else {
> >> +                    ret = vfio_v1_load_buffer(f, vbasedev, data_size);
> >> +                }
> >>                   if (ret < 0) {
> >>                       return ret;
> >>                   }
> >> @@ -682,6 +963,17 @@ static int vfio_load_state(QEMUFile *f, void *opaque, int version_id)
> >>       return ret;
> >>   }
> >>
> >> +static const SaveVMHandlers savevm_vfio_handlers = {
> >> +    .save_setup = vfio_save_setup,
> >> +    .save_cleanup = vfio_save_cleanup,
> >> +    .save_live_pending = vfio_save_pending,
> >> +    .save_live_complete_precopy = vfio_save_complete_precopy,
> >> +    .save_state = vfio_save_state,
> >> +    .load_setup = vfio_load_setup,
> >> +    .load_cleanup = vfio_load_cleanup,
> >> +    .load_state = vfio_load_state,
> >> +};
> >> +
> >>   static SaveVMHandlers savevm_vfio_v1_handlers = {
> >>       .save_setup = vfio_v1_save_setup,
> >>       .save_cleanup = vfio_v1_save_cleanup,
> >> @@ -696,6 +988,34 @@ static SaveVMHandlers savevm_vfio_v1_handlers = {
> >>
> >>   /* ---------------------------------------------------------------------- */
> >>
> >> +static void vfio_vmstate_change(void *opaque, bool running, RunState state)
> >> +{
> >> +    VFIODevice *vbasedev = opaque;
> >> +    enum vfio_device_mig_state new_state;
> >> +    int ret;
> >> +
> >> +    if (running) {
> >> +        new_state = VFIO_DEVICE_STATE_RUNNING;
> >> +    } else {
> >> +        new_state = VFIO_DEVICE_STATE_STOP;
> >> +    }
> >> +
> >> +    ret = vfio_migration_set_state(vbasedev, new_state,
> >> +                                   VFIO_DEVICE_STATE_ERROR);
> >> +    if (ret) {
> >> +        /*
> >> +         * Migration should be aborted in this case, but vm_state_notify()
> >> +         * currently does not support reporting failures.
> >> +         */
> >> +        if (migrate_get_current()->to_dst_file) {
> >> +            qemu_file_set_error(migrate_get_current()->to_dst_file, ret);
> >> +        }
> >> +    }
> >> +
> >> +    trace_vfio_vmstate_change(vbasedev->name, running, RunState_str(state),
> >> +                              mig_state_to_str(new_state));
> >> +}
> >> +
> >>   static void vfio_v1_vmstate_change(void *opaque, bool running, RunState state)
> >>   {
> >>       VFIODevice *vbasedev = opaque;
> >> @@ -769,12 +1089,17 @@ static void vfio_migration_state_notifier(Notifier *notifier, void *data)
> >>       case MIGRATION_STATUS_CANCELLED:
> >>       case MIGRATION_STATUS_FAILED:
> >>           bytes_transferred = 0;
> >> -        ret = vfio_migration_v1_set_state(vbasedev,
> >> -                                          ~(VFIO_DEVICE_STATE_V1_SAVING |
> >> -                                            VFIO_DEVICE_STATE_V1_RESUMING),
> >> -                                          VFIO_DEVICE_STATE_V1_RUNNING);
> >> -        if (ret) {
> >> -            error_report("%s: Failed to set state RUNNING", vbasedev->name);
> >> +        if (migration->v2) {
> >> +            vfio_migration_set_state(vbasedev, VFIO_DEVICE_STATE_RUNNING,
> >> +                                     VFIO_DEVICE_STATE_ERROR);
> >> +        } else {
> >> +            ret = vfio_migration_v1_set_state(vbasedev,
> >> +                                              ~(VFIO_DEVICE_STATE_V1_SAVING |
> >> +                                                VFIO_DEVICE_STATE_V1_RESUMING),
> >> +                                              VFIO_DEVICE_STATE_V1_RUNNING);
> >> +            if (ret) {
> >> +                error_report("%s: Failed to set state RUNNING", vbasedev->name);
> >> +            }
> >>           }
> >>       }
> >>   }
> >> @@ -783,12 +1108,42 @@ static void vfio_migration_exit(VFIODevice *vbasedev)
> >>   {
> >>       VFIOMigration *migration = vbasedev->migration;
> >>
> >> -    vfio_region_exit(&migration->region);
> >> -    vfio_region_finalize(&migration->region);
> >> +    if (!migration->v2) {
> >> +        vfio_region_exit(&migration->region);
> >> +        vfio_region_finalize(&migration->region);
> >> +    }
> >>       g_free(vbasedev->migration);
> >>       vbasedev->migration = NULL;
> >>   }
> >>
> >> +static int vfio_migration_query_flags(VFIODevice *vbasedev, uint64_t *mig_flags)
> >> +{
> >> +    uint64_t buf[DIV_ROUND_UP(sizeof(struct vfio_device_feature) +
> >> +                                  sizeof(struct vfio_device_feature_migration),
> >> +                              sizeof(uint64_t))] = {};
> >> +    struct vfio_device_feature *feature = (struct vfio_device_feature *)buf;
> >> +    struct vfio_device_feature_migration *mig =
> >> +        (struct vfio_device_feature_migration *)feature->data;
> >> +
> >> +    feature->argsz = sizeof(buf);
> >> +    feature->flags = VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_MIGRATION;
> >> +    if (ioctl(vbasedev->fd, VFIO_DEVICE_FEATURE, feature)) {
> >> +        if (errno == ENOTTY) {
> >> +            error_report("%s: VFIO migration is not supported in kernel",
> >> +                         vbasedev->name);
> >> +        } else {
> >> +            error_report("%s: Failed to query VFIO migration support, 
> >> err: %s",
> >> +                         vbasedev->name, strerror(errno));
> >> +        }  
> >
> > This looks like a future TODO, at some point we'll want migration
> > enabled by default _as_available_ and it would cause problems to
> > generate error logs when it's not available.  
> 
> Do you mean we should add a TODO comment? E.g.:
> /*
>   * TODO: Currently VFIO migration is disabled by default. In the future, if
>   * it's changed to be enabled by default as available, generating errors
>   * when it's not available might cause problems. So, the error_report()s
>   * should be removed.
>   */
> 
> Or should we change it now to warn_report(), or to a
> trace_vfio_migration_query_flags(vbasedev, err) trace for debugging?
> That way the user would be notified that migration is not supported even
> before they try to migrate and fail.

I don't think we need to do anything now since migration is marked
experimental and I think we only go down this path when the bool arg is
turned on.  It's arguably correct that we should generate error
messages and even fail the device initialization if we're asked to
enable migration and we can't.

But if we're looking at supportable migration, we had a previous
discussion that there wouldn't be a device option to enable it.  It
should automatically be enabled when supported by the device, and device
initialization should only fail for lack of migration support when the
-only-migratable flag is set.  Along with that, a device's lack of
migration support is not a condition that should generate any error
messages except via the -only-migratable option.  Therefore we eventually
can't have that error_report on -ENOTTY for the migration feature.

A trace is unlikely to help a user that wants to check migration
support before they attempt it.  AIUI, there's a mechanism to query
whether the VM is migratable, ie. whether there are any registered
migration blockers, and libvirt will check this before attempting a
migration.  Thanks,

Alex



