From: Cornelia Huck
Subject: Re: [PATCH QEMU v25 15/17] vfio: Add ioctl to get dirty pages bitmap during dma unmap.
Date: Tue, 23 Jun 2020 10:25:45 +0200

On Sun, 21 Jun 2020 01:51:24 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> With vIOMMU, an IO virtual address range can get unmapped while in the
> pre-copy phase of migration. In that case, the unmap ioctl should return
> the pages pinned in that range, and QEMU should find their corresponding
> guest physical addresses and report those as dirty.
> 
> Suggested-by: Alex Williamson <alex.williamson@redhat.com>
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  hw/vfio/common.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 81 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 0518cf228ed5..a06b8f2f66e2 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -311,11 +311,83 @@ static bool vfio_devices_are_stopped_and_saving(void)
>      return true;
>  }
>  
> +static bool vfio_devices_are_running_and_saving(void)

I previously asked:
(https://lore.kernel.org/qemu-devel/20200506123125.449dbf42.cohuck@redhat.com/)

"Maybe s/are/all/ to make it sure that the scope is *all* vfio devices
here?

Is there any global state for this which we could use to check this in
a simpler way?"

Any comment?

> +{
> +    VFIOGroup *group;
> +    VFIODevice *vbasedev;
> +
> +    QLIST_FOREACH(group, &vfio_group_list, next) {
> +        QLIST_FOREACH(vbasedev, &group->device_list, next) {
> +            if ((vbasedev->device_state & VFIO_DEVICE_STATE_SAVING) &&
> +                (vbasedev->device_state & VFIO_DEVICE_STATE_RUNNING)) {
> +                continue;
> +            } else {
> +                return false;
> +            }
> +        }
> +    }
> +    return true;
> +}
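
For anyone reading along: the quoted hunk is only the gating helper; per the
diffstat, the bulk of the patch teaches the unmap path to request a dirty
bitmap from the kernel. What follows is a minimal, self-contained sketch (not
the patch itself) of the kernel interface being consumed here:
VFIO_IOMMU_UNMAP_DMA with the VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag and a
struct vfio_bitmap carried in the unmap structure's flexible data[] member.
It assumes a linux/vfio.h with the v5.8+ dirty-tracking UAPI; container_fd,
iova, size and page_size are placeholder inputs, and dma_unmap_get_dirty is
an illustrative name, not a function from the patch.

#include <linux/vfio.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>

static int dma_unmap_get_dirty(int container_fd, uint64_t iova,
                               uint64_t size, uint64_t page_size)
{
    struct vfio_iommu_type1_dma_unmap *unmap;
    struct vfio_bitmap *bitmap;
    uint64_t pages = size / page_size;
    /* one bit per page, rounded up to a whole number of 64-bit words */
    uint64_t bitmap_bytes = ((pages + 63) / 64) * sizeof(uint64_t);
    int ret;

    unmap = calloc(1, sizeof(*unmap) + sizeof(*bitmap));
    if (!unmap) {
        return -1;
    }
    unmap->argsz = sizeof(*unmap) + sizeof(*bitmap);
    unmap->flags = VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP;
    unmap->iova  = iova;
    unmap->size  = size;

    /* struct vfio_bitmap lives in the flexible data[] member */
    bitmap = (struct vfio_bitmap *)&unmap->data;
    bitmap->pgsize = page_size;
    bitmap->size   = bitmap_bytes;
    bitmap->data   = calloc(1, bitmap_bytes);
    if (!bitmap->data) {
        free(unmap);
        return -1;
    }

    ret = ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, unmap);
    /*
     * On success, bit n of the bitmap marks the page at
     * iova + n * page_size as dirty while it was pinned; QEMU's real
     * code translates those pages to guest physical addresses and feeds
     * the result to cpu_physical_memory_set_dirty_lebitmap().
     */
    free(bitmap->data);
    free(unmap);
    return ret;
}

The helper quoted above is what decides whether such a request makes sense at
all: asking for the bitmap is only useful while every device is both RUNNING
and SAVING, i.e. during the pre-copy phase the commit message describes.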



