
From: Kirti Wankhede
Subject: Re: [PATCH v12 Kernel 4/7] vfio iommu: Implementation of ioctl to for dirty pages tracking.
Date: Tue, 18 Feb 2020 11:28:53 +0530
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Thunderbird/68.5.0


As I understand the above algorithm, we find a vfio_dma
overlapping the request and populate the bitmap for that range.  Then
we go back and put_user() for each byte that we touched.  We could
instead simply work on a one-byte buffer as we enumerate the requested
range and do a put_user() every time we reach the end of it and have
bits set.  That would greatly simplify the above example.  But I would
expect that we're more likely to get asked for ranges covering a
single vfio_dma.
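
A minimal sketch of that byte-at-a-time scheme might look as follows.
The names are illustrative, not from the patch; iova_dirty() stands in
for whatever per-page dirty test the real implementation uses:

    /*
     * Walk the requested range, accumulate dirty bits in one byte,
     * and put_user() it whenever a byte boundary (or the end of the
     * range) is reached and any bit is set.
     */
    static int copy_dirty_bitmap_bytewise(u8 __user *bitmap,
                                          unsigned long start_pfn,
                                          unsigned long npages)
    {
        unsigned long i;
        u8 byte = 0;

        for (i = 0; i < npages; i++) {
            if (iova_dirty(start_pfn + i))
                byte |= 1 << (i & 7);

            /* flush at each byte boundary and at the end of the range */
            if ((i & 7) == 7 || i == npages - 1) {
                if (byte && put_user(byte, bitmap + (i >> 3)))
                    return -EFAULT;
                byte = 0;
            }
        }
        return 0;
    }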

QEMU asks for a single vfio_dma during each iteration.

If we restrict this ABI to cover a single vfio_dma only, then it
simplifies the logic here. That was my original suggestion. Should we
think about that again?

But we currently allow unmaps that overlap multiple vfio_dmas as long
as no vfio_dma is bisected, so I think that implies that an unmap while
asking for the dirty bitmap has even further restricted semantics.  I'm
also reluctant to design an ABI around what happens to be the current
QEMU implementation.

If we take your example above, ranges {0x0000,0xa000} and
{0xa000,0x10000} ({start,end}), I think you're working with the
following two bitmaps in this implementation:

00000011 11111111b
00111111b

And we need to combine those into:

11111111 11111111b


But it seems like that would be easier if the second bitmap was instead:

11111100b
Then we wouldn't need to worry about the entire bitmap being shifted by
the bit offset within the byte, which limits our fixes to the boundary
byte and allows us to use copy_to_user() directly for the bulk of the
copy.  So how do we get there?

I think we start with allocating the vfio_dma bitmap to account for
this initial offset, so we calculate bitmap_base_iova as:
    (iova & ~((PAGE_SIZE << 3) - 1))
We then use bitmap_base_iova in calculating which bits to set.
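
A minimal sketch of that calculation, with illustrative names
(dma_range and mark_page_dirty are stand-ins, not from the patch):

    /*
     * One bitmap byte covers 8 pages, i.e. PAGE_SIZE << 3 bytes of
     * iova space, so the bitmap base is the iova rounded down to
     * that granularity.
     */
    static inline dma_addr_t bitmap_base_iova(dma_addr_t iova)
    {
        return iova & ~((dma_addr_t)(PAGE_SIZE << 3) - 1);
    }

    struct dma_range {                  /* stand-in for struct vfio_dma */
        dma_addr_t iova;
        unsigned long *bitmap;
    };

    static void mark_page_dirty(struct dma_range *dma, dma_addr_t iova)
    {
        /* bit index is relative to the aligned base, not to dma->iova */
        set_bit((iova - bitmap_base_iova(dma->iova)) >> PAGE_SHIFT,
                dma->bitmap);
    }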

The user needs to follow the same rules, and maybe this adds some value
to having the user provide the bitmap size rather than the kernel
calculating it.  For example, if the user wanted the dirty bitmap for
the range {0xa000,0x10000} above, they'd provide at least a 1-byte
bitmap, but we'd return bit #2 set to indicate 0xa000 is dirty.
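
Worked through with 4K pages (so PAGE_SIZE << 3 = 0x8000), the formula
gives, for iova 0xa000:

    bitmap_base_iova = 0xa000 & ~(0x8000 - 1)   = 0x8000
    bit index        = (0xa000 - 0x8000) >> 12  = 2

so the page at 0xa000 lands in bit #2 of the first bitmap byte, as
stated above.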

Effectively the user can ask for any iova range, but the buffer will be
filled relative to the zeroth bit of the bitmap following the above
bitmap_base_iova formula (and replacing PAGE_SIZE with the user
requested pgsize).  I'm tempted to make this explicit in the user
interface (i.e. only allow bitmaps starting on aligned pages), but a
user is able to map and unmap single pages and we need to support
returning a dirty bitmap with an unmap, so I don't think we can do that.
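
Under that alignment assumption, copying out one vfio_dma's bitmap
could look roughly like the following sketch (copy_dma_bitmap() is a
hypothetical helper, not the actual patch code):

    /*
     * With all bitmaps sharing the alignment above, only the first
     * byte can carry bits already written for the previous vfio_dma;
     * it is merged via get_user()/put_user(), and the remainder goes
     * through copy_to_user() directly.
     */
    static int copy_dma_bitmap(u8 __user *dst, const u8 *bmap,
                               unsigned long nbytes, bool shares_boundary)
    {
        if (shares_boundary) {
            u8 prev;

            if (get_user(prev, dst) || put_user(prev | bmap[0], dst))
                return -EFAULT;
            dst++;
            bmap++;
            nbytes--;
        }

        if (nbytes && copy_to_user(dst, bmap, nbytes))
            return -EFAULT;

        return 0;
    }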

Sigh, finding adjacent vfio_dmas within the same byte seems simpler
than this.

How does KVM do this?  My intent was that if all of our bitmaps share
the same alignment then we can merge the intersection and continue to
use copy_to_user() on either side.  However, if QEMU doesn't do the
same, it doesn't really help us.  Is QEMU stuck with an implementation
of only retrieving dirty bits per MemoryRegionSection exactly because
of this issue and therefore we can rely on it in our implementation as
well?  Thanks,

QEMU syncs the dirty bitmap per MemoryRegionSection. Within a
MemoryRegionSection there can be multiple KVMSlots. QEMU queries the
dirty bitmap per KVMSlot and marks pages dirty for each KVMSlot. On the
kernel side, the KVM_GET_DIRTY_LOG ioctl calls
kvm_get_dirty_log_protect(), which uses copy_to_user() to copy the
bitmap of that memslot. A vfio_dma is per MemoryRegionSection, so we
can rely on MemoryRegionSection in our implementation. But to get the
bitmap during unmap, we have to take care of concatenating bitmaps.
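
To make that flow concrete, here is a userspace-side sketch of the
per-KVMSlot query described above.  KVM_GET_DIRTY_LOG and struct
kvm_dirty_log are the real KVM UAPI; the slot and size handling is
simplified for illustration:

    #include <linux/kvm.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>

    static void *get_slot_dirty_log(int vm_fd, uint32_t slot,
                                    uint64_t memory_size, int page_shift)
    {
        struct kvm_dirty_log log = { .slot = slot };
        /* align to 64 bits, as in the QEMU comment quoted below */
        size_t bitmap_size = (((memory_size >> page_shift) + 63) / 64) * 8;
        void *bitmap = calloc(1, bitmap_size);

        if (!bitmap)
            return NULL;

        log.dirty_bitmap = bitmap;
        if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log) < 0) {
            free(bitmap);
            return NULL;
        }
        return bitmap;      /* one bit per dirty guest page in the slot */
    }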

In QEMU, in the function kvm_physical_sync_dirty_bitmap(), there is a
comment where the bitmap size is calculated and the bitmap is defined
as 'void __user *dirty_bitmap'. This is also the concern you raised,
and it could be handled similarly, as below:

        /* XXX bad kernel interface alert
         * For dirty bitmap, kernel allocates array of size aligned to
         * bits-per-long.  But for case when the kernel is 64bits and
         * the userspace is 32bits, userspace can't align to the same
         * bits-per-long, since sizeof(long) is different between kernel
         * and user space.  This way, userspace will provide buffer which
         * may be 4 bytes less than the kernel will use, resulting in
         * userspace memory corruption (which is not detectable by valgrind
         * too, in most cases).
         * So for now, let's align to 64 instead of HOST_LONG_BITS here, in
         * a hope that sizeof(long) won't become >8 any time soon.
         */
        if (!mem->dirty_bmap) {
            hwaddr bitmap_size = ALIGN(((mem->memory_size) >> TARGET_PAGE_BITS),
                                       /*HOST_LONG_BITS*/ 64) / 8;

            /* Allocate on the first log_sync, once and for all */
            mem->dirty_bmap = g_malloc0(bitmap_size);
        }

