From: Wang, Wei W
Subject: RE: [PATCH v2] migration: clear the memory region dirty bitmap when skipping free pages
Date: Mon, 19 Jul 2021 05:18:29 +0000

On Friday, July 16, 2021 4:26 PM, David Hildenbrand wrote:
> >>> +    /*
> >>> +     * CLEAR_BITMAP_SHIFT_MIN should always guarantee this... this
> >>> +     * can make things easier sometimes since then start address
> >>> +     * of the small chunk will always be 64 pages aligned so the
> >>> +     * bitmap will always be aligned to unsigned long. We should
> >>> +     * even be able to remove this restriction but I'm simply
> >>> +     * keeping it.
> >>> +     */
> >>> +    assert(shift >= 6);
> >>> +
> >>> +    size = 1ULL << (TARGET_PAGE_BITS + shift);
> >>> +    start = (((ram_addr_t)page) << TARGET_PAGE_BITS) & (-size);
> >>
> >> these as well.
> >
> > Is there any coding style requirement for this?
> 
> Don't think so. It simply results in fewer LOC and fewer occurrences of
> variables.
> 
> > My thought was that those operations could mostly be avoided when the
> > above if condition fails (e.g. they run just once per 1GB chunk).
> 
> Usually the compiler will reshuffle things as it sees fit to optimize. But
> in this case, due to clear_bmap_test_and_clear(), it might not be able to
> move the computations behind that call. So the final code might actually
> differ.
> 
> Not that we really care about this micro-optimization, though.

OK, it looks like that's just a personal preference. I'm inclined to keep the
micro-optimization.
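
For concreteness, here is a rough sketch of the two shapes under discussion
(the hwaddr declarations are an assumption for illustration, not the final
patch). The shape kept in the patch computes size/start only after the early
return, so the arithmetic runs at most once per chunk whose clear-bitmap bit
was set, e.g. once per 1GB chunk with shift == 18:

    hwaddr size, start;

    if (!clear_bmap_test_and_clear(rb, page)) {
        return;                     /* common case: nothing to clear */
    }
    /* reached at most once per chunk */
    size = 1ULL << (TARGET_PAGE_BITS + shift);
    start = (((ram_addr_t)page) << TARGET_PAGE_BITS) & (-size);
    memory_region_clear_dirty_bitmap(rb->mr, start, size);

versus the suggested alternative, which initializes at declaration (fewer
lines, fewer mentions of the variables); the expressions are pure, so a
compiler can usually sink them past the early return, though it may not
always do so across the test-and-clear call:

    hwaddr size = 1ULL << (TARGET_PAGE_BITS + shift);
    hwaddr start = (((ram_addr_t)page) << TARGET_PAGE_BITS) & (-size);

    if (!clear_bmap_test_and_clear(rb, page)) {
        return;
    }
    memory_region_clear_dirty_bitmap(rb->mr, start, size);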

> 
> >
> >>
> >>> +    trace_migration_bitmap_clear_dirty(rb->idstr, start, size, page);
> >>> +    memory_region_clear_dirty_bitmap(rb->mr, start, size);
> >>> +}
> >>> +
> >>> +static void
> >>> +migration_clear_memory_region_dirty_bitmap_range(RAMState *rs,
> >>> +                                                 RAMBlock *rb,
> >>> +                                                 unsigned long start,
> >>> +                                                 unsigned long npages)
> >>> +{
> >>> +    unsigned long page_to_clear, i, nchunks;
> >>> +    unsigned long chunk_pages = 1UL << rb->clear_bmap_shift;
> >>> +
> >>> +    nchunks = (start + npages) / chunk_pages - start / chunk_pages + 1;
> >>
> >> Wouldn't you have to align the start and the end range up/down to
> >> properly calculate the number of chunks?
> >
> > No, integer division already rounds down to the beginning of the chunk to clear.
> 
> 
> nchunks = (start + npages) / chunk_pages - start / chunk_pages + 1;

I made a mistake on the right boundary: it should be [start, start + npages),
not [start, start + npages].
i.e. nchunks = (start + npages - 1) / chunk_pages - start / chunk_pages + 1
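
As a quick sanity check with made-up numbers, take chunk_pages = 256: for
start = 0 and npages = 256, the pages [0, 256) occupy exactly one chunk, and
the corrected formula gives (0 + 256 - 1) / 256 - 0 / 256 + 1 = 0 - 0 + 1 = 1,
while the earlier one gave (0 + 256) / 256 - 0 / 256 + 1 = 2, over-counting
precisely when start + npages falls on a chunk boundary.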

But I can take your approach here, thanks.

Best,
Wei

