From: David Gibson
Subject: Re: [Qemu-devel] [PATCH, RESEND] kvm: Fix dirty tracking with large kernel page size
Date: Wed, 4 Apr 2012 11:12:38 +1000
User-agent: Mutt/1.5.21 (2010-09-15)

On Tue, Apr 03, 2012 at 09:08:16AM +0200, Jan Kiszka wrote:
> On 2012-04-02 06:04, David Gibson wrote:
> > From: Ben Herrenschmidt <address@hidden>
> > 
> > If the kernel page size is larger than TARGET_PAGE_SIZE, which
> > happens for example on ppc64 with kernels compiled for 64K pages,
> > the dirty tracking doesn't work.
> > 
> > Cc: Avi Kivity <address@hidden>
> > Cc: Marcelo Tosatti <address@hidden>
> > 
> > Signed-off-by: Benjamin Herrenschmidt <address@hidden>
> > Signed-off-by: David Gibson <address@hidden>
> > ---
> >  kvm-all.c |    6 ++++--
> >  1 files changed, 4 insertions(+), 2 deletions(-)
> > 
> > I've sent this a number of times now, the last couple of times without
> > comment.  It fixes a real bug; please apply.
> > 
> > diff --git a/kvm-all.c b/kvm-all.c
> > index ba2cee1..7e44429 100644
> > --- a/kvm-all.c
> > +++ b/kvm-all.c
> > @@ -354,6 +354,7 @@ static int kvm_get_dirty_pages_log_range(MemoryRegionSection *section,
> >      unsigned long page_number, c;
> >      target_phys_addr_t addr, addr1;
> >      unsigned int len = ((section->size / TARGET_PAGE_SIZE) + HOST_LONG_BITS - 1) / HOST_LONG_BITS;
> > +    unsigned long hpratio = getpagesize() / TARGET_PAGE_SIZE;
> >  
> >      /*
> >       * bitmap-traveling is faster than memory-traveling (for addr...)
> > @@ -365,10 +366,11 @@ static int kvm_get_dirty_pages_log_range(MemoryRegionSection *section,
> >              do {
> >                  j = ffsl(c) - 1;
> >                  c &= ~(1ul << j);
> > -                page_number = i * HOST_LONG_BITS + j;
> > +                page_number = (i * HOST_LONG_BITS + j) * hpratio;
> >                  addr1 = page_number * TARGET_PAGE_SIZE;
> >                  addr = section->offset_within_region + addr1;
> > -                memory_region_set_dirty(section->mr, addr, TARGET_PAGE_SIZE);
> > +                memory_region_set_dirty(section->mr, addr,
> > +                                        TARGET_PAGE_SIZE * hpratio);
> >              } while (c != 0);
> >          }
> >      }
> 
> Ack for this, but - as proposed earlier - please add an
> assert(TARGET_PAGE_SIZE <= getpagesize()) + comment to kvm_init().

Ok.
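
Roughly what I have in mind, as a sketch only (the comment wording is
illustrative, and it assumes assert() and getpagesize() are already
available in kvm-all.c):

    /* The dirty log code in kvm_get_dirty_pages_log_range() scales
     * each dirty bit from the kernel by getpagesize() /
     * TARGET_PAGE_SIZE, so one host page must cover a whole number of
     * target pages.  Refuse to start if that isn't the case. */
    assert(TARGET_PAGE_SIZE <= getpagesize());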

> Also, what about coalesced MMIO? I see that the ring definition
> depends on [TARGET_]PAGE_SIZE. What page size does the Power kernel use
> for it, and does it make a relevant difference for space?

Hrm, the HV variant of Power KVM doesn't do coalesced MMIO.  The PR
variant does, but I don't know enough about it to easily answer that.
If there's a bug there, it hasn't bitten us yet, so how about we fix
it another day.
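
For reference, a rough standalone illustration of what the hpratio
scaling in the patch above does; the page sizes here are just example
values for a 64K-page ppc64 kernel with a 4K TARGET_PAGE_SIZE, not
code lifted from QEMU itself:

#include <stdio.h>

#define HOST_PAGE_SIZE   65536UL   /* what getpagesize() returns on a 64K kernel */
#define TARGET_PAGE_SIZE 4096UL

int main(void)
{
    unsigned long hpratio = HOST_PAGE_SIZE / TARGET_PAGE_SIZE;  /* 16 */
    unsigned long host_page_index = 3;  /* e.g. the 4th bit set in the dirty bitmap */

    /* Same arithmetic as the patched loop: one kernel dirty bit covers
     * one host page, i.e. hpratio consecutive target pages. */
    unsigned long page_number = host_page_index * hpratio;
    unsigned long addr = page_number * TARGET_PAGE_SIZE;

    printf("dirty bit %lu -> target pages %lu..%lu, addr 0x%lx, len %lu\n",
           host_page_index, page_number, page_number + hpratio - 1,
           addr, hpratio * TARGET_PAGE_SIZE);
    return 0;
}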

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson
