From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [RFC 02/29] migrate: Update ram_block_discard_range for shared
Date: Thu, 24 Aug 2017 17:59:11 +0100
User-agent: Mutt/1.8.3 (2017-05-23)

* Peter Xu (address@hidden) wrote:
> On Wed, Jun 28, 2017 at 08:00:20PM +0100, Dr. David Alan Gilbert (git) wrote:
> > From: "Dr. David Alan Gilbert" <address@hidden>
> > 
> > The choice of call to discard a block is getting more complicated
> > for other cases.  We use fallocate PUNCH_HOLE in any file-backed case;
> > it works for both hugepages and tmpfs.
> > We use madvise DONTNEED in non-hugepage cases where the memory is
> > either anonymous or private.
> > 
> > Care should be taken when trying other backing files.
> > 
> > Signed-off-by: Dr. David Alan Gilbert <address@hidden>
> > ---
> >  exec.c       | 28 ++++++++++++++++------------
> >  trace-events |  3 +++
> >  2 files changed, 19 insertions(+), 12 deletions(-)
> > 
> > diff --git a/exec.c b/exec.c
> > index 69fc5c9b07..4e61226a16 100644
> > --- a/exec.c
> > +++ b/exec.c
> > @@ -3557,6 +3557,7 @@ int ram_block_discard_range(RAMBlock *rb, uint64_t start, size_t length)
> >      }
> >  
> >      if ((start + length) <= rb->used_length) {
> > +        bool need_madvise, need_fallocate;
> >          uint8_t *host_endaddr = host_startaddr + length;
> >          if ((uintptr_t)host_endaddr & (rb->page_size - 1)) {
> >              error_report("ram_block_discard_range: Unaligned end address: %p",
> > @@ -3566,23 +3567,26 @@ int ram_block_discard_range(RAMBlock *rb, uint64_t start, size_t length)
> >  
> >          errno = ENOTSUP; /* If we are missing MADVISE etc */
> >  
> > -        if (rb->page_size == qemu_host_page_size) {
> > -#if defined(CONFIG_MADVISE)
> > -            /* Note: We need the madvise MADV_DONTNEED behaviour of definitely
> > -             * freeing the page.
> > -             */
> > -            ret = madvise(host_startaddr, length, MADV_DONTNEED);
> > -#endif
> > -        } else {
> > -            /* Huge page case  - unfortunately it can't do DONTNEED, but
> > -             * it can do the equivalent by FALLOC_FL_PUNCH_HOLE in the
> > -             * huge page file.
> > -             */
> > +        /* The logic here is messy;
> > +         *    madvise DONTNEED fails for hugepages
> > +         *    fallocate works on hugepages and shmem
> > +         */
> > +        need_madvise = (rb->page_size == qemu_host_page_size) &&
> > +                       (rb->fd == -1 || !(rb->flags & RAM_SHARED));
> > +        need_fallocate = rb->fd != -1;
> > +        if (ret == -1 && need_fallocate) {
> 
> (ret will always be -1 when we reach here?)

Yes, I was just making the code independent of order.

> >  #ifdef CONFIG_FALLOCATE_PUNCH_HOLE
> >              ret = fallocate(rb->fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
> >                              start, length);
> >  #endif
> >          }
> > +        if (need_madvise && (!need_fallocate || (ret == 0))) {
> > +#if defined(CONFIG_MADVISE)
> > +            ret =  madvise(host_startaddr, length, MADV_DONTNEED);
> > +#endif
> > +        }
> > +        trace_ram_block_discard_range(rb->idstr, host_startaddr,
> > +                                      need_madvise, need_fallocate, ret);
> 
> How about make the check easier by:
> 
>   if (rb->page_size != qemu_host_page_size ||
>       rb->flags & RAM_SHARED) {
>       /* Either huge pages or shared memory will contain rb->fd */
>       assert(rb->fd);
>       fallocate(rb->fd, ...);
>   } else {
>       madvise();
>   }

I've reworked this.
There are situations where you want both (I think!): for shared memory
that isn't hugepage-backed, you do an fallocate to clear the underlying
storage, and then an madvise to force the local mappings to be cleared.
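
To make that combined case concrete, here is a hypothetical standalone
sketch (not the QEMU code itself; discard_shared_range is a made-up name)
of the fallocate-then-madvise sequence for a shared, non-hugepage mapping,
assuming Linux with FALLOC_FL_PUNCH_HOLE and MADV_DONTNEED available:

/* Sketch only: punch a hole in the backing file to release the storage,
 * then madvise the local mapping so this process's page tables drop the
 * pages as well.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

int discard_shared_range(int fd, void *host_startaddr,
                         uint64_t file_offset, size_t length)
{
    /* Release the underlying storage in the backing file. */
    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  file_offset, length) != 0) {
        return -errno;
    }

    /* Drop this process's mapping of the range; the next access
     * refaults and repopulates from the (now empty) file.
     */
    if (madvise(host_startaddr, length, MADV_DONTNEED) != 0) {
        return -errno;
    }

    return 0;
}

The hunk quoted above does the same thing but gates each call on
CONFIG_FALLOCATE_PUNCH_HOLE / CONFIG_MADVISE and only does the madvise
when the fallocate (if needed) succeeded.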

Dave

> Thanks,
> 
> -- 
> Peter Xu
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


