
Re: [Qemu-devel] Fix refcounting in hugetlbfs quota handling


From: Hugh Dickins
Subject: Re: [Qemu-devel] Fix refcounting in hugetlbfs quota handling
Date: Fri, 12 Aug 2011 12:15:21 -0700 (PDT)
User-agent: Alpine 2.00 (LSU 1167 2008-08-23)

On Fri, 12 Aug 2011, Minchan Kim wrote:
> On Fri, Aug 12, 2011 at 9:48 AM, Linus Torvalds
> <address@hidden> wrote:
> > On Wed, Aug 10, 2011 at 11:40 PM, David Gibson
> > <address@hidden> wrote:
> >>
> >> This patch, therefore, stores a pointer to the inode instead of the
> >> address_space in the page private data for hugepages.  More
> >> importantly it correctly adjusts the reference count on the inodes
> >> when they're added to the page private data.  This ensures that the
> >> inode (and therefore the super block) will not be freed before we use
> >> it from free_huge_page.
> >
> > Looks sane, but I *really* want some acks from people who use/know
> > hugetlbfs. Who would that be? I'm adding random people who have
> > acked/signed-off patches to hugetlbfs recently..
> 
> At least, the code itself looks good to me, but your random choice failed.
> Maybe the people you want are as follows.
> http://marc.info/?t=126928975800003&r=1&w=2
> 
> Ccing the right people.

I don't know much about hugetlbfs these days, but I think the patch
is very wrong.

The real change is that alloc_huge_page() does igrab(inode) and
free_huge_page() does iput(inode)?
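
If I'm reading it right, the pattern is roughly this (a sketch from
memory, with the reservation and quota details elided, so the helper
name here is made up):

#include <linux/err.h>
#include <linux/fs.h>
#include <linux/mm.h>

/* Sketch only, not the patch itself: the allocation path pins the
 * inode so that the destructor can still reach it (and through it
 * the super block) when the page is finally freed. */
static struct page *alloc_huge_page(struct vm_area_struct *vma,
				    unsigned long addr)
{
	struct inode *inode = vma->vm_file->f_path.dentry->d_inode;
	struct page *page;

	page = dequeue_from_hugetlb_pool(vma, addr);	/* name made up */
	if (!page)
		return ERR_PTR(-ENOSPC);

	/* The page itself now holds a reference on the inode. */
	set_page_private(page, (unsigned long)igrab(inode));
	return page;
}

static void free_huge_page(struct page *page)
{
	struct inode *inode = (struct inode *)page_private(page);

	set_page_private(page, 0);
	/* ... quota accounting, return page to the free pool ... */

	if (inode)
		iput(inode);	/* possibly the final iput() */
}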

That makes me very nervous, partly because a final iput() is a complex
operation, which we wouldn't expect to be doing when "freeing" a page.

My first worry was that free_huge_page() could actually get called at
interrupt time (when the page sits in a pagevec of pages to be freed as
a batch, and another put_page() done at interrupt time frees that
batch): in that case we would be taking inode->i_lock with spin_lock()
rather than spin_lock_irqsave().  To be honest, though, I've not
followed up whether that's actually a possibility; the compound page
path is too twisty for a quick answer, and even if it is a possibility,
it's one that's already ignored in the case of hugetlb_lock.
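
To illustrate the worry (my own sketch, nothing from the patch): with a
lock that can also be taken from interrupt context, a plain spin_lock()
from process context is a deadlock waiting to happen:

#include <linux/fs.h>
#include <linux/spinlock.h>

static void some_process_context_path(struct inode *inode)
{
	spin_lock(&inode->i_lock);	/* interrupts left enabled */
	/* <-- interrupt fires here, on this CPU ... */
	spin_unlock(&inode->i_lock);
}

static void free_path_from_interrupt(struct inode *inode)
{
	spin_lock(&inode->i_lock);	/* ... and spins forever on the
					 * lock the interrupted task holds */
	spin_unlock(&inode->i_lock);
}

/* The usual cure, if the interrupt path is real:
 *
 *	unsigned long flags;
 *
 *	spin_lock_irqsave(&inode->i_lock, flags);
 *	...
 *	spin_unlock_irqrestore(&inode->i_lock, flags);
 */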

Setting that aside, I think this business of grabbing a reference to
the inode for each page just does not work as you wish: when we unlink
an inode, all its pages should be freed; but because the pages are
themselves holding references to the inode, the final iput() that would
evict the inode and truncate those pages never comes, and it and its
pages stick around forever.
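
Spelled out, simplified from fs/inode.c (from memory, so the details
may be off):

/* The pages are only truncated from the final-iput path: */
void iput(struct inode *inode)
{
	if (atomic_dec_and_lock(&inode->i_count, &inode->i_lock))
		iput_final(inode);	/* -> evict() -> truncate pages */
}
/* But with the patch, every resident hugepage holds a reference
 * taken at alloc time, so i_count cannot reach zero until the pages
 * are freed -- and the pages are only freed by the truncation above.
 * A reference cycle, in other words. */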

A quick experiment, with your patch and then without it, confirmed
that: HugePages_Free in /proc/meminfo stayed down with your patch, but
went back up to HugePages_Total without it.  Please check, perhaps I'm
just mistaken.
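
If you want to repeat the check: I was simply watching the two
counters, e.g. with a trivial reader like this (ordinary userspace,
equivalent to grepping /proc/meminfo):

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f) {
		perror("/proc/meminfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* Print just the hugepage counters compared above. */
		if (!strncmp(line, "HugePages_Total:", 16) ||
		    !strncmp(line, "HugePages_Free:", 15))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}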

Sorry, I've not looked into what a constructive alternative might be;
and it's not the first time we've had this difficulty - it came up last
year when the ->freepage callback was added: the inode may already be
gone by the time ->freepage(page) is called.
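
For reference, the hook in question looks like this (it went in around
2.6.37; the myfs_ names are made up):

#include <linux/fs.h>
#include <linux/mm.h>

/* ->freepage is called after the page has been removed from the
 * page cache: page->mapping is already NULL, and the inode itself
 * may already have been freed, so the callback must not touch it. */
static void myfs_freepage(struct page *page)
{
	/* per-page cleanup only; no way back to the inode here */
}

static const struct address_space_operations myfs_aops = {
	.freepage	= myfs_freepage,
	/* ... the usual readpage/writepage etc. ... */
};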

On a side note: very good description, thank you, but I wish you'd
split the patch into two, the fix and then the inode-instead-of-mapping
cleanup.  Though personally I'd prefer not to make that "cleanup": it's
normal for a struct address_space * to be used in struct page (if I
delved, I guess I'd find good reason why this one is in page->private
instead of page->mapping: perhaps because it's needed after
page->mapping has been reset to NULL, perhaps because it's needed on
COWed copies of hugetlbfs pages).

Hugh
