Re: [Qemu-devel] [PATCH QEMU] transparent hugepage support


From: Andrea Arcangeli
Subject: Re: [Qemu-devel] [PATCH QEMU] transparent hugepage support
Date: Fri, 12 Mar 2010 17:17:24 +0100

On Fri, Mar 12, 2010 at 04:04:03PM +0000, Paul Brook wrote:
> > > > $ cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size
> > > > 2097152
> > >
> > > Is "pmd" x86 specific?
> > 
> > It's Linux specific; this is common code, nothing x86 specific. In
> > fact, on x86 it's not called a pmd but a Page Directory. I've
> > actually no idea what pmd stands for, but it's definitely not x86
> > specific; it comes from the Linux common code shared by all archs.
> > The reason this is called hpage_pmd_size is that it's a #define
> > HPAGE_PMD_SIZE in the kernel code. So this entirely matches the
> > kernel's _common_code_ internals.
> 
> Hmm, ok. I'm guessing Linux doesn't support anything other than "huge" and
> "normal" page sizes right now, so it's a question of whether we want it to
> expose current implementation details, or to say "align big in-memory things
> this much for optimal TLB behavior".

hugetlbfs already exposes the implementation detail, so if you want
that, it's already available. The whole point of going the extra mile
with a transparent solution is to avoid increasing userland
complexity and to keep userland as unaware of hugepages as possible.
The madvise hint basically means "this mapping won't risk wasting
memory if you back it with large TLB entries" and also "this mapping
is more important than others to be backed by hugepages". What to do
next is up to the kernel. For example, right now khugepaged doesn't
prioritize scanning the madvise regions first; that basically doesn't
matter for hypervisor solutions in the cloud (all anon memory in the
system is allocated by kvm anyway...). But later we may prioritize
them and make smarter use of the hint given by userland.
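
To make the intended usage concrete, here is a minimal userland
sketch (not the actual QEMU patch; the 256M size, the getpagesize
fallback and the #define fallback value are just example choices):
it reads the kernel's preferred alignment from hpage_pmd_size,
allocates the big region aligned to it, and passes the MADV_HUGEPAGE
hint:

/* Sketch: align a large anonymous allocation to the THP size and
 * hint the kernel with madvise(MADV_HUGEPAGE). MADV_HUGEPAGE may be
 * missing from older libc headers, hence the fallback define. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#ifndef MADV_HUGEPAGE
#define MADV_HUGEPAGE 14  /* value from asm-generic/mman-common.h */
#endif

static size_t hpage_pmd_size(void)
{
    size_t size = 0;
    FILE *f = fopen("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size",
                    "r");
    if (f) {
        if (fscanf(f, "%zu", &size) != 1) {
            size = 0;
        }
        fclose(f);
    }
    return size;  /* 0 means THP is unavailable */
}

int main(void)
{
    size_t align = hpage_pmd_size();
    size_t len = 256 * 1024 * 1024;  /* e.g. 256M of guest RAM */
    void *ram;

    if (align == 0) {
        align = getpagesize();  /* fall back to normal page alignment */
    }
    if (posix_memalign(&ram, align, len) != 0) {
        return 1;
    }
    /* Purely a hint: the kernel may back this range with hugepages,
     * either at fault time or later via khugepaged. */
    madvise(ram, len, MADV_HUGEPAGE);
    memset(ram, 0, len);  /* touch the memory so it gets faulted in */
    return 0;
}

If THP is unavailable the madvise call simply fails and the mapping
keeps running on normal pages, so the hint never breaks anything.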



