
From: Samuel Thibault
Subject: Re: Build times of hurd and glibc for different versions of gnumach, hurd and glibc
Date: Wed, 20 Apr 2016 10:52:06 +0200
User-agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)

Samuel Thibault, on Wed 20 Apr 2016 01:07:09 +0200, wrote:
> Samuel Thibault, on Tue 19 Apr 2016 12:18:01 +0200, wrote:
> > Svante Signell, on Tue 19 Apr 2016 12:08:07 +0200, wrote:
> > > > Looking at ps -feMj, it seems that it's ext2fs which consumes much more
> > > > CPU time, thus increasing overall wallclock time. It'd probably be
> > > > interesting to profile ext2fs, to see what takes longer. Perhaps it's
> > > > the hash table which is less efficient since with the new page cache
> > > > policy there are much more cached files?
> > > 
> > > Are you preparing ext2fs for profiling? .../pkg-glibc/glibc: hurd-i386: Fix recording profiling from ext2fs
> > 
> > Yes. I just got the results, attached here.
> I have now pushed a wiki page for profiling the kernel, which is more
> involved:
> http://darnassus.sceen.net/~hurd-web/microkernel/mach/gnumach/profiling/
> The glibc build is going on, results available later :)

Here is the profiling for the new policy.

So half of the time is spent in vm_map_enter.part.0, and looking more
closely at the addresses, it's exactly the while(TRUE) loop between
lines 777 and 830.

This indeed loops over all vm_map_entries of the process, and there are
plenty of them when a lot is cached (a mere find /usr on my system
brings 11400 cached objects, and vminfo on ext2fs shows 3240 map
entries).

Most of the time, these entries are contiguous, so it's kind of stupid
to look at each and every one of them again to find room.  I guess we
can introduce a much better algorithm, by maintaining pointers to the
free areas.


Attachment: kernprof-newpolicy
Description: Text document
