From: Vadim Zeitlin
Subject: Re: [lmi] [PATCH] Trivial patch to avoid building unnecessary wxWidgets parts
Date: Tue, 10 Mar 2015 17:42:09 +0100

On Mon, 09 Mar 2015 22:18:57 +0000 Greg Chicares <address@hidden> wrote:

GC> [your result with zsh REPORTTIME:]
GC> > make $coefficiency -f install_wx.make wx_version=3.1.0  -s  57.05s user 146.03s system 36% cpu 9:13.59 total
GC> [your result with "time":]
GC> > make $coefficiency -f install_wx.make wx_version=3.1.0  -s  62.21s user 159.18s system 39% cpu 9:25.09 total

 FWIW I consider the two results to be "equal": IME the difference is
perfectly within the range of what you could expect when benchmarking,
especially inside a VM.
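
 For reference, a minimal sketch of the two timing methods being compared
(zsh only; REPORTTIME is a zsh parameter that prints timing statistics for
any command whose combined user and system time exceeds the given number
of seconds):

  # Automatic reporting: time anything using more than 5 CPU-seconds.
  REPORTTIME=5
  make $coefficiency -f install_wx.make wx_version=3.1.0 -s

  # Explicit timing of a single command with zsh's 'time' reserved word.
  time make $coefficiency -f install_wx.make wx_version=3.1.0 -s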

GC> What's your $coefficiency ? Mine is:
GC> /lmi/src/lmi[0]$echo $coefficiency
GC> --jobs=16

 Mine is -j4.
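
 For anyone reproducing these numbers: $coefficiency is just a shell
variable holding make's parallelism flag, so a minimal setup sketch (the
job count is per-machine; you use 16, I use 4) would be:

  # Choose a job count roughly matching the number of logical CPUs.
  coefficiency='--jobs=4'
  make $coefficiency -f install_wx.make wx_version=3.1.0 -s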

GC> Okay, this task takes you nine or nine and a half minutes with four
GC> logical CPUs, in a VM. I'm wondering how it can be that fast. Under
GC> native msw-xp here, with all eight physical cores in use (sixteen if
GC> we count hyperthreading, but we'll call it eight), I get:
GC>  528.68s user 243.44s system 187% cpu 6:50.74 total
GC> Seven minutes compared to your nine, but I use twice as many cores.
GC> Mine are E5520 at 2.27 GHz, and yours are rather faster IIRC,

 Yes, mine is an i7-3930K with a 3.2GHz base clock
(http://ark.intel.com/products/63697/Intel-Core-i7-3930K-Processor-12M-Cache-up-to-3_80-GHz),
so that probably already explains most, if not all, of the difference.

GC> but I
GC> use twice as many--and I'm running msw native, while you're using
GC> a VM, so I'd expect a bigger difference in our timings. What am I
GC> missing? Different OS version? SSD? I'm using an HDD, but much
GC> of the time CPU utilization stays at 1600% (hyperthreaded).

 The (guest) OS could be a factor, as I'm using Windows 7. The SSD is
definitely a factor for me when linking. I could move the VM to an HDD to
try to quantify this, but subjectively wx builds (under Linux, this time)
much faster on my primary Linux VM, which is on an SSD, than on some other
VMs that I use more rarely and so keep on an HDD.
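
 A crude way to quantify the disk contribution, as a sketch (the paths and
sizes are made up, and sequential throughput only roughly correlates with
link-time behaviour; oflag=direct bypasses the page cache so the figures
reflect the device rather than RAM):

  # Compare sequential write throughput of the SSD- and HDD-backed
  # directories holding the VM images.
  dd if=/dev/zero of=/ssd/testfile bs=1M count=1024 oflag=direct
  dd if=/dev/zero of=/hdd/testfile bs=1M count=1024 oflag=direct
  rm /ssd/testfile /hdd/testfile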

GC> > GC> It's about half that speed in my VM:
GC> > GC>  541.79s user 821.04s system 165% cpu 13:41.95 total
GC> 
GC> Compared to native msw, that's about half speed.

 This seems to be in the right ballpark. I've never understood why building
is so much slower in the VM; it looks like the compilation process
shouldn't suffer much from the virtualization penalty -- but it does.

GC> FWIW, building
GC> lmi runs at about five-eighths the speed of native ('configure'
GC> imposes a larger drag on the wx build). My VM uses a raw file;
GC> qemu's qcow2 format is much slower, and a physical partition
GC> would probably be faster. The VM file is a limiting factor: when
GC> I build lmi in the VM, CPU utilization sometimes hits 1600%, but
GC> it doesn't stay there constantly.

 It's very surprising if it really stays at max CPU use all the time. It
should sit at 100% (i.e. 1/16) for a relatively long (~1 minute?) stretch
at the beginning, while configure is running, as configure is completely
single-threaded. It should also be lower during linking, as GNU ld is
single-threaded too, to the best of my knowledge. And while make could be
linking several libraries at the same time, in practice the core one takes
by far the longest to compile and to link, so CPU use is typically 100%
(of one CPU) while linking it.
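
 One rough way to check these phases, as a sketch (assuming the VM runs on
a Linux host with sysstat installed; sampling aggregate CPU use from
outside the guest should show the single-threaded configure and link
phases as troughs):

  # Sample total CPU utilization every 5 seconds for the whole build.
  mpstat 5 > cpu-samples.log &
  sampler=$!
  make $coefficiency -f install_wx.make wx_version=3.1.0 -s
  kill $sampler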

 Regards,
VZ
