
Re: [lmi] Retooling: timing comparison


From: Greg Chicares
Subject: Re: [lmi] Retooling: timing comparison
Date: Mon, 8 Feb 2016 17:37:46 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Icedove/38.5.0

On 02/08/2016 02:49 AM, Vadim Zeitlin wrote:
> On Mon, 8 Feb 2016 02:19:33 +0000 Greg Chicares <address@hidden> wrote:
[...]
> GC> log-20160131T1406Z-gwc 20160131T1500Z  54  68% --jobs=2
> GC> log-20160130T2335Z-OLD 20160131T0039Z  64  81% --jobs=2
> ...
> GC> The "gwc" machine is 2 x E5-2630 v3 with a 1TB Samsung 850 pro SSD.
> GC> The "OLD" (2009) machine is 2 x E5520 with a 2TB WDC WD2003FZEX HDD.
> GC> Both these machines were running msw-xp in a kvm-qemu VM, hosted by
> GC> debian-7.
> 
>  I just don't understand these results at all :-( Leaving aside the office
> machines which have much slower disks, the fact that you gain only 10
> minutes or ~15% when switching from "OLD" to "gwc" with -j2 is just
> incomprehensible. Whether the bottleneck is IO or CPU, you should have
> gained more than this.

Better hardware, but the same 32-bit msw-xp VM. The gain is much
greater for cross-compiling with no VM...

> GC> To round out the comparison, here are timings for cross-building
> GC> msw binaries. Same compiler; same script and makefiles as above,
> GC> with just enough adjustments to get them to run in a debian-8 chroot:
> ...
> GC>     --jobs=32
> GC>   3238.23s user 175.19s system 1605% cpu 3:32.59 total
> 
>  This, at least, is reasonable. 3.5 minutes is still a bit long

That's the time it takes to run 'install_msw.sh' (adjusted for
cross-compiling), which does something like 'make clean; ./configure && make'
for each library, as well as building lmi itself. That build script
uses a tunable '--jobs=' parameter for each library, but it builds the
libraries in sequence--so, while wx's configure script is running,
nothing else is being done.
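
A minimal sketch of that sequential structure (illustrative only--the
library names and the '--jobs=' plumbing below are stand-ins, not the
real install_msw.sh):

  # Each dependency is configured and built in turn, so the
  # single-threaded ./configure steps serialize the run even
  # when make itself is given many jobs.
  for lib in wxwidgets libxml2 boost; do   # hypothetical library list
      cd "$lib"
      make clean
      ./configure && make --jobs=16        # tunable per library
      cd ..
  done
  # ...and then lmi itself is built last.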

Here are cross-build timings for both machines:

  *** OLD MACHINE ***
    --jobs=8
  3182.01s user 177.04s system 631% cpu 8:52.08 total
  3189.73s user 175.94s system 638% cpu 8:47.48 total

    --jobs=16
  4806.86s user 242.24s system 1166% cpu 7:12.83 total
  4838.01s user 244.74s system 1155% cpu 7:19.90 total

  *** NEW MACHINE ***
    --jobs=8
  1896.35s user 130.66s system 584% cpu 5:46.89 total
  1876.19s user 129.11s system 588% cpu 5:40.88 total
  1874.06s user 129.67s system 587% cpu 5:40.89 total

    --jobs=16
  2147.31s user 137.37s system 956% cpu 3:58.86 total
  2147.85s user 137.95s system 961% cpu 3:57.72 total
  2143.15s user 137.42s system 936% cpu 4:03.46 total

    --jobs=32
  3238.23s user 175.19s system 1605% cpu 3:32.59 total
  3237.65s user 174.71s system 1619% cpu 3:30.69 total
  3237.35s user 173.99s system 1615% cpu 3:31.14 total
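
Incidentally, those lines look like the output of the shell's 'time'
builtin (zsh's format, I believe--an assumption, since the shell isn't
shown above). Reading the first '--jobs=32' line as an example:

  # 3238.23s user 175.19s system 1605% cpu 3:32.59 total
  #   user + system = CPU seconds summed over all cores (3413.42s)
  #   "% cpu"       = (user + system) / wall-clock time, i.e. average
  #                   busy cores x 100 (3413.42 / 212.59 ~= 16 cores)
  #   "total"       = wall-clock time (3:32.59 = 212.59s)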

  *** comparison ***
    --jobs=8 : 8:50 / 5:40 = 530s / 340s = 160% speed
    --jobs=16: 7:15 / 4:00 = 435s / 240s = 180% speed

So the new machine's about 70% faster than the old. Is that
still less than you were expecting? Here's how the new one
is configured:

- SSD has 94 GB unallocated out of 954 GB total
    (I think Samsung builds in an extra 7% overprovisioning)
- fstab looks all right to me:
    UUID=65..66 / ext4 noatime,errors=remount-ro 0 1
- swappiness (cat /proc/sys/vm/swappiness) is 1
- scheduler is cfq; some sources suggest deadline, but I did
    echo deadline > /sys/block/sdb/queue/scheduler
  and reran the '--jobs=32' test, and it was one second slower
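
For reference, the checks above can be reproduced like this (assuming
the SSD really is /dev/sdb; the echo lines need root):

  findmnt -no OPTIONS /                  # mount options; expect 'noatime'
  cat /proc/sys/vm/swappiness            # 1 on this machine
  cat /sys/block/sdb/queue/scheduler     # active scheduler in brackets, e.g. [cfq]
  echo deadline > /sys/block/sdb/queue/scheduler   # switch to deadline...
  echo cfq      > /sys/block/sdb/queue/scheduler   # ...and revert afterwards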



