guix-devel

Re: Combining Guix, direnv and Emacs for environment customisation


From: Roel Janssen
Subject: Re: Combining Guix, direnv and Emacs for environment customisation
Date: Sat, 03 Jun 2017 23:08:54 +0200
User-agent: mu4e 0.9.18; emacs 25.1.1

Ludovic Courtès writes:

> Roel Janssen <address@hidden> skribis:
>
>> Ludovic Courtès writes:
>>
>>> I agree that we could do a lot more things with a faster ‘guix
>>> environment’.  My guess is that it won’t be easy to optimize, and
>>> very hard to go below 1 second.  We should profile that and see what can
>>> be done.
>>
>> FWIW, on our NFS-mounted /gnu, the 'guix environment' command takes at
>> least 20 seconds, but for any reasonably big environment it takes more
>> than one minute.  The biggest bottleneck here is the disk latency.
>> Could it be that 'guix environment' accesses many files?  If we could
>> reduce that, it would speed things up (at least for us ;)).
>
> Interesting.  Does guix-daemon access /gnu over NFS too?

Yes.

> Could you send the output of (first run the same command without
> ‘strace’):
>
>   strace -c guix environment --ad-hoc coreutils --pure -- true
>
> On my laptop I get:
>
> --8<---------------cut here---------------start------------->8---
> $ strace -c guix environment --ad-hoc coreutils --pure -- true
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  37.84    0.017268           1     14036     10965 stat
>  15.13    0.006904           1      9990           read
>  14.02    0.006398           1     10000           write
>   8.75    0.003991        3991         1           wait4
>   6.06    0.002764           1      5218         4 lstat
>   4.73    0.002157           1      1930       136 open
>   2.58    0.001176           1      1794           close
>   2.08    0.000949           1      1717         3 lseek
>   2.04    0.000932           1       689           mmap
>   1.89    0.000861           1       645           mprotect
>   1.71    0.000781           4       189        32 futex
>
> […]
> --8<---------------cut here---------------end--------------->8---
>
> We stat a lot mostly to access all the Guix modules.  Are they on NFS
> too?

Yes.

> Thanks,
> Ludo’.

Here's my output:

address@hidden ~]$ strace -c guix environment --ad-hoc coreutils --pure -- true
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 41.79    0.038363          22      1782       188 open
 14.71    0.013500           3      4769      1747 stat
 13.19    0.012106           2      8033           read
  6.06    0.005561        1854         3         1 wait4
  5.83    0.005356           8       633       147 futex
  4.15    0.003814           0     10020           write
  3.25    0.002987         157        19           clone
  3.23    0.002968           1      5086         2 lstat
  2.86    0.002627          25       105           sendfile
  1.04    0.000954         954         1           connect
  0.97    0.000889         889         1           socket
  0.63    0.000582           0      1596           close
  0.50    0.000460           1       685           mmap
  0.45    0.000414           0      1025           fstat
  0.42    0.000386           0      1520         3 lseek
  0.38    0.000350           1       644           mprotect
  0.28    0.000255          12        22           getdents
  0.11    0.000103           1       186           brk
  0.06    0.000051          26         2           madvise
  0.02    0.000021           4         5           munmap
  0.02    0.000021           0        65           clock_gettime
  0.01    0.000009           5         2           execve
  0.01    0.000006           0        50           rt_sigprocmask
  0.01    0.000005           0        13         1 fcntl
  0.01    0.000005           1         5           prlimit64
  0.00    0.000002           0        25           rt_sigaction
  0.00    0.000002           1         2           arch_prctl
  0.00    0.000001           0        10         3 ioctl
  0.00    0.000001           0         3         3 access
  0.00    0.000001           0         3           dup2
  0.00    0.000000           0         1           rt_sigreturn
  0.00    0.000000           0         1           pipe
  0.00    0.000000           0         2           getpid
  0.00    0.000000           0         2           uname
  0.00    0.000000           0         1           getcwd
  0.00    0.000000           0         2           readlink
  0.00    0.000000           0         1           sysinfo
  0.00    0.000000           0         1           getuid
  0.00    0.000000           0         1           getgid
  0.00    0.000000           0         1           geteuid
  0.00    0.000000           0         1           getegid
  0.00    0.000000           0         1           getppid
  0.00    0.000000           0         1           getpgrp
  0.00    0.000000           0         2           sched_getaffinity
  0.00    0.000000           0         1           set_tid_address
  0.00    0.000000           0         1           faccessat
  0.00    0.000000           0         1           set_robust_list
  0.00    0.000000           0         3           pipe2
------ ----------- ----------- --------- --------- ----------------
100.00    0.091800                 36339      2095 total
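For a quick side-by-side reading of the two `strace -c` tables, the
usecs/call column can be pulled out per syscall.  A small sketch (the two
rows are copied inline from the NFS run above for illustration; in
practice you would read them from a log saved with `strace -c -o LOG ...`):

```python
# Sketch: extract the usecs/call column for selected syscalls from an
# 'strace -c' summary table.  Rows copied from the NFS run above.
table = """\
 41.79    0.038363          22      1782       188 open
 14.71    0.013500           3      4769      1747 stat
"""

def usecs_per_call(table, syscalls):
    """Map syscall name -> usecs/call, for the named syscalls."""
    result = {}
    for line in table.splitlines():
        parts = line.split()
        # Columns: %time, seconds, usecs/call, calls, [errors,] syscall
        if parts and parts[-1] in syscalls:
            result[parts[-1]] = int(parts[2])
    return result

print(usecs_per_call(table, {"open", "stat"}))
# → {'open': 22, 'stat': 3}
```

On the laptop run above, open averaged about 1 usec/call versus 22 here,
which is what the next paragraph points at.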


So the main difference is that calls to 'open' take a lot longer here.
There's little we can do about that, other than coming up with a good
caching mechanism.
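One way to confirm that the latency really comes from the filesystem is
to time the syscall directly.  A self-contained sketch (it benchmarks a
throwaway temporary file so it runs anywhere; pointing it at a file under
an NFS-mounted /gnu would measure the setup discussed here):

```python
import os
import tempfile
import time

def avg_open_latency(path, calls=2000):
    """Average wall-clock cost of one open()+close() on `path`, in seconds."""
    start = time.perf_counter()
    for _ in range(calls):
        os.close(os.open(path, os.O_RDONLY))
    return (time.perf_counter() - start) / calls

# Demo on a temporary file; substitute a path under the NFS mount
# (e.g. a file in /gnu/store) to measure that mount instead.
with tempfile.NamedTemporaryFile() as f:
    print(f"{avg_open_latency(f.name) * 1e6:.2f} us per open() on {f.name}")
```

Comparing the number printed for a local path against one on the NFS
mount should reproduce the roughly 1 vs. 22 usec/call gap seen in the
strace summaries.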

Kind regards,
Roel Janssen


