
Re: [lmi] Creating a chroot for cross-building lmi


From: Greg Chicares
Subject: Re: [lmi] Creating a chroot for cross-building lmi
Date: Sat, 16 May 2020 22:03:21 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.8.0

On 2016-09-08 22:06, Vadim Zeitlin wrote:
> On Fri, 2 Sep 2016 01:31:12 +0000 Greg Chicares <address@hidden> wrote:

[...debootstrap with options like --second-stage and --foreign...]

> I think a slightly better option is to use different debootstrap options:
> --{make,unpack}-tarball:
> 
>       # debootstrap --arch=amd64 \
>               --make-tarball=/var/cache/jessie_bootstrap.tar \
>               jessie \
>               /tmp/whatever-you-want-it-is-going-to-be-deleted-anyhow
>       # debootstrap --arch=amd64 \
>               --unpack-tarball=/var/cache/jessie_bootstrap.tar \
>               jessie \
>               /srv/chroot/cross3

That approach works. I've been using it for years, as expressed in
these two scripts:
  lmi_setup_10.sh
  lmi_setup_11.sh

Yet in these two scripts:
  install_redhat.sh
  install_centos.sh
for some unremembered reason I've been doing this instead:

mkdir -p /srv/chroot/"${CHRTNAME}"
debootstrap "${CODENAME}" /srv/chroot/"${CHRTNAME}" \
    http://deb.debian.org/debian/

which certainly works, but without the benefit of caching. I thought it
would be interesting to measure the speed of both. Details:
 - I chose 'sid' because I know I don't have it cached anywhere.
 - I specified '--variant=minbase' because it should be smallest (and
     who knows, it might be as good for building lmi as the default).
 - I added '--include=zsh' because that seems better than running a
     separate step to fetch zsh (etc.) later.

*** The two-step tarball approach

time debootstrap --arch=amd64 --make-tarball=/var/cache/sid_bootstrap.tar \
    --variant=minbase --include=zsh sid /tmp/eraseme

mkdir -p /tmp/sid-chroot
time debootstrap --arch=amd64 --unpack-tarball=/var/cache/sid_bootstrap.tar \
    --variant=minbase --include=zsh sid /tmp/sid-chroot

1:11.31 total --make-tarball
45.225 total  --unpack-tarball

rm -rf /tmp/sid-chroot
mkdir -p /tmp/sid-chroot
[repeat --unpack-tarball command: same speed as above]
44.692 total

BTW, disk usage for tarball "cache":

du -sb /var/cache/sid_bootstrap.tar
48465802        /var/cache/sid_bootstrap.tar

*** A single-step approach, with --cache-dir

mkdir /tmp/sidcache
time debootstrap --arch=amd64 --cache-dir=/tmp/sidcache --variant=minbase \
    --include=zsh sid /tmp/sid-chroot http://deb.debian.org/debian/

1:53.07 total

repeat the debootstrap command:
1:05.26 total

BTW, disk usage for '--cache-dir' (and the chroot):

du -sb /tmp/sidcache
37383120        /tmp/sidcache
du -sb /tmp/sid-chroot
218032473       /tmp/sid-chroot

*** A single-step approach, with no caching:

As in the last example above, but without '--cache-dir':

1:52.12 total
repeating:
rm -rf /tmp/sid-chroot
1:52.75 total

...so '--cache-dir' matters greatly.

*** Summary

I recreate a tarball or --cache-dir cache rarely, but use it often,
so only the final step (after caching) is important.

   44.692 total --unpack-tarball
  1:05.26 total --cache-dir

It looks like '--unpack-tarball' wins, at roughly 150% the speed of
'--cache-dir'. However, a tarball is write-once-read-often, whereas a cache
directory is updated whenever needed. That shifts the balance: the tarball
approach's true cost includes running 'apt-get dist-upgrade' after unpacking
(fetching every update published since the --make-tarball step), while the
single-step '--cache-dir' approach shouldn't retrieve the same file twice.
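That ratio can be checked directly from the two times above (1:05.26 is
65.26 seconds); the awk one-liner is just illustrative arithmetic:

```shell
# 65.26 s (--cache-dir) relative to 44.692 s (--unpack-tarball)
awk 'BEGIN { printf "%.0f%%\n", 100 * 65.26 / 44.692 }'
```

which prints "146%", i.e. roughly the 150% figure quoted above.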

In fact, I've been doing this (for a debian-within-centos-within-debian chroot):

mount --bind /var/cache/bullseye \
    /srv/chroot/centos7lmi/srv/chroot/lmi_bullseye_1/var/cache/apt/archives

and packages installed or upgraded in the innermost chroot have their '.deb'
files added to that directory.
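A minimal sketch of that pattern, using the same paths as above. It only
echoes the mount command as a dry run, since a real bind mount needs root;
drop the 'echo' (and run as root) to arm it:

```shell
#!/bin/sh
# Sketch: share a host-side .deb cache with an inner chroot's apt archives
# directory via a bind mount. Echoed here as a dry run; remove 'echo' and
# run as root to actually mount.
cache=/var/cache/bullseye
archives=/srv/chroot/centos7lmi/srv/chroot/lmi_bullseye_1/var/cache/apt/archives

echo mount --bind "$cache" "$archives"
# to undo later: umount "$archives"
```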

I plan to try switching to this approach for actual production work.

Vadim--Given that /var/cache/apt/archives/ and the above /tmp/sidcache/ are
just collections of '.deb' files, is there any good reason to keep them
separate? It would seem simpler to throw all those '.deb' files into a
single location. Certainly apt already knows which of these to use for
my 'buster' base system:
  /var/cache/apt/archives/base-files_10.3+deb10u1_amd64.deb
  /var/cache/apt/archives/base-files_10.3+deb10u2_amd64.deb
  /var/cache/apt/archives/base-files_10.3+deb10u3_amd64.deb
  /var/cache/apt/archives/base-files_10.3+deb10u4_amd64.deb
so would it do any harm if I moved this 'bullseye' file:
  /var/cache/bullseye/base-files_11_amd64.deb
into the same directory?
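One reason merging seems plausible: a '.deb' filename already encodes
package, version, and architecture, so files for distinct versions can't
collide in a shared pool. A quick illustration using the filenames above
(pure string handling, no apt involved):

```shell
#!/bin/sh
# Split name_version_arch.deb into its fields with POSIX parameter expansion.
for f in base-files_10.3+deb10u4_amd64.deb base-files_11_amd64.deb; do
    pkg=${f%%_*}                    # up to the first '_'
    rest=${f#*_}
    ver=${rest%%_*}                 # between the first and second '_'
    arch=${rest#*_}; arch=${arch%.deb}
    printf '%s %s %s\n' "$pkg" "$ver" "$arch"
done
```

which prints one "package version arch" triple per file, showing the two
versions remain distinguishable even in one directory.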

> GC> And is there a way to cache downloads for this step:
> GC>   apt-get update
> GC>   apt-get install g++-mingw-w64-i686 automake libtool make pkg-config \
> GC>     git zsh bzip2 unzip sudo wine
> GC> which I'm doing as root inside the chroot (which runs a later version
> GC> of debian than my real system)?
> 
>  I see two ways of doing this. The proper one would be to set up a local
> Debian repository and drop a line with a (file:///) URI for it into a file
> in the /etc/apt/sources.list.d directory in the chroot. If you'd like to
> do this, I think the simplest and newest (so still actively maintained)
> tool for doing this is aptly (https://www.aptly.info/), but I am not
> really sure; this changes regularly.
> 
>  But I think we could use a simpler low-tech solution here: just download
> these packages into some directory ("apt-get download") and run "dpkg -i"
> on them inside chroot. As there are relatively few dependencies, it should
> be simple enough to do it in the right order, which is really the "only"
> advantage that using apt over dpkg brings.

I think I've stumbled upon an even tidier approach, above: just bind-mount
the chroot's /var/cache/apt/archives/ to some directory in the host system
(something like /var/cache/debian-testing/ does work, and above I speculate
that the host's own /var/cache/apt/archives/ might also work).

