
From: Alexandre Oliva
Subject: Re: BTRFS, LVM, LUKS
Date: Thu, 04 Jul 2019 16:20:45 -0300
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/26.1 (gnu/linux)

Hello, Giovanni,

On Jun 30, 2019, Giovanni Biscuolo <address@hidden> wrote:

> welcome to Guix!


> Alexandre Oliva <address@hidden> writes:

> Guix is "just" not able to activate/assemble LVM volumes at boot

Ok, that doesn't sound too hard to fix.  (famous last words ;-)

> Device mapper is definitely supported


> I personally manage a physical machine using multi-disk BTRFS
> and tested root on BTRFS on LUKS a couple of times on a physical machine


> "It sounds like we’re almost there, I guess."

Yeah, it looked so nearly finished that I was very surprised it didn't
make it yet.

> Please do not be blocked by the lack of LVM support; try to start
> using Guix on BTRFS on a physical host if you can

I might have to make up some room by shrinking LVM partitions ;-)  I
don't think I have any systems that don't have all disks fully allocated
to LVM, other than the yeeloongs.

> All I can say is that to start hacking (that means locally build several
> packages or services) on Guix you need enough memory (at least 4GB but
> 8GB is far better... and use swap!) and enough CPU power (4 cores at
> least)

That won't be a problem, once I get at least some LVM going.  My most
powerful local machines (just as old as the others, but desktops with 6
cores and 24+GB of RAM) should be able to deal with the workload, but
rearranging their disks is even trickier.

> must first understand how device-mapped devices are activated and add
> support for LVM ones

I'm quite familiar with that myself.  There are basically three ways to
go about it: (i) scan and activate all PV-like devices, (ii) scan and
activate them individually as they come up from udev or whatever, or
(iii) know what you're looking to assemble, and look for the specific
physical volumes.

(i) is the simplest, at least as long as you don't have devices that
take a long time to come up and might cause the scan to time out before
they're there, but I'm told it's not such a good idea for shared-storage
systems (like multiple hosts connected to the same pool of physical or
virtual storage, with some arbitration among them that can get messed up
if one of them goes about scanning and activating what others are
already using).

(ii) is probably the sanest, at least after the root fs is fully
activated, since removable storage might be plugged in at any time, and
the infrastructure that supports scanning and activating it
automatically then is pretty much the same as what activates the
initially-available volumes, though the latter might take some simulated
udev events.
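The event-driven style of (ii) can be sketched as a udev rule that fires
the per-device pvscan lvm2 provides; the rule file name and the
`/dev/sdb1` device below are placeholders, not anything Guix ships:

```shell
# Approach (ii): activate VGs as their PVs appear from udev.
# Hypothetical rule, e.g. /etc/udev/rules.d/90-lvm-activate.rules:
#
#   SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="LVM2_member", \
#       RUN+="/sbin/lvm pvscan --cache --activate ay $devnode"
#
# What that command does for one newly appeared device: register the
# PV in the metadata cache and auto-activate any VG that just became
# complete (i.e. all of its PVs are now present).
lvm pvscan --cache --activate ay /dev/sdb1   # /dev/sdb1 is a placeholder
```

Simulated udev events for the initially-available volumes would then
just be `udevadm trigger` replaying "add" events for existing block
devices, so the same rule handles both boot-time and hot-plugged PVs.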

(iii) sounds a little more in line with what I understand GuixSD system
configuration is about, but...  while it's nice to have a config file
describing how the storage is set up, that somewhat negates the
flexibility of LVM and could make storage disaster recovery more
difficult.
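For contrast, approach (iii) reduces to targeted activation: the config
names the exact volume group (the "vg0" below is a made-up example)
holding the root logical volume, so nothing else gets touched:

```shell
# Approach (iii): activate only what the configuration declares.
vgchange -ay vg0        # activate just vg0's logical volumes
# or even narrower, a single named logical volume:
lvchange -ay vg0/root
```

The trade-off described above is visible here: if the on-disk layout
drifts from the declared "vg0", this style finds nothing, whereas (i)
and (ii) would still discover whatever actually exists.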

Anyway, I guess it would make most sense to at least start building up
on existing practice.  How does Guix currently bring up multi-device
root filesystems (btrfs, mdraid, ...), and any recursive combinations of
mdraid, dmcrypt, etc.?  I suppose it would make sense to stick to
similar logic in bringing up physical volumes towards assembling a
volume group containing the root logical volume, bearing in mind that
any of these might also be mdraid, dmraid, dmcrypt...  So a single lvm
vgscan, while covering the simplest configurations, would not quite go
all the way to full generality, which, in the GNU spirit of avoiding
arbitrary limitations, if not quite in the strict sense of the letter,
is what I assume GuixSD would aim for.  Is that so?
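To make the recursion concrete, here is a sketch of the bring-up order
for one hypothetical nesting, mdraid under LUKS under LVM holding root;
every device name and the "cryptroot"/"vg0" labels are assumptions for
illustration, not Guix's actual boot code:

```shell
# Hypothetical stack: /dev/sda2 + /dev/sdb2 -> md0 -> LUKS -> vg0/root.
mdadm --assemble /dev/md0 /dev/sda2 /dev/sdb2   # RAID array first
cryptsetup open /dev/md0 cryptroot              # then dmcrypt on top of it
pvscan --cache /dev/mapper/cryptroot            # register the PV inside
vgchange -ay vg0                                # now the VG is assemblable
mount /dev/vg0/root /mnt                        # root LV finally usable
```

A single vgscan at boot only covers the last two steps; full generality
means each layer may have to wait for, and trigger, the layers beneath
it, in whatever order the stack was built.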

Alexandre Oliva, freedom fighter  he/him
Be the change, be Free!                 FSF Latin America board member
GNU Toolchain Engineer                        Free Software Evangelist
Hay que enGNUrecerse, pero sin perder la terGNUra jamás - Che GNUevara
