bug-cpio

Re: [Bug-cpio] Hard links broken under fakeroot?


From: Pavel Raiskup
Subject: Re: [Bug-cpio] Hard links broken under fakeroot?
Date: Mon, 14 Nov 2016 11:57:43 +0100
User-agent: KMail/5.3.2 (Linux/4.8.6-201.fc24.x86_64; KDE/5.27.0; x86_64; ; )

On Saturday, November 12, 2016 9:56:40 PM CET Doug Graham wrote:
> As far as I can tell, we run into this problem only when the inode number
> of the hardlinked file exceeds 2^31.  This is happening on a ~48GB tmpfs
> filesystem, where some of the machines are using very large inode numbers.
> It still looks like 2.12 does not have this problem, so I guess we need to
> upgrade.
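
A quick way to spot the affected machines, given the 2^31 observation above
(a minimal sketch, assuming GNU find; the 2147483647 cutoff is just 2^31 - 1
and the path is whatever tree gets archived):

  # List a few inode numbers at or above 2^31 in the tree being archived;
  # machines where this prints anything are candidates for the problem.
  find . -printf '%i\n' | awk '$1 > 2147483647' | sort -nu | head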

There's no planned update to 2.12 in RHEL 6, but there is a backport request
which (if everything goes fine) should land in RHEL 6.9:
https://bugzilla.redhat.com/show_bug.cgi?id=1155814

Should you have more RHEL-related questions, please contact the Red Hat
support channels.

Pavel

> 
> On Sat, Nov 12, 2016 at 6:11 PM, Doug Graham <address@hidden> wrote:
> 
> > Hi,
> >
> > Our builds are done on a pool of RHEL 6.6 x86_64 machines on which cpio
> > 2.10-12 is installed.  They build a Linux root filesystem like so:
> >
> >   find . | fakeroot cpio -H new -o | xz ...
> >
> > This works most of the time, but when run on specific machines in the
> > pool, all hardlinked regular files within the directory being archived are
> > zero length in the archive. The files are still hardlinked but the contents
> > are gone.
> >
> > An md5sum on the /bin/cpio binaries shows that they are different, but
> > this must be because of prelinking and address randomization.
> >
> > I suspect a bug in cpio, but can't rule out a bug in fakeroot or something
> > else.  I did try building cpio 2.12 from source and using that, and that
> > does seem to have made the problem go away, but without knowing what the
> > root cause was, I can't really be sure it won't come back.
> >
> > Any idea what might be causing this?
> >
> > Thanks,
> > Doug
> >
> >
> 
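
For anyone hitting the zero-length hardlink symptom described in the quoted
report, a rough way to confirm it against an existing archive (a sketch only;
the archive path and scratch directory below are hypothetical, and ownership
or device-node warnings from a non-root extract can be ignored for this
check):

  # Unpack the archive into a scratch directory, then list regular files
  # that are hardlinked (link count > 1) but came out empty.
  mkdir /tmp/rootfs-check && cd /tmp/rootfs-check
  xz -dc /path/to/rootfs.cpio.xz | cpio -idm
  find . -type f -links +1 -size 0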




