bug-coreutils

Re: RFC: How du counts size of hardlinked files


From: Phillip Susi
Subject: Re: RFC: How du counts size of hardlinked files
Date: Fri, 13 Jan 2006 13:56:30 -0500
User-agent: Thunderbird 1.5 (Windows/20051201)

Maybe I misunderstood you, but you seem to think that each hard link to the same file can have a different owner. That is not the case: hard links are just additional names for the same inode, and permissions and ownership are associated with the inode, not with the name(s).
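
For example (a quick, untested Perl snippet, assuming 'foo' and 'bar' are two existing hard links to the same file), both names resolve to the same inode and therefore necessarily report the same uid/gid/mode:

#!/usr/bin/perl
use strict;
use warnings;

# Assumes 'foo' and 'bar' are hard links to the same file.
for my $name ('foo', 'bar') {
    my ($dev, $ino, $mode, $nlink, $uid, $gid) = lstat($name)
        or die "lstat $name: $!";
    printf "%-4s dev=%d inode=%d uid=%d gid=%d mode=%04o links=%d\n",
           $name, $dev, $ino, $uid, $gid, $mode & 07777, $nlink;
}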

Also, I just tested it, and du does not report the size used by duplicate hard links in the tree twice. I did a cp -al foo bar, then du -sh and du -sh foo, and both reported the same size.



Johannes Niess wrote:
Hi list,

du (with default options) seems to count files with multiple hard links only under the first directory it traverses.

The -l option changes that.

But there are other valid viewpoints.

Somehow the byte count of a file with multiple hard links partially belongs to all of them, even to links that are not part of the traversed directories. In this mode a file with 10 bytes and 3 hard links would be counted as 3 files of 3 bytes (and only one hard link) each. The integer rounding error is acceptable in this 'approximate' mode. Programmatically this should be very similar to the -l mode. Use case: the hard links have different physical owners and you want fair accounting for them. (Of course the inode has only one common logical owner for all directory entries.)
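
A rough, untested Perl sketch of this 'approximate' mode might look like the following; it simply charges each directory entry size/nlink bytes (byte sizes rather than disk blocks, purely to keep the example short):

#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

# 'Approximate' mode: charge every directory entry size/nlink bytes, so a
# 10-byte file with 3 links shows up as roughly 3 bytes under each name.
my $total = 0;
find(sub {
    my @st = lstat($_) or return;
    return unless -f _;                 # regular files only, don't follow symlinks
    my ($nlink, $size) = @st[3, 7];
    $total += int($size / $nlink);      # small integer rounding loss is accepted
}, @ARGV ? @ARGV : '.');
print "$total\n";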

Not counting files whose hard links also exist outside the tree is useful as well. It tells us how much space we would really gain by deleting that tree; 'rm-size' could be a name for this mode. Programmatically this is similar to the default mode: in Perl I'd use hash keys for the test in default mode, while in 'rm-size' mode I'd increment the hash value for each visited inode and finally compare the number of visited directory entries to the inode's link count.
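
Again only a rough, untested Perl sketch of the 'rm-size' idea (byte sizes instead of disk blocks, just for illustration):

#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

# 'rm-size' mode: count an inode's size only once all of its links have
# been seen inside the traversed tree, i.e. the space 'rm -r' would free.
my %seen;
my $total = 0;
find(sub {
    my @st = lstat($_) or return;
    return unless -f _;                 # regular files only
    my ($dev, $ino, $nlink, $size) = @st[0, 1, 3, 7];
    $total += $size if ++$seen{"$dev:$ino"} == $nlink;
}, @ARGV ? @ARGV : '.');
print "$total\n";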

du seems to be the natural home for this functionality. Or is it feature bloat?

Background: backups via 'cp -l' need (almost) no extra space for files that are unchanged across several cycles. But these shadow forests of hard links are difficult to account for, especially when combined with finding and hard-linking identical files across several physical owners.

Johannes Niess

P.S.: I'm not volunteering to implement this. I did not even feel enough need to write the Perl scripts myself.





