Re: making GNU ls -i (--inode) work around the linux readdir bug

From: Phillip Susi
Subject: Re: making GNU ls -i (--inode) work around the linux readdir bug
Date: Thu, 10 Jul 2008 17:50:20 -0400
User-agent: Thunderbird (Windows/20080421)

Wayne Pollock wrote:
> How can either ls or readdir be considered correct when
> the output is so inconsistent?  What behavior do you expect from
> backup scripts (and similar tools) that use find (or readdir)?
> It seems clear to me that returning the underlying inode numbers
> must result in having the wrong files backed up in this case.

As has already been discussed, the backup tools are already in error if they rely on the inode number reported by readdir. In the context of a directory that has a filesystem mounted on it, that number is essentially meaningless and should be ignored.
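The mismatch is easy to see directly. Here is a small sketch (Python on a POSIX system; the function name `inode_mismatches` is my own, not from the thread) that compares the inode readdir hands back for each directory entry with the one stat reports:

```python
import os

def inode_mismatches(parent):
    """Map entry name -> (readdir inode, stat inode) for every entry
    of `parent` where the two disagree.  On Unix, DirEntry.inode()
    returns the d_ino field readdir() produced, while os.stat() on a
    mount point reports the root inode of the mounted filesystem, so
    mount points show up here and ordinary entries do not."""
    mismatches = {}
    with os.scandir(parent) as entries:
        for entry in entries:
            d_ino = entry.inode()  # what readdir() returned
            st_ino = os.stat(entry.path, follow_symlinks=False).st_ino
            if d_ino != st_ino:
                mismatches[entry.name] = (d_ino, st_ino)
    return mismatches
```

On a typical Linux system, running this on `/` should flag mount points such as /proc, which is exactly the inconsistency being debated: `ls -i` can report either number depending on which call it trusts.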

> If some folk feel the underlying numbers reported by the
> (buggy; sorry, but I don't know what other adjective to use) readdir
> are correct, then for the sake of consistency stat must be considered
> buggy for reporting the root inode number of mount points.
> "Consistency first, and performance be damned!"

It is a logical fallacy (a false choice) to assume that if one is right, the other MUST be wrong. The kill command is both a shell builtin and an external utility. The builtin version understands job specs, but the external one only accepts PIDs. So, depending on whether you end up using the builtin or the external, the behavior is inconsistent. That does not mean the external utility is broken.

> From my (admittedly inexpert) point of view the inode of the underlying
> filesystem directory is useless except to find "hidden" files.  So
> the stat behavior, even if useless without a device number too, is
> the better value to return in all cases.

The number is useless for finding ANY file, hidden or otherwise.

> It might be argued which value is better, but inconsistent values
> (and perhaps unpredictable values without reading the source code)
> are clearly wrong.

Inconsistency and unpredictability by themselves are not necessarily wrong. If you have an uninitialized local variable in C, for instance, its value is unpredictable, but that is quite acceptable because the standard clearly says its value is indeterminate and must not be relied on.

What I'm saying is: add a note to the man page documenting this somewhat confusing fact, and let it be.

You will find a similar inconsistency if you run du on an entire disk and then compare the size it reports with what df reports. Oftentimes the two will be (sometimes vastly) different, which confuses people who do not understand why. The reason is that a file removed from the directory tree does not immediately have its space freed; that happens only once the file has no names pointing to it AND no process has it open. This does not mean the number reported by df is wrong, or that df should be changed to do the work du does in order to get the same result.
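That mechanism can be demonstrated directly (a Python sketch of my own, not from the thread): unlink a file while a process still holds it open, and the name vanishes from the tree du walks while the blocks df counts stay allocated until the descriptor closes.

```python
import os
import tempfile

def deleted_but_open_demo():
    """Unlink a file while it is still open and report what fstat()
    sees.  The link count drops to zero (no names point at it, so a
    du-style tree walk no longer finds it), yet the size is intact
    (the blocks stay allocated, so df still counts them)."""
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"x" * 4096)
        os.unlink(path)          # no names point at the file now
        st = os.fstat(fd)        # ...but the open descriptor keeps it alive
        return st.st_nlink, st.st_size
    finally:
        os.close(fd)             # only here is the space actually freed
```

On Linux this returns a link count of 0 alongside the full 4096-byte size, which is exactly the window in which du and df disagree.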

> The POSIX standard is correct in my opinion; glibc is currently buggy.
> I would claim (and I guess I did) that this behavior is buggy even
> if every Unix system in the universe always worked this way.
> A long standing bug is still a bug.

It isn't glibc, it is the kernel. Indeed, just because the behavior is longstanding does not mean it is not a bug, but the fact that it is confusing or inconsistent likewise does not make it a bug. You consider it a bug because you got a result you did not expect. It is your _expectation_ that needs to change, not the result.
