bug-bash

Re: History file clobbered by multiple simultaneous exits


From: Linda Walsh
Subject: Re: History file clobbered by multiple simultaneous exits
Date: Thu, 25 Jul 2013 01:08:53 -0700
User-agent: Thunderbird



Geoff Kuenning wrote:

I can also see the possibility of some kernel or file system routine
waiting after you issue the close call so that it doesn't have to zero
the area where data is arriving.  I.e. it might only zero the file beyond
the valid text AFTER some delay (5 seconds?) OR might wait until the file
is closed, so if you completely overwrite the old file with text, the
kernel won't have to zero anything out.

If so, that would be a big bug.  When you're truncating a file to a
shorter length, some filesystems do indeed delay freeing the blocks in
hopes of reusing them.  But the length is set to zero when the O_TRUNC
happens, and likewise if you write n bytes, the length is immediately
increased by n.  There are certain races on some filesystems that could
cause the n bytes to be incorrect (e.g., garbage), but that generally
happens only on system crashes.  There's a paper on this from a few
years back; I'd have to review it to be sure but my recollection is that
you can't get zero-length files in the absence of system or hardware
failures.  (However, I'm not sure whether they used SMPs...)
-----
        Instead of leaving "junk", secure file systems mark the space as needing
to be zeroed.  Perhaps instead of zeroing it, ext3 simply marks it as zero
length?  Imagine that embedded in the junk are credit cards and passwords, and
you'll begin to understand why zero pages are kept "in stock" (in memory) in
the kernel, so it can rapidly issue a fresh page.


It's an edge case usually seen only during a system crash, as you mentioned,
so I can't see how it would cause the symptom you are seeing; even in crashes
it shows up only in the more mature file systems.  Can you reproduce it on
another file system, like xfs?



Still, I suppose it could be a kernel bug.  Maybe I'll have to write a
better test program and let it run overnight.
-----
        Well... remember, between bash and the kernel are layers of libc
library code, as well as file-system drivers that often each act just slightly
differently from every other driver... ;-)

In the case of a write...close to a non-pre-zeroed record, the operation
becomes a read-modify-write.  The thing is, if proc3 goes out for the partial
buffer (~4k is likely), it may have been completely zeroed by proc2's close at
the point where proc3 wants to write.

No generic Linux filesystem that I'm aware of zeroes discarded data at
any time; it's too expensive.
-----
        Actually, the buffer is zeroed before the user's data is copied into
it if it is a partial record.  Having no data leakage between processes is a
security requirement on secure systems, and that requirement has simply become
the status quo for most modern OSes...



