From: Jim Meyering
Subject: Re: [coreutils] [PATCH] doc: show how to shred using a single zero-writing pass
Date: Mon, 17 Jan 2011 18:08:18 +0100
Pádraig Brady wrote:
> On 17/01/11 10:36, Jim Meyering wrote:
>> From 7dc6335653afcdad9a3ffa327877571734644285 Mon Sep 17 00:00:00 2001
>> From: Jim Meyering <address@hidden>
>> Date: Mon, 17 Jan 2011 11:32:35 +0100
>> Subject: [PATCH] doc: show how to shred using a single zero-writing pass
>>
>> * doc/coreutils.texi (shred invocation): Give an example showing how
>> to invoke shred in its most basic (fastest) write-only-zeros mode.
>> ---
>> doc/coreutils.texi | 9 +++++++++
>> 1 files changed, 9 insertions(+), 0 deletions(-)
>>
>> diff --git a/doc/coreutils.texi b/doc/coreutils.texi
>> index 9c3e2ed..8fb9f0c 100644
>> --- a/doc/coreutils.texi
>> +++ b/doc/coreutils.texi
>> @@ -8892,6 +8892,15 @@ shred invocation
>> shred --verbose /dev/sda5
>> @end example
>>
>> +On modern disks, a single pass that writes only zeros may be enough,
>> +and it will be much faster than the default.
>
> Well only 3 times, due to the disk being the bottleneck
> (since we changed to the fast internal PRNG by default).
> Also for security, writing random data would probably be more effective.
> So I'd reword the above sentence to:
>
> "To simply clear a disk"
Regarding "only 3x": I agree that 3x is better than the 25x difference
we'd see when comparing against the default from coreutils-7.0 and earlier.
However, when a wipe takes two hours with "-n0 --zero", the default's
three passes would take roughly six hours, i.e., add 4 hours.
Interestingly, the difference is sometimes far greater than the expected 3x
(this is with F14/ext4 on an i7-960 and a 60GB OCZ vertex-2 SSD):
$ seq 10000000 > k
$ env time --f=%e shred -n0 --zero k
0.36
$ env time --f=%e shred k
3.75
Surprisingly, even with just -n1 it's still nearly 3x
(I expected this to perform about the same as the -n0 --zero run):
$ env time --f=%e shred -n1 k
0.95
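(For reference, the three runs above can be repeated in one go with something
like this; it assumes GNU time(1) is installed as an external command and
reuses the same test file:)

  # rerun the -n0 --zero, default, and -n1 cases on the same file
  seq 10000000 > k
  for args in '-n0 --zero' '' '-n1'; do
    printf 'shred %s\t' "$args"
    env time --f=%e shred $args k
  done
  rm -f k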
Let's compare syscalls (the only difference should be PRNG-related)
$ strace -c shred -n1 k
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 96.34    0.052992       52992         1           fdatasync
  2.72    0.001496           0      6421         1 write
  0.94    0.000518         259         2           read
  0.00    0.000000           0         4           open
  0.00    0.000000           0         6           close
  0.00    0.000000           0         3           fstat
  0.00    0.000000           0         1           lseek
  0.00    0.000000           0         7           mmap
  0.00    0.000000           0         3           mprotect
  0.00    0.000000           0         1           munmap
  0.00    0.000000           0         4           brk
  0.00    0.000000           0         1         1 access
  0.00    0.000000           0         1           execve
  0.00    0.000000           0         4           fcntl
  0.00    0.000000           0         1           arch_prctl
------ ----------- ----------- --------- --------- ----------------
100.00    0.055006                  6460         2 total
$ strace -c shred -n0 --zero k
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 96.57    0.053991       53991         1           fdatasync
  3.43    0.001916           0      6421         1 write
  0.00    0.000000           0         2           read
  0.00    0.000000           0         4           open
  0.00    0.000000           0         6           close
  0.00    0.000000           0         3           fstat
  0.00    0.000000           0         1           lseek
  0.00    0.000000           0         7           mmap
  0.00    0.000000           0         3           mprotect
  0.00    0.000000           0         1           munmap
  0.00    0.000000           0         4           brk
  0.00    0.000000           0         1         1 access
  0.00    0.000000           0         1           execve
  0.00    0.000000           0         4           fcntl
  0.00    0.000000           0         1           arch_prctl
------ ----------- ----------- --------- --------- ----------------
100.00    0.055907                  6460         2 total
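To compare the two summaries more directly, each can be captured to a file
and diffed (the output file names here are arbitrary):

  strace -c -o n1.summary shred -n1 k
  strace -c -o zero.summary shred -n0 --zero k
  diff -u n1.summary zero.summary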
Odd... no difference in CPU time or syscall counts.
I wonder if the SSD is doing something special with blocks of all zeros,
which is reminiscent of The Reg's article:
http://www.theregister.co.uk/2011/01/14/ocz_and_ddrdrive_performance_row/
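One way to test that guess would be to time writing pre-generated random data
versus zeros through plain dd on the same file system.  This is only a rough
sketch (the file names and the 100 MiB size are arbitrary), not a careful
benchmark:

  # pre-generate the random data so PRNG cost stays out of the comparison
  dd if=/dev/urandom of=rand.dat bs=1M count=100
  env time --f=%e dd if=/dev/zero of=zeros.out bs=1M count=100 conv=fdatasync
  env time --f=%e dd if=rand.dat of=rand.out bs=1M conv=fdatasync
  rm -f rand.dat zeros.out rand.out

If the drive special-cases all-zero blocks, the first write should come out
noticeably faster.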
>> +Use a command like this to tell @command{shred} to skip all random
>> +passes and to perform only a final zero-writing pass:
>> +
>> +@example
>> +shred --verbose -n0 --zero /dev/sda5
>> +@end example
>
> It's probably not worth noting the equivalent:
> dd conv=fdatasync bs=2M < /dev/zero > /dev/sda5
I like shred's --verbose %-completion indicator, though even with dd,
you might be able to get a similar progress indicator using
the phantom-progress.bash script here:
http://blog.ksplice.com/2011/01/solving-problems-with-proc/
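FWIW, GNU dd also prints its transfer statistics when it receives SIGUSR1,
so polling a running dd gives a crude progress readout.  Roughly (the dd
invocation is the one quoted above; the 10-second interval is arbitrary):

  dd conv=fdatasync bs=2M < /dev/zero > /dev/sda5 &
  pid=$!
  sleep 1            # give dd a moment to install its signal handler
  while kill -USR1 "$pid" 2>/dev/null; do
    sleep 10         # dd reports bytes copied and throughput on each signal
  done
  wait "$pid"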