
Re: [Bug-ddrescue] ddrescue errors: maybe yes, maybe no


From: Antonio Diaz Diaz
Subject: Re: [Bug-ddrescue] ddrescue errors: maybe yes, maybe no
Date: Tue, 18 Feb 2014 00:47:11 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i586; en-US; rv:1.7.11) Gecko/20050905

Hello Adrien.

Adrien Cordonnier wrote:
> > BTW, would anybody here find it useful if ddrescue could produce
> > compressed logs of rates and reads? I think they may become pretty
> > large (especially the reads log).
>
> I think this is a really good idea. Actually, I subscribed to the list
> last week because ddrescue became really slow, probably because of the
> size of the log file.

I don't think the slowness of ddrescue is caused by the size of the logfile. A logfile of 7 MB is written to disc every 5 minutes. This is only 23 kB/s.
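The arithmetic behind that figure can be checked directly (7 MB rewritten every 5 minutes):

```python
# Write rate implied by rewriting a 7 MB logfile every 5 minutes
logfile_bytes = 7_000_000
interval_s = 5 * 60
rate_kB_s = logfile_bytes / interval_s / 1000
print(f"{rate_kB_s:.1f} kB/s")  # 23.3 kB/s
```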

I was referring to the reads log (the one activated by option --log-reads), which can grow to 55 MB after reading just 100 GB without errors using the default cluster size.


> The disk to rescue is 500 GB with bad sectors mainly at 50% and 75%. I
> backed up the first 50%, then the last 25% with the -r option. I saw
> that the speed was sometimes 11-15 MB/s, sometimes 5 MB/s in the third
> quarter. Thus I stupidly ran ddrescue a third time with a minimum speed
> of 10 MB/s to get the fast areas first. The speed decreased to around
> 9 MB/s, so I backed up almost nothing more and the log file grew to
> 7 MB. Now, ddrescue's speed has decreased to 10 kB/s, 1000 times less
> than dd at the same position.

A lot of things sound incorrect in this description. For example, the --min-read-rate option (I suppose this is what you mean by "minimum speed") has no effect when retrying (-r option). Also, the claim that "dd is 1000 times faster than ddrescue" sounds pretty suspicious.


> I think it would be good to have the option to keep the previous
> versions of the log file.

1) Using an old version of the logfile just makes ddrescue forget about some of the areas already tried, and you don't want this.

2) The logfile is a critical resource in the rescue. I do not plan to ever compress it or otherwise decrease its reliability.

3) 7zip is an archiver for Windows systems, the least adequate kind of program to use in combination with "posix" programs like ddrescue. We are not talking about compressing a regular file on disc. We are talking about compressing a stream on the fly through pipes.
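The distinction matters in practice: POSIX filter compressors read stdin and write stdout, so they can sit directly in a pipeline. A minimal sketch, with gzip standing in for any filter-style compressor (bzip2, lzip and xz behave the same way):

```shell
# Filter-style compressors work on streams, so a log can be
# compressed (and decompressed) on the fly through a pipe.
# 7zip, by contrast, is built around archiving whole files.
printf 'log line\nlog line\n' | gzip -9 | gzip -d
```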


> I suggest 7zip compression because it gives much better compression.
> For example, my 7 MB log file (available if you want) is between
> 300 kB and 500 kB with zip, gzip or bzip2, but only 84 kB with 7zip
> (with default file-roller parameters).

For this task I would only use bzip2 or lzip (see the NOTE here[1]), but lzip is much better than bzip2 for this kind of file:

-rw-r--r--  1 55343627 2014-02-17 20:55 readlog
-rw-r--r--  1  1343741 2014-02-17 20:55 readlog.bz2
-rw-r--r--  1  4154545 2014-02-17 20:55 readlog.gz
-rw-r--r--  1   351966 2014-02-17 20:55 readlog.lz
-rw-r--r--  1   513932 2014-02-17 20:55 readlog.xz

[1] http://www.nongnu.org/zutils/zutils.html
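A gap of this kind can be reproduced on any similarly repetitive data. A sketch using Python's standard-library codecs (lzip has no stdlib binding, so LZMA/xz stands in for the LZMA-family result; the data below is synthetic, not the readlog above):

```python
import bz2
import gzip
import lzma

# Synthetic stand-in for a reads log: many near-identical lines.
data = b"0x000123400000  0x00010000  +\n" * 100_000

# On highly repetitive input, the LZMA family (lzma/xz, and lzip)
# typically beats gzip and bzip2 by a wide margin.
for name, compress in (("gzip", gzip.compress),
                       ("bzip2", bz2.compress),
                       ("xz", lzma.compress)):
    print(f"{name:6} {len(compress(data))} bytes")
```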


> I am interested in any idea that you may have to proceed with the
> backup of my disk. Currently, I consider either:
> a) to run dd on 0-50%, 55-70% and 75-100%, and ask ddrescue to finish
> the work by guessing what the log file should be.
> b) to write a Python script to simplify my 7 MB log file by replacing
> all 3-line non-tried / 1 good sector / non-tried sequences with one
> long non-tried area.

I would just run ddrescue without options and let it do its job, unless I had proof that it is not behaving properly.

If you do option 'a', remember to give dd the conv=noerror,sync option, or else you will ruin your rescue. (And be prepared to combine the generated logfile with the real one using ddrescuelog.)
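A hedged sketch of why those flags matter, using a small file as a stand-in for the real disk (device names, offsets and counts here are illustrative, not from the thread):

```shell
# Create a 100-block stand-in "disk"; in a real rescue this would be
# the failing device and a much larger block range.
dd if=/dev/zero of=fakedisk bs=512 count=100 2>/dev/null

# conv=noerror,sync is the critical part: noerror keeps dd going past
# read errors, and sync pads each failed block with zeros so that
# later data is not shifted out of position in the image.
dd if=fakedisk of=rescue.img bs=512 skip=10 seek=10 count=20 \
   conv=noerror,sync 2>/dev/null

# The image now covers blocks 10-29 at their original offsets.
# Afterwards, the region covered by dd would be merged into the real
# ddrescue logfile with ddrescuelog's set operations (see its manual).
ls -l rescue.img
```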

Option 'b' makes no sense, as ddrescue would have to read again already read sectors.


Best regards,
Antonio.


