
Re: [Bug-ddrescue] Slow reads for x time to exit and whitespace skipping

From: Cameron Andrews
Subject: Re: [Bug-ddrescue] Slow reads for x time to exit and whitespace skipping
Date: Wed, 25 Jan 2017 12:38:36 +1000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.5.1

Hi Antonio,

On 25/01/17 03:24, Antonio Diaz Diaz wrote:
> What is the problem with using --max-slow-reads=60 and power cycling every time ddrescue exits with non-zero status? (Ddrescue increases 'slow reads' at most once per second.)

The problem I have with this approach is that on some occasions the hard drives are difficult to get started, requiring many power cycles before a drive will show itself to Linux, and this can happen on every power cycle. So relying on the feature as it currently is, which simply exits when --max-slow-reads is reached, can be troublesome.
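For what it's worth, the power-cycle workflow can be wrapped in a small retry loop. This is only a sketch: `power_cycle` is a hypothetical placeholder for whatever mechanism re-powers the drive (smart plug, USB relay, or a manual prompt), and it assumes any non-zero exit is worth retrying.

```shell
# Sketch: rerun ddrescue after each non-zero exit, power cycling the drive
# in between. power_cycle is a placeholder, not a real command.
rescue_loop() {
    dev=$1; img=$2; map=$3
    while :; do
        ddrescue --max-slow-reads=60 "$dev" "$img" "$map" && break
        power_cycle "$dev"   # placeholder: re-power the drive
        sleep 30             # give the drive time to spin up again
    done
}
```

Of course this does nothing about the drives that need several power cycles just to reappear, which is exactly the pain point above.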

> What do you think about Paul Daniels' idea of resetting the slow reads count if read speed rises above the threshold again?

I think this would be the best approach and would solve the problem nicely.
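To illustrate the proposed behaviour (this is not ddrescue's actual implementation, just a sketch of the counting logic): the slow-reads counter only triggers the limit on a *sustained* slow stretch, because any fast second resets it.

```shell
# slow_reads_exceeded THRESHOLD MAX RATE... -> returns 0 (true) if MAX
# consecutive below-threshold readings occur; a reading at or above the
# threshold resets the counter, as proposed above.
slow_reads_exceeded() {
    threshold=$1; max=$2; shift 2
    count=0
    for rate in "$@"; do
        if [ "$rate" -lt "$threshold" ]; then
            count=$((count + 1))
            [ "$count" -ge "$max" ] && return 0   # limit reached
        else
            count=0                               # proposed reset
        fi
    done
    return 1
}
```

With the reset in place, a drive that periodically recovers its speed never hits the limit; without it, the same drive would eventually exit.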

> Do you mean using a different exit status for each cause of termination? This requires some amount of thinking, because there are a good number of causes for ddrescue to exit, some of which can happen simultaneously.

Generally speaking, yes. I don't know exactly how to approach this either, but the general idea is that it makes it easier to script around ddrescue, both in shell scripts and in web-based tooling. I haven't tried the --log-events functionality yet; perhaps it logs the errno and errors as well? If so, that would probably be a workable compromise. Will try this shortly...
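To make the request concrete, here is a hypothetical sketch of the kind of dispatch that per-cause exit statuses would enable. The status numbers below are invented purely for illustration; they are not ddrescue's real exit codes, which is precisely what this thread is asking about.

```shell
# Hypothetical: map a per-cause exit status to an action a wrapper script
# (shell or web-based) could take. Codes 2 and 3 are invented examples.
handle_exit() {
    case $1 in
        0) echo "finished" ;;
        2) echo "max-slow-reads reached" ;;   # invented code
        3) echo "max-read-errors reached" ;;  # invented code
        *) echo "other ($1)" ;;
    esac
}
```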

> This may perhaps be implemented as yet another cause of skipping, perhaps adding two more passes to the copying phase. It should also be considered if passes 3 and 4 should read the blocks skipped due to slow areas before passes 5 and 6 read the blocks skipped due to empty space, or the other way round. (Note that a given block may have been skipped because it is suspected to be both slow and empty.)

I think that would be a good way to do it yes.

> This requires major changes to ddrescue and I am not sure it will be generally useful. For example, when you know that the data are stored, say, at the beginning of the drive, you can limit the rescue domain. But if, as you say, the data are spread sparsely, there is no way to be sure that all data have been recovered except by reading the whole drive.

Well, the thing is, I had a drive that was reading at about 64 KB/s, so roughly 160 days to recover a 1 TB drive, which is not exactly my idea of fun. Fortunately, about 40 GB (and many days) into the exercise the drive sped up and recovered the rest in the last 24 hours of the process. But I could have finished many days sooner if ddrescue had been able to search for the data. This particular drive had a large portion of the data at the front, roughly the first 29 GB, but after that there were large gaps between small sections of data up to about 35 GB. That doesn't seem like a lot of space, but at the rate the drive was pulling the data, it takes a long time.
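As a quick sanity check of that estimate (assuming a decimal 1 TB and a steady 64 KiB/s; the exact figure shifts slightly depending on which "KB" is meant, but either way it lands in the same ballpark as the ~160 days quoted):

```shell
# Back-of-the-envelope: time to read 1 TB at a steady 64 KiB/s.
bytes=1000000000000          # 1 TB (decimal)
rate=$((64 * 1024))          # 64 KiB/s in bytes per second
days=$((bytes / rate / 86400))
echo "$days days"            # about 176 days
```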

All your efforts with this software are greatly appreciated. I am not a fan of suggesting more features, as it seems your already awesome software will never be finished. Again, thank you.

Kind Regards,
Cameron Andrews
North Brisbane Data Recovery
