Re: [Bug-ddrescue] Feature Suggestion: Automatic Cooldown mode


From: David Deutsch
Subject: Re: [Bug-ddrescue] Feature Suggestion: Automatic Cooldown mode
Date: Fri, 7 Feb 2014 12:44:40 +0100

After about 18 hours of reading in -R mode:

GNU ddrescue 1.18-pre7
Initial status (read from logfile)
rescued:     1768 GB,  errsize:   8131 MB,  errors:   93896
Current status
rescued:     1769 GB,  errsize:   8282 MB,  current rate:        0 B/s
   ipos:     1670 GB,   errors:   97123,    average rate:     4515 B/s
   opos:     1670 GB, run time:   18.92 h,  successful read:       6 s ago
Copying non-tried blocks...

So it is currently running an order of magnitude slower than before
(the average rate is down from ~84 kB/s to ~4.5 kB/s). Or was that to
be expected for the area I'm reading?
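
To put those numbers in perspective, here is a quick back-of-envelope
estimate (a rough sketch only - the 2 TB figure is the drive's nominal
capacity, the rest is read off the status output above):

    # How long would the rest take at the current average rate?
    total   = 2000e9       # bytes; nominal WD20EARS capacity (assumed)
    rescued = 1769e9       # bytes rescued so far, from the status output
    rate    = 4515         # average rate in bytes/second, from the status output
    days    = (total - rescued) / rate / 86400
    print(round(days), "days")   # roughly 590 days at this pace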

-David

On Fri, Feb 7, 2014 at 3:06 AM, Scott Dwyer <address@hidden> wrote:
> After doing some quick math on a part of the logfile, I came up with a ratio
> of 5.77, which means that 6 heads is about right. And a closer visual
> assessment says the same thing.
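
For anyone wanting to reproduce that kind of estimate, here is a
minimal sketch that tallies a ddrescue logfile by status character
(the filename is a placeholder; the one-letter status codes are from
the ddrescue manual, e.g. '+' rescued, '-' bad sector, '?' non-tried):

    # Sum the bytes in each state from a ddrescue logfile/mapfile.
    totals = {}
    with open("rescue.log") as f:                 # placeholder path
        for line in f:
            fields = line.split("#")[0].split()   # drop comments
            if len(fields) == 3 and fields[2] in "+-/*?":
                pos, size, status = fields
                totals[status] = totals.get(status, 0) + int(size, 0)
    print(totals)   # the ratio of '+' to '-' stripes hints at the head count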
>
> Scott
>
>
>
> On 2/6/2014 4:05 AM, David Deutsch wrote:
>>
>> Hi Scott,
>>
>> Thanks for the details. The model number of the disk is WDC
>> WD20EARS-00MVWB0. The reason I assumed it would be 4 platters with 8
>> heads is that it's more than two years old and at the time it was
>> the largest capacity you could get. A web search for the head count
>> is somewhat inconclusive, with some sources claiming that the same
>> model might ship in different configurations. This database seems
>> quite thorough and suggests it's actually three platters with 6 heads:
>>
>>
>> http://rml527.blogspot.de/2010/10/hdd-platter-database-western-digital-35_1109.html
>>
>> I'm still having a hard time visualizing the topography of the drive
>> to begin with, but it does make a little more sense with your
>> explanation. I would agree that the most probable case, judging from
>> how regular the pattern is, is that one side of one platter (= one
>> head) is producing errors, with one end of the platter being worse
>> than the other. One head would cover about 333 GB, and since I have
>> already "rescued" more than 1760 GB, it doesn't feel like I'm that
>> badly off. The errsize jumped to 7.9 GB during the first 128 GB but
>> hasn't moved much since - the operation is at 748 GB ipos and errsize
>> is now at 8.1 GB. If the rescued-to-errsize ratio stays where it is
>> for the rest of the rescue, this would suggest I might end up with
>> about 10 GB lost to errors.
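
Working that extrapolation through with the numbers above (a rough
linear sketch; it assumes the error density stays as it has been since
the initial bad burst):

    # errsize was 7.9 GB after the first 128 GB, and 8.1 GB at 748 GB ipos.
    slope     = (8.1e9 - 7.9e9) / (748e9 - 128e9)   # bytes lost per byte read
    projected = 8.1e9 + slope * (2000e9 - 748e9)    # extrapolate to the 2 TB end
    print(projected / 1e9, "GB")   # about 8.5 GB - same ballpark as the 10 GB guess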
>>
>> Of course, all I can see right now is this copying process - I have
>> no experience with fsck'ing rescued images, or whether those errors
>> end up wrecking everything.
>>
>> What I did notice is that readout seems to be slowing down again -
>> it was at over 100 kB/s for two days, but it dropped yesterday:
>>
>> GNU ddrescue 1.18-pre7
>> Initial status (read from logfile)
>> rescued:     1757 GB,  errsize:   8071 MB,  errors:   77429
>> Current status
>> rescued:     1764 GB,  errsize:   8111 MB,  current rate:        0 B/s
>>     ipos:   713673 MB,   errors:   87250,    average rate:    83757 B/s
>>     opos:   713673 MB, run time:   23.65 h,  successful read:   11.03 m ago
>>
>> GNU ddrescue 1.18-pre7
>> Initial status (read from logfile)
>> rescued:     1764 GB,  errsize:   8111 MB,  errors:   87250
>> Current status
>> rescued:     1766 GB,  errsize:   8123 MB,  current rate:        0 B/s
>>     ipos:   745794 MB,   errors:   90348,    average rate:    62385 B/s
>>     opos:   745794 MB, run time:    7.70 h,  successful read:    2.11 m ago
>>
>> ...and it now seems a lot more common for successful reads to be
>> more than 10 minutes apart. So this really is starting to seem
>> rather excessive...
>>
>> As for reading it backwards: Can I even do that at this stage in the
>> process?
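
For reference, ddrescue keeps all progress in the logfile, so a run
can be interrupted and restarted with -R (reverse) at any point; it
only touches areas the logfile says are not yet done. A minimal sketch
of such an invocation (device and file names are placeholders):

    # Restart the same rescue in reverse, reusing the existing logfile.
    # -d requests direct disc access, -R reads the disc backwards.
    import subprocess
    subprocess.run(
        ["ddrescue", "-d", "-R", "/dev/sdb", "image.img", "rescue.log"],
        check=True,   # raise if ddrescue exits with an error
    )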
>>
>> -David
>>
>> On Thu, Feb 6, 2014 at 2:39 AM, Scott Dwyer <address@hidden> wrote:
>>>
>>> Hi David,
>>>
>>> I have looked at your logfile, and you may be in for a rougher
>>> recovery than I was expecting. I also had a sort of awakening about
>>> the appearance of the spiral pattern in ddrescueview.
>>>
>>> After absorbing information from the earlier reply from Franc Zabkar
>>> about how the disk will read sections of tracks from one surface
>>> (head) before moving on to the next, I believe that I was totally
>>> wrong about the visual pattern. I now believe that the red problem
>>> area is actually all from one head. In your case, just by the visual
>>> reference, it would appear that there are 4 heads (2 platters). This
>>> is suggested by the visual ratio of good to bad (3 good to 1 bad).
>>> The other logs I have seen like this were more 50/50 (2 heads /
>>> single platter). The reason it looks sort of spiral is the density
>>> difference of information per track from the outside to the inside
>>> of the disk (the outside is bigger and holds more information per
>>> track). The first track is on the outside, and the last track is on
>>> the inside.
>>>
>>> There seems to be one thing in common in the logfiles I have seen:
>>> the data towards the inside of the disk seems to be less prone to
>>> errors (easier to read because the data is going past the head more
>>> slowly?). Unfortunately, from what I can see in your logfile, the
>>> data towards the outside is really bad, with no noticeable recovered
>>> data. It doesn't start to show any promise until it gets farther in.
>>> Sorry for that news after possibly getting your hopes up. Guess I
>>> should learn to see the logfile before making predictions.
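
The inside-reads-easier guess lines up with simple geometry: at a
fixed spindle speed the linear velocity under the head scales with the
track radius, so data on inner tracks passes the head more slowly. A
toy illustration (the radii and spindle speed are rough assumptions
for a 3.5" drive, not specs for this model):

    import math
    rpm = 5400                        # WD Green drives spin near this speed
    for r_mm in (20, 46):             # assumed inner / outer track radius in mm
        v = 2 * math.pi * (r_mm / 1000.0) * rpm / 60   # metres per second
        print("r = %d mm: %.1f m/s" % (r_mm, v))
    # Prints ~11 m/s inner vs ~26 m/s outer: marginal sectors at the
    # outer edge give the head less than half the time per bit.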
>>>
>>> And you are correct that ddrutility will not help you. The only
>>> part of it that works with linux ext3 filesystems is the findbad
>>> bash script, which finds what files are affected. It won't work on
>>> damaged filesystems, and even if it did, it could take days or
>>> weeks (or longer) to complete on a recovery with an error size as
>>> large as yours is likely to be.
>>>
>>> I just had a thought. Based on the pattern that I have seen, I
>>> wonder if it would be more productive to read backwards from the
>>> end, since that seems more likely to produce recovered data
>>> quickly. Something to think about...
>>>
>>>
>>> Scott
>>>
>>>
>>> On 2/4/2014 7:47 PM, David Deutsch wrote:
>>>>
>>>> Hi Scott,
>>>>
>>>> Wow, no, I did not see that at all! Sorry for seemingly ignoring you
>>>> for days, now. Not sure whether it's my gmail account or something...
>>>> Weird. Probably because I messed up my replies in the beginning.
>>>>
>>>> (replying to the points raised in that email since they also answer
>>>> your current questions)
>>>>
>>>>> Both finished logs showed something interesting, in that there
>>>>> were many small errors in what could almost be considered a
>>>>> spiral pattern
>>>>
>>>> Yeah, that's pretty much exactly the situation with my drive - it
>>>> seems like all the bad sectors found are 512 bytes each.
>>>>
>>>>> The fun part was that the filesystem (it was an NTFS disk) was so
>>>>> messed up that nothing would mount it
>>>>
>>>> Maybe I got lucky there since I was using ext3.
>>>>
>>>>> even testdisk failed to find a large portion of the files. So be
>>>>> prepared to use something more robust than testdisk (like
>>>>> R-Studio) if you go through with the rest of the recovery.
>>>>
>>>> Yeah, that really is the scary part - since we're talking about
>>>> 1 TB each of DSLR files (.JPG, .MOV) and music (FLAC, MP3), I
>>>> would really like to see this mounted. I have 'rescued' a number
>>>> of disks for other people, and losing all the nice metadata
>>>> (directories etc.) would be... quite a bummer. The music I would
>>>> probably just have to redo from my CD collection... *sigh*
>>>>
>>>>> ddrutility
>>>>
>>>> From what I understand that is mostly about your case, rescuing
>>>> NTFS partitions? Or would it help in my case as well?
>>>>
>>>>> Third, I am interested in a copy of your logfile if possible.
>>>>> Actually I would like the first one you sent to Antonio if you
>>>>> still have it, and also your current one.
>>>>
>>>> Sure thing. Will send them along in a separate message.
>>>>
>>>> cheers,
>>>> David
>>>>
>>>> On Wed, Feb 5, 2014 at 1:25 AM, Scott Dwyer <address@hidden> wrote:
>>>>>
>>>>> Hi David,
>>>>>
>>>>> First, did you see my reply with my 2 cents? It contained some
>>>>> info (my opinion) as to what might have happened to your drive,
>>>>> and what you might expect (from my experience). I only replied to
>>>>> the bug list, so if you did not see it then you will have to look
>>>>> into the archives, which can be found through the ddrescue page:
>>>>> http://lists.gnu.org/archive/html/bug-ddrescue/
>>>>>
>>>>> Second, while errors are skipped, every error takes time to
>>>>> process, first by the drive itself, and then that is multiplied
>>>>> by any OS retries (from what I can tell in linux from
>>>>> observation, it is about 15 retries normally, or 5 retries using
>>>>> the direct option). So if the drive takes 3 seconds per error,
>>>>> then it would take 15 seconds with the direct option to process
>>>>> the error, or 45 seconds without the direct option. I used 3
>>>>> seconds for the drive as that is about the average from a few
>>>>> drives I have seen, but it is dependent on the drive itself.
>>>>> Doing a little math on that means that at 15 seconds per error,
>>>>> you could process about 5760 errors per day. And you are going to
>>>>> have a LOT of errors by the looks of it, so you are in for a long
>>>>> recovery. But don't be too discouraged just yet. You will have
>>>>> many errors spread all over, but there is still a chance that you
>>>>> will end up with 99% good sectors vs bad - not to say that file
>>>>> recovery will be easy when done. What file system is this? Is it
>>>>> NTFS? What type of files will you be trying to recover?
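
Scott's timing figures, worked through (the 3-second timeout and the
retry counts are his observations from a few drives, not documented
constants):

    # Time to process one unreadable sector, and errors handled per day.
    drive_timeout      = 3                     # seconds, drive-dependent
    per_error_direct   = drive_timeout * 5     # 15 s with the direct option
    per_error_buffered = drive_timeout * 15    # 45 s without it
    print(86400 // per_error_direct)           # 5760 errors/day (direct)
    print(86400 // per_error_buffered)         # 1920 errors/day (buffered)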
>>>>>
>>>>> Third, I am interested in a copy of your logfile if possible.
>>>>> Actually I would like the first one you sent to Antonio if you
>>>>> still have it, and also your current one.
>>>>>
>>>>> Scott
>>>>>
>>>>>
>>>>>
>>>>> On 2/3/2014 5:00 PM, David Deutsch wrote:
>>>>>>
>>>>>> Close to breaking 1750 GB, too. I think this kills the "1/8 of
>>>>>> the disc is dead" idea, i.e. one platter/side or read head being
>>>>>> dead. Still curious what could produce such a regular error,
>>>>>> though, particularly across the entire space of the disc. Or
>>>>>> maybe I just have no frigging clue how hard discs work (I really
>>>>>> don't).
>>>>>>
>>>>>> Reading still progresses at a steady pace in general, although
>>>>>> it's kind of weird: it only reads every two to three minutes,
>>>>>> sometimes up to ten. Not sure whether that is the drive hardware
>>>>>> failing more in general (though improving speeds would say
>>>>>> otherwise) or just the general issue with bad sectors. Then
>>>>>> again: shouldn't it just skip past those? Or are the sectors
>>>>>> around the bad ones just hard to get anything out of?
>>>>>
>>>>>
>>>> _______________________________________________
>>>> Bug-ddrescue mailing list
>>>> address@hidden
>>>> https://lists.gnu.org/mailman/listinfo/bug-ddrescue
>>>>
>


