HDDSuperClone

maximus

Member
Jared said:
Hey, still isn't too bad. I've had jobs image for two full months before we had enough data to finally disconnect it. I've heard of other guys here who went six+ months. This is where a software tool would be awesome as it'll only tie up a cheap Linux computer instead of a $5000 hardware imager for all that time.
Yes, but isn't that the point where a professional would do a head swap to get things moving along again so it would not take weeks or longer?
 

Jared

Administrator
Staff member
Depends on the case. With some models, if you head swap, it might never come ready again. So if it's reading steadily, however slow, it's best to leave it alone and let it do its thing.
 

maximus

Member
Jared said:
HaQue said:
"would be awesome" ? Hey Jared, it is already awesome ;-) Great work Maximus, I wish I had your commitment and such quick results.
I'm not saying it isn't. I'm just waiting until it can power cycle drives that hang before I'm likely to implement the tool in day to day use here.
And you are going to wait a little bit longer for that now. This is a bit of a setback. I was planning on doing some drive cloning/swapping to improve my test system and get a second system up and running, and now I am going to be one drive short of the plan, plus I am tying up the drive I am cloning to and the OS drive. And I had just received my new 4-relay boards a couple of days before this, too; now they are just sitting there waiting for me to play with them...

On the plus side, this "test" has surfaced a small bug in the log file that I have to figure out. As far as I can tell, it should not affect the recovery: it is just a matter of two consecutive lines with the same status that should have been merged, and they can be merged again by the built-in log repair feature. If I end up spending days on this and it turns out to be messed up, I will be very unhappy.
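The merge that a log repair feature like this performs can be sketched roughly as follows. This is a minimal illustration, not HDDSuperClone's actual code; the (position, size, status) tuples follow the ddrescue mapfile convention of contiguous runs:

```python
# Hypothetical sketch of repairing a ddrescue-style log: collapse
# consecutive entries that share the same status into a single run.
# Each entry is (position, size, status) and entries are contiguous.

def merge_adjacent(entries):
    """Merge consecutive entries with the same status into one run."""
    merged = []
    for pos, size, status in entries:
        if (merged and merged[-1][2] == status
                and merged[-1][0] + merged[-1][1] == pos):
            prev_pos, prev_size, _ = merged[-1]
            merged[-1] = (prev_pos, prev_size + size, status)
        else:
            merged.append((pos, size, status))
    return merged

# Two '+' (finished) runs in a row get merged into one:
entries = [(0, 4096, '+'), (4096, 4096, '+'), (8192, 512, '-')]
print(merge_adjacent(entries))  # [(0, 8192, '+'), (8192, 512, '-')]
```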

It is up to 96.6% recovered now. I am doing things that I did not intend to do. The skipping algorithm worked great, but hitting the third pass without skipping was not working out very well for this case. So I am using the options to do more controlled skipping on the last of the untried areas that have already been skipped. I can provide the tool, and it helps that I understand it best because I wrote it, but there is no way I can teach someone how to do something like this. I am making judgement calls on the fly to see what works best for this drive.
 

maximus

Member
The drive recovery is up to 97.79 percent, and I am calling it quits. According to my calculations, the rest would take a few months. I think I will run my ntfsfindbad on the clone and see what files are worth keeping from the recovery attempt. Some of the files that I was hoping to keep are large and are most likely corrupt. Bummer. Oh well, no super critical data lost. Time to move forward again.
 

maximus

Member
pclab said:
I would call a 97% recovery successful. Sometimes we get a lower percentage....
That is a matter of perspective. For a personal recovery, that could mean getting most of the important documents, pictures, videos, and so on. Some pictures can handle a bad spot and still look okay, videos can usually handle small bad spots and still be okay, and documents are usually small enough that they are either recovered or not. But what if the data consists of larger files that are corrupt and useless / untrustworthy unless they are 100% intact, such as a disk image or a virtual machine file? A bad head can span the entire drive with lots of small errors, maybe only one sector in size each. Many small files can be recovered 100%, but larger files are subject to likely corruption. If the desired files are large and need to be 100% intact, then a case like this is not a success.
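A rough back-of-the-envelope model makes the point: with single-sector errors scattered uniformly across the drive, the chance a file survives intact falls off quickly with file size. The bad-sector fraction and file sizes below are made-up illustrative numbers, not figures from this recovery:

```python
# Illustrative model only: assume independent, uniformly scattered
# single-sector errors, so a file of N sectors is fully intact with
# probability (1 - p)^N, where p is the fraction of bad sectors.

def p_file_intact(bad_fraction, file_sectors):
    """Probability that none of a file's sectors is bad."""
    return (1.0 - bad_fraction) ** file_sectors

bad = 1e-6  # e.g. one bad sector per million, spread by a weak head
for name, sectors in [("small document (8 sectors)", 8),
                      ("photo (4,000 sectors)", 4_000),
                      ("VM image (200M sectors)", 200_000_000)]:
    print(f"{name}: {p_file_intact(bad, sectors):.6f}")
```

Under these assumptions the small files come out essentially intact while the probability for the huge file is effectively zero, which matches the intuition that scattered one-sector errors mostly ruin only the large files.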
 

pclab

Moderator
Yeah, true, but with 3% left the probability of corruption is low. Usually you don't have an HDD 100% full, right?
But sometimes it's Murphy's Law: the most important file will be in that remaining 3% :mrgreen:
 

maximus

Member
Yes, but if that 3% is all one head and spans across the entire disk, then very large files are almost certainly damaged. There were a few places where there were very long runs with no errors, so one can still have a bit of hope. Getting ready to check it in a little bit...
 

jol

Member
maximus said:
but if that 3% is all one head
it doesn't matter if it's from 1 head or more; statistically speaking, if it's not 1 file spanning the entire drive (which most of the time it isn't), then 97% counts as a success
 

maximus

Member
jol said:
maximus said:
but if that 3% is all one head
it doesn't matter if it's from 1 head or more; statistically speaking, if it's not 1 file spanning the entire drive (which most of the time it isn't), then 97% counts as a success
Yes, statistically this is a good recovery. And I am getting many good files from it, I think. Just not the large ones. My ntfsfindbad showed that almost all of the bad files were ones I did not care about. But it is flawed, as I found that it was not reporting on the larger files (this is caused by an issue where the data run info is not directly in the MFT, which I have not addressed yet). That also means that my custom software to recover individual files won't work on some of the files. So I broke down and did the inevitable: I bought R-Studio (the Linux version, of course). It had to happen sooner or later. I also used the fill mode of ddrescue to mark the bad areas so I will be able to tell for sure whether files are corrupt (not implemented in mine yet; there is a reason I made it backwards compatible with ddrescue logs).
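For anyone unfamiliar with the fill-mode trick: ddrescue's `--fill-mode` can overwrite the clone's bad-sector regions with data from a file (e.g. something like `ddrescue --fill-mode=- marker.bin clone.img mapfile`), so a recognizable marker string ends up wherever data is missing. A recovered file can then be scanned for that marker to flag corruption. The marker string and chunk size below are arbitrary choices for illustration, not anything from the post:

```python
# Scan a recovered file for the fill-mode marker pattern. A hit means
# the file overlapped an unreadable region of the original drive.

MARKER = b"BAD-SECTOR-MARK"  # hypothetical fill pattern
CHUNK = 1 << 20              # read 1 MiB at a time

def file_is_corrupt(path):
    """Return True if the marker pattern appears anywhere in the file."""
    overlap = len(MARKER) - 1
    tail = b""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                return False
            # keep a short tail so a marker split across chunk
            # boundaries is still detected
            if MARKER in tail + chunk:
                return True
            tail = chunk[-overlap:]
```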
 