I was searching around and found this post. I would like to add some information.
First, I only helped with the ddrescue algorithm between versions 1.17 and 1.19. The original algorithm was very jumpy and caused a lot of head movement, and I helped make it better suited to real-world recovery. Versions 1.19 and higher are much improved as a result. That is the only credit I can take for ddrescue.
I learned a lot from analyzing ddrescue, but hddsuperclone is based only on the concept. All of the code and algorithms are very much my own. The idea of different copy phases is somewhat similar, but I take it to a higher level, based on real-world results.
As for being slower, it could be from some extra overhead. In my testing on a decent computer (2.7 GHz quad core), I have so far found hddsuperclone to be about the same speed as ddrescue when cloning a good drive with similar settings. I have noticed in certain tests that the more robust ATA passthrough mode can be up to 5% slower than ddrescue in read speed. But even if hddsuperclone is a bit slower, there are reasons it is still better.
The first reason it is better is that the self-learning head-skipping algorithm will kick ass on a drive with a weak, damaged, or dead head. It does its best to get the most from the good heads first, before digging into the bad head. This does not help as much on a drive with only a bad spot or a few bad sectors, but it is still a good algorithm, since bad spots are often related to a particular head.
Second reason it is better: it makes a backup copy of the log file. This may not seem like a big deal, but shortly after I added this feature, someone sent me a log that was incomplete and cut off. When I asked, I was told there had been a power outage, and I was sent the backup log. The backup log was complete, and the recovery could have been resumed from it. Had that been ddrescue, the missing part of the log would have been lost.
Third reason it is better: it always uses “direct” unbuffered reads and writes. With default settings, ddrescue allows the OS to use buffering for both reading and writing. For reading, that can sometimes mean the OS performs unwanted retries, which is why the --direct option is recommended. Hddsuperclone uses different, more direct system calls instead of the standard read, which is as good as or better than ddrescue's --direct option. For writing, hddsuperclone always writes directly, so there is no OS buffering. Recent versions of ddrescue have an option to do this, but you have to choose it. If the OS buffers the write data, it may not be written right away, and in the event of a power failure or crash the data would never reach the destination. The log file could show that the data was recovered when a small portion was never actually written. This may account for some of the difference in performance. But I don't care if it makes hddsuperclone a little bit slower; I stand by this choice because it makes the tool more robust and reliable.
If HDDSuperClone is 10% slower than ddrescue, I am not too concerned. If it is 20% or more slower in a given case, then I would want more information on that recovery, to try to see why there is such a big difference.