As far as I understand, the scanner just reads data, identifies unreadable sectors, and reports the problems, but does not fix them.
Here is what I would expect:
Case 1: unduplicated
If the scanner manages to read the sector after several retries, it should write the data back to the sector. This forces the drive firmware to remap it to a spare sector and ensures that the sector will not become unreadable again.
There should also be a threshold on the number of these repair operations to identify a failing drive. Once it is reached, write operations should be avoided as far as possible, and the user should instead try to move all data off the drive.
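A rough Python sketch of what I mean for the unduplicated case (read_sector and write_sector are made-up names for illustration, not the scanner's actual API):

```python
# Hypothetical repair loop for an unduplicated file. A real implementation
# would use raw device I/O; here the sector interface is just two callables.

MAX_RETRIES = 5

def try_repair_unduplicated(read_sector, write_sector, lba):
    """Retry reading a weak sector; on success, write the same data back
    so the drive firmware can remap the sector to a spare. Returns True
    if the sector was recovered and rewritten."""
    for _ in range(MAX_RETRIES):
        data = read_sector(lba)
        if data is not None:            # the read finally succeeded
            write_sector(lba, data)     # rewriting triggers firmware remapping
            return True
    return False                        # still unreadable after all retries
```

If it returns False, the sector really is lost and the user has to restore the file from a backup.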
Case 2: duplicated
a) Move everything away from the drive. There is already an option for this. But a single unreadable sector on a 3-4 TB drive does not make the drive completely unusable, so there should be another option:
b) As there is a duplicate copy of the file on another drive, the affected sector can be read from that copy, and the scanner can repair the bad sector by writing the data back to the defective sector (or file).
There should also be a threshold on the number of these repair operations. If it is reached, try to move everything off the drive.
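Again a rough sketch of the duplicated case, with a per-drive repair counter. All the names and the threshold value are just illustrative assumptions:

```python
# Sketch of repairing a bad sector from a duplicate copy on another drive,
# plus a per-drive repair counter that recommends evacuating the drive
# once a threshold is reached. Drive is a tiny in-memory stand-in.

REPAIR_THRESHOLD = 3   # assumed limit before the drive counts as failing

class Drive:
    """Minimal in-memory stand-in for a physical drive."""
    def __init__(self, name, sectors):
        self.name = name
        self.sectors = dict(sectors)   # lba -> bytes

    def read(self, lba):
        return self.sectors.get(lba)

    def write(self, lba, data):
        self.sectors[lba] = data

def repair_from_duplicate(bad_drive, good_drive, lba, repair_counts):
    """Copy the sector from the duplicate and write it back to the
    defective drive; return 'evacuate' once too many repairs were needed."""
    data = good_drive.read(lba)
    bad_drive.write(lba, data)          # restore the sector in place
    repair_counts[bad_drive.name] = repair_counts.get(bad_drive.name, 0) + 1
    if repair_counts[bad_drive.name] >= REPAIR_THRESHOLD:
        return "evacuate"               # threshold met: move everything off
    return "repaired"
```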
Can you please comment? Maybe my view of what currently happens is not correct.
Another useful addition would be a function that rewrites all sectors (a magnetic refresh), which could be scheduled every couple of months. This would ensure that even data which changes very seldom does not become unreadable over time.
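The refresh idea, sketched in the same hypothetical style (read_sector/write_sector are assumed callables, not a real API):

```python
# Sketch of a scheduled "magnetic refresh": read every sector and write it
# back in place, so rarely-changed data is re-magnetized periodically.

def refresh_all_sectors(read_sector, write_sector, sector_count):
    """Rewrite every readable sector; return the LBAs that could not be
    read so they can be handed to the repair logic instead."""
    unreadable = []
    for lba in range(sector_count):
        data = read_sector(lba)
        if data is None:
            unreadable.append(lba)     # leave bad sectors to the repair path
        else:
            write_sector(lba, data)    # in-place rewrite refreshes the sector
    return unreadable
```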
Carsten