
Recover from bad sectors / Rewriting data


Carsten

Question

As far as I understand, the scanner just reads data, identifies unreadable sectors, and reports problems, but does not fix them.

 

Here is what I would expect:

 

Case 1: unduplicated

If it manages to read the sector after several retries, it should write the data back to that sector to force the drive firmware to remap it to a spare sector, ensuring that the sector will not become unreadable again.

There should also be a threshold on the number of repair operations to identify a failing drive. Once that threshold is reached, write operations should be avoided as far as possible and the user should instead try to move all data off the drive.
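To illustrate what I mean for case 1, here is a rough sketch (this is not Scanner code; the device path and sector number are made up, it needs administrator rights, and recent Windows versions block raw writes to sectors inside a mounted volume unless the volume is locked or dismounted):

import os
import time

SECTOR_SIZE = 512                      # assumed logical sector size
DEVICE = r"\\.\PhysicalDrive1"         # hypothetical raw device
BAD_SECTOR = 123456789                 # hypothetical LBA reported as unreadable

def retry_read_then_rewrite(device, lba, retries=20, delay=0.5):
    """Try to read one sector several times; on success, write the same
    bytes back so the drive firmware can remap the weak sector."""
    offset = lba * SECTOR_SIZE
    # buffering=0 gives unbuffered access; offset and length must be sector-aligned
    with open(device, "rb+", buffering=0) as disk:
        data = None
        for _attempt in range(retries):
            try:
                disk.seek(offset)
                data = disk.read(SECTOR_SIZE)
                break
            except OSError:
                time.sleep(delay)      # give the drive a moment, then retry
        if data is None or len(data) != SECTOR_SIZE:
            return False               # never write data we could not read
        disk.seek(offset)
        disk.write(data)               # the write-back forces reallocation if needed
        os.fsync(disk.fileno())
        return True

if __name__ == "__main__":
    print("rewritten" if retry_read_then_rewrite(DEVICE, BAD_SECTOR) else "still unreadable")

The important part is the check before the write: nothing gets written unless the read actually succeeded.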

 

 

Case 2: duplicated

a) Move everything away from the drive. There is already an option to do this. But a single unreadable sector on a 3-4 TB drive does not make the drive completely unusable, so there should be another option b):

 

b) As there is a duplicate copy of the file on another drive, the affected sectors can be read from the other drive and the Scanner can repair the bad sectors by writing that data to the defective sector (or file).

There should also be a threshold on the number of these repair operations. If it is reached, try to move everything off the drive.
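Roughly, option b) could look like this sketch (purely illustrative; the paths are made-up examples of the same pooled file on two underlying drives, and it assumes the file system rewrites the existing clusters in place, as NTFS normally does):

import hashlib

# Hypothetical locations of the same pooled file on two underlying drives.
GOOD_COPY = r"E:\PoolPart.xxxx\Media\movie.mkv"
BAD_COPY  = r"F:\PoolPart.yyyy\Media\movie.mkv"

def repair_from_duplicate(good_path, bad_path, chunk=1024 * 1024):
    """Overwrite the unreadable copy in place with data from the good copy;
    writing over the bad region forces the drive to remap the weak sectors."""
    digest = hashlib.sha256()
    with open(good_path, "rb") as src, open(bad_path, "r+b") as dst:
        while True:
            block = src.read(chunk)
            if not block:
                break
            digest.update(block)
            dst.write(block)
        dst.truncate()                 # in case the two copies ever differ in size
    return digest.hexdigest()          # can be compared against the source afterwards

if __name__ == "__main__":
    print("rewrote bad copy, sha256 =", repair_from_duplicate(GOOD_COPY, BAD_COPY))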

 

Can you please comment, or correct me if my view of what currently happens is wrong?

 

Another thing would be a new function to rewrite all sectors (a magnetic refresh), which could be scheduled every couple of months. This would ensure that even data which changes very seldom does not become unreadable over time.
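Such a refresh could be as simple as reading every block of every file and writing the same bytes straight back, something like this sketch (illustrative only; the folder path is hypothetical, and rewriting updates the files' timestamps, which backup tools may notice):

import os

POOL_ROOT = r"D:\Archive"              # hypothetical folder with rarely-changing data
CHUNK = 4 * 1024 * 1024

def refresh_file(path, chunk=CHUNK):
    """Read each block and write the identical bytes back, refreshing the
    magnetization of the underlying sectors without changing the contents."""
    with open(path, "r+b") as f:
        offset = 0
        while True:
            f.seek(offset)
            block = f.read(chunk)
            if not block:
                break
            f.seek(offset)
            f.write(block)
            offset += len(block)
        f.flush()
        os.fsync(f.fileno())           # make sure the data really reaches the platters

def refresh_tree(root):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            refresh_file(os.path.join(dirpath, name))

if __name__ == "__main__":
    refresh_tree(POOL_ROOT)            # could be run from a monthly scheduled task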


4 answers to this question

Recommended Posts


Carsten,

 

You are correct that StableBit Scanner doesn't write to the disk to force it to fix the damaged sectors.

This is intentional. It does have a "file scanner" that will attempt to read and then save affected files to a new location. But it doesn't repair the damaged sector.

 

The reason that we do not do this is that doing so would remove any chance to recover the file. Once the sector has been reallocated, the original contents are "gone". And that isn't what we want to do.

 

As for the failing threshold, well that really depends on where the damage takes place. If it takes place at the beginning of the drive, well, that will likely damage the partition table and make the entire drive useless. 

Also, this damage shouldn't occur, period. It means that all of the disk's error correction routines have actively failed.

And while we understand that this may be "normal", it means something very bad has happened.

 

Also, where there is one damaged sector, there are usually more that will appear.

 

 

 

Additionally, for the "magnetic refresh" part: in theory, even just a read should be enough to fix any issues. If there is a problem, then the drive's firmware should detect that and attempt to correct it. Silently. If there is an issue and it needs to remap the sector, you should notice the "Reallocated sector count" increase. If there is a serious issue and it fails to fix the sector, then you should notice an "Uncorrectable sector count" increase in the SMART data.
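If you want to keep an eye on those counters yourself, something like the following works. This is only a sketch that shells out to smartctl from smartmontools (not part of our products); the device name is an example, and it assumes the raw values for these attributes are plain integers:

import subprocess

DEVICE = "/dev/sda"                    # example device name understood by smartctl

# SMART attributes worth watching for sector health.
WATCHED = ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable")

def read_smart_counters(device):
    """Run 'smartctl -A' and pull out the raw values of the watched attributes."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    counters = {}
    for line in out.splitlines():
        for name in WATCHED:
            if name in line:
                counters[name] = int(line.split()[-1])   # raw value is the last column
    return counters

if __name__ == "__main__":
    for attr, value in read_smart_counters(DEVICE).items():
        print(f"{attr}: {value}")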

 

Also, writing to the disk is a tricky proposition here. At least, re-writing the existing content. First we have to be able to read it, then attempt to write it back to the exact same location on the disk. This sounds simple, but it is definitely complicated.

Writing "zeros" or random data is simple, and definitely could be done.

But both of these options would come at the expense of the ability to use data recovery tools.

 

 

However, this (the writing to damaged sectors) has been a much requested feature, and we are definitely considering adding it to Scanner.



Christopher,

 

thanks for your answer. Some more questions:

 

You wrote: "This is intentional. It does have a 'file scanner' that will attempt to read and then save affected files to a new location."

 

1) What does the file scanner do exactly? If it can read the file after some retries, does it move the file to another location? Is this transparent for access to the pool, and does it copy it to another file name? What happens to pool integrity?

 

2) What happens when the scanner cannot read the file? Will it use the information from a duplicate copy on another disk?

 

3) What happens when the pool tries to read from a duplicated file and one copy is not readable? Does it use the other disk and re-duplicate the file by writing it again? Maybe the scanning should be in DrivePool instead of the Scanner? Is there an integrity check in DrivePool which compares the duplicated copies for binary identity? If yes, DrivePool could scan the drives and, in case of read errors, just rewrite the file. In case of inconsistencies (readable but binary different), it should either stop access to the file and let the user decide which version to keep, or just use one of the versions and overwrite the other. If there is more than one additional copy of the file, it could use the contents found on the majority of drives (see the sketch below).
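To illustrate the majority idea from 3), a rough sketch (hypothetical paths; this is not how DrivePool works today, just what I have in mind):

import hashlib
from collections import Counter

# Hypothetical locations of the same pooled file on three underlying drives.
COPIES = [
    r"E:\PoolPart.a\docs\report.pdf",
    r"F:\PoolPart.b\docs\report.pdf",
    r"G:\PoolPart.c\docs\report.pdf",
]

def sha256_of(path, chunk=1024 * 1024):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def pick_majority(paths):
    """Hash every readable copy and return the winning hash plus the paths
    holding it; unreadable copies simply drop out of the vote."""
    hashes = {}
    for p in paths:
        try:
            hashes[p] = sha256_of(p)
        except OSError:
            continue                   # unreadable copy: excluded from the vote
    if not hashes:
        return None, []
    winner, _count = Counter(hashes.values()).most_common(1)[0]
    return winner, [p for p, h in hashes.items() if h == winner]

if __name__ == "__main__":
    winning_hash, good_paths = pick_majority(COPIES)
    print("majority content:", winning_hash, "held by", good_paths)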

 

4) Why does writing to a file remove the ability to recover it? It should only write successfully read data, i.e. if a sector is not readable and, say, the scanner can read it after 20 tries, then it should rewrite this sector immediately to force remapping. It should never write to a sector that it can't read and that belongs to a file.

 

5) Basically there are two kinds of bad sectors: those in files and those in free space. Bad sectors in free space can be overwritten immediately with zeros (sketched below); for those in files, you have to wait until you can read them, or read them from a duplicate copy, and then write them back.
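For the free-space half, a sketch of the idea (hypothetical drive letter; as far as I know, the built-in "cipher /w:F:" command does much the same thing, just with three passes):

import os

DRIVE = "F:\\"                         # hypothetical drive letter to sweep
FILLER = os.path.join(DRIVE, "zero_fill.tmp")
CHUNK = 64 * 1024 * 1024

def zero_free_space(filler_path, chunk=CHUNK):
    """Fill the drive's free space with zeros by growing a temporary file
    until the disk is full, then delete it; any weak sector in free space
    gets written (and remapped by the firmware) along the way."""
    block = bytes(chunk)
    try:
        with open(filler_path, "wb", buffering=0) as f:
            while True:
                try:
                    f.write(block)
                except OSError:        # disk full: the free space has been covered
                    break
            try:
                os.fsync(f.fileno())   # push everything out of the OS cache
            except OSError:
                pass
    finally:
        if os.path.exists(filler_path):
            os.remove(filler_path)

if __name__ == "__main__":
    zero_free_space(FILLER)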

 

6) To your point about how to find the exact sector: from the cluster map you should know which clusters or sectors belong to which file. Then either just rewrite the data at that specific position in the file via the file system, or use the Microsoft defragmentation API (see the sketch below).
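For 6), a rough sketch of reading a file's cluster map through the defragmentation FSCTL (illustrative only; the path is made up, it only works on Windows, very small files stored inside the MFT have no extents, and a complete version would loop when a file has more extents than the buffer holds):

import ctypes
import msvcrt
from ctypes import wintypes

# Hypothetical file whose on-disk clusters we want to locate.
TARGET = r"F:\data\archive.bin"

FSCTL_GET_RETRIEVAL_POINTERS = 0x00090073   # part of the defragmentation API
MAX_EXTENTS = 512                           # enough for moderately fragmented files

class STARTING_VCN_INPUT_BUFFER(ctypes.Structure):
    _fields_ = [("StartingVcn", ctypes.c_longlong)]

class EXTENT(ctypes.Structure):
    _fields_ = [("NextVcn", ctypes.c_longlong),
                ("Lcn", ctypes.c_longlong)]

class RETRIEVAL_POINTERS_BUFFER(ctypes.Structure):
    _fields_ = [("ExtentCount", wintypes.DWORD),
                ("StartingVcn", ctypes.c_longlong),
                ("Extents", EXTENT * MAX_EXTENTS)]

def file_extents(path):
    """Return (first LCN, length in clusters) pairs describing where the
    file's data lives on the volume."""
    kernel32 = ctypes.windll.kernel32
    with open(path, "rb") as f:
        handle = msvcrt.get_osfhandle(f.fileno())
        inbuf = STARTING_VCN_INPUT_BUFFER(0)
        outbuf = RETRIEVAL_POINTERS_BUFFER()
        returned = wintypes.DWORD(0)
        ok = kernel32.DeviceIoControl(
            wintypes.HANDLE(handle), FSCTL_GET_RETRIEVAL_POINTERS,
            ctypes.byref(inbuf), ctypes.sizeof(inbuf),
            ctypes.byref(outbuf), ctypes.sizeof(outbuf),
            ctypes.byref(returned), None)
        if not ok:
            raise OSError("FSCTL_GET_RETRIEVAL_POINTERS failed")
        extents = []
        vcn = outbuf.StartingVcn
        for i in range(outbuf.ExtentCount):
            ext = outbuf.Extents[i]
            extents.append((ext.Lcn, ext.NextVcn - vcn))
            vcn = ext.NextVcn
        return extents

if __name__ == "__main__":
    for lcn, clusters in file_extents(TARGET):
        print(f"starts at LCN {lcn}, runs for {clusters} cluster(s)")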

 



Hi, I have asked a similar question. I also had a drive with a few bad sectors, all duplicated of course, but Scanner seems unable to use the duplicated files to repair or replace the damaged files, which kind of seems pointless. However, if you completely remove the drive, then the duplicated files kick in and all is good. While this is great in case of failure, it is not helpful for the day-to-day maintenance of a system. The other option we have is to delete the damaged file and replace it with a fresh copy; again, that's just not what I want or expect. The way I see it, at the moment Scanner will inform us of things going bad but does nothing in regards to keeping the file database intact. FYI, file recovery has never recovered a damaged file for me yet; I always end up replacing the whole file even though it is duplicated.



  1. The Scanner uses different "read head" positioning strategies to try to read the damaged sectors. There are 20 that we use. And they are what you think they are: we read different sections of the disk to get the read heads to approach the damaged sectors from different areas, and so on.

     

    As for the files, this is handled separately from DrivePool. However, you could save the file to a pooled location.

    But before attempting the recovery, it asks you where you would like to save the affected files. Once it has successfully read the file, it will save it/them to the specified location.

     

    If it fails, it lets you know. At that point, it's entirely up to you. If the file is duplicated and you delete the affected copy, then DrivePool will attempt to duplicate it. And the quickest way to force it to do so, is change the duplicate status, or access the file on the pool.

     

  2. StableBit Scanner really isn't integrated into DrivePool in such a way as to do this. Also, aside from it being damaged, how would Scanner know which file is the one we want to save?

     

    This is a subject that has been brought up repeatedly, and something that we have been thinking about. However, any solution here would not be a simple one, as anything else would be a half measure.

     

     

  3. Since the driver for the pool is not aware of StableBit Scanner (and to be honest, I'm not sure if it's possible to do that, or how much work would have to go into it), then if you have a problem file, the Pool will have a problem with the file. In fact, most disk-related issues with the pool are due to issues with the underlying disks.

    As for files that don't match contents, the Pool does detect that and notifies the server (and UI) that there is an issue (file mismatch). 

     

    And no, there isn't an integrity check for the Pool yet. Emphasis there. The idea/suggestion/recommendation has been brought up numerous times, and we are planning on something to that effect. And hopefully, it would allow for the integration, error checking, and ... well, handling damaged files like you've mentioned. But all of that would be rather complex, and would take a while to develop.

     

     

  4. It depends on the circumstance here.

    If you're rewriting the file to the existing location... if any errors occur (like a BSOD, or "surprise removal"), you will corrupt the data. 

    Additionally, if something does go wrong, since you've written (or attempted to do so), you may have overwritten data that could have been recovered by other means.

     

     

  5. This ties in closely, so I'm going to cover all of it in the next point:

     

  6. That is a very good idea, and one that I didn't even think of (which is ironic, because I just reinstalled PerfectDisk on my work VM).

    Writing to a damaged sector is the only way to "fix" it. Either the disk is able to recover the sector, or it's remapped to a new location on the disk (and should increase the "reallocated sector count" SMART value).

     

    Writing zeros would be the best way to force the disk to do this, and that is easy for empty space. But remember that doing this may prevent you from recovering deleted files.

    But the challenging part, no matter how it is handled, would be the existing files. Using the Defrag API would be an excellent method of handling that, since there is already a framework/API for it, in theory.

     

