Covecube Inc.
Showing results for tags 'failing drive'.



Found 2 results

  1. Hope this makes sense... Two days ago StableBit Scanner and WHS 2011 reported that a drive was failing... and a brand new 6TB WD Red drive, of all things! So I shut down the server, physically removed the drive, and backed up the data to another 6TB drive. I mostly backed up the Server Folders that I don't have duplication turned on for, since the duplicated server folders should be OK (duplicate copies exist on the other drives), correct?

     I was all set to add the NEW 6TB drive to the pool and then follow the instructions in this posting to move the backed-up data back into the pool: http://wiki.covecube.com/StableBit_DrivePool_Q4142489. If I understand that process, once I do that, all of the non-duplicated files should be back in the pool, and DrivePool will do its thing to re-populate the new drive with duplicate copies of anything that was on the failing drive.

     But then it hit me: I haven't removed the failing drive from the pool yet. It still shows as "missing", so I'm guessing DrivePool won't actually start re-duplicating the files that were in duplicated folders on the missing drive until I remove it from the pool via the Dashboard? I don't want to screw this up, so what is the proper approach for this scenario? Here are the facts, summarized:

     * The 6TB failing drive ("failing") was physically removed from the server, and its non-duplicated folders were backed up to a new 6TB drive ("new").
     * NOTE: I also backed up some other "more critical" folders that ARE in fact already set up for duplication in the pool, just in case. I probably didn't need to do this, but figured that with as much of the duplicated data backed up as possible, I could use the instructions in the link above to move the data from the backed-up folders back into the pool once the "new" disk was added, and save DrivePool some work/time. Is that a correct assumption?
     * I have already added the "new" drive to the pool (with the backed-up files noted above) and was about to go through the instructions in the link above to move that data back into the pool.
     * I still haven't removed the "failing" drive from DrivePool yet.

     If you folks can please advise on the best way to approach this, that would be most appreciated. Let me know if anything is not clear.

     LATE ADDITION: I forgot to mention I'm running WHS 2011 on an HP N40L. Since I had to do some work on this drive issue anyway, I took the time to swap out my external drive enclosure, install a NEW PCIe card (a 6Gb/s eSATA card with 4 ports), and upgrade my memory (none of which has anything to do with the drives). BUT I now have the original eSATA port on my N40L free, as well as a second external eSATA port. So it just dawned on me that I could drop the "failing" drive into a single-bay external enclosure I have, plug it into one of the free eSATA ports, and then REMOVE the "failing" drive in the Dashboard, letting DrivePool do its thing and migrate the data over to the "new" drive I added. I only have about 2TB of data on the "failing" drive and about 3TB free on the "new" 6TB drive (the rest is the backed-up data in a separate folder NOT in a Server Folder, so not in the pool). So there is enough space to let DrivePool do its thing, and worst case, if something doesn't get pulled off the failing drive, I have the backed-up data. This would seem to be the best approach based on what I've read, correct? If not, let me know... And then I need to update my version of DrivePool ;-)
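The KB article linked above describes "seeding": with the DrivePool service stopped, you move folders directly into the hidden PoolPart.* folder on a pooled disk, then restart the service and re-measure. A minimal sketch of just the move step, assuming hypothetical paths (the real PoolPart folder name ends in a GUID unique to each disk, and this is not DrivePool's own tooling):

```python
import shutil
from pathlib import Path

def seed_into_poolpart(backup_root: str, poolpart_root: str) -> list:
    """Move each top-level folder from a backup location into the hidden
    PoolPart.* folder on a pooled disk, preserving folder structure.
    Per the Q4142489 KB article, the DrivePool service should be stopped
    first and the pool re-measured afterwards. Returns the names moved."""
    moved = []
    dst_root = Path(poolpart_root)
    dst_root.mkdir(parents=True, exist_ok=True)
    for entry in Path(backup_root).iterdir():
        target = dst_root / entry.name
        if target.exists():
            # Don't clobber folders already present in the pool part;
            # those would need a manual merge instead.
            continue
        shutil.move(str(entry), str(target))
        moved.append(entry.name)
    return moved

# Hypothetical paths for illustration only; the GUID suffix differs per disk:
# seed_into_poolpart(r"F:\Backup\ServerFolders",
#                    r"F:\PoolPart.1234abcd-0000\ServerFolders")
```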
  2. Running DrivePool 2.1.558 on Windows Server 2012 R2. I don't own Scanner. 20TB pool, 6 drives, 6 SATA ports (all full). One of these six drives, a 3TB Toshiba, is in the process of failing BADLY.

     Symptoms: whenever I attempt to copy data from the pool, any file being read off the failing disk slows to a crawl of around 50-80 KB/s, then stops and chugs, then reads another 40-80 KB, then stalls forever while the HDD churns. I'm assuming the drive's built-in ECC algorithms are working overtime to recover the bits and the firmware is reallocating sectors (but the drive is nearly 100% full: maybe 60-70GB free out of 3TB). The files I'm copying off DO eventually get read, though. So far, the files I've copied off the pool and spot-checked seem to be error free, but it can take up to 1-2 hours to read back a single 1GB file from the damaged sectors! Anyway, it's OBVIOUS this 3TB Toshiba will die any SECOND now. So, my options:

     Option #1: Buy a PCIe SATA III controller card and a new 3TB+ disk. Add the new controller and disk to the server, then use the built-in "remove drive from pool" function to empty all data off the failing Toshiba HDD.

     Q1: Will the "remove drive from pool" function time out or error out? As noted above, I HAVE been able to successfully copy files off this damaged HDD using plain old File Explorer; it just takes a LONG TIME. How patient is the DrivePool evacuation function? Just as patient as File Explorer? I know there's a "force removal of damaged disk" checkbox, but frankly I'm wary of that option. Nearly all this data is multi-part .RAR files without parity (music, movies, audiobooks). If a single file from a multi-part .RAR folder gets skipped because DrivePool decides it's taking too long to read, then I effectively lose 100% of the data in that folder, even if 99% of those .RARs are safely residing on the other 5 functioning HDDs in the pool.
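For what it's worth, when a disk is this marginal, a chunked copy that retries each block a few times (instead of aborting the whole file on the first read error) behaves much like the patient File Explorer copy described above. This is only a rough sketch of that idea, not DrivePool's actual evacuation logic:

```python
import time

def patient_copy(src: str, dst: str, chunk_size: int = 1 << 20,
                 retries: int = 5, backoff: float = 2.0) -> int:
    """Copy src to dst in chunks, retrying each chunk on I/O errors.
    Returns the number of bytes copied. Raises OSError only after a
    single chunk has failed `retries` times in a row."""
    copied = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            for attempt in range(retries):
                try:
                    # Re-seek each attempt in case a failed read left the
                    # file position in an odd state.
                    fin.seek(copied)
                    chunk = fin.read(chunk_size)
                    break
                except OSError:
                    if attempt == retries - 1:
                        raise
                    # Give the drive time to finish its internal recovery.
                    time.sleep(backoff * (attempt + 1))
            if not chunk:
                return copied
            fout.write(chunk)
            copied += len(chunk)
```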
     Q2: Does the "remove drive from pool" function actually DELETE all the contents off the removed HDD after the removal process completes successfully, or even after it errors out? What if the process finishes but with errors? Does DrivePool delete all files off the removed drive? If so, that's bad; it gives me no opportunity to use a data recovery tool like SpinRite after the fact to recover those files.

     Option #2: Physically remove the failing Toshiba 3TB drive from the server. Place the failing drive into a 2nd PC running Windows 10 x64. Place a new 3TB+ replacement drive in the 2nd PC and manually copy the contents to the new HDD using File Explorer. Finally, move the new disk back to the server and run some command to reintroduce the "new" HDD containing the old/existing files to DrivePool???

     Q1: There is a total file path length limit in Windows (MAX_PATH, 260 characters), but somehow you guys seem to get around this issue inside DrivePool. I've encountered this problem when copying highly nested file trees using File Explorer, which my pool contains (e.g. Media\Video\HighDef\TV\ShowName\Season\xxxxx.xxxxxx.xxxxxx.xxxxxx.xxxxx\yyyyy.yyyyyy.yyyyyy.yyyyyyy.yyyyyy.part99.rar). Am I going to get thousands of "filename/file path too long, please rename file or shorten file path" errors when I attempt this manual file copy on Win10 x64? Should I try using a disk cloning tool like Macrium Reflect Free Edition to clone the disk, instead of doing a manual file copy with File Explorer, to get around such issues?

     Q2: If I'm successful in copying my disk over, I know there's a manual command to force DrivePool to notice the replacement disk and re-index/re-measure/re-integrate the data back into the pool. Is there a URL to that FAQ? Sorry for the really long post.
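On the long-path question: classic Win32 file APIs (and File Explorer) are limited to MAX_PATH, 260 characters, but programs can opt into extended-length paths by prefixing an absolute path with `\\?\`, which raises the limit to roughly 32,767 characters; UNC paths use the `\\?\UNC\` form. Tools like robocopy handle this for you. A sketch of the prefixing rule itself (pure string manipulation, so it runs anywhere, though the result only means something to Windows APIs):

```python
def to_extended_path(path: str) -> str:
    """Return the Windows extended-length form of an absolute path.
    The '\\\\?\\' prefix disables MAX_PATH checking; a UNC path
    '\\\\server\\share' becomes '\\\\?\\UNC\\server\\share'."""
    if path.startswith("\\\\?\\"):
        return path                       # already extended-length
    if path.startswith("\\\\"):
        return "\\\\?\\UNC\\" + path[2:]  # \\server\share -> \\?\UNC\server\share
    return "\\\\?\\" + path               # C:\dir -> \\?\C:\dir
```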