
All Activity


  1. Today
  2. Hi, I'd suggest opening a support ticket with StableBit to investigate. Please let us know how it goes.
  3. A week ago Scanner reported a drive was about to fail with a few unreadable sectors. Did a file scan to see which files were affected before I removed it for recovery. However, halfway through the scan, Scanner stopped responding. I assume the drive went full cactus. I removed the drive and restarted the server, but Scanner refuses to start and open the UI. Tried the repair option, doesn't work. The only way I could get it working was uninstall and reinstall. Today, I had to shut the server down, and when it booted up again a few hours later Scanner will not start. Tried restarting, repair. Sometimes the UI screen opens, but hangs on either starting service, connecting or something else. Eats resources like crazy while in this state - gigs of RAM, 12-15% CPU usage constantly. DrivePool is fine. Any help would be appreciated. Windows 10, LSI 9217 + HP SAS Expander.
  4. Yesterday
  5. A way to do it would be Manage Pool -> Balancing -> Settings -> tick "File placement rules respect real-time file placement limits set by the balancing plug-ins", then Manage Pool -> Balancing -> File Placement -> add a Rule for "*" with all drives ticked except the one you want to leave empty.
  6. If your goal is to maximise the available free space for incoming files, you could use the SSD Optimizer balancing plugin to set the largest drive(s) as "SSD" and the rest as "Archive", with high Fill values, then set Automatic balancing to "immediately" and the Triggers to "100%" / a very small file size, so that files come into the pool via the "SSD" drives and then automatically migrate to the smaller "Archive" drives (you could also use the Ordered placement section of the SSD Optimizer so that migrated files go to the smallest "Archive" drives first).

    "I could split these larger drives into multiple partitions to reduce the backup size making the pool more viable but then the structure between the drives becomes more complex requiring links or mount points and more backups to schedule. Not undoable."

    Depending on the nature of the backup source, perhaps you could use another backup method that doesn't need to create singular massive backup files? E.g. I use Veeam for single-image backups of my family's laptop PCs but use FreeFileSync for file-by-file backups of the family pool (lots of archives, docs, photos, mail, videos, etc).
  7. Hi, is there a way to set up a pool of, say, 5 drives, have 4 of them work as normal but leave the 5th empty in case Scanner detects a problem and evacuates a drive? Essentially leave it empty for an emergency. Thanks
  8. MERman

    Windows 2025 Support

    What is the approximate timeline for release? Thanking you
  9. My attempt to fix this is using balancing rules to write these bigger files to one particular disk in the pool. My biggest backup file is <3TB. I've set the balancing rules to fill up the smallest drive first and then progress to the next largest until only the largest has remaining space. I've directed any file going to a specific folder to go only to the largest disk (I could just take out that larger drive from the pool but then I'm back where I started). It's not foolproof but seems to make failures less likely in my head. Do I also need to create an inverse rule to prevent other files from being written to the largest drive?

    Another option would be to set up drive pools only for the largest files so there would be no small files competing for big available spaces. In total I have 5 backups exceeding 1TB each.

    The problem is I'm not gaining much value from the pool because it doesn't split files across the drives in the pool. The pool drives need to be sized to some multiple of fairly specific file sizes. I could split these larger drives into multiple partitions to reduce the backup size making the pool more viable but then the structure between the drives becomes more complex requiring links or mount points and more backups to schedule. Not undoable.
  10. Last week
  11. Shane

    Complete Loss

    Yes, if all files are recoverable (along with which folders they belong to) the pool can be rebuilt from scratch (otherwise it'd be a bit of a jigsaw puzzle). Files are instanced, not striped, across the disks in the pool (thus the loss of any number of disks has no effect on any files stored on any remaining disks).

    Pool content is entirely stored as normal folders and files in each poolpart folder, while the pool guid (the bit that basically says "this particular disk belongs to pool XYZ") and duplication levels are stored in alternate data streams attached to each instance of the poolpart folders and the content folders they contain. Only the user preferences (pool management, balancing and placement configurations, none of which are necessary to a pool's survival) are stored in the application folders of the host machine.

    Basically you can take a pool's drives straight out of one machine running DrivePool and plug them into any other machine running DrivePool and it'll go "oh hey, a new pool with content already in it"; you'd just need to redo your preferences. And if every copy of DrivePool and/or Windows mysteriously disappeared, you could still just plug those drives into any other OS that can read an NTFS volume and all your stuff would be there (though in the latter case, if you'd done anything fancy with reparse points, e.g. symlinks or junctions, I think those might need to be redone).
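    To see this for yourself, here's a minimal Python sketch (assumes Windows and the "PoolPart.*" folder naming described above; purely illustrative, it doesn't use DrivePool at all) that finds each drive's hidden poolpart folder and counts its contents as ordinary NTFS files:

    ```python
    # Minimal sketch: locate hidden PoolPart folders on each drive and show that
    # pool content is just ordinary folders/files inside them. Illustrative only.
    import os
    import string

    for letter in string.ascii_uppercase:
        root = f"{letter}:\\"
        if not os.path.exists(root):
            continue
        try:
            entries = list(os.scandir(root))
        except OSError:
            continue  # skip drives that can't be read (e.g. empty card readers)
        for entry in entries:
            # PoolPart folders are hidden but still appear in a normal directory listing.
            if entry.is_dir() and entry.name.lower().startswith("poolpart"):
                total = sum(len(files) for _, _, files in os.walk(entry.path))
                print(f"{entry.path}: {total} files readable as plain NTFS content")
    ```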
  12. verazula

    Complete Loss

    Hi Shane! Yes, some further digging has revealed some likely damage to file / partition tables on the disks themselves. DMDE has confirmed the files/data is still intact. I'll return to this post as soon as I repair this in case further action is necessary. Thanks! As an aside: in theory, if the files are all recoverable, the pool should be able to be rebuilt from scratch if necessary, is that accurate? (the files are just "striped" across the disks in the pool)
  13. Shane

    Complete Loss

    Hi, if the damage is to the file table and/or pool metadata on that drive then it could cause that kind of problem (the very wrong size suggests at least the former has happened). What I'd do:

    1. Shut down the system and disconnect/depower the damaged drive.
    2. Restart the system and check that the pool and Explorer are functional again (the pool should be in read-only mode due to the "missing" damaged drive).
    3. If it's functional and your pool was fully duplicated:
       3.1. Remove the "missing" damaged drive from the pool. DrivePool should resume normal operations and proceed to re-duplicate.
       3.2. Attempt a long (not quick) format of the damaged drive on a separate system, then use Scanner to verify its status.
            - If it comes back good then you can decide whether to add the "new" no-longer-damaged drive to the pool. You're done.
            - If it comes back bad then you can decide whether to add a new replacement drive to the pool. You're done.
    4. If it's functional but your pool was not fully duplicated:
       4.1. Do a repair (e.g. with chkdsk) of the damaged drive on a separate machine. Make note of any files it is unable to recover in case you can restore those from backup.
       4.2. If the damaged drive wasn't repairable, remove its "missing" entry from the pool, replace the damaged drive with a new one and restore the lost content from your last backup of the pool (if any). You're done.
       4.3. If the damaged drive was "repaired", it's still possible that its pool metadata remains damaged (since chkdsk doesn't fix that) and it may still have bad sectors:
            4.3.1. If you'd prefer to rely on your last backup, see step 4.2 above.
            4.3.2. If you'd prefer to attempt recovery from the external drive:
                   - Before reconnecting it to your system, rename its hidden poolpart folder (e.g. rename the "Poolpart" text before the GUID string to "Suspect").
                   - Remove the "missing" repaired drive from the pool. The pool should resume normal operation.
                   - Connect the repaired drive to your system and manually copy the suspect poolpart's content into the corresponding folder tree in your pool (do not copy it directly into another poolpart) - or alternatively you may wish to copy it into a temporary folder / compare it with your pool to decide later.
                   - See step 3.2 - not to be confused with step 4.3.2 - regarding formatting and verifying the damaged drive's suitability for continued use.

    Hope the above helps!
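    As an aside, for the manual copy in step 4.3.2 a rough Python sketch of "merge the suspect poolpart into the pool without overwriting anything already there" might look like the following - the drive letters and the renamed folder name are placeholders for your own setup, and robocopy or a file-sync tool would do the same job:

    ```python
    # Rough sketch of the step 4.3.2 copy: merge the renamed suspect poolpart into the
    # pool's folder tree, skipping any file that already exists in the pool so that
    # re-duplicated good copies are never overwritten. Paths below are placeholders.
    import os
    import shutil

    SRC = r"E:\Suspect.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # renamed poolpart on the repaired drive
    DST = "P:\\"                                              # the pool's drive letter, not another poolpart

    for dirpath, dirnames, filenames in os.walk(SRC):
        rel = os.path.relpath(dirpath, SRC)
        target_dir = DST if rel == "." else os.path.join(DST, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src_file = os.path.join(dirpath, name)
            dst_file = os.path.join(target_dir, name)
            if os.path.exists(dst_file):
                print(f"skip (already in pool): {dst_file}")
                continue
            shutil.copy2(src_file, dst_file)
            print(f"copied: {dst_file}")
    ```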
  14. "1. With duplication on...I should still have one good/in tact version of the file on a different drive somewhere? Is that correct?" A1. Yes. The normal procedure in the event of a bad drive, if the entire pool is duplicated, would be to physically remove/replace the bad drive then tell DrivePool to remove the missing drive so it can reduplicate using the good drives. However, see 2. "2. I have tried recovering the file with Scanner and it didn't work. It finished with a "partial file recovery" message...but I assume that essentially just means it didn't work?" A2. I believe that means it was able to recover only the undamaged parts of the file. I'm not actually sure what that means for the good/intact version on the other drive (replaced? unchanged? I'd guess the latter if both instances still have the original last-modified date). The best option is as you mentioned to attempt A1 anyway and test the result to see if it still works (in this case I guess by watching the video). "3. ...could I just manually go into the hidden pool folder on the hard drive that has the bad sectors and manually delete the affected file? Then with that file gone...it's obviously no longer duplicated, so I'd just let drivepool redo the duplication and that would essentially recopy the good version of that file into the pool in a different location to get the duplication back? I assume those bad/unreadable sectors are already marked as such and the HDD will avoid trying to write there for any future writes??" A3. You could indeed just manually delete the bad instance inside the affected poolpart and let DrivePool automatically reduplicate from the good instance elsewhere, and that should be what happens, but as you've already realised it is at your own risk (whether more sectors fail on that drive - or on any other in the pool). P.S. Incidentally for very large static files in addition to duplication I use MultiPar to create parity files of them so that in the black swan event of any bitrot or damaged sectors somehow managing to slip through duplication (e.g. by being moved across to a whole new pool) I still have a good chance of repairing the damage via the par2 file. YMMV!
  15. verazula

    Complete Loss

    Restarted my system this morning with both DrivePool and Scanner but forgot to turn one of my external HDDs (part of the pool) back on. The system ran for about an hour before I realized. Once the external HD was turned on, DrivePool showed an error (missing drives) and then appeared to repopulate the pool. Unfortunately, trying to navigate to files in File Explorer just crashes and/or hangs for minutes. Scanner then alerted that there was an error/damage to one of my HDs, that's now reporting as 440 sectors damaged and approx 115 PBs in size (obviously incorrect). DrivePool is currently "Checking..." but is not showing any progress and all files are currently unreachable (haven't taken the hour or so to try to open a specific file). Is this something I just need to be patient with due to the size of the pool or is further involvement necessary?
  16. Hi...long time user of DrivePool and Scanner and love it! I've been using it for many years and it's always just worked. Recently, I had one of my hard drives show an error with 16 bad sectors. The sectors are located within a single large file (35 gig Blu-ray rip). I have 2x duplication on all my files. With that, I have a few questions.

    1. With duplication on...I should still have one good/intact version of the file on a different drive somewhere? Is that correct?

    2. I have tried recovering the file with Scanner and it didn't work. It finished with a "partial file recovery" message...but I assume that essentially just means it didn't work?

    3. I know best practice would be to just remove the HDD altogether from the pool...but I was half tempted to keep using it a little longer (it's a 4TB drive and relatively newer to my pool). If it got any additional bad sectors, then I'd for sure pull it. With that in mind, would it be possible to manually try to fix this corrupted file? By this, I mean...could I just manually go into the hidden pool folder on the hard drive that has the bad sectors and manually delete the affected file? Then with that file gone...it's obviously no longer duplicated, so I'd just let DrivePool redo the duplication and that would essentially recopy the good version of that file into the pool in a different location to get the duplication back? I assume those bad/unreadable sectors are already marked as such and the HDD will avoid trying to write there for any future writes??

    Sorry if any of the above are dumb questions...just trying to learn more about how the file handling and duplication works. And also to see what my options are at this point. And again, I know the BEST option is just to remove that drive now. But with only 16 bad sectors...would it be worth trying to keep using it a little longer to see if any additional sectors start failing??
  17. I have exactly the same issue. BSOD around every 30 minutes on 2.3.11.1663.

    Statistics: 5 drives (3x 16TB, 2x 12TB). Files: 3020618 / Folders: 5598, Size: 1175GB. With duplication: Files 5919600 / Folders 23015, Size: 2342 GB. Average file size: 412KB (1KB - 50GB).

    Bug Check Code: 0x0000013a
    Parameter 1: 00000000`00000011
    Parameter 2: ffffc508`ba400140
    Parameter 3: ffffc508`c9481740
    Parameter 4: 00000000`00000000
    Caused By Driver: covefs.sys
    Caused By Address: covefs.sys+47000

    I uploaded a full dump file to the provided link [## 289 ##] Bluescreen Covefs.sys. Thanks for your help. The beta version 2.3.9.1612 does not crash at all!
  18. Hi, in the settings.json file - see https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings - you can try setting the override value for FileBalance_BackgroundIO to False so that DrivePool doesn't use the lower background priority for balancing (similarly FileDuplication_BackgroundIO for any scheduled duplication). You can safely stop the current balancing by pressing the 'x' button in the GUI next to the pool organization bar.
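    (From memory of that wiki page, each setting in settings.json has a "Default" and an "Override" value and you change the "Override" - treat the linked Advanced Settings page as authoritative for the exact file location and key layout. A sketch of what the two overrides might look like:)

    ```json
    {
      "FileBalance_BackgroundIO": {
        "Default": true,
        "Override": false
      },
      "FileDuplication_BackgroundIO": {
        "Default": true,
        "Override": false
      }
    }
    ```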
  19. I added 3 new 20TB WD Red Pros to my pool, which brings it to 160TB in total, and after 3 days it has only balanced 2.94TB. At this rate it will take forever. I checked StableBit Scanner and the transfer rate is 18mbps. Why is this so slow? I've already tried clicking the arrows and noticed there was absolutely no difference in speed. Is there a setting that can be enabled or something? And another question: is it possible to stop the current balance if I had to? I'm afraid of it losing data or stopping mid-file. I'm open to any suggestions, and I wonder if this is somehow a bottleneck in Windows with DrivePool causing this. Manual file transfer is around 250mbps so I know it's not the controller.
  20. Earlier
  21. Each file placement rule is set to never allow files to be placed on other disks.

    Since posting this I have actually been able to correct the issue. I added another file placement rule to the list for .svg images in my media library, as I noticed Emby fetched one for a newly added movie. When DrivePool started balancing I noticed it placing files onto the SSDs excluded from the media file placement rules. I stopped the balancing, then started again, and the issue started correcting itself. All media files on the SSDs were moved to the hard drives, and all images/metadata were moved to the correct SSDs.

    I strongly suspect that somehow the Scanner balancer combined with old file placement rules from using those SSDs with the optimizer plugin was at fault. One of my drives had developed a couple of unreadable sectors a few weeks ago and started evacuating files onto the SSDs. At the time the SSDs were included in the file placement rules from previously being used with the SSD Optimizer plugin. I had stopped the balancing, removed the culprit drive from the pool choosing to duplicate later, and did a full reformat of the drive to overwrite the bad sectors. When the format completed I did a full surface scan and the drive showed as healthy, so I was comfortable re-adding it back into the pool. DrivePool reduplicated the data back onto the drive and a balancing pass was also done to correct the balancing issues from the drive being partially evacuated. The file placement rules were also changed at this time to exclude those SSDs from being used by my media folders and metadata/images. Everything was working as expected until the other day.

    For whatever reason only a small portion of newly added files were actually written to the excluded SSDs. It was literally only stuff added within the last couple of days, despite a significant amount of data being added since re-adding that drive back into the pool and adjusting file placement rules to exclude the SSDs that were previously used by the optimizer. It's as if some kind of flag set by the Scanner balancer or the old file placement rules wasn't fully cleared.

    Regardless of the cause, I'm glad that everything seems to be chugging along like normal now. If I start noticing any more weird behavior I intend to enable logging to help support identify what's going on under the hood.
  22. Just to confirm, in each of your relevant File Placement rules have you selected the option "Never allow files to be placed on any other disks"? The default option is "Allow files to be placed on other disks if all of the selected disks are this full" with a slider defaulting to 90%.
  23. Hi, DrivePool reports the total available space of the pool to the OS for reasons including:

    1) the pool is an NTFS volume and the operating system expects to see total space = (total) used space + (total) free space.

    2) Explorer and various utilities will refuse to perform a dialog copy operation if the total size of the files to be copied is larger than the free space reported by the target - which means otherwise there would be situations where you'd have plenty of room to receive the files, but reporting the space of only one of the disks would result in Explorer/utilities thinking it can't be done.

    For example, let's say we had a pool of two 2 TB disks with 1 TB free on each disk:

    - if DrivePool reported only the 1 TB free, then Explorer would refuse to copy a thousand 1.5 GB files even though there's plenty of room.
    - if DrivePool reported the 2 TB free, then Explorer would proceed to copy them.
    - either way Explorer would still fail to copy a 1.5 TB file to the pool.

    So DrivePool reports the total free space to the OS and uses the GUI to tell the user the free space per disk.
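    To make the example concrete, a small Python sketch (illustrative only - not DrivePool's code) comparing the two reporting policies against what would actually fit:

    ```python
    # Illustrative only - not DrivePool code. Two 2 TB disks with 1 TB free each:
    # compare reporting total free space vs largest-single-disk free space against
    # Explorer's up-front size check and what the pool can actually hold.
    TB, GB = 1000**4, 1000**3

    disk_free = [1 * TB, 1 * TB]

    reported_total = sum(disk_free)    # what DrivePool reports: 2 TB
    reported_single = max(disk_free)   # the alternative: 1 TB

    def explorer_accepts(copy_size, reported_free):
        """Explorer refuses a copy if it's bigger than the reported free space."""
        return copy_size <= reported_free

    def pool_can_actually_fit(file_sizes, free):
        """Files aren't striped, so each individual file must fit on some one disk."""
        free = list(free)
        for size in sorted(file_sizes, reverse=True):
            target = max(range(len(free)), key=lambda i: free[i])
            if free[target] < size:
                return False
            free[target] -= size
        return True

    thousand_files = [1.5 * GB] * 1000   # 1.5 TB in total, but each file is small
    print(explorer_accepts(sum(thousand_files), reported_single))  # False - copy refused up front
    print(explorer_accepts(sum(thousand_files), reported_total))   # True  - copy proceeds
    print(pool_can_actually_fit(thousand_files, disk_free))        # True  - and it really does fit

    one_big_file = [1.5 * TB]
    print(explorer_accepts(sum(one_big_file), reported_total))     # True  - Explorer would try...
    print(pool_can_actually_fit(one_big_file, disk_free))          # False - ...but no single disk can hold it
    ```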
  24. The available space reported by StableBit is listed as 4.3TB on my server, but this is misleading since no file will be split across the drives that make up that space. It really needs to show the maximum available free space on a single drive in the pool, since that is the biggest file that can be stored in the pool.
  25. So I've been using file placement rules for years to control which folders or file types are stored on which drives, and up until recently everything has been working perfectly. I've noticed that as my pool is getting closer to being full the file placement rules are not being respected. My main use case is ensuring media such as movies and TV shows are stored on hard drives while images and metadata in the same folder are stored on SSDs, as well as keeping other folders on drives that do not hold my media collection. The only balancing plugin enabled is for StableBit Scanner and all drives are healthy. In file placement settings the only option checked is "Balancing Plug-ins respect file placement rules". The order of my file placement rules is set up properly so that the file extension rules for images and nfo files are above the rules that place all other media file types on the selected drives. All other file placement rules for other folders are above the rules for media files and folders, with the media drives excluded. Duplication is enabled for the entire pool.

    There are 9 hard drives for the media library, each with around 400-450GB free space; 2 500GB SSDs for images and metadata with around 50GB used; 4 hard drives that are excluded from media and file rules, with one pair of 2TB drives having 322GB used and a pair of 3TB drives having only 60GB free space; and 2 1TB SSDs for other miscellaneous data that are excluded from media folder and file rules, with about 250GB used before this issue started.

    The SSDs that are excluded from all of the media library rules are where I'm noticing both media files as well as images and metadata being placed. These drives have more available space than the media drives in the pool, however the media drives have more than enough free space for the files being added. It seems that these SSDs are receiving 1 copy of the media files. No automatic balancing is triggered to correct the file placement. I've double checked all file placement rules and settings; everything is as it always has been. I do plan on upgrading a couple of my smaller media drives to larger ones in the near future, but I don't anticipate completely running out of space before then. I'm really stumped as to why this setup is suddenly having issues when it's worked so well for so long. My assumption was that file placement rules will always be respected, assuming the correct settings and assuming there's enough free space for the data being added.
  26. Thanks! Was just wondering if it was expected.
  27. My guess is because he only had two disks in his pool and those disks were only 233 GB each in size, while you've got nine disks and those are 2.73 TB each in size. Larger disks can mean more Other even when empty. I can see you're using a bunch of 2.73 TB drives so I've grabbed one of mine, formatted it and made a pool of 1 disk; Other is 240 MB. Nine times that is about what you're seeing on yours.
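    (Arithmetic check: 9 × 240 MB ≈ 2,160 MB, i.e. a bit over 2 GB of "Other" across nine empty 2.73 TB disks.)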