
All Activity


  1. Today
  2. I've had StableBit for years and never had so much as a problem until now. I organized a DrivePool as T: with subdirectories A, B, C, etc. They contain all of my TV series and had been fine for a long time, until the system somehow reset all of the settings and rebalanced the entire pool across ten 8TB drives. I had all of the A series on one drive, B on the next, and so forth, according to the total size of each series. It was simple math to combine Z, Y, X and V on one drive and use a small rule for each of the others to keep the drives and the files organized. The duplicates were on separate drives as well. Then one day an external 4TB USB brick started dropping offline and coming back online, and my system basically crashed and burned. All of the pool drives reset and began moving files willy-nilly. When I tried to reapply all of the rules on the different directories, the system tried, but eventually ground to a halt. The files were in worse shape than before and I had to spend hours, and I mean hours, moving files. I just want to let everyone know that this can happen. And yes, it happened while I had not yet upgraded from the 12 version. I have upgraded now and will let you know the results. I even tried making rules three levels deep. No joy.
  3. Last week
  4. I'm getting an error "Duplicate chunk 879,612,224,324 of chunk 2,992,104 does not exist and cannot be used for recovery." This is causing the drive to force unmount. I can reattach the drive, but within minutes it does the same thing. I know I can go to Troubleshooting and ignore this message, but I'm afraid to do that without some way to fix the error and turn the ignore option back off. I'm fearful of my data getting corrupted. Any guidance on remediation? Thanks in advance.
  5. I think DrivePool will ignore folders that it hasn't created itself. Never actually tried this. I'm not using any backup solution myself, just JBOD bundled together in DrivePool as a single drive, and I've had my fair share of disks that have failed (FUBAR cables, for the most part) and disks that were completely full (like 0 bytes left), which meant having to move data around. What worked for me was moving data out of the PoolPart folder made by DrivePool first, and then between source and destination from there. But pay heed and never move data directly from one PoolPart folder to another, since DrivePool tends to pull a David Copperfield on you and make stuff disappear, like magic. Come to think of it, I have yet to try recovery software to see if that removed data can be restored; not that it has anything to do with this, but it should work, I think. I find it very simple to work with the PoolPart folder structure: anything within it can be moved up one level and it's taken out of the pool, then you can merge that data with another folder on another disk or whatever, and when you put it back, et voilà - it's back in the pool again.
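     For example, roughly like this (the paths and the GUID here are made up, and a move within the same disk is just a rename, so it's instant):

         move "D:\PoolPart.12345678-aaaa-bbbb-cccc-1234567890ab\Media\Show" "D:\Show"    (Show is now out of the pool)
         move "D:\Show" "D:\PoolPart.12345678-aaaa-bbbb-cccc-1234567890ab\Media\Show"    (and now it's back in)

     Then let DrivePool remeasure so the pool statistics catch up.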
  6. Thanks. I am just a bit concerned because some of the folder names also exist on the other disks, since DrivePool splits files across multiple drives, so I'm not sure if that messes things up. So you think I should not put the data into an identically named hidden PoolPart folder, but instead have all the same files on the drive and point it to the same mounted drive location?
  7. I'd suggest importing the new disk, then just copying the old data into that new PoolPart folder. Each folder name seems to be unique, and I'm guessing it's tied to the serial number of the disk or something; the code that follows "PoolPart." is hexadecimal, which would suggest it is, as I said, tied to a specific drive. Btw, don't expect the admins to answer your questions anytime soon. Edit: Didn't read it all. Just mount the new drive in the same location. It should work just the same.
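     If you'd rather do that last step from an elevated command prompt than from Disk Management, mountvol should do it, roughly like this (the volume GUID below is only a placeholder - run mountvol with no arguments first to list the real ones):

         mountvol                                    (lists every volume GUID and its current mount folders)
         mountvol C:\Mounts\D2 /D                    (removes the dead drive's old mount point, if it's still there)
         mountvol C:\Mounts\D2 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\    (mounts the new drive at that folder)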
  8. One of my drives was failing, so I got myself an identically sized 6TB drive and managed to move all the data across, so it now looks identical, but I had to do a file-level move rather than a partition move. Do I just keep the same volume folder name and point the mounted drive to the new position? I have it mounted at C:\Mounts\D2 and DrivePool gave it the hidden folder name PoolPart.553ace38-e6ff-463c-8a9a-54a2b0725b30. What do I do next? Do I go into Disk Management and mount the new drive to the D2 folder like the failed one was? Thanks for any help.
  9. One of my data HDDs died today, a 6TB one. I have DrivePool with SnapRAID and a single 8TB parity drive. Given my drives are mounted to folders on the C: drive and not to letters, how do I repair it? Also, once repaired, will DrivePool see it as though it is the same drive that was there before? Will it matter that I last synced 7 days ago, so the other working drives will have newer data? I presume I just lose whatever was added in the last 7 days to the drive that broke? I have pasted my SnapRAID config below in case it helps. It is the D2 drive that failed. Thanks everyone in advance!!

     # Example configuration for SnapRaid for Windows

     # Defines the file to use as parity storage
     # It must NOT be in a data disk
     # Format: "parity FILE [,FILE] ..."
     parity C:\Mounts\PARITY1\snapraid.parity

     # Defines the files to use as additional parity storage.
     # If specified, they enable the multiple failures protection
     # from two to six level of parity.
     # To enable, uncomment one parity file for each level of extra
     # protection required. Start from 2-parity, and follow in order.
     # It must NOT be in a data disk
     # Format: "X-parity FILE [,FILE] ..."

     # Defines the files to use as content list
     # You can use multiple specification to store more copies
     # You must have least one copy for each parity file plus one. Some more don't hurt
     # They can be in the disks used for data, parity or boot,
     # but each file must be in a different disk
     # Format: "content FILE"
     content C:\SnapRAID\snapraid.content
     content C:\Mounts\D1\snapraid.content
     content C:\Mounts\D2\snapraid.content
     content C:\Mounts\D3\snapraid.content
     content C:\Mounts\D4\snapraid.content
     content C:\Mounts\D5\snapraid.content
     content C:\Mounts\D6\snapraid.content
     content C:\Mounts\D7\snapraid.content
     content C:\Mounts\D8\snapraid.content

     # Defines the data disks to use
     # The name and mount point association is relevant for parity, do not change it
     # WARNING: Adding here your boot C:\ disk is NOT a good idea!
     # SnapRAID is better suited for files that rarely changes!
     # Format: "data DISK_NAME DISK_MOUNT_POINT"
     data d1 C:\Mounts\D1\PoolPart.9e511ba4-d2d2-4bff-8ae7-0c5f9fa82209
     data d2 C:\Mounts\D2\PoolPart.553ace38-e6ff-463c-8a9a-54a2b0725b30
     data d3 C:\Mounts\D3\PoolPart.af703e7a-33f5-46ce-9865-81ea6ed96a87
     data d4 C:\Mounts\D4\PoolPart.7424422f-0989-444a-9a5e-40fd4f00980a
     data d5 C:\Mounts\D5\PoolPart.a8fb213f-a2de-4b3b-81dc-56a569cb6301
     data d6 C:\Mounts\D6\PoolPart.233fe9e1-8151-4cda-88ee-0297350ac92a
     data d7 C:\Mounts\D7\PoolPart.9efe5915-5cd0-4bbe-9dd9-716502869531
     data d8 C:\Mounts\D8\PoolPart.4247ddb6-c80d-446f-ae20-b7f3cf1b8956

     # Excludes hidden files and directories (uncomment to enable).
     #nohidden

     # Defines files and directories to exclude
     # Remember that all the paths are relative at the mount points
     # Format: "exclude FILE"
     # Format: "exclude DIR\"
     # Format: "exclude \PATH\FILE"
     # Format: "exclude \PATH\DIR\"
     exclude *.covefs
     exclude *.unrecoverable
     exclude Thumbs.db
     exclude \$RECYCLE.BIN
     exclude \System Volume Information
     exclude \Program Files\
     exclude \Program Files(x86)\
     exclude \Program Files (x86)\
     exclude \Windows\
     exclude \Windows.old\

     # Defines the block size in kibi bytes (1024 bytes) (uncomment to enable).
     # WARNING: Changing this value is for experts only!
     # Default value is 256 -> 256 kibi bytes -> 262144 bytes
     # Format: "blocksize SIZE_IN_KiB"
     block_size 256

     # Defines the hash size in bytes (uncomment to enable).
     # WARNING: Changing this value is for experts only!
     # Default value is 16 -> 128 bits
     # Format: "hashsize SIZE_IN_BYTES"
     #hashsize 16

     # Automatically save the state when syncing after the specified amount
     # of GB processed (uncomment to enable).
     # This option is useful to avoid to restart from scratch long 'sync'
     # commands interrupted by a machine crash.
     # It also improves the recovering if a disk break during a 'sync'.
     # Default value is 0, meaning disabled.
     # Format: "autosave SIZE_IN_GB"
     autosave 800

     # Defines the pooling directory where the virtual view of the disk
     # array is created using the "pool" command (uncomment to enable).
     # The files are not really copied here, but just linked using
     # symbolic links.
     # This directory must be outside the array.
     # Format: "pool DIR"
     #pool C:\pool

     # Defines the Windows UNC path required to access disks from the pooling
     # directory when shared in the network.
     # If present (uncomment to enable), the symbolic links created in the
     # pool virtual view, instead of using local paths, are created using the
     # specified UNC path, adding the disk names and file path.
     # This allows to share the pool directory in the network.
     # See the manual page for more details.
     # Format: "share UNC_DIR"
     #share \\server

     # Defines a custom smartctl command to obtain the SMART attributes
     # for each disk. This may be required for RAID controllers and for
     # some USB disk that cannot be autodetected.
     # In the specified options, the "%s" string is replaced by the device name.
     # Refers at the smartmontools documentation about the possible options:
     # RAID -> https://www.smartmontools.org/wiki/Supported_RAID-Controllers
     # USB -> https://www.smartmontools.org/wiki/Supported_USB-Devices
     #smartctl d1 -d sat %s
     #smartctl d2 -d usbjmicron %s
     #smartctl parity -d areca,1/1 /dev/arcmsr0
     #smartctl 2-parity -d areca,2/1 /dev/arcmsr0
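     From what I can make of the SnapRAID manual, once a replacement disk is mounted back at C:\Mounts\D2 (and the data d2 line above points at the folder that actually exists on it), the repair itself would be something along these lines - I haven't run it yet, so treat it as a sketch rather than gospel:

         snapraid -d d2 -l fix.log fix    (rebuilds only the failed d2 disk from parity and the other data disks)
         snapraid -d d2 -a check          (verifies the recovered files against the stored hashes)
         snapraid sync                    (brings parity back in line with the current contents afterwards)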
  10. I have a half dozen different CloudDrive (CD) setups and they all have this problem. My guess is that some MS service decides that CD is being unnecessarily slow during shutdown and just kills the process regardless. Since fighting Windows integration issues is often a fool's errand of whack-a-mole, how about a feature that lets the user shut CD down manually (with a medium-verbosity log displayed while this is attempted)? That way we can do the Windows shutdown ourselves after CD has successfully taken everything offline. Or, if there is an issue preventing the dismount, we will at least have log errors to share with the dev.
  11. I have a very similar use case and CD is fairly resilient, but it's a two-part problem, since failing to shut down properly in the first place is a major issue, particularly if you have very large datasets. In my own case, I need to re-upload 500 gigs or so a couple of times a week due to 'improper' shutdowns. Anyway, I will start a separate thread on that. I would love to see this as a per-drive option to start drives unmounted. It would save all kinds of VPN madness.
  12. Also, you might wanna check this older thread - maybe there's some interesting information in it ...
  13. Thanks for the tips and experience. Can't you just disable file duplication and do a frequent backup? Also, regarding "media folder might be spread out over 2 or more drives": as I am just playing with this new software, maybe that is because of the automatic file balancing feature that copies files to the different hard drives. Can't you just disable this feature so the audio files in the media folder won't be spread that much between different hard drives? Cheers, nostaller
  14. Also, if you use "Duplicate Files Later", consider removing one disk at a time.
  15. Earlier
  16. When I remove a drive from DrivePool, for example a 4TB HDD, it can take a lot longer than 18 hours. Add a file re-duplication pass after the removal and you are looking at many, many hours for the task to complete. Having said that, I know DrivePool is not the fastest solution for file transfers, but it seems to get the job done in its own time. In my case the process takes place in the background and I just let it run, because I'm usually not in any great hurry. I almost always have DrivePool remove the drive and duplicate as needed during the removal process. I suspect that DrivePool might offload all the files from your removed drive but still leave some duplicate files on it when it is done; then again, I have very few folders designated for duplication. To check that your files set for duplication were redistributed properly, you can go to Settings > Troubleshooting > Recheck duplication and that should report results for you. Again, that process might take some time. I have 80TB+ in my DrivePool, almost all of the pool drives are slow USB archive drives, and I have learned to be patient. Fortunately, DrivePool seems to handle things better without my interference, and I have seen some processes take a few days to complete.
  17. I see, your SSDs are used as archive drives, just very fast archive drives. I have not used File Placement in my DrivePool, so I don't have any personal experience to offer. If adding your new 8TB drive to the system cleans up the files being dumped on your SSDs, then I hope you post your results so we can all learn.
  18. Nice to see that the admins are really answering our questions... *Not*. Just noticed this myself, and judging from this thread I don't see any alternative other than to download and install manually =0/ I currently have v2.2.5.1237, which is a bit newer than the OP's, which bakes the noodle about how the upgrade function is supposed to work. Obviously something has either been overlooked, a server isn't responding correctly, or there is a series of flaws in the subsequent updated versions. Will test a manual installation and see if the problem persists. The only question is how long it will take for the next version to arrive; this site isn't exactly teeming with enthusiastic admins, again judging from the forum activity. Edit: For any admin reading this: future versions should show update progress more prominently, not just a notification "if" an update is available. That's just stupid. Pretty much any other program shows progress, and any problems with said update check. If an error occurs in communication, how the heck are we supposed to know? Are we forced to trawl logs (not checked this myself yet) to find such problems, or are we just supposed to sit here like a deer in the headlights, waiting for something that might never happen?
  19. Actually, I'm not using the SSDs as "cache" in my configuration. They are just (faster) storage devices in my pool, and therefore I want them to hold only data that must be delivered as quickly as technically possible. In my case, the Plex photo library just skyrocketed in responsiveness and browsing speed by delivering the content from where it is stored, the SSD drives, whether inside or outside my home network. Couch-time family photo browsing is a delight now. SSD caching works great and does make a difference when direct file-level access/delivery on the network is required, but that's not my usage model for now.
  20. Hello. Long story short:
     Ran out of space, bought new disks.
     Emptied 3 small 120GB SSDs to be replaced with 4x 16TB drives.
     I selected "Duplicate Files Later" for all 3 drives being removed. Some files were duplicated only between these 3 drives, not anywhere else.
     First disk removes fine.
     2nd disk gets removed, but with files remaining on it.
     3rd drive gets emptied until it reaches files for which no more duplicates exist; DrivePool is stuck on emptying.
     Tried several things, but had to resort to restarting Windows (soft restart).
     After boot, DrivePool takes a while to load and says the 3rd disk is missing. I re-add disks 2 and 3 because they show as not empty; didn't want to take the risk.
     I check the removed drives: the 1st is empty, the 2nd and 3rd have files left on them, and none of these files have a duplicate in DrivePool.
     I have to run disk repair to recover the files, as they are no longer indexed. Recovery does not put them in the PoolPart folder, so they are not automatically duplicated back.
     A user who doesn't understand what's going on might lose all of this 'in limbo' data, because even when manually checking the drives you don't see the left-over folders and files without doing some kind of recovery first. Suggestion: even when "Duplicate Files Later" is selected, check whether a duplicate still exists before emptying the drive.
  21. I am not support, just another user. However, I like your Method2 approach for backup. If SSD1 fails, you could remove it from DrivePool and replace it with a new SSD. Then you run a folder compare of your backup drive and DrivePool in any number of file manager programs (Free Commander is what I use) and have it write any missing files/folders. To me that is a simple approach, and it lets DrivePool write the files to your SSDs per your DrivePool settings.
     OK, I am a big fan of DrivePool; however, there are some shortcomings I have become aware of in using the program for the past ~2+ years. First of all, file duplication is not a smart duplication: if your source files are damaged, then your duplicate copy will also contain the damaged files. I have been looking for some kind of automatic checksum program that verifies all files are present and in good working order. The only thing I have found so far is Multipar (freeware) and its .par2 files for verification and recovery, but that is a manual process. For my media files, like an album with multiple track files or an audiobook with multiple chapter files, I typically set my redundancy threshold at 10%. That lets me verify that all contents of the folder are present, and if some files are lost or damaged, it can recover some of them. Of course, a higher redundancy threshold would recover more missing or damaged files.
     Using DrivePool, I have discovered that a media folder might be spread out over 2 or more drives. Without some means of verifying all files are present in the folder, you could end up a song or two short of the full album. Ask me how I know! I had a HDD fail in DrivePool and discovered lots of albums missing a few tracks. Would DrivePool file duplication save the day? Maybe, if none of the files were damaged and duplicated as damaged. But with Multipar, I can easily verify all files are still in the folder and/or recover some lost or damaged files. At 10% redundancy, I can verify my files and recover/repair most of my lost or damaged files if needed. DrivePool file duplication, even at 100% redundancy, cannot verify that all files are in your folder or recover lost or damaged files in the same manner.
     If you have read this far, let me say that DrivePool works great for me, but I have separate HDD backups of all my important files sitting in my closet. Additionally, I use Multipar with .par2 files to verify my data. If any of your DrivePool data gets damaged, you might never know and happily back up damaged files or incomplete folders. Multipar works great on media-sized folders of less than a few GBs, but it would take forever to create .par2 files for my entire DrivePool, so unfortunately I have to create .par2 files on a per-folder basis. If you know of anything that creates checksums with verification/recovery options like Multipar, but on a large data set like a DrivePool virtual drive, please let me know.
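     For anyone who prefers a command line to the Multipar GUI, the same per-folder idea can be sketched with the open-source par2cmdline tool (the file names below are just examples, and -r10 matches the 10% redundancy I mentioned):

         par2 create -r10 album.par2 *.flac    (writes album.par2 plus recovery volumes alongside the tracks)
         par2 verify album.par2                (confirms every track is still present and unmodified)
         par2 repair album.par2                (rebuilds missing or damaged tracks, up to the 10% redundancy)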
  22. EDIT: Deleted by user.
  23. Dear Support, I have some trouble understanding what the best method would be, in case of an SSD failing, to restore the files and rebuild the pool from an external hard disk backup. I would like to briefly explain two backup methods that I understand might work, not knowing which method is more reliable. In both cases I would use a program like Synergy, which copies all files and changes from a source folder to a destination folder (my external backup drive). I only need an occasional backup (once a week). In this setup I would not use any folder duplication or pool file duplication feature from StableBit.
     Method1 for Backup:
        Source                                   Destination: Backup Drive 14TB
        D: SSD1                               -> Folder SSD1
        E: SSD2                               -> Folder SSD2
        F: SSD3                               -> Folder SSD3
        G: SSD4                               -> Folder SSD4
        H: (DrivePool combining SSD1 to SSD4)
     As I understand it, all the pool files are stored in NTFS format in the hidden folder inside SSD1 to SSD4. My first method would be to back up these "hidden" files of each SSD (SSD1 to SSD4) into a separate folder on the backup drive. In case SSD1 fails, I would recover the files from "Folder SSD1" of the backup drive onto a new SSD, and that new SSD would be automatically recognised as belonging to the pool when enabled. I would just need to give the new SSD the same label as the old SSD, or use the "identify" feature of StableBit when seeing the "Drive is missing" info. Would this work?
     Method2 for Backup:
        Source                                   Destination: Backup Drive 14TB
        D: SSD1
        E: SSD2
        F: SSD3
        G: SSD4
        H: DrivePool combining SSD1 to SSD4   -> Entire Pool Backup
     This second method would be easier to set up, because I would just back up the entire file structure from the virtual DrivePool drive (H:) to the backup drive. In case SSD1 fails, I would use the "remove" feature of StableBit and add a completely new, empty SSD of the same or larger size to the pool. Then I would restore the files from the backup drive to the DrivePool drive H:, and StableBit would fill up this new SSD1 in whatever manner it wants to store data. In this case I would not have to worry about what file content each SSD has, because I already have a backup of the entire pool drive and StableBit would manage everything else in the background…
     Would both methods work for having a safe backup to restore the files from? If both work, I would tend to use Method2, as it is easier to back up one virtual DrivePool drive instead of 4 individual SSDs. Are there any disadvantages to Method2? Is there an easier method, using the StableBit pool file duplication feature for example? (Although I really like how Synergy works for backing up files.) As I understand it, I don't need the StableBit folder duplication feature because I use a separate backup on an external hard drive; is that correct? Thanks a lot, Cheers, nostaller
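     For what it's worth, the weekly Method2 run is basically a one-way mirror of the pool drive, so even plain robocopy could stand in for Synergy if needed (X: below is just a placeholder for the 14TB backup drive):

         robocopy H:\ X:\PoolBackup /MIR /R:2 /W:5 /XD "System Volume Information" "$RECYCLE.BIN"    (mirrors the pool to the backup folder, removing backup files that no longer exist in the pool)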
  24. I have had to deal with a similar problem before. If you know which PoolPart is old and not in use, move all files from the old PoolPart into the DrivePool drive using Explorer, then delete the old PoolPart. Do this on both HDDs that have duplicate PoolParts. When you remeasure, you should have all your files in DrivePool and your "other" data should be back to nil (or close to it). It might be faster to just move all data from the old PoolPart to the new PoolPart on the same HDD and then remeasure, but I felt more confident letting DrivePool "fix" itself by moving the files from the old PoolParts into the pool. That way, DrivePool will put the files on the HDDs that make the most sense for your settings (balancing, etc.).
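     If Explorer struggles with the sheer number of files, the same move can be done from a command prompt, roughly like this (the paths are only examples - P: standing in for the pool drive letter and the GUID for the old, unused PoolPart):

         robocopy "D:\PoolPart.<old-guid>" P:\ /E /MOVE /R:1 /W:1    (moves everything, including empty folders, out of the stale PoolPart into the pool)

     Remeasure the pool afterwards so the "other" space gets recalculated.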
  25. I have found that DrivePool balancing and duplication often have problems when I hit that 90% threshold. I have seen my SSD cache get kicked offline as well. Fortunately, adding more HDD storage and/or removing unused files corrects the problems on my system. I do have to manually re-check my SSD to tell DrivePool that it should be used as an SSD cache and not an archive disk, but then it seems to work again without any problems. If you are happy with your current custom SSD cache settings, I would write them down, because in my case I had to re-enter them. After many months of not looking at DrivePool (it just works), I had forgotten my custom SSD cache settings and had to play around with them again until I got them back to where they work best for my system. If you use the default settings, this might not be an issue.
  26. If you wanted to go down this route, you could use StableBit CloudDrive's File Share provider to attach the pool from Server 1 (shared at the drive root), then have DrivePool and CloudDrive on Server 2 and let that machine serve the files as one big pool. A bit clunky, but it could work.
  27. Total Commander lets you search on the pool itself or on the individual drives (letters or mount points). You can also save the searches as templates and reuse them quickly later. It also supports attributes, text, conditions, time-based attributes, regex ... It's a shareware application, and if you're bothered by the startup message, you can register it.
  28. [Update] I think I have identified the reason behind the file placement rule violation (overflowing). It seems that every other HDD's total used space is at 90% or more, and the default pool overflow rule permits placement outside the rules in that case. I will add another 8TB drive to the pool, rebalance, and check afterwards whether, once the disks' total used space drops under 90%, the SSD data also gets properly rebalanced.