Shane

Moderators
  • Posts: 756
  • Days Won: 70

Everything posted by Shane

  1. That sounds like the bad sectors might've unfortunately included the file system's indexing of the drive? DrivePool normally deletes the files and folders on a drive being removed as it migrates each of them to the rest of the pool (so that the drive is all tidy for reuse); it might've run into a problem it couldn't handle there (DrivePool: "okay, that bit's copied, delete that bit." Drive's file system: "potato." DrivePool: "what?").

    FWIW, with a "known bad" drive I'd recommend ticking at least the first option, "Force damaged drive removal (unreadable files will be left on the disk)", and maybe the second as well*, "Duplicate files later (faster drive removal)", but if the structure itself was affected that might not have helped either.

    *If I know a drive has bad sectors I'd rather have it out of the pool ASAP, but mileage may vary, and it doesn't do anything if duplication isn't enabled.
  2. As a customer myself, I'm glad to hear that it's working now. What turned out to be the issue?
  3. If a poolpart is empty on a removed drive, that would suggest DrivePool successfully moved all the pooled files off it but failed to remove the poolpart folder itself. Did you do a normal remove, or tick any of the options? It's also odd that there are two poolpart folders on it. But as the drive is still showing 5TB used... were you using it for anything besides DrivePool? Maybe check with a tool like TreeSize (run as Administrator) or similar?
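    If you'd rather not install anything, a rough Python sketch along these lines (my own throwaway, not a StableBit tool; run it from an elevated prompt and replace the placeholder drive letter) will total the size of each top-level folder on the drive, including hidden ones like the PoolPart folders:

        import os

        ROOT = "X:\\"  # placeholder: the removed drive's letter

        def folder_size(path):
            """Sum the size of every file under path, skipping anything unreadable."""
            total = 0
            for dirpath, dirnames, filenames in os.walk(path, onerror=lambda e: None):
                for name in filenames:
                    try:
                        total += os.path.getsize(os.path.join(dirpath, name))
                    except OSError:
                        pass  # locked/system files we can't stat
            return total

        for entry in os.scandir(ROOT):
            if entry.is_dir(follow_symlinks=False):
                print(f"{entry.name}: {folder_size(entry.path) / 1024**3:.1f} GiB")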
  4. buyrak, I'm not seeing anything incorrect in Christopher's statement that you uploaded. If you're using a provider, data is stored on the provider; that's the point of using a provider. And if the account becomes read-only, it should thus still be readable. Etc. Did you attempt to manually enable read-only mounting of the drive as Christopher suggested? (The relevant instructions in the manual can be found here.)

    I've also checked Dropbox's documentation and it states a trial ends either by it being cancelled or by it automatically moving to a paid subscription, and that upon cancelling a trial (or subscription), "You’ll retain access to all your files and folders, as well as any content that was shared with you. Your account will be limited to the 2 GB storage quota of Dropbox Basic. If you exceed that limit, your account will stop syncing to your devices." So if Dropbox itself is saying "you can still access your files" but that's not actually the case, that's not StableBit's fault. It may also be that Dropbox's idea of "you can still access your files" is actually only via their website or some other restricted mechanism (I have not personally attempted to connect CloudDrive to an expired Dropbox account, but I am willing to test it if someone is willing to provide me with lawful access to one); that's also not StableBit's fault. (We won't even get into how it looks when someone blames Party A because Party B is no longer providing them a commercial service that Party B isn't getting paid for.)

    However, all that said, if you can still at least download the CloudPart folder (including everything within it) from your Dropbox account via whatever mechanism Dropbox still claims to allow you, there is a CloudDrive command-line tool to convert the folder into a format that can be accessed by CloudDrive's own Local Disk provider. Let me or StableBit know if you need help with that.

    Finally, as a (volunteer) moderator: I at least won't be deleting your post. First, as Christopher's response appears polite and factual, I suggest your post reflects rather more poorly on you than it does on them or StableBit. Second, while your post comes across to me as immature and one-sided (generally, when complaining to a forum about a company's response, one should include the question that prompted that response), it would still be useful to the community to know how cancelling a Dropbox account affects its accessibility in CloudDrive - especially if that has changed from how it worked previously and/or we can get confirmation.
  5. You can use the Cog icon (top right of the GUI) -> Troubleshooting -> Service log... and look for entries mentioning the missing drive's disk id, volume id and pool part folder. The last seven days' entries are saved in the log files in the C:\ProgramData\StableBit DrivePool\Service\Logs\Service folder. Alternatively (for future use), if you enable Cog icon -> Notifications -> Notify by email... the emails will include the drive's model and serial number, which you can then compare by hovering the mouse cursor over each drive in the pool in the GUI to see its model, serial number and disk id.
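    If you'd rather not eyeball the log files by hand, a quick Python sketch like this (my own throwaway, not a StableBit tool) will print every log line containing a given search term - swap the placeholder for the disk id, volume id or poolpart GUID you're looking for:

        from pathlib import Path

        LOG_DIR = Path(r"C:\ProgramData\StableBit DrivePool\Service\Logs\Service")
        SEARCH = "poolpart."  # placeholder: disk id, volume id or poolpart GUID

        for log_file in sorted(LOG_DIR.glob("*.log")):  # assuming .log files; adjust the pattern if yours differ
            with open(log_file, "r", encoding="utf-8", errors="replace") as f:
                for line_no, line in enumerate(f, start=1):
                    if SEARCH.lower() in line.lower():
                        print(f"{log_file.name}:{line_no}: {line.rstrip()}")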
  6. I should perhaps add here that DrivePool won't stop writing a new file if it hits the % max, only if/when it hits actual full. The % max setting is for files yet to be written (e.g. "this disk is at/over % max, don't put more files here") and/or files that are not in use (e.g. "this disk is at/over % max, move file(s) off it until satisfied"); an example of a balancer controlling both of these (no new files and move old files) can be found in the Prevent Drive Overfill balancer.

    Likewise, with the SSD Optimizer plugin its option "Fill SSD drives up to X%" doesn't mean "empty at this point", it means "overflow to Storage drives at this point". When it empties is instead controlled by your choices in the main DrivePool settings menu (Balancing -> Settings -> Automatic balancing - Triggers). This may be somewhat counterintuitive, but it is useful to avoid running out of space in situations where the "SSD" is being filled faster than it can be emptied.
  7. In the DrivePool GUI, try Manage Pool -> Re-measure... If that doesn't help, you might need to try resetting DrivePool to its default settings (and then re-apply your preferred settings): in the GUI, top-right, Cog icon -> Troubleshooting -> Reset all settings...

    Also, check that you are able to read and write normally to each drive that is part of the pool - perhaps run a benchmark utility (e.g. CrystalDiskMark) to check whether any of your drives are performing poorly (which can be indicative of other issues).
  8. Sorry, I don't know of any benchmark programs that use user-specified files. I found DiskTT which can use a user-specified folder, but it still creates its own files and it only provides throughput (not latency etc).
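    That said, if you just want rough numbers against a file of your own choosing, a crude Python sketch like the one below (my own, not a proper benchmark) times sequential reads of an existing file and reports throughput plus the slowest single read. Be aware that Windows' file cache will skew repeat runs unless the file is larger than your RAM, so treat the results as rough at best; the path is a placeholder.

        import time

        FILE = r"P:\Media\example.mkv"  # placeholder: any existing file you want to test with
        CHUNK = 1024 * 1024             # read in 1 MiB chunks

        latencies = []
        total = 0
        start = time.perf_counter()
        with open(FILE, "rb", buffering=0) as f:  # buffering=0 avoids Python-level caching
            while True:
                t0 = time.perf_counter()
                data = f.read(CHUNK)
                latencies.append(time.perf_counter() - t0)
                if not data:
                    break
                total += len(data)
        elapsed = time.perf_counter() - start

        print(f"Read {total / 1024**2:.0f} MiB in {elapsed:.2f}s "
              f"({total / 1024**2 / elapsed:.1f} MiB/s), "
              f"slowest single read {max(latencies) * 1000:.1f} ms")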
  9. 1) Is there a way to write files to the disk with the most available space in the pool?

    DrivePool defaults to attempting this, though if multiple large files start being written more or less simultaneously to the pool there might be an issue (q.v. next answer). A toy sketch of that placement idea is at the end of this post.

    2) File sizes are unknown until the backup is finished. My assumption is that this will be a problem for DrivePool, in that, if it's writing to a disk that only has 4TB free and it ends up being a 6TB backup, then it will fail. Correct?

    Correct.

    3) I'm assuming there's no way to allow "write continuation" to another disk if the current disk fills or hits the % max.

    Correct.

    4) If a disk starts to fill can I set a lower max %, say 50%, and set the balance plugin to run every X minutes? My intent would be that if a disk starts to fill it would "balance" other data off the disk and make room for additional write capacity as the current backup being written grows.

    You can set Balancing either to run immediately whenever it detects a trigger condition, or to run no more often than a chosen interval, which can be any whole multiple of 10 minutes (10 minutes being the smallest). Note that (I believe) it cannot balance open files, or at least not files that are actively being written. @Christopher (Drashna)?

    5) I would anticipate that we'll use 70-80TB of the almost 100TB that we'll have available to us. We will have headroom, but I'm concerned about managing/maximizing write space. Depending on the above answers, I would assume Veeam will start having write failures for larger backup files if there's not enough room on the volumes.

    Correct. I've had this happen. I take it the enterprise version of Veeam still doesn't support splitting? (I use the standalone agents at home.)

    6) Can I configure a non-SSD as a cache point, say one of the 20TB SATA volumes, that would then write out to the pool? I'd use it purely as a staging point, rather than for performance. At this point, ANYTHING is faster than our DataDomain's.

    Yes, you can. The SSD Optimizer plugin doesn't actually care whether an "SSD" is actually an SSD or not; it would be more accurate to call it the Cache Optimizer plugin. For example, you might set "Incoming files are cached on drives A and B; when A and B are more than 25% full they empty* to drives C, D, E and F in that preferred order, but try not to fill any storage drive to more than 90% capacity, and if any are then move files off them until they are no more than 80% full". Note that you can also make pools of pools (so pool P could consist of pools Q and R, which could consist of drives A+C+D and B+E+F respectively) if for some reason you want to have different configurations for different sub-pools.

    *The SSD Optimizer plugin doesn't have fine control over emptying; when it starts it will attempt to continue until the cache is empty of all files not being written to it.

    P.S. It is possible to write your own balancing plugins if you've got the programming chops.

    P.P.S. Do not enable Read Striping in DrivePool's Performance options (it defaults to off) in production until you have confirmed that the software you use works reliably with it. I've found some hashing utilities (for doing comparison/integrity/parity checks) seem to expect a single physical disk and intermittently give false readings when read striping is enabled.
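    (The sketch mentioned in the first answer above - a toy Python illustration of the "put new files on whichever drive has the most free space" idea. It is not DrivePool's actual code, and the drive paths are placeholders.)

        import shutil

        # Placeholder paths for the drives (or mounted folders) in the pool.
        POOL_DRIVES = ["D:\\", "E:\\", "F:\\"]

        def pick_target(drives):
            """Return the drive with the most free bytes - the gist of DrivePool's default placement."""
            return max(drives, key=lambda d: shutil.disk_usage(d).free)

        print("A new file would land on:", pick_target(POOL_DRIVES))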
  10. Well that makes things easier! Thanks Christopher! Can confirm, just grabbed some external drives and made a Storage Spaces pool with them, then added that pool to be part of a DrivePool pool without any problems.

    However: I've been playing with Storage Spaces in JBOD mode to see what you'd need to do, and it was NOT happy after removing the second-last disk - Storage Spaces removed it without any warnings but then immediately afterward complained that there was an issue, its pool ceased to be accessible and the drives became stuck in a limbo of being removed but not actually removed until I physically pulled them and manually wiped them. Ouch. I would strongly recommend that you have enough spare capacity to empty the last two drives, not just one, if/when you dismantle the old Storage Spaces pool, and the order is apparently "delete storage pool, then remove last two physical drives".

    So all that said, the steps for migrating from Storage Spaces to DrivePool as the pool that Plex, etc, sees should be:

    • Create a DrivePool pool (let's say it's P: drive). It doesn't actually have to have any other physical drives in it yet.
    • Add your Storage Spaces pool (let's say it's S: drive) to the above.
    • Turn off any services/programs looking for S: drive (e.g. your Plex software). Also close the StableBit DrivePool program and turn off the StableBit DrivePool service.
    • Manually move all your user content in S: drive - so excluding system folders like System Volume Information, $RECYCLE.BIN, etc - into the hidden PoolPart folder that's now in the root of S: drive. For example, S:\MyStuff\MyFile.txt would become S:\PoolPart.guidstring\MyStuff\MyFile.txt (there's a rough script sketch for this step at the end of this post).
    • Swap the drive letters of the two pools via Windows Disk Management (remove S: from the Storage Spaces pool, remove P: from the DrivePool pool, add S: to the DrivePool pool, add P: to the Storage Spaces pool).
    • Turn the StableBit DrivePool service back on and open the StableBit DrivePool program. If it doesn't proceed to do so automatically, tell DrivePool to re-measure the pool via Manage Pool -> Re-measure... so that it accurately reports the disk usage to you.
    • Turn your services/programs that use S: drive back on.

    Presuming everything is now humming along with the DrivePool pool as your "front end", you could then gradually remove your drives from the Storage Spaces pool and add them directly to the DrivePool pool instead (keeping in mind what I discovered about trying to remove the last two drives - if need be you could just leave the last two drives alone). If/when you're ready to delete the old Storage Spaces pool (i.e. only two drives left inside it and you've got enough spare capacity on your other drives in the DrivePool pool), remove it from the DrivePool pool first and then, once that's successful, delete the old Storage Spaces pool and only then remove the physical drives from the Storage Spaces pool so they can be added directly to the DrivePool pool.

    DrivePool's theoretical limit is at least 8 PB (yes, PB, as in petabytes) and only because the Windows OS itself doesn't currently support larger than that (note that some older versions of Windows only support up to 256 TB).
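    For the "manually move" step above, here's a rough Python sketch of what that move amounts to; it's only my own illustration (it assumes S: is the Storage Spaces pool, finds the hidden PoolPart.* folder in its root, and skips the system folders mentioned above) - dragging the folders in Explorer, or robocopy with /MOVE, would do the same job. Test against unimportant data first.

        import shutil
        from pathlib import Path

        ROOT = Path("S:/")  # assumption: the Storage Spaces pool's drive letter
        SKIP = {"System Volume Information", "$RECYCLE.BIN"}

        # Find the hidden PoolPart folder DrivePool created in the root of S:
        poolpart = next(p for p in ROOT.glob("PoolPart.*") if p.is_dir())

        for item in ROOT.iterdir():
            if item == poolpart or item.name in SKIP:
                continue  # don't move the PoolPart folder into itself, or system folders
            print(f"Moving {item} -> {poolpart / item.name}")
            shutil.move(str(item), str(poolpart / item.name))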
  11. Could you explain what you mean by "Windows Drive Pool"? Do you mean a pool created via Windows Storage Spaces? Something else? What drives (e.g. 3x12TB + 1x10TB) do you currently have and how are they currently allocated (e.g. system, storage spaces, other, empty)? How many additional drives do you have room for? How fussy is your media server about how it sees your files (e.g. you could tell it "everything is stored on drive P and drive Q", with those being your Stablebit and Windows pools, and then just gradually move files between pools)? The method I saw folks use re BackBlaze was to back up each drive within the (Stablebit) pool rather than the pool as a whole, so that if a drive within the pool fails that whole drive can be restored from BackBlaze rather than trying to figure out which files were on it; I don't know if BackBlaze's restore process has improved since then.
  12. DrivePool's duplication is similar to RAID in that, when enabled, it can protect against one or more drive failures (see this recent thread where I explain that further as well as backup practices). But the short answer is that if one of your drives failed (assuming you hadn't enabled duplication) you'd lose whatever was on that particular drive; the rest of your pool would remain intact - e.g. if you had song1.flac only on drive A and song2.flac only on drive B in your pool, and drive A failed, song1.flac would be lost and song2.flac would be kept.

    Regarding buying a HDD - it seems like you want/need as little latency and as much speed as possible when working? So I'd keep the HDD separate from your pool of SSDs (maybe in its own pool if you plan to expand) and set up a scheduled automatic backup (whether that's a robocopy script, freefilesync mirror, veeam agent, etc) to happen while you sleep. Your mileage may vary of course.
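    As a very rough sketch of the "scheduled automatic backup" idea (my own example and paths, not a recommendation of any particular tool): a few lines of Python wrapping robocopy's mirror mode, which you could point Windows Task Scheduler at overnight. Note that /MIR deletes files from the destination that no longer exist in the source, so test it on junk data first.

        import subprocess
        import sys

        SRC = r"P:\Projects"          # placeholder: the pool folder you work from
        DST = r"H:\Backups\Projects"  # placeholder: a folder on the backup HDD

        # /MIR mirrors the tree (including deletions); /R:2 /W:5 limits retries on locked files;
        # /LOG+ appends to a log so you can check the overnight run in the morning.
        result = subprocess.run(
            ["robocopy", SRC, DST, "/MIR", "/R:2", "/W:5", r"/LOG+:H:\Backups\robocopy.log"],
            check=False,
        )
        # Robocopy exit codes 0-7 mean success (with various "files copied/extra" flags); 8+ means errors.
        sys.exit(0 if result.returncode < 8 else result.returncode)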
  13. Windows only supports drive letters A through Z. However, it isn't necessary for a drive (other than the boot, system and pagefile drive, and perhaps CD/DVD drives and similar) to have a letter; drives can instead be accessed by mounting them as folders in another drive (e.g. C:\Array\Drive27, C:\Array\Drive28, etc), and furthermore DrivePool itself can have drives form part of a pool without being lettered or mounted at all.

    To add/remove drive letters or mount drives as folders in other drives, use Windows Disk Management: right-click a volume and click Change Drive Letters and Paths...

    Late edit for future readers: DON'T mount them as folders inside the pool drive itself, nor inside a poolpart folder. That risks a recursive loop, which would be bad.
  14. If you hover the mouse cursor over the 90.2% does it give you any information?
  15. Try the cog icon in the top right, Troubleshooting, Recheck duplication...?
  16. I'm not aware of any CLI commands for CloudDrive besides the convert tool. The feature was planned way back but I believe that can was kicked indefinitely down the road in favor of working on other features and on DrivePool, Scanner, etc.
  17. As far as I know it'd need to be converted to be mountable locally and the current tool doesn't support GoogleDrive (only Amazon and DropBox). I don't know if there are plans to update the tool for GD. @Christopher (Drashna) I can find this https://community.covecube.com/index.php?/topic/2879-moving-clouddrive-folder-between-providers/#comment-19900 but it doesn't indicate whether it ended up being possible?
  18. You could try physically disconnecting (or taking offline via Disk Management) each drive individually and testing playback? The pool will stay online in read-only mode, so unless the file you're streaming is only on that particular drive... For the USB enclosures you'd need to temporarily unplug them or disable them in Device Manager. Remember to test each off individually and then both off together. It could also be the USB hub they're connected to rather than the enclosures themselves.
  19. Yes, to completely prevent any chance of data loss from 2 drives suddenly failing at the same time you'd need 3 times duplication. Note that Scanner doesn't protect against sudden failures; that's why they're called "sudden". Scanner protects against the types of failure that'll take longer to kill your drive than you/DrivePool will take to rescue any data you want off it.

    Basically there are what I'd consider to be four types of drive failure:

    • Sudden - bang, it's dead. This is what things like Duplication and RAID are meant to protect against.
    • Imminent - you get some warning it's coming. Duplication and RAID also protect against this type, and Scanner tries to give you enough time for pre-emptive action.
    • Eventual - no drive lasts forever. Scanner helps with this too by notifying you of a drive's age, duty cycle, etc, so you can budget ahead of time for new ones.
    • Subtle - the worst but thankfully rarest kind; instead of the drive dying it starts corrupting your data. Scanner can sometimes give clues, otherwise you need some method of being able to detect/repair it (e.g. some RAID types, SnapRAID, PAR2, etc) or at least having intact backups elsewhere. DrivePool might help here, depending on whether you notice the corruption before it gets to the other duplicate(s).

    If it helps any, I suggest following the old 3-2-1 rule of backup best practice, which means having at least three copies (production and two backups), at least two different types (back then it was disk and tape, today it might be local and cloud) and at least one of those backups kept offsite - or some variant of that rule suitable for your situation. For example, my setup:

    • DrivePool with 2x duplication (3x for the most important folders) to protect against sudden mechanical drive failure on the home server.
    • The pool is network-shared; a dedicated backup PC on the LAN takes regular snapshots to protect against ransomware and for quick restores.
    • The pool is also backed up to a cloud provider to protect against environmental failures (e.g. fire, flood, earthquake, theft).
  20. Some testing with the latest version of DrivePool suggests it's actually pretty fast at emptying/removing drives - at least it was hitting close to the mechanical limits of my drives, although I was using mostly large files - so that's good news since you want to keep the pool active.

    The simplest method is just to add the two new drives and then remove the two old drives. The catch is that drives can only be removed one at a time, so if it takes X hours to finish removing the first drive and that just happens to finish five minutes after you go to bed, it won't be emptying the other one while you sleep.

    To avoid that, another method is to add the two new drives to the pool as above but then temporarily turn off all the balancers except for the Drive Usage Limiter balancer (and StableBit Scanner if you're using that) and tell it not to let anything be stored on the two old drives (untick both duplicated and unduplicated); it'll keep going until they're both empty and then you should be able to remove them without any unplanned waiting in between.
  21. Replace "probably result in data-corruption" with "certainly result in data-corruption". It's also not just a case of pausing one and unpausing the other; you have to use the Detach and Attach functions, and it will want to complete any pending uploads in the cache. Personally I feel like attempting this on a regular daily basis could be a bad time waiting to happen if you accidentally click past the warnings. Instead, consider setting up a VPN between home and work, and/or setting up an ownCloud server or similar, to provide remote access to whichever machine is going to be CloudDrive's "home base".
  22. If you have X times duplication then DrivePool will try to keep each file on X number of drives. The default behaviour when saving a file to the pool is for it to be put on whichever drive(s) have the most free space at the time. So if you had 2x duplication and 3 drives, any given file would be on 2 of those 3 drives, and that means if 2 of your drives suddenly failed at random then (assuming a bunch of equally sized files in a pool with default behaviour) in theory on average you'd have a 2 in 3 chance of keeping any given file (a quick worked check of that figure is sketched below).

    Basically, if you're using DrivePool just by itself, to completely eliminate the risk of losing any files if N drives simultaneously fail you need to use N+1 times duplication.

    If you're using something like SnapRAID to provide protection for your DrivePool, you can (assuming a few details) instead dedicate N drives to parity protection to protect against N simultaneous drive failures - if I remember rightly this becomes more storage efficient than duplication when the total number of drives exceeds twice N. The tradeoff is that SnapRAID's parity drives need to be updated after any files change on the pool drives in order to protect those changes, instead of DrivePool's real-time duplication feature, so if your files are changing all the time it may not offer enough protection by itself. You could of course use both duplication and parity if you had enough drives.
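    The quick worked check mentioned above, in Python (just my own illustration of the arithmetic, not anything DrivePool does): it enumerates every way the failed drives could be chosen and counts how often a file stored on 2 of 3 drives survives.

        from itertools import combinations

        def survival_chance(total_drives, copies, failures):
            """Chance a file with `copies` duplicates survives `failures` simultaneous random drive failures."""
            drives = range(total_drives)
            file_drives = set(range(copies))  # by symmetry, it doesn't matter which drives hold the copies
            outcomes = list(combinations(drives, failures))
            survived = sum(1 for failed in outcomes if not file_drives.issubset(failed))
            return survived / len(outcomes)

        print(survival_chance(3, 2, 2))  # 2x duplication, 3 drives, 2 failures -> 0.666... (2 in 3)
        print(survival_chance(3, 3, 2))  # N+1 (here 3x) duplication always survives N (here 2) failures -> 1.0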
  23. The "best" way depends on your needs. By all drives being resident in the case, do you mean they are all plugged in and visible to the OS at the same time? Are you wanting to keep the existing Pool and just change from the old drives to the new drives, or are you wanting to create a new Pool? Do you want to be able to use the Pool normally while the process happens, or is read-only good enough, or is taking it offline for the duration acceptable to make the transfer happen ASAP? What are your old drives and what are your new drives (e.g. 3x8 vs 4x10)?
  24. Just to check, igobythisname, did you use the Resize... option to shrink the drive after deleting the data from it? Running the Cleanup... option won't shrink the drive on its own; the shrinking has to be done first (also, the Resize should attempt to automatically run Cleanup after it finishes the shrink).
  25. IF the cloud drive is being used solely for that useless backup and nothing else, then you could just destroy the cloud drive and start a fresh one? Alternatively, this entry in the manual might be of help? It looks like you could stop the CloudDrive service, manually delete/move the local cache, then restart the service and reattach the cloud drive? I'd test with a dummy setup first though. You can also contact the developers directly via https://stablebit.com/Contact