Everything posted by Shane

  1. Can you try stopping the service, taking a backup of the current folder, then restoring an earlier version of the Disk*.json files and starting the service again?
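    A minimal sketch of those steps, assuming (per post 4 below) that the Disk*.json files live in StableBit Scanner's store folder and that "StableBit Scanner Service" is the right service name - both assumptions to verify on your own system; run as Administrator:

```python
# Hedged sketch: stop the service, back up the current store folder,
# restore earlier Disk*.json files, then restart the service.
import shutil
import subprocess
from datetime import datetime
from pathlib import Path

STORE = Path(r"C:\ProgramData\StableBit Scanner\Service\Store\Json")
EARLIER = Path(r"D:\Backups\Json-known-good")  # hypothetical earlier backup

# Stop the service (assumed name; check services.msc for the real one).
subprocess.run(["net", "stop", "StableBit Scanner Service"], check=True)

# Back up the current folder before touching anything.
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
shutil.copytree(STORE, STORE.parent / f"Json-backup-{stamp}")

# Restore the earlier Disk*.json files over the current ones.
for src in EARLIER.glob("Disk*.json"):
    shutil.copy2(src, STORE / src.name)

# Start the service again.
subprocess.run(["net", "start", "StableBit Scanner Service"], check=True)
```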
  2. CloudDrive is pretty good at avoiding corruption from interruptions, although we did just have a report of a cloud drive becoming inaccessible (not corrupted, just unable to be mounted) on Dropbox when that provider set the content read-only (due to an expired account); that has apparently since been fixed in a new CloudDrive update. So:
    First, make sure you have the latest version of CloudDrive. As noted, this apparently fixes an issue where a cloud drive that becomes read-only due to an action on the provider's end can still be mounted by CloudDrive. @Christopher (Drashna) can you confirm whether my understanding is accurate?
    Make sure you complete any existing pending / in-progress uploads to your cloud drive from the local cache well before the transition date; as soon as that's done you should change the cloud drive to a read-only state. That way there's no risk of anything bad happening from Google flicking the switch while CloudDrive is in the middle of writing to the cloud drive.
    Note also that the manual states that cloud drives that are part of a DrivePool pool cannot be set read-only (this way). So if your cloud drive is being used by DrivePool, you will need to remove the cloud drive from the pool before you can change it to read-only; if you don't and your cloud drive becomes read-only anyway (due to Google), then your DrivePool pool will be unable to write to the cloud drive and will have trouble operating (I don't think that by itself would cause corruption; I'd expect you'd just be unable to perform tasks that expect the cloud drive to be writable - which, given Windows loves to write to disk, could be problematic).
    If you want to remove the cloud drive from the pool but you are concerned that the amount of data that would need to be moved is too much to fit locally and/or to finish transferring normally in the month remaining, it is possible to manually split (so to speak) a drive pool; you could then manually finish moving the Google-stored read-only pool content into your local-stored working pool afterwards.
    TLDR:
    1. Update to the latest CloudDrive version.
    2. Remove the cloud drive from the drive pool (split the pool manually if necessary to do so in time).
    3. Complete any pending/current uploads from the local cache to the cloud drive.
    4. Change the cloud drive to read-only in CloudDrive itself.
    5. Manually complete moving pool content from Google if you had to split the pool in step 2.
  3. Sometimes escalating is necessary and gets things done ASAP, but it's also important to step back for a moment and ask, "Am I taking out my frustration on someone who is at least trying to be helpful?" As I understand it StableBit is a small team, and a thing I've found dealing with customer support at many companies over decades is that they're almost always overworked. Politeness and thanking them for what they are able to do, e.g. "thank you for the explanation, did you miss my last request about the older versions?", can go a long way. Joining the company's community forums to (constructively) complain about a frustrating problem or experience? Sure, go for it! Emotional insults, no. As a volunteer I'm not always around but I do enjoy helping people on these forums; I've never had to officially warn someone, but the button is still there (spammers don't count, that's a different button) and the opening post did get reported to the moderators for incendiary content. I don't know if your request for previous versions was answered in another response, but for future reference those can be found here. I hope CloudDrive works out for you now that the problem's solved. Please do feel free to ask the community (in a new thread) if you need further help with CloudDrive or other StableBit software.
  4. It looks like disk location information (e.g. case and bay) is stored in C:\ProgramData\StableBit Scanner\Service\Store\Json\Disk_diskidentifierstring.json if that helps any?
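    If you want to poke around those files yourself, here's a small hedged sketch - the JSON layout isn't documented here, so it simply walks each Disk_*.json and prints any key paths whose names look location-related:

```python
# Walk each Disk_*.json in Scanner's store and print leaf values whose
# key path mentions "case", "bay" or "location".
import json
from pathlib import Path

STORE = Path(r"C:\ProgramData\StableBit Scanner\Service\Store\Json")

def walk(node, path=""):
    if isinstance(node, dict):
        for key, value in node.items():
            yield from walk(value, f"{path}/{key}")
    elif isinstance(node, list):
        for i, value in enumerate(node):
            yield from walk(value, f"{path}[{i}]")
    else:
        yield path, node

for file in STORE.glob("Disk_*.json"):
    data = json.loads(file.read_text(encoding="utf-8"))
    for path, value in walk(data):
        if any(word in path.lower() for word in ("case", "bay", "location")):
            print(f"{file.name}: {path} = {value}")
```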
  5. I've seen something similar happen when Windows did a major update (e.g. 10 to 11) and it ate the virtual driver along the way. In the Windows Device Manager, can you see any instances of Covecube Virtual Disk under Disk drives? Can you see Covecube Disk Enumerator under Storage controllers? Any yellow triangles or other alert icons on them? If you're in a hurry you could try renaming all the PoolPart.guidstring folders to have "X" or "Old" or whatever in front (don't change the unique identifier that comes after PoolPart; see the sketch below) and then with any luck you should be able to add those disks to a new pool (and then just move all your data from the old poolpart to the new poolpart on each disk). But if you have the time, it would be great if you could open a ticket with StableBit so they can take a look at why DrivePool is able to detect the old pool but is no longer able to load it.
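    The renaming step sketched in Python, purely as an illustration - DRIVES is a hypothetical list of pooled-disk roots, and you should double-check each printed match before renaming anything:

```python
# Prefix every PoolPart.* folder with "Old" while leaving the GUID
# part of the name untouched.
from pathlib import Path

DRIVES = [Path("E:/"), Path("F:/")]  # hypothetical pooled-disk roots

for root in DRIVES:
    for folder in root.glob("PoolPart.*"):
        if folder.is_dir():
            target = folder.with_name("Old" + folder.name)
            print(f"{folder} -> {target}")
            folder.rename(target)
```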
  6. That sounds like the bad sectors might've unfortunately included the file system's indexing of the drive? DrivePool normally deletes the files and folders on a drive being removed as it migrates each of them to the rest of the pool (so that the drive is all tidy for reuse); it might've run into a problem it couldn't handle there ("DrivePool: okay, that bit's copied, delete that bit" Drive's file system: "potato" DrivePool: "what?"). FWIW with a "known bad" drive I'd recommend ticking at least the first option, "Force damaged drive removal (unreadable files will be left on the disk)", maybe the second as well*, "Duplicate files later (faster drive removal)", but if the structure itself was affected that might not have helped either. *If I know a drive has bad sectors I'd rather have it out of the pool ASAP, but mileage may vary and it doesn't do anything if duplication isn't enabled.
  7. As a customer myself, I'm glad to hear that it's working now. What turned out to be the issue?
  8. If a poolpart is empty on a removed drive, that would suggest DrivePool successfully moved all the pooled files off it but failed to remove the poolpart folder itself. Did you do a normal remove, or tick any of the options? It's also odd that there are two poolpart folders on it. But as the drive is still showing 5TB used... were you using it for anything besides DrivePool? Maybe check with a tool like TreeSize (run as Administrator) or similar?
  9. buyrak, I'm not seeing anything incorrect in Christopher's statement that you uploaded. If you're using a provider, data is stored on the provider; that's the point of using a provider. And if the account becomes read-only, it should thus still be readable. Etc. Did you attempt to manually enable read-only mounting of the drive as Christopher suggested? (The relevant instructions in the manual can be found here.)
    I've also checked Dropbox's documentation and it states a trial ends either by being cancelled or by automatically moving to a paid subscription, and that upon cancelling a trial (or subscription), "You’ll retain access to all your files and folders, as well as any content that was shared with you. Your account will be limited to the 2 GB storage quota of Dropbox Basic. If you exceed that limit, your account will stop syncing to your devices." So if Dropbox itself is saying "you can still access your files" but that's not actually the case, that's not StableBit's fault. Now it may be that Dropbox's idea of "you can still access your files" is actually only via their website or some other restricted mechanism (I have not personally attempted to connect CloudDrive to an expired Dropbox account, but I am willing to test it if someone is willing to provide me with lawful access to one); that's also not StableBit's fault. (We won't even get into how it looks when someone blames Party A because Party B has stopped providing them a commercial service that Party B is no longer being paid for.)
    However, all that said, if you can still at least download the CloudPart folder (including everything within) from your Dropbox account via whatever mechanism Dropbox still claims to allow you, there is a CloudDrive command-line tool to convert the folder into a format that can be accessed by CloudDrive's own Local Disk provider. Let me or StableBit know if you need help with that.
    Finally, as a (volunteer) moderator: I at least won't be deleting your post. First, as Christopher appears polite and factual in the response, I suggest your post reflects rather more poorly on you than it does on them or StableBit. Second, while your post comes across to me as immature and one-sided (generally when complaining to a forum about a company's response, one should include the question that prompted that response), it would still be useful to the community to know how cancelling a Dropbox account affects its accessibility in CloudDrive - especially if that may have changed from how it worked previously and/or we can get confirmation.
  10. You can use the Cog icon (top right of GUI) -> Troubleshooting -> Service log... and look for entries mentioning the missing drive's disk id, volume id and pool part folder. The last seven days' entries can be found saved in the log files in the C:\ProgramData\StableBit DrivePool\Service\Logs\Service folder. Alternatively (for future use), if you enable Cog icon -> Notifications -> Notify by email... the emails will include the drive's model and serial number, which you can then compare by hovering the mouse cursor over each drive in the pool in the GUI to see their model, serial number and disk id.
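    A minimal sketch of searching those saved logs, assuming they're plain text; SEARCH_TERMS below are hypothetical placeholders for the missing drive's disk id, volume id or poolpart string:

```python
# Scan each saved service log for lines mentioning the identifiers.
from pathlib import Path

LOGS = Path(r"C:\ProgramData\StableBit DrivePool\Service\Logs\Service")
SEARCH_TERMS = ["4a3b2c1d", "poolpart"]  # hypothetical identifiers

for log in sorted(LOGS.iterdir()):
    if not log.is_file():
        continue
    text = log.read_text(encoding="utf-8", errors="ignore")
    for lineno, line in enumerate(text.splitlines(), 1):
        if any(term.lower() in line.lower() for term in SEARCH_TERMS):
            print(f"{log.name}:{lineno}: {line.strip()}")
```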
  11. I should perhaps add here that DrivePool won't stop writing a new file if the disk hits the % max, only if/when the disk actually becomes full. The % max setting applies to files yet to be written (e.g. "this disk is at/over % max, don't put more files here") and/or files that are not in use (e.g. "this disk is at/over % max, move file(s) off it until satisfied"); an example of a balancer controlling both of these (no new files and move old files) is the Prevent Drive Overfill balancer. Likewise, with the SSD Optimizer plugin its option "Fill SSD drives up to X%" doesn't mean "empty at this point", it means "overflow to Storage drives at this point". When it empties is instead controlled by your choices in the main DrivePool settings menu (Balancing -> Settings -> Automatic balancing - Triggers). This may be somewhat counterintuitive, but it is useful to avoid running out of space in situations where the "SSD" is being filled faster than it can be emptied.
  12. Neil - in the DrivePool GUI, try Manage Pool -> Re-measure... If that doesn't help, you might need to try resetting DrivePool to its default settings (and then re-apply your preferred settings): in the GUI, top-right, Cog icon -> Troubleshooting -> Reset all settings... Also, check that you are able to read and write normally to each drive that is part of the pool - perhaps use a benchmark utility (e.g. CrystalDiskMark) to check if any of your drives are performing poorly (which can be indicative of other issues).
  13. Sorry, I don't know of any benchmark programs that use user-specified files. I found DiskTT which can use a user-specified folder, but it still creates its own files and it only provides throughput (not latency etc).
  14. 1) Is there a way to write files to the disk with the most available space in the pool?
    DrivePool defaults to attempting this, though if multiple large files start being written more or less simultaneously to the pool there might be an issue (q.v. next answer).
    2) File sizes are unknown until the backup is finished. My assumption is that this will be a problem for DrivePool, in that, if it's writing to a disk that only has 4TB free and it ends up being a 6TB backup, then it will fail. Correct?
    Correct.
    3) I'm assuming there's no way to allow "write continuation" to another disk if the current disk fills or hits the % max.
    Correct.
    4) If a disk starts to fill can I set a lower max %, say 50%, and set the balance plugin to run every X minutes? My intent would be that a disk would start to "balance" other data off itself and make room for additional write capacity as the current backup being written grows.
    You can set Balancing to run immediately upon detecting a trigger condition, or no more often than every 10 minutes (or any integer multiple thereof). Note that (I believe) it cannot balance open files, or at least not files that are actively being written. @Christopher (Drashna)?
    5) I would anticipate that we'll use 70-80TB of the almost 100TB that we'll have available to us. We will have headroom, but I'm concerned about managing/maximizing write space, depending on the above answers. I would assume Veeam will start having write failures for larger backup files if there's not enough room on the volumes.
    Correct. I've had this happen. I take it the enterprise version of Veeam still doesn't support splitting? (I use the standalone agents at home.)
    6) Can I configure a non-SSD as a cache point, say one of the 20TB SATA volumes, that would then write out to the pool? I'd use it purely as a staging point, rather than for performance. At this point, ANYTHING is faster than our DataDomain's
    Yes, you can. The SSD Optimizer plugin doesn't actually care whether an "SSD" is actually an SSD or not; it would be more accurate to call it the Cache Optimizer plugin. For example, you might set "Incoming files are cached on drives A and B; when A and B are more than 25% full they empty* to drives C, D, E and F in that preferred order, but try not to fill any storage drive to more than 90% capacity, and if any are then move files off them until they are no more than 80% full". Note that you can also make pools of pools (so pool P could consist of pools Q and R, which could consist of drives A+C+D and B+E+F respectively) if for some reason you want to have different configurations for different sub-pools.
    *The SSD Optimizer plugin doesn't have fine control over emptying; when it starts it will attempt to continue until the cache is empty of all files not being written to it.
    P.S. It is possible to write your own balancing plugins if you've got the programming chops.
    P.P.S. Do not enable Read Striping in DrivePool's Performance options (it defaults to off) in production until you have confirmed that the software you use works reliably with it. I've found some hashing utilities (for doing comparison/integrity/parity checks) seem to expect a single physical disk and intermittently give false readings when read striping is enabled.
  15. Well that makes things easier! Thanks Christopher! Can confirm: just grabbed some external drives, made a Storage Spaces pool with them, then added that pool to be part of a DrivePool pool without any problems.
    However: I've been playing with Storage Spaces in JBOD mode to see what you'd need to do, and it was NOT happy after removing the second-last disk - Storage Spaces removed it without any warnings but then immediately afterward complained that there was an issue; its pool ceased to be accessible and the drives became stuck in a limbo of being removed but not actually removed until I physically pulled them and manually wiped them. Ouch. I would strongly recommend that you have enough spare capacity to empty the last two drives, not just one, if/when you dismantle the old Storage Spaces pool, and the order is apparently "delete storage pool, then remove last two physical drives".
    So all that said, the steps for migrating from Storage Spaces to DrivePool as the pool that Plex, etc, sees should be:
    1. Create a DrivePool pool (let's say it's P: drive). It doesn't actually have to have any other physical drives in it yet.
    2. Add your Storage Spaces pool (let's say it's S: drive) to the above.
    3. Turn off any services/programs looking for S: drive (e.g. your Plex software). Also close the StableBit DrivePool program and turn off the StableBit DrivePool service.
    4. Manually move all your user content in S: drive - excluding system folders like System Volume Information, $RECYCLE.BIN, etc - into the hidden PoolPart folder that's now in the root of S: drive. For example, S:\MyStuff\MyFile.txt would become S:\PoolPart.guidstring\MyStuff\MyFile.txt (see the sketch just below this post).
    5. Swap the drive letters of the two pools via Windows Disk Management (remove S: from the Storage Spaces pool, remove P: from the DrivePool pool, add S: to the DrivePool pool, add P: to the Storage Spaces pool).
    6. Turn the StableBit DrivePool service back on and open the StableBit DrivePool program. If it doesn't proceed to do so automatically, tell DrivePool to re-measure the pool via Manage Pool -> Re-measure... so that it accurately reports the disk usage to you.
    7. Turn your services/programs that use S: drive back on.
    Presuming everything is now humming along with the DrivePool pool as your "front end", you could then gradually remove your drives from the Storage Spaces pool and add them directly to the DrivePool pool instead (keeping in mind what I discovered about trying to remove the last two drives - if need be you could just leave the last two drives alone). If/when you're ready to delete the old Storage Spaces pool (i.e. only two drives left inside it and you've got enough spare capacity on your other drives in the DrivePool pool), remove it from the DrivePool pool first; once that's successful, delete the old Storage Spaces pool and only then remove the physical drives from it so they can be added directly to the DrivePool pool.
    DrivePool's theoretical limit is at least 8 PB (yes, PB, as in petabytes) and only because the Windows OS itself doesn't currently support larger than that (note that some older versions of Windows only support up to 256 TB).
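    The sketch mentioned in step 4, assuming S: is the Storage Spaces drive and that the exclusion list covers your system folders - adjust both for your setup, and make sure the StableBit DrivePool service is stopped first:

```python
# Move all user content in the drive root into the hidden PoolPart
# folder (a same-volume move, so it should be near-instant renames).
import shutil
from pathlib import Path

ROOT = Path("S:/")  # the Storage Spaces pool drive
EXCLUDE = {"System Volume Information", "$RECYCLE.BIN"}  # extend as needed

# Find the hidden PoolPart.* folder DrivePool created in the root
# (raises StopIteration if it isn't there - check before running).
poolpart = next(ROOT.glob("PoolPart.*"))

for item in ROOT.iterdir():
    if item == poolpart or item.name in EXCLUDE:
        continue
    print(f"Moving {item} -> {poolpart / item.name}")
    shutil.move(str(item), str(poolpart / item.name))
```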
  16. Could you explain what you mean by "Windows Drive Pool"? Do you mean a pool created via Windows Storage Spaces? Something else? What drives (e.g. 3x12TB + 1x10TB) do you currently have and how are they currently allocated (e.g. system, storage spaces, other, empty)? How many additional drives do you have room for? How fussy is your media server about how it sees your files (e.g. you could tell it "everything is stored on drive P and drive Q" with those being your Stablebit and Windows pools, and then just gradually move files between pools)? The method I saw folks use re BackBlaze was to backup each drive within the (Stablebit) pool rather than the pool as a whole, so that if a drive within the pool fails that whole drive can be restored from BackBlaze rather than trying to figure out which files were on it; I don't know if BackBlaze's restore process has improved since then.
  17. DrivePool's duplication is similar to RAID in that, when enabled, it can protect against one or more drive failures (see this recent thread where I explain that further as well as backup practices). But the short answer is that if one of your drives failed (assuming you hadn't enabled duplication) you'd lose whatever was on that particular drive; the rest of your pool would remain intact - e.g. if you had song1.flac only on drive A and song2.flac only on drive B in your pool, and drive A failed, song1.flac would be lost and song2.flac would be kept. Regarding buying a HDD - it seems like you want/need as little latency and as much speed as possible when working? So I'd keep the HDD separate from your pool of SSDs (maybe in its own pool if you plan to expand) and set up a scheduled automatic backup (whether that's a robocopy script, freefilesync mirror, veeam agent, etc) to happen while you sleep. Your mileage may vary of course.
  18. Windows only supports drive letters A through Z. However, it isn't necessary for a drive (other than the boot, system and pagefile drives, and perhaps CD/DVD drives and similar) to have a letter; drives can instead be accessed by mounting them as folders in another drive (e.g. C:\Array\Drive27, C:\Array\Drive28, etc) and furthermore DrivePool itself can have drives form part of a pool without being lettered or mounted at all. To add/remove drive letters or mount drives as folders in other drives, use Windows Disk Management: right-click a volume and click Change Drive Letter and Paths... (see the sketch below for a command-line alternative). Late edit for future readers: DON'T mount them as folders inside the pool drive itself, nor inside a poolpart folder. That risks a recursive loop, which would be bad.
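    For the command-line alternative, something like this hedged sketch using Windows' built-in mountvol tool should work - the volume GUID below is a placeholder; run mountvol with no arguments first to list the real volume names on your system:

```python
# Mount a volume as a folder instead of giving it a drive letter.
import subprocess
from pathlib import Path

mount_point = Path(r"C:\Array\Drive27")
# Placeholder GUID - substitute a real one from "mountvol" output.
volume = "\\\\?\\Volume{00000000-0000-0000-0000-000000000000}\\"

mount_point.mkdir(parents=True, exist_ok=True)  # folder must exist and be empty
subprocess.run(["mountvol", str(mount_point), volume], check=True)
```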
  19. If you hover the mouse cursor over the 90.2% does it give you any information?
  20. Try the cog icon in the top right, Troubleshooting, Recheck duplication...?
  21. I'm not aware of any CLI commands for CloudDrive besides the convert tool. The feature was planned way back but I believe that can was kicked indefinitely down the road in favor of working on other features and on DrivePool, Scanner, etc.
  22. As far as I know it'd need to be converted to be mountable locally and the current tool doesn't support GoogleDrive (only Amazon and DropBox). I don't know if there are plans to update the tool for GD. @Christopher (Drashna) I can find this https://community.covecube.com/index.php?/topic/2879-moving-clouddrive-folder-between-providers/#comment-19900 but it doesn't indicate whether it ended up being possible?
  23. You could try physically disconnecting each drive individually (or taking it offline via Disk Management) and testing playback? The pool will stay online in read-only mode, so unless the file you're streaming is only on that particular drive... For the USB enclosures you'd need to temporarily unplug them (or disable them in Device Manager). Remember to test each off individually and then both off together. It could also be the USB hub they're connected to rather than the enclosures themselves.
  24. Yes, to completely prevent any chance of data loss from 2 drives suddenly failing at the same time you'd need 3 times duplication. Note that Scanner doesn't protect against sudden failures; that's why they're called "sudden". Scanner protects against the types of failure that take longer to kill your drive than you/DrivePool will take to rescue any data you want off it. Basically there are what I'd consider to be four types of drive failure:
    Sudden - bang, it's dead. This is what things like duplication and RAID are meant to protect against.
    Imminent - you get some warning it's coming. Duplication and RAID also protect against this type, and Scanner tries to give you enough time for pre-emptive action.
    Eventual - no drive lasts forever. Scanner helps with this too by notifying you of a drive's age, duty cycle, etc, so you can budget ahead of time for new ones.
    Subtle - the worst but thankfully rarest kind: instead of the drive dying it starts corrupting your data. Scanner can sometimes give clues; otherwise you need some method of detecting/repairing it (e.g. some RAID types, SnapRAID, PAR2, etc) or at least intact backups elsewhere. DrivePool might help here, depending on whether you notice the corruption before it gets to the other duplicate(s).
    If it helps any, I suggest following the old 3-2-1 rule of backup best practice: keep at least three copies (production and two backups), on at least two different types of media (back then it was disk and tape; today it might be local and cloud), with at least one of those backups kept offsite - or some variant of that rule suitable for your situation. For example, my setup:
    DrivePool with 2x duplication (3x for the most important folders) to protect against sudden mechanical drive failure on the home server.
    The pool is network-shared; a dedicated backup PC on the LAN takes regular snapshots to protect against ransomware and for quick restores.
    The pool is also backed up to a cloud provider to protect against environmental failures (e.g. fire, flood, earthquake, theft).
  25. Some testing with the latest version of DrivePool suggests it's actually pretty fast at emptying/removing drives - at least it was hitting close to the mechanical limits of my drives, although I was using mostly large files - so that's good news since you want to keep the pool active.
    The simplest method is just to add the two new drives and then remove the two old drives. The catch is that drives can only be removed one at a time, so if it takes X hours to finish removing the first drive and that just happens to finish five minutes after you go to bed, it won't be emptying the other one while you sleep.
    To avoid that, another method is to add the two new drives to the pool as above, but then temporarily turn off all the balancers except the Drive Usage Limiter balancer (and StableBit Scanner if you're using that) and tell it not to let anything be stored on the two old drives (untick both duplicated and unduplicated); it'll keep going until they're both empty, and then you should be able to remove them without any unplanned waiting in between.