Everything posted by Shane

  1. Try the cog icon in the top right, Troubleshooting, Recheck duplication...?
  2. I'm not aware of any CLI commands for CloudDrive besides the convert tool. The feature was planned way back but I believe that can was kicked indefinitely down the road in favor of working on other features and on DrivePool, Scanner, etc.
  3. As far as I know it'd need to be converted to be mountable locally and the current tool doesn't support GoogleDrive (only Amazon and DropBox). I don't know if there are plans to update the tool for GD. @Christopher (Drashna) I can find this https://community.covecube.com/index.php?/topic/2879-moving-clouddrive-folder-between-providers/#comment-19900 but it doesn't indicate whether it ended up being possible?
  4. You could try physically disconnecting / taking offline via Disk Management each drive individually and testing playback? The pool will stay online in read-only mode, so unless the file you're streaming is only on that particular drive... For the USB enclosures you'd need to temporarily unplug them / disable them in Device Manager. Remember to test each one off individually and then both off together. It could also be the USB hub they're connected to rather than the enclosures themselves.
  5. Yes, to completely prevent any chance of data loss from 2 drives suddenly failing at the same time you'd need 3 times duplication. Note that Scanner doesn't protect against sudden failures; that's why they're called "sudden". Scanner protects against the type of failures that'll take longer to kill your drive than you/DrivePool will take to rescue any data you want off it. Basically there are what I'd consider to be four types of drive failure:
     - Sudden - bang, it's dead. This is what things like duplication and RAID are meant to protect against.
     - Imminent - you get some warning it's coming. Duplication and RAID also protect against this type, and Scanner tries to give you enough time for pre-emptive action.
     - Eventual - no drive lasts forever. Scanner helps with this too by notifying you of a drive's age, duty cycle, etc, so you can budget ahead of time for new ones.
     - Subtle - the worst but thankfully rarest kind: instead of the drive dying it starts corrupting your data. Scanner can sometimes give clues; otherwise you need some method of detecting/repairing it (e.g. some RAID types, SnapRAID, PAR2, etc) or at least intact backups elsewhere. DrivePool might help here, depending on whether you notice the corruption before it gets to the other duplicate(s).
     If it helps any, I suggest following the old 3-2-1 rule of backup best practice: keep at least three copies (production and two backups), on at least two different types of storage (back then it was disk and tape; today it might be local and cloud), with at least one of those backups kept offsite - or some variant of that rule suitable for your situation. For example, my setup:
     - DrivePool with 2x duplication (3x for the most important folders) to protect against sudden mechanical drive failure on the home server.
     - The pool is network-shared; a dedicated backup PC on the LAN takes regular snapshots to protect against ransomware and for quick restores.
     - The pool is also backed up to a cloud provider to protect against environmental failures (e.g. fire, flood, earthquake, theft).
  6. Some testing with the latest version of DrivePool suggests it's actually pretty fast at emptying/removing drives - at least it was hitting close to the mechanical limits of my drives, although I was using mostly large files - so that's good news since you want to keep the pool active. The simplest method is just to add the two new drives and then remove the two old drives. The catch is that drives can only be removed one at a time, so if it takes X hours to finish removing the first drive and it just happens to finish five minutes after you go to bed, it won't be emptying the other one while you sleep. To avoid that, another method is to add the two new drives to the pool as above but then temporarily turn off all the balancers except for the Drive Usage Limiter balancer (and StableBit Scanner if you're using that) and tell it not to let anything be stored on the two old drives (untick both duplicated and unduplicated); it'll keep going until they're both empty and then you should be able to remove them without any unplanned waiting in between.
  7. Replace "probably result in data-corruption" with "certainly result in data-corruption". It's also not just a case of pausing one and unpausing the other; you have to use the Detach and Attach functions and it will want to complete any pending uploads in the cache. Personally I feel like attempting this on a regular daily basis could be a bad time waiting to happen if you accidently click past the warnings. Instead consider setting up a VPN between home and work, and/or setting up an ownCloud server or similar, to provide remote access to whichever machine is going to be CloudDrive's "home base".
  8. If you have X times duplication then DrivePool will try to keep each file on X number of drives. The default behaviour when saving a file to the pool is for it to be put on whichever drive(s) have the most free space at the time. So if you had 2x duplication and 3 drives, any given file would be on 2 of those 3 drives, which means that if 2 of your drives suddenly failed at random then (assuming a bunch of equally sized files in a pool with default behaviour) in theory on average you'd have a 2 in 3 chance of keeping any given file. Basically, if you're using DrivePool just by itself, to completely eliminate the risk of losing any files if N drives simultaneously fail you need to use N+1 times duplication. If you're using something like SnapRAID to provide protection for your DrivePool, you can (assuming a few details) instead dedicate N drives to parity to protect against N simultaneous drive failures - if I remember rightly this becomes more storage efficient when the total number of drives exceeds twice N. The tradeoff is that SnapRAID's parity drives need to be updated after any files change on the pool drives to protect those changes, instead of DrivePool's real-time duplication feature, so if your files are changing all the time it may not offer enough protection by itself. You could of course use both duplication and parity if you had enough drives.
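     If it helps, here's a rough back-of-envelope sketch in Python (assuming equal-size drives, default placement and the "in theory on average" caveat above; the drive counts are just illustrative) that reproduces the 2-in-3 figure and compares the storage efficiency of duplication vs parity:
```python
from fractions import Fraction
from itertools import combinations

drives, copies, failures = 3, 2, 2   # 3 drives, 2x duplication, 2 sudden failures

# A file survives if at least one drive holding a copy is not among the failed.
outcomes = [(set(placed), set(failed))
            for placed in combinations(range(drives), copies)
            for failed in combinations(range(drives), failures)]
survived = sum(1 for placed, failed in outcomes if placed - failed)
print(Fraction(survived, len(outcomes)))              # -> 2/3

# Storage efficiency: surviving N simultaneous failures needs (N+1)x
# duplication, keeping 1/(N+1) of raw space, vs N SnapRAID parity drives
# out of D total, keeping (D-N)/D of raw space.
N, D = 1, 6
print(Fraction(1, N + 1), "vs", Fraction(D - N, D))   # -> 1/2 vs 5/6
```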
  9. The "best" way depends on your needs. By all drives being resident in the case, do you mean they are all plugged in and visible to the OS at the same time? Are you wanting to keep the existing Pool and just change from the old drives to the new drives, or are you wanting to create a new Pool? Do you want to be able to use the Pool normally while the process happens, or is read-only good enough, or is taking it offline for the duration acceptable to make the transfer happen ASAP? What are your old drives and what are your new drives (e.g. 3x8 vs 4x10)?
  10. Just to check, igobythisname, did you use the Resize... option to shrink the drive after deleting the data from it? Running the Cleanup... option won't shrink the drive on its own; the shrinking has to be done first (also the Resize should attempt to automatically Cleanup after it finishes the shrink).
  11. IF the cloud drive is being used solely for that useless backup and nothing else, then you could just destroy the cloud drive and start a fresh one? Alternatively, this entry in the manual might be of help? It looks like you could stop the clouddrive service, manually delete/move the local cache, then restart the service and reattach the clouddrive? I'd test with a dummy setup first though. You can also contact the developers directly via https://stablebit.com/Contact
  12. You may find this thread useful.
  13. No clue. If it helps at all, I have DrivePool v2.3.2.1493 on a Win10Pro box with a mix of nvme ssd and sata hdd, and I even plugged in a usb hdd to see if anything changed, and I'm not getting that issue. Edit: spoke too soon. I am seeing that on the usb hdd I plugged in, but not my other devices. It at least involves the DrivePool service, as stopping the service stops the effect and starting the service starts it back up.
  14. Maybe. If SMART isn't giving any warnings, you could try checking the DrivePool UI -> Cog -> Troubleshooting -> Service log... to see if there's any hints there, or using the File Placement rules to limit duplication to certain disks to see if that gives any clues. Otherwise I would suggest opening a ticket via https://stablebit.com/Contact as Christopher suggested.
  15. I believe licensing issues have to be handled via the contact form (now that it's fixed and working again) or by emailing Covecube (if the form still isn't working for you). No luck with either of those so far?
  16. When you say happened again, do you mean #1 you went to win11 and rolled back to win10 again or #2 it's happened without win11 being involved this time?
  17. None of the above. It creates hidden root folders, or "poolparts", on basic (not dynamic) NTFS-formatted volumes chosen by the user and collectively presents those folders as a single NTFS-style virtual drive and basic volume, or "pool". You can have multiple pools, you can even have pools of pools, and there is also ReFS support. If you're familiar with Linux, it's a "union" virtualization similar to mergerfs but (amongst other things) it runs at the system level rather than in userspace. Duplication is handled via multiple instancing of a file on the poolparts; there is no "original" vs "backup", there's just the same file existing on multiple volumes. For example, if you created a new pool, let's call it "P", using six drives each formatted as a basic NTFS volume (they don't have to be the same size), let's call them "A" through "F", set 3x duplication for all files, and saved a file "example.txt" to the root folder of the new pool, it would look like this at the file system level (where guidstring is an alphanumeric identifier that uniquely identifies each poolpart):
      P:\example.txt
      A:\PoolPart.guidstring\example.txt
      B:\PoolPart.guidstring\example.txt
      C:\PoolPart.guidstring\example.txt
      D:\PoolPart.guidstring\
      E:\PoolPart.guidstring\
      F:\PoolPart.guidstring\
      Basically you just work in the pool drive and DrivePool does its thing in the hidden poolpart folders in the background; you normally never need to manually deal with the latter unless something has gone wrong (e.g. your old PC's mainboard died, you can't connect to the internet to download DrivePool to your new PC, and for whatever reason you need to get a file from the pool right away) or you're using DrivePool in conjunction with other storage management software (such as SnapRAID for additional file integrity/recovery features). There are a few advanced features of NTFS that it doesn't perfectly emulate and that some programs complain about (e.g. Microsoft's OneDrive doesn't like to be installed on a pool), but otherwise IMO it's pretty great.
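      If you ever do need to poke at the poolparts directly, here's a hypothetical little sketch (the drive letters and file name are placeholders, not anything DrivePool itself provides) that illustrates the layout above by listing every poolpart instance of a pool file:
```python
from pathlib import Path

def find_copies(filename, drive_letters="ABCDEF"):
    """List every poolpart instance of a pool file across the given drives."""
    hits = []
    for letter in drive_letters:
        root = Path(f"{letter}:/")
        if not root.exists():
            continue
        # Each pool member volume has one hidden PoolPart.<guid> folder at its root.
        for poolpart in root.glob("PoolPart.*"):
            candidate = poolpart / filename
            if candidate.exists():
                hits.append(candidate)
    return hits

print(*find_copies("example.txt"), sep="\n")   # with 3x duplication: three paths
```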
  18. I think it would be a software limitation? As I understand it, DrivePool's design philosophy is "be as lean and simple as possible to minimise use of CPU/RAM/IO". So it won't try to do fancier things like that, because that would mean fewer resources available for other programs (e.g. it doesn't want to cause lag for other apps that are accessing files from the drives). I'd have said it was DrivePool trying to avoid its balancing interfering with normal pool access, but the fact that it was initially managing to balance at 50-80MB/s for the same sorts of files, and nothing else has changed, has me scratching my head. There's a bottleneck somewhere, yeah. Given 2x14TB is going to take about three weeks at 10-20MB/s, I think it might be worth opening a support ticket. P.S. Isn't balancing normally turned off (except maybe Scanner evacuation and Prevent Drive Overfill) if you're using DrivePool+SnapRAID, to avoid big diff/sync durations?
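      For reference, the rough arithmetic behind that three-week estimate (and how much quicker the earlier 50-80MB/s rate would have been) - the rates are the ones mentioned above, sizes in decimal units:
```python
# Time to empty/rebalance 2x14TB at various sustained rates.
total_bytes = 2 * 14e12
for mb_per_s in (10, 20, 50, 80):
    days = total_bytes / (mb_per_s * 1e6) / 86_400   # 86,400 seconds per day
    print(f"{mb_per_s:>2} MB/s -> {days:4.1f} days")
# 10-20 MB/s works out to roughly 16-32 days; 50-80 MB/s would be under a week.
```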
  19. Glad I could... inadvertently... help? Regarding #3, you don't need to turn it off but feel free since you're not using it.
  20. Hmm. Your settings look ok. What version of DrivePool do you have installed? What version of All In One? Do you have real-time duplication turned off (Manage Pool -> Performance)? Have you tried a Manage Pool -> Re-measure... since you manually moved files? Have you tried running dpcmd refresh-all-poolparts from an administrator command prompt? Have you tried turning off All In One, waiting for DrivePool to recalculate its balancing, then turning All In One back on? If that didn't help, have you tried uninstalling then reinstalling the All In One plug-in?
  21. It looks like bunny.net claims to support FTP and SFTP (and plans to support S3) which are also supported by CloudDrive? https://support.bunny.net/hc/en-us/articles/115003780169-How-to-upload-files-to-your-Bunny-Storage-zone
  22. @sophvvvv The disk transfer rate graph in the performance screenshot for S2D1 looks somewhat like what I'd expect from moving many small files; could you please test copying a folder containing a large number of small files (e.g. photos are a good choice) from a good drive to the root folder of S2D1 (i.e. to the drive but outside the pool) and observe whether the performance is similar? The screenshot shows the drive with 7.46 TB on it has a capacity of 7.72 TB.
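      If you don't have a convenient folder of photos on hand, a throwaway script like this will generate a suitable test set on a good drive (the path, file count and size here are arbitrary placeholders):
```python
import os

target = r"C:\Temp\smallfile-test"    # somewhere on a known-good drive
os.makedirs(target, exist_ok=True)
for i in range(10_000):
    with open(os.path.join(target, f"file_{i:05d}.bin"), "wb") as f:
        f.write(os.urandom(64 * 1024))    # 10,000 files of 64 KiB each
```
      Then copy that folder to the root of S2D1 and compare the transfer rate graph.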
  23. SyncThing needs to be installed on both; each instance scans its own content and compares notes with the other(s) to detect changes. This is different from FreeFileSync, which goes on one machine and scans both its own content and the remote content to detect differences. The former is better on slower networks, on busier networks or with large numbers of files (the issues compound each other), as it involves much less back-and-forth of network traffic, but FreeFileSync can compare a surprisingly large number of files on a fast network (e.g. about fifty thousand files per minute when my 1Gbps LAN is idle) and I feel its GUI is rather more user-friendly. Whichever option you go for, I'd suggest creating a test pool to trial it before committing your real pool - you could even make two test pools and try both.
  24. I meant the network share option (if you just wanted all devices in both buildings to be able to access one pool via the 1GB LAN you mentioned) as an alternative to the SyncThing option (where you'd have two pools sync'd to each other, one in each building). I figured there was a reason you weren't doing that. But yeah, you could use a network share for syncing too - e.g. with the real-time syncing option of FreeFileSync - in which case you might only need to have the syncing software on one of the machines. There'd be pros and cons to each option.
  25. - What's the best way to initially duplicate the pool? Set up a new Pool and then just copy in File Manager?
      I'd go with that, yes.
      - How should I set up the remote sync - would I need Cloud Drive for that?
      Cloud Drive isn't designed for multiple simultaneous clients. You'd need something else, e.g. SyncThing (also check out its GUI wrapper, SyncTrayzor).
      - Would I also need DrivePool on the new computer (if I'm using Cloud Drive)?
      If you intend to access the drives as a pool at the second site you'd want an instance of DrivePool there too.
      - Is there a deal on the bundle (DP, Scanner, CD) for existing customers?
      As I understand it, if you're an existing customer you get a discount on each product by entering your existing activation ID, and the discount is bigger if your existing activation ID covers the product you're buying; just having activation IDs for DP and Scanner without one for CloudDrive is enough to match the bundle price offered to new customers.
      TLDR: for what you're describing I'd use two installs of DrivePool (one for each machine) plus SyncThing/SyncTrayzor to mirror in near-real-time (I'm presuming there's a reason you're not just opening a network share to the pool over the LAN).
      EDIT: make sure to exclude system folders (e.g. System Volume Information) from the sync between the two pools, and make sure to sync at the pool level - not the poolpart level. If you're using SyncThing/SyncTrayzor solely across a LAN then I'd suggest disabling relaying and global discovery, and/or using direct addresses, for additional privacy/security/efficiency.