
Shane

Moderators

  • Posts: 812
  • Joined
  • Last visited
  • Days Won: 74

Shane last won the day on July 21

Shane had the most liked content!

About Shane

Profile Information

  • Gender
    Not Telling
  • Location
    Australia
  • Interests
    Eclectic.

Recent Profile Visitors

4057 profile views

Shane's Achievements

  1. Try the following in a command prompt run as administrator:

        dpcmd refresh-all-poolparts
  2. "For duplication, is it like raid 1 or does it work more like mirrored where it makes an exact copy of each file?" DrivePool's duplication is basically mirroring but file-based rather than block-based; if you set (or change) the duplication on a pool or folder to N copies then DrivePool will try to maintain N identical copies of the pool's or folder's files across the drives in the pool, one copy per drive. E.g. if you have a pool of five drives and you set pool duplication to x3, then DrivePool will keep one copy of each file on each of any three drives of the five drives in the pool. You have the option of real-time or nightly duplication (the latter will also run as a safeguard even if you've selected the former). "My only thing about not duplicating is I heard the scanner would move files off the dying drive to the other drives if it has space, is this true? Would it be good to go to that and the cloud as well to make sure nothing happens? Like get it to move automatically my data if a drive is dying or get notifications if they are. Will the move on their own the data on the bad drive, or would I need to initiate it?" StableBit DrivePool does have an option to evacuate drives if StableBit Scanner flags them as bad (e.g. via SMART warning); if it's enabled it happens automatically (unless it runs out of space, yes). Note that if a drive fails abruptly then it won't have time to evacuate the drive, which is where duplication is useful. DrivePool and Scanner can provide notifications about failed and failing drives respectively (including via email if configured). If a drive in a pool does fail, the pool will become read-only until the drive is manually replaced (or removed). In a business environment, good practice for backups (IMO) is usually considered to be at least three copies of your files (the working copy and two backups) in at least three locations (your work machine, your onsite backup machine and another backup that's secured somewhere else). YMMV.
  3. Hi Darkness2k. As the game is in open beta, I'd suggest reporting the crashing to its developers (assuming you haven't already); they can then test it against DrivePool directly. If that doesn't make progress, you can also open a support ticket with StableBit.
  4. Hi xelu01. DrivePool is great for pooling a bunch of mixed-size drives; I wouldn't use Windows Storage Spaces for that (not that I'm a fan of WSS in general; I have not had good experiences with it).

     As for duplication, it's a matter of how comfortable/secure you feel. One thing to keep in mind with backups is that if you only have one backup, then if/when your primary or your backup is lost you don't have any backups until you get things going again. TrueNAS's RAIDZ1 means your primary can survive one disk failure, but backups are also meant to cover primary deletions (accidental or otherwise), so I'd be inclined to have duplication turned on, at least for anything I was truly worried about (you can also set duplication on/off for individual folders). YMMV.

     Regarding the backups themselves, if you're planning to back up your NAS as one big file, note that DrivePool can't create a file larger than the largest free space of its individual volumes (or, in the case of x2 duplication, the two largest free spaces), since unlike RAIDZ1 the data is not striped (q.v. this thread on using raid vs pool vs both). E.g. if you had a pool with 20TB free in total but the largest free space of any given volume in it was 5TB, you couldn't copy a file larger than 5TB to the pool. The sketch below shows a quick way to check this.
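     For a quick check of that cap, here's a minimal PowerShell sketch (the drive letters are hypothetical placeholders for your pool's member drives, not anything DrivePool exposes):

        # Hypothetical letters for the pool's member drives; substitute your own.
        $poolDrives = 'D','E','F','G'
        $free = foreach ($d in $poolDrives) { (Get-PSDrive $d).Free }
        # Without striping, the largest file the pool can hold is bounded by the
        # largest single free space, not by the pool's total free space.
        $total   = ($free | Measure-Object -Sum).Sum
        $largest = ($free | Measure-Object -Maximum).Maximum
        "Total free: {0:N0} bytes; largest single file possible: ~{1:N0} bytes" -f $total, $largest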
  5. In the file "C:\ProgramData\StableBit DrivePool\Service\Settings.json", use an "Override" value greater than -2 for "DrivePool_BackgroundTasksPriority". The maximum possible is 15, but I'd recommend against jumping straight to that; increment gradually instead (e.g. start with 0 or 2) and check whether it causes any problems for other processes.
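     If you'd rather script the change than hand-edit the file, a minimal PowerShell sketch; it assumes the setting is stored as an object with "Default" and "Override" fields, and that the StableBit DrivePool service needs a restart to pick up the change:

        # Assumption: each entry in Settings.json has Default/Override fields.
        $path = 'C:\ProgramData\StableBit DrivePool\Service\Settings.json'
        $json = Get-Content $path -Raw | ConvertFrom-Json
        $json.DrivePool_BackgroundTasksPriority.Override = 0   # start low; 15 is the maximum
        $json | ConvertTo-Json -Depth 8 | Set-Content $path
        # Restart the service (run elevated; name matched by wildcard).
        Get-Service *DrivePool* | Restart-Service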
  6. Preventing: Do regular (monthly? YMMV) full reads of your drives, so that their own firmware can find/refresh iffy blocks, if you don't already have something doing that for you (e.g. scheduled RAID scrubbing, StableBit Scanner or the like); a crude sketch follows at the end of this post. Don't use SSDs as long-term offline storage; it might work, but they're meant to be powered on at least semi-regularly to keep the data charge fresh (at least until someone invents a better SSD). Use ECC RAM (and a mainboard+CPU that actually supports it, which can be a pain). Don't overclock.

     Circumventing: Use some form of parity-based checking, e.g. a RAID 5 disk array, a file system that does parity (e.g. ZFS on Linux), or utility software such as SnapRAID, MultiPar, etc. Keep backups!

     TLDR: best (IMO) is ECC RAM on a mainboard+CPU that supports it, RAID 5 arrays with a monthly scheduled scrub, and backups. Since I'm firmly not in the "money no object" category, however, I mostly just rely on duplication (x3 for critical docs), backups, Scanner and MultiPar (I keep meaning to use SnapRAID...).

     "If I go Drivepool route, I need to pull everything out of RAIDS first?"

     No. DrivePool can add a hardware RAID array to the pool as if it were a single disk (because that's how arrays present themselves to the OS), so you don't need to pull everything out of an existing array if you're happy to just add the array to the pool.
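     On the "regular full reads" point, a crude PowerShell sketch (my own stopgap idea, not a StableBit tool; unlike Scanner or a RAID scrub it only reads space occupied by files, and 'X:' is a hypothetical drive letter):

        # Stream every file on the drive so its firmware gets a chance to
        # detect and remap weak sectors. Skips free space, unlike a real scrub.
        $buffer = New-Object byte[] (4MB)
        Get-ChildItem 'X:\' -Recurse -File -ErrorAction SilentlyContinue | ForEach-Object {
            try {
                $fs = [System.IO.File]::OpenRead($_.FullName)
                while ($fs.Read($buffer, 0, $buffer.Length) -gt 0) { }
                $fs.Close()
            } catch { Write-Warning "Could not read $($_.FullName)" }
        }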
  7. The big advantages of hardware RAID (2+) are parity-based duplication and healing (remember to use scheduled scrubbing!) and complete transparency to your file system; RAID 5 in particular can give great performance (especially for reads). The big disadvantage of hardware RAID (2+) is that if more drives die than you've got parity for, you lose the entire array instead of just the failed drives.

     The big advantages of DrivePool are that you don't have to care about keeping all your drives the same size and type, you can remove drives as you please without having to rebuild the whole bloody array from scratch, adding a drive doesn't require (background) parity recomputation, you can freely mix SSD and HDD and use the former as cache, and you can even add non-local drives via CloudDrive. The big disadvantage of DrivePool is that if any bitrot happens it has no self-healing capability. Late edit/add: one other disadvantage of DrivePool is that it can't create a file larger than the free space of its largest drive (since it doesn't use striping), which is something to keep in mind if you plan to work with extremely large files.

     So if money was no object and I had a big honking bay of drives, all else being equal I'd build multiple small sets of RAID 5 (three to five disks to an array) and pool them with DrivePool (duplication x2) to get the best of both worlds while minimising the downsides of each; see the worked example below. One drive dies? No problem, the pool's still writable, thanks RAID 5! An array dies? No problem, the pool's still readable, thanks DrivePool! File bitrot happens? No problem, the file's healable, thanks RAID 5!
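     To put rough numbers on that layout, a hypothetical worked example (the figures are mine, purely for illustration):

        # Hypothetical: 12 x 10TB drives as four 3-disk RAID 5 arrays,
        # pooled with DrivePool duplication x2.
        $driveTB = 10; $perArray = 3; $arrays = 4
        $arrayUsable = ($perArray - 1) * $driveTB   # RAID 5 spends one drive per array on parity
        $poolRaw = $arrays * $arrayUsable           # 80 TB presented to the pool
        $poolNet = $poolRaw / 2                     # x2 duplication halves net capacity
        "{0} TB net; survives one disk per array, or one whole array" -f $poolNet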
  8. That's ok to do; it won't cause any harm.
  9. >>It really seems like the issue is drivepool not running "properly". It's running enough to mount the pool drives and I can access them, but not enough to do the balancing or maintenance tasks or run the GUI.

     Yes, and I'm not sure how to proceed here. I'm also wondering why M and K lost their drive letters; as I understand it, those should be via CloudDrive, not DrivePool. Perhaps try a repair/reinstall of Microsoft .NET 4.8, just in case that's an issue, then try reinstalling DrivePool again (the latest release version is 2.3.8.1600, btw)? But given that Explorer is now hanging on This PC, you might need to open a support ticket with StableBit to get their help with this.
  10. In the GUI, Cog icon -> Troubleshooting -> Service log... lets you watch DrivePool log warnings and errors as they occur; it can provide more information (hopefully including which directory is involved). Run Troubleshooting -> Recheck duplication afterwards and see what the log stream shows? I would also try the ACL fix I linked to on the pool, if you haven't already. If the culprit still can't be found/fixed, you could open a support ticket with StableBit.
  11. I can't say with absolute certainty that it couldn't have, but on the other hand all we've done outside the GUI so far (in terms of actual changes) is use dpcmd to ignore those poolparts, unless you manually copied anything between the poolparts (which shouldn't cause DrivePool's GUI to hang anyway, AFAIK).

     Your settings are the *.json files in your C:\ProgramData\StableBit DrivePool\Service\ and C:\ProgramData\StableBit DrivePool\Service\Store\Json\ folders, in case you wish to back those up (see the sketch at the end of this post).

     DrivePool does a nightly duplication consistency check independent of balancing; I believe it will also automatically attempt to correct any duplication issues found rather than performing that as part of balancing.

     How far along is CloudDrive in uploading F's cache to the cloud? Did you try TreeComp (or similar) to look for differences between Q and L? Regarding the .NET processing, I'm not sure. I would be inclined to wait until F has finished uploading, and to check whether your content in Q and L match, before proceeding with anything that might break DrivePool (further).
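     If it helps, a minimal PowerShell sketch for backing those settings up (the destination folder is a hypothetical example):

        # Back up DrivePool's settings files; destination path is hypothetical.
        $dest = 'D:\Backups\DrivePool'
        New-Item -ItemType Directory -Path "$dest\Store\Json" -Force | Out-Null
        Copy-Item 'C:\ProgramData\StableBit DrivePool\Service\*.json' $dest
        Copy-Item 'C:\ProgramData\StableBit DrivePool\Service\Store\Json\*.json' "$dest\Store\Json"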
  12. It would mean 12 bytes. You could try opening the service log (Cog icon -> Troubleshooting) and then going into the balancing settings again to see if the problem folder is indicated, or check the saved service logs in "C:\ProgramData\StableBit DrivePool\Service\Logs\Service\". I would suggest trying the fixes in the following thread: https://community.covecube.com/index.php?/topic/5810-ntfs-permissions-and-drivepool/
  13. Hi karumba, latest and previous versions can be downloaded here: https://covecube.download/CloudDriveWindows/release/download/
  14. "detach/reattach allowed for a change in cache location. It sounds like that's the path forward. Is it as simple as it sounds?" Pretty much. When re-attaching a drive (or creating a new one) expand the "Advanced Settings" triangle icon and you should be able to choose the drive it uses for local cache (note: I'd never pick the boot drive if at all feasible, despite the manual's screenshot; I'd pick some other physical drive). "Regarding destroying the M & K cloud drives via the GUI - I suspect eventually, once I confirm everything is squared away, that I will have to do this right? I've noticed that even though we did the ignore, they're still both constantly and hopelessly attempting to upload files to Onedrive." Correct; the ignore just disconnected them from the pool, it doesn't affect their own nature as cloud drives.
  15. "I'll answer my own question about the danger - which would be if anything from P: was written to L: without being duplicated to Q, after the Onedrives became quota-restricted, then the only place the files may be would be the cache drive. Correct?" Correct. In theory if you're using real-time x2 duplication for everything in P, then Q should be up to date and the disconnection of M and K from L should allow P to automatically backfill L from Q (as and when it has room). In practice you could verify this by manually comparing the contents of the nested poolparts of P that are in M and K with the one in Q and copy any missing files (excluding anything in any folder named ".covefs", "$RECYCLE.BIN" and "System Volume Information") to where they should be in the latter. Fiddly but possible. Specifically, you would compare the content of "M:\PoolPart.a5d0f5ec-ca52-48e9-a975-af691ace6a16\PoolPart.2fa37324-d52b-431d-8eaa-3d7175d11cd4\" with that of "Q:\PoolPart.2fa37324-d52b-431d-8eaa-3d7175d11cd4\", and the content of "K:\PoolPart.4d2b0ebb-14f2-4d9d-a192-641de224b2cb\PoolPart.2fa37324-d52b-431d-8eaa-3d7175d11cd4\" with that of "Q:\PoolPart.2fa37324-d52b-431d-8eaa-3d7175d11cd4\", to see if there's anything in M or K that is not in Q (there will, of course, be content in Q that is not in M or K, because Q should be at least the sum of M & K & L). Reminder, you must exclude the ".covefs", "$RECYCLE.BIN" and "System Volume Information" folders from the comparisons. If you don't already have a useful GUI tool for comparing folder trees, I can suggest TreeComp. "And in that case, can I manually move the cache files for M: & K: to another drive to give F: some headroom?" Unfortunately I don't know of a way to manually move a cloud drive's local cache files (I imagine/hope it's theoretically doable but I've never had to figure out how), and the GUI requires the upload cache to be empty (which we can't do for M and K being over-quota) to complete a detach before it can be safely re-attached to a different local drive. The alternatives would be detaching F (since it's the drive that's writable) instead to re-attach its cache to a different drive (one with plenty of room) or destroying the M and K cloud drives via the GUI (which I imagine you won't want to do if you're worried there's any unique files sitting in their caches).