Everything posted by Shane

  1. Try the following in a command prompt run as administrator? dpcmd refresh-all-poolparts
  2. "For duplication, is it like raid 1 or does it work more like mirrored where it makes an exact copy of each file?" DrivePool's duplication is basically mirroring but file-based rather than block-based; if you set (or change) the duplication on a pool or folder to N copies then DrivePool will try to maintain N identical copies of the pool's or folder's files across the drives in the pool, one copy per drive. E.g. if you have a pool of five drives and you set pool duplication to x3, then DrivePool will keep one copy of each file on each of any three drives of the five drives in the pool. You have the option of real-time or nightly duplication (the latter will also run as a safeguard even if you've selected the former). "My only thing about not duplicating is I heard the scanner would move files off the dying drive to the other drives if it has space, is this true? Would it be good to go to that and the cloud as well to make sure nothing happens? Like get it to move automatically my data if a drive is dying or get notifications if they are. Will the move on their own the data on the bad drive, or would I need to initiate it?" StableBit DrivePool does have an option to evacuate drives if StableBit Scanner flags them as bad (e.g. via SMART warning); if it's enabled it happens automatically (unless it runs out of space, yes). Note that if a drive fails abruptly then it won't have time to evacuate the drive, which is where duplication is useful. DrivePool and Scanner can provide notifications about failed and failing drives respectively (including via email if configured). If a drive in a pool does fail, the pool will become read-only until the drive is manually replaced (or removed). In a business environment, good practice for backups (IMO) is usually considered to be at least three copies of your files (the working copy and two backups) in at least three locations (your work machine, your onsite backup machine and another backup that's secured somewhere else). YMMV.
  3. Hi Darkness2k. As the game is in open beta I'd suggest reporting the crashing to its developers (assuming you haven't already); they can then test it against DrivePool directly. If that doesn't progress, you can also open a support ticket with StableBit.
  4. Hi xelu01. DrivePool is great for pooling a bunch of mixed-size drives; I wouldn't use Windows Storage Spaces for that (not that I'm a fan of Storage Spaces in general; I have not had good experiences).
     As for duplication, it's a matter of how comfortable/secure you feel. One thing to keep in mind with backups is that if you only have one backup, then if/when your primary or your backup is lost you don't have any backups until you get things going again. TrueNAS's RAIDZ1 means your primary can survive one disk failure, but backups are also meant to cover for primary deletions (accidental or otherwise), so I'd be inclined to have duplication turned on, at least for anything I was truly worried about (you can also set duplication on/off for individual folders). YMMV.
     Regarding the backups themselves, if you're planning to back up your NAS as one big file then do note that DrivePool can't create a file that's larger than the largest free space of its individual volumes (or in the case of duplication, the two largest free spaces) since, unlike RAIDZ1, the data is not striped (q.v. this thread on using raid vs pool vs both). E.g. if you had a pool with 20TB free in total but the largest free space of any given volume in it was 5TB, then you couldn't copy a file larger than 5TB to the pool.
  5. In the file "C:\ProgramData\StableBit DrivePool\Service\Settings.json" use an "Override" value greater than -2 for "DrivePool_BackgroundTasksPriority". The maximum possible is 15, but I'd recommend against jumping straight to that; increment gradually (e.g. start with 0 or 2) and check whether it causes any problems for other processes.
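     For reference, a minimal sketch of what that entry might look like inside Settings.json, assuming the file uses the usual per-setting Default/Override layout (edit only the Override value; the StableBit DrivePool service may need a restart for the change to take effect):
     "DrivePool_BackgroundTasksPriority": {
         "Default": -2,
         "Override": 0
     }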
  6. Preventing: Do regular (monthly? YMMV) full reads of your drives (so that their own firmware can find/refresh iffy blocks) if you don't already have something doing that for you (e.g. scheduled RAID scrubbing, StableBit Scanner or the like). Don't use SSDs as long-term offline storage; it might work, but they're meant to be powered on at least semi-regularly to keep the data charge fresh (at least until someone invents a better SSD). Use ECC RAM (and a mainboard+CPU that actually supports it, which can be a pain). Don't overclock.
     Circumventing: Use some form of parity-based checking, e.g. a RAID 5 disk array, a file system that does parity (e.g. ZFS on Linux), or utility software such as SnapRAID or MultiPar etc. Keep backups!
     TLDR: best (IMO) is to use ECC RAM on a mainboard+CPU that supports it, use RAID 5 arrays with a monthly scheduled scrub, and have backups. Since I'm firmly not in the "money no object" category, however, I mostly just rely on duplication (3x for critical docs), backups, Scanner and MultiPar (I keep meaning to use SnapRAID...).
     "If I go Drivepool route, I need to pull everything out of RAIDS first?"
     No. DrivePool can add a hardware RAID array to the pool as if it were a single disk (because that's how arrays present themselves to the OS), so you don't need to pull everything out of an existing array if you're happy to just add the array to the pool.
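     If you do end up trying SnapRAID alongside a pool, a minimal snapraid.conf sketch might look like the following (the drive letters and paths here are hypothetical; point the data entries at the pooled drives, keep the parity file on a drive that is NOT in the pool, then run "snapraid sync" after changes and a periodic "snapraid scrub"):
     # where the parity data is stored (a drive not in the pool)
     parity Z:\snapraid.parity
     # content files (the index of what's protected); keep more than one copy
     content C:\snapraid\snapraid.content
     content D:\snapraid.content
     # the drives holding your data
     data d1 D:\
     data d2 E:\
     # standard exclusions
     exclude \$RECYCLE.BIN
     exclude \System Volume Information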
  7. The big advantage of hardware RAID (2+) is parity-based duplication and healing (remember to use scheduled scrubbing!), plus complete transparency to your file system, and RAID 5 in particular can give great performance (especially for reads). The big disadvantage of hardware RAID (2+) is that if more drives die than you've got parity for, you lose the entire array instead of just the failed drives.
     The big advantage of DrivePool is that you don't have to care about keeping all your drives the same size and type, you can remove drives as you please without having to rebuild the whole bloody array from scratch, adding a drive doesn't require (background) parity recomputation, you can freely mix SSD and HDD and use the former as cache, and you can even add non-local drives via CloudDrive. The big disadvantage of DrivePool is that if any bitrot happens it has no self-healing capability. Late edit/add: one other disadvantage of DrivePool is that it can't create a file larger than the free space of its largest drive (since it doesn't use striping), which is something to keep in mind if you plan to work with extremely large files.
     So if money was no object and I had a big honking bay of drives, all else being equal I'd build multiple small RAID 5 sets (three to five disks to an array) and pool them with DrivePool (duplication x2) to get the best of both worlds while minimising the downsides of each. One drive dies? No problem, pool's still writable, thanks RAID 5! An array dies? No problem, pool's still readable, thanks DrivePool! File bitrot happens? No problem, file's healable, thanks RAID 5!
  8. That's ok to do, won't cause any harm.
  9. "It really seems like the issue is drivepool not running "properly". It's running enough to mount the pool drives and I can access them, but not enough to do the balancing or maintenance tasks or run the GUI."
     Yes, and I'm not sure how to proceed here. I'm also wondering why M and K lost their drive letters; as I understand it those should be via CloudDrive, not DrivePool. Perhaps try a repair/reinstall of Microsoft .NET 4.8, just in case that's an issue, then try reinstalling DrivePool again (the latest release version is 2.3.8.1600, btw)? But given Explorer is now hanging on This PC, you might need to open a support ticket with StableBit to get their help with this.
  10. In the GUI, if you use the Cog icon -> Troubleshooting -> Service log... you can see DrivePool log warnings and errors as they occur, which can provide more information (hopefully including which directory is involved); run Troubleshooting -> Recheck duplication afterwards and see what the log stream shows? I would also try the ACL fix I linked on the pool, if you haven't already. If the culprit still can't be found/fixed you could open a support ticket with StableBit.
  11. I can't say with absolute certainty that it couldn't have, but on the other hand all we've done outside the GUI (in terms of actual changes) so far is use dpcmd to ignore those poolparts, unless you manually copied anything between the poolparts yourself (which shouldn't cause DrivePool's GUI to hang anyway, AFAIK). Your settings are the *.json files in your C:\ProgramData\StableBit DrivePool\Service\ and C:\ProgramData\StableBit DrivePool\Service\Store\Json\ folders, in case you wish to back those up. DrivePool does a nightly duplication consistency check independent of balancing; I believe it will also automatically attempt to correct any duplication issues it finds rather than leaving that to be done as part of balancing. How far along is CloudDrive in uploading F's cache to the cloud? Did you try TreeComp (or similar) to look for differences between Q and L? Regarding the .NET processing, I'm not sure. I would be inclined to wait until F has finished uploading and check whether your content in Q and L matches before proceeding with anything that might break DrivePool (further).
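     If you want to back those settings up from the command line, a quick sketch using robocopy (the destination folder here is hypothetical; adjust to taste):
     robocopy "C:\ProgramData\StableBit DrivePool\Service" "D:\Backups\DrivePool\Service" *.json
     robocopy "C:\ProgramData\StableBit DrivePool\Service\Store\Json" "D:\Backups\DrivePool\Service\Store\Json" *.json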
  12. It would mean 12 bytes. You could try opening the service log (Cog icon -> Troubleshooting) and then going back into the balancing settings to see if the problem folder is indicated, or check the saved service logs in "C:\ProgramData\StableBit DrivePool\Service\Logs\Service\". I would suggest trying the fixes in the following thread: https://community.covecube.com/index.php?/topic/5810-ntfs-permissions-and-drivepool/
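     If you'd rather search the saved logs than watch the live stream, something like the following in an admin command prompt may help (this assumes the saved logs are plain text; adjust the path/filter as needed):
     findstr /s /i /c:"error" /c:"warning" "C:\ProgramData\StableBit DrivePool\Service\Logs\Service\*"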
  13. Hi karumba, latest and previous versions can be downloaded here: https://covecube.download/CloudDriveWindows/release/download/
  14. "detach/reattach allowed for a change in cache location. It sounds like that's the path forward. Is it as simple as it sounds?" Pretty much. When re-attaching a drive (or creating a new one) expand the "Advanced Settings" triangle icon and you should be able to choose the drive it uses for local cache (note: I'd never pick the boot drive if at all feasible, despite the manual's screenshot; I'd pick some other physical drive). "Regarding destroying the M & K cloud drives via the GUI - I suspect eventually, once I confirm everything is squared away, that I will have to do this right? I've noticed that even though we did the ignore, they're still both constantly and hopelessly attempting to upload files to Onedrive." Correct; the ignore just disconnected them from the pool, it doesn't affect their own nature as cloud drives.
  15. "I'll answer my own question about the danger - which would be if anything from P: was written to L: without being duplicated to Q, after the Onedrives became quota-restricted, then the only place the files may be would be the cache drive. Correct?" Correct. In theory if you're using real-time x2 duplication for everything in P, then Q should be up to date and the disconnection of M and K from L should allow P to automatically backfill L from Q (as and when it has room). In practice you could verify this by manually comparing the contents of the nested poolparts of P that are in M and K with the one in Q and copy any missing files (excluding anything in any folder named ".covefs", "$RECYCLE.BIN" and "System Volume Information") to where they should be in the latter. Fiddly but possible. Specifically, you would compare the content of "M:\PoolPart.a5d0f5ec-ca52-48e9-a975-af691ace6a16\PoolPart.2fa37324-d52b-431d-8eaa-3d7175d11cd4\" with that of "Q:\PoolPart.2fa37324-d52b-431d-8eaa-3d7175d11cd4\", and the content of "K:\PoolPart.4d2b0ebb-14f2-4d9d-a192-641de224b2cb\PoolPart.2fa37324-d52b-431d-8eaa-3d7175d11cd4\" with that of "Q:\PoolPart.2fa37324-d52b-431d-8eaa-3d7175d11cd4\", to see if there's anything in M or K that is not in Q (there will, of course, be content in Q that is not in M or K, because Q should be at least the sum of M & K & L). Reminder, you must exclude the ".covefs", "$RECYCLE.BIN" and "System Volume Information" folders from the comparisons. If you don't already have a useful GUI tool for comparing folder trees, I can suggest TreeComp. "And in that case, can I manually move the cache files for M: & K: to another drive to give F: some headroom?" Unfortunately I don't know of a way to manually move a cloud drive's local cache files (I imagine/hope it's theoretically doable but I've never had to figure out how), and the GUI requires the upload cache to be empty (which we can't do for M and K being over-quota) to complete a detach before it can be safely re-attached to a different local drive. The alternatives would be detaching F (since it's the drive that's writable) instead to re-attach its cache to a different drive (one with plenty of room) or destroying the M and K cloud drives via the GUI (which I imagine you won't want to do if you're worried there's any unique files sitting in their caches).
  16. P being duplicated across L and Q is great, as presuming P's duplication is up to date and Q has no issues of its own you should be able to proceed with:
     dpcmd ignore-poolpart L:\ PoolPart.a5d0f5ec-ca52-48e9-a975-af691ace6a16
     dpcmd ignore-poolpart L:\ PoolPart.4d2b0ebb-14f2-4d9d-a192-641de224b2cb
     (So what you had, but with a space between the L:\ and the PoolPart.)
     Step 2 is then unchanged: removing the "missing" drives from pool L, which should make L usable again.
     And for step 3, if all your content is in P with nothing other than the nested poolpart (and any system folders) in L, M and K, then you should just be able to select pool P and Cog icon -> Troubleshooting -> Recheck Duplication, and it should proceed to fill back in L (and thus F) from Q rather than you having to manually copy your content from M and K's poolparts (which can get a little fiddly with nesting).
  17. Ah, okay, L is itself being used as a drive for another pool. Going to do some testing to see whether I can avoid kicking the can down the road, so to speak, or need to rewrite my 1-2-3. Question: I understand the OneDrive accounts are over-quota and thus in a read-only state, but are the M and K drives still technically writable (in the sense that they can still accept files into whatever room remains in their upload caches) or are they (and thus the pool) also read-only?
  18. Hi Jabby, given covefs.sys is the BSOD suspect I'd ask if you could open a support ticket with StableBit so they can troubleshoot the bug with you?
  19. Could you please post the output of the following admin prompt commands?
     dpcmd list-poolparts L:\
     dir /ah /b /s L:\poolpart.*
     dir /ah /b /s M:\poolpart.*
     dir /ah /b /s K:\poolpart.*
     dir /ah /b /s F:\poolpart.*
     And a screenshot of your DrivePool GUI with the L:\ pool selected, showing its Pooled Disks?
  20. Using the Ordered File Placement balancer, to require that each removed drive's content goes only to the new drive, might work, but I'm not sure in this case, so I think I'd suggest skipping to the bigger hammer in the toolbox:
     1. Open a command prompt run as administrator; use the "dpcmd ignore-poolpart" command to disconnect each of the old poolparts from the pool (from the pool's perspective it's like you've just manually unplugged a physical drive). It also marks them with an ignore tag so they can't return to the pool unless the "dpcmd unignore-poolpart" command is used on them, but that's moot since we'll be removing them.
     usage: dpcmd ignore-poolpart p:\ foldername
     where p:\ is the pool drive root and foldername is the hidden poolpart folder on/representing the drive you want to disconnect from the pool.
     2. Use the Remove option in the GUI to remove the "missing" disks from the pool so that the pool is writable.
     3. Manually copy the content of the ignored poolparts on the old cloud drives into the pool (and thus to the new cloud drive).
     (Note for future readers: if you have nested pools or duplication, it's a little different, see below.)
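     For example, a hypothetical run (the pool letter and PoolPart folder names below are placeholders; use your own, which you can find as the hidden PoolPart.* folders on each pooled drive or via "dpcmd list-poolparts"):
     dpcmd ignore-poolpart P:\ PoolPart.aaaaaaaa-1111-2222-3333-444444444444
     dpcmd ignore-poolpart P:\ PoolPart.bbbbbbbb-5555-6666-7777-888888888888
     And should you ever need to reverse it for a given poolpart:
     dpcmd unignore-poolpart P:\ PoolPart.aaaaaaaa-1111-2222-3333-444444444444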
  21. Re the undeletable folder, possibly an ACL corruption issue; if it happens in the future the second post of this thread may help.
     Re the leftover duplication error, you could try (in order):
     1. Manage Pool -> Re-measure...
     2. Command prompt as Administrator:
     dpcmd refresh-all-poolparts
     dpcmd reset-duplication-cache
     3. Cog icon -> Troubleshooting -> Recheck duplication (to see if the above fixed it)
     4. Command prompt as Administrator:
     dpcmd check-pool-fileparts p:\ >p:\checkpartslog.txt
     (where p:\ is your pool; this can take a very long time for a big pool. Search p:\checkpartslog.txt afterwards for problems; there'll be a summary at the bottom to let you know if the command found anything)
     5. If you run into more problems with undeletable files, or just to see if it helps, refer to the second post of the thread I linked at the top of this post; use the fix on the problem sections of your pool or on all the poolparts.
     6. Cog icon -> Troubleshooting -> Recheck duplication (to see if the above fixed it)
     7. If all of the above fail, consider using Cog icon -> Reset all settings... (carefully read the warning message and decide if you want to go ahead or continue to search for another fix)
  22. DrivePool's duplication can be set at the pool level (all content placed in the pool inherits this) and at the folder level (all content placed in that folder inherits this). Some folks use different levels of duplication for different content (e.g. they might have their pool set to x2 but particularly important folders set to x3); if your whole pool is at x2 and that's the way you want it then you don't have to worry about that.
     The copying was referring to the suggestion of using a unique folder inside the poolparts IF you've already got other stuff in the pool that you want to avoid bumping into; either way your external content would still be moved - seeded - into the pool (whether under that folder or directly under the poolpart folders).
     Thanks. That shouldn't happen, yes. Can you give an exact step-by-step of how to reproduce this?
  23. Try adjusting the settings under Manage Pool -> Performance to see if that makes any difference (but if you're using duplication, Real-time duplication should be ticked)? Try going back to a previous stable release of DrivePool? Whether or not a previous version works, I would recommend opening a support ticket with StableBit.
  24. Going with a bit of an infodump here; there's a TLDR at the end. This should all be accurate as of v2.3.8.1600.
     DrivePool uses ADS to tag folders with their duplication requirements (untagged folders inherit their level from the first tagged folder found heading rootwards, up to and including the poolpart folder) where their level or a subfolder's level differs from their pool's level (and the poolpart folders themselves are tagged if the pool's base level is x2 or higher). This is, as far as I know, the only permanent record of duplication kept by DrivePool (there may be a backup in the hidden metadata or covefs folders in the pool); everything that follows on from that is done in RAM where possible, for speed.
     Duplication consistency checking involves looking at each drive's NTFS records (which Windows tries to cache in RAM) to see if each folder and file has the appropriate number of instances across the poolparts (e.g. if alicefolder declares or inherits a level of x2, then alicefolder should be on at least 2 drives and alicefolder\bobfile should be on only 2 drives), is in the correct poolparts (per any file placement rules that apply) and has matching metadata (i.e. at least size and last-modified timestamp). If everything lines up, it leaves the files alone. If it does not, it will either ensure the correct number of instances (if that's the problem) are on the correct poolparts (if that's the problem) or warn the user (if there is a metadata mismatch). (It doesn't, as far as I'm aware, do any content comparison - I wish it had that as an option - leaving content integrity handling up to the user, e.g. via SnapRAID, RAID1+, MultiPar, etc.) Duplication consistency checking can be manually initiated or performed automatically on a daily schedule.
     This means that DrivePool should not be deleting either of your two sets of seeded files, unless you don't have duplication turned on for the pool [1] or for the folder [2] into which you're seeding your content, because your content will inherit the duplication level of whatever folder it is being moved into.
     [1] e.g. if you are moving content directly into poolpart.string\ rather than into poolpart.string\somefolder\ then your content will inherit the pool's duplication level.
     [2] e.g. if you are moving content into poolpart.string\somefolder\ rather than directly into poolpart.string\ then your content will inherit somefolder's duplication level.
     Note: if you move a folder within a pool, it will keep its custom duplication level only if it has one; folders with inherited duplication will inherit their new parent's duplication. If instead you copy a folder within a pool, the copy will always inherit its new parent's duplication level.
     Testing #0: created a new pool across two drives. Created identical external content on both drives, in a folder named calico.
     Testing #1: pool duplication x1. Opened both poolpart folders directly, seeded both with calico, started a duplication consistency check. DrivePool deleted one instance of calico's files, leaving the other instance untouched (as expected).
     Testing #2: pool duplication x2. Opened both poolpart folders directly, seeded both with calico, started a duplication consistency check. DrivePool left both sets of calico's files untouched (as expected).
     Testing #3: created folder alice at x1 and bob at x2. Opened all poolpart folders, manually created a second alice folder, seeded both alice and bob on both drives with calico, started a duplication consistency check. DrivePool deleted one instance of calico's files in alice (as expected), leaving the other instance untouched (as expected), and did not touch calico's files in bob (as expected).
     It might be possible to confuse DrivePool by manually creating ex nihilo (rather than copying) additional instances of a folder that is tagged with a non-default duplication count and seeding into those? Would have to test further. But you can (and should) avoid that by simply manually copying that folder (from the poolpart in which it exists to any poolparts in which it doesn't that you plan to seed into).
     TLDR: for your scenario:
     1. Create a unique folder in the pool.
     2. Ensure its duplication level is showing x2.
     3. Open the poolparts you plan to seed with your content. If the folder isn't there, copy it so it is (i.e. don't create a "New Folder" and rename it to match; make a direct copy instead).
     4. Set a file placement rule to keep that folder's content only on those two drives and tell DrivePool to respect file placement (if you want that).
     5. Seed that folder's instances in your poolparts with your content.
     6. Remeasure. It should leave them untouched.
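     If you're curious about the ADS tagging mentioned above, "dir /r" in a command prompt lists alternate data streams alongside each entry, so running it inside a poolpart should show which folders carry an extra stream (the path below is a hypothetical placeholder, and the stream name itself is an internal DrivePool detail; treat this purely as a way to see that the tags exist, not something to edit by hand):
     dir /r "D:\PoolPart.aaaaaaaa-1111-2222-3333-444444444444"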
  25. The quoted part is only relevant if you're wanting to seed - for example - a folder on E: named "\Bob" into a pool P: that already has a "\Bob". And nothing stops you from creating a new folder in the pool (e.g. "\fromAlice") and then seeding your folder under that (e.g. resulting in "P:\fromAlice\Bob", which would be separate from the existing "P:\Bob").
     If you're wanting to prevent your files from getting spread across the rest of the drives in the pool, you will first need to ensure that balancing is turned off and that priority is given to the File Placement rules above all else (unless you want an exception for something). Then, after seeding, set the File Placement rules to the effect that those files/folders must be kept only on those two drives (and, if desired, that no other files/folders may be kept on those two drives) and ensure those folders are set to x2 duplication. Then you can turn balancing back on (if you're using it).