Posts posted by Shane
-
>>It really seems like the issue is drivepool not running "properly". It's running enough to mount the pool drives and I can access them, but not enough to do the balancing or maintenance tasks or run the GUI.
Yes, and I'm not sure how to proceed here. I'm also wondering why M and K lost their drive letters; as I understand it those should be handled via CloudDrive, not DrivePool.
Perhaps try a repair/reinstall of Microsoft .NET 4.8, just in case that's an issue, then try reinstalling DrivePool again (the latest release version is 2.3.8.1600 btw)?
But given Explorer is now hanging on This PC you might need to open a support ticket with StableBit to get their help with this.
-
In the GUI, the Cog icon -> Troubleshooting -> Service log... option lets you watch DrivePool log warnings and errors as they occur, which can provide more information (hopefully including which directory is involved); run Troubleshooting -> Recheck duplication afterwards and see what the log stream shows?
I would also try the ACL fix I linked on the pool, if you haven't already.
If the culprit still can't be found/fixed you could open a support ticket with StableBit.
-
I can't say with absolute certainty that it couldn't have, but on the other hand all we've done outside the GUI (in terms of actual changes) so far is use dpcmd to ignore those poolparts, unless you manually copied anything between the poolparts (which should not cause DrivePool's GUI to hang anyway, AFAIK).
Your settings are the *.json files in your C:\ProgramData\StableBit DrivePool\Service\ and C:\ProgramData\StableBit DrivePool\Service\Store\Json\ folders in case you wish to back those up.
DrivePool does a nightly duplication consistency check independent of balancing; I believe it will also automatically attempt to correct any duplication issues it finds then, rather than leaving that to balancing.
How far along is CloudDrive in uploading F's cache to the cloud?
Did you try TreeComp (or similar) to look for differences between Q and L?
Regarding the .NET processing, I'm not sure. I would be inclined to wait until F has finished uploading and you've checked whether your content in Q and L matches before proceeding with anything that might break DrivePool (further).
-
It would mean 12 bytes.
You could try opening the service log (cog icon -> troubleshooting) and then repeat going into the balancing settings to see if the problem folder is indicated, or the saved service logs in "C:\ProgramData\StableBit DrivePool\Service\Logs\Service\".
I would suggest trying the fixes in the following thread: https://community.covecube.com/index.php?/topic/5810-ntfs-permissions-and-drivepool/
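For reference, I don't recall the exact steps in that thread offhand, but the general shape of a manual ACL reset on a problem folder from an admin command prompt is something like the following (the path is a placeholder; test on a small folder first, and note that /reset replaces existing permissions with inherited defaults):
- takeown /f "D:\PoolPart.xxxxxxxx\ProblemFolder" /r /d y
- icacls "D:\PoolPart.xxxxxxxx\ProblemFolder" /reset /t /c
(takeown takes ownership recursively; icacls then resets the permissions on the folder and everything beneath it.)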
-
Hi karumba, latest and previous versions can be downloaded here: https://covecube.download/CloudDriveWindows/release/download/
-
"detach/reattach allowed for a change in cache location. It sounds like that's the path forward. Is it as simple as it sounds?"
Pretty much. When re-attaching a drive (or creating a new one) expand the "Advanced Settings" triangle icon and you should be able to choose the drive it uses for the local cache (note: despite the manual's screenshot, I'd avoid picking the boot drive if at all feasible; I'd pick some other physical drive).
"Regarding destroying the M & K cloud drives via the GUI - I suspect eventually, once I confirm everything is squared away, that I will have to do this right? I've noticed that even though we did the ignore, they're still both constantly and hopelessly attempting to upload files to Onedrive."
Correct; the ignore just disconnected them from the pool - it doesn't affect their own nature as cloud drives.
-
"I'll answer my own question about the danger - which would be if anything from P: was written to L: without being duplicated to Q, after the Onedrives became quota-restricted, then the only place the files may be would be the cache drive. Correct?"
Correct. In theory if you're using real-time x2 duplication for everything in P, then Q should be up to date and the disconnection of M and K from L should allow P to automatically backfill L from Q (as and when it has room). In practice you could verify this by manually comparing the contents of the nested poolparts of P that are in M and K with the one in Q and copy any missing files (excluding anything in any folder named ".covefs", "$RECYCLE.BIN" and "System Volume Information") to where they should be in the latter. Fiddly but possible.
Specifically, you would compare the content of "M:\PoolPart.a5d0f5ec-ca52-48e9-a975-af691ace6a16\PoolPart.2fa37324-d52b-431d-8eaa-3d7175d11cd4\" with that of "Q:\PoolPart.2fa37324-d52b-431d-8eaa-3d7175d11cd4\", and the content of "K:\PoolPart.4d2b0ebb-14f2-4d9d-a192-641de224b2cb\PoolPart.2fa37324-d52b-431d-8eaa-3d7175d11cd4\" with that of "Q:\PoolPart.2fa37324-d52b-431d-8eaa-3d7175d11cd4\", to see if there's anything in M or K that is not in Q (there will, of course, be content in Q that is not in M or K, because Q should be at least the sum of M & K & L). Reminder, you must exclude the ".covefs", "$RECYCLE.BIN" and "System Volume Information" folders from the comparisons. If you don't already have a useful GUI tool for comparing folder trees, I can suggest TreeComp.
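If you'd rather do that comparison from the command line, robocopy's list-only mode can handle it; something like the following (run from an admin command prompt) logs anything present under M's or K's nested poolpart but missing from Q's, without actually copying anything (/L means list only, /XX suppresses reporting of extra files that exist only in Q, /XD excludes the system folders mentioned above; the log file names are just placeholders):
- robocopy "M:\PoolPart.a5d0f5ec-ca52-48e9-a975-af691ace6a16\PoolPart.2fa37324-d52b-431d-8eaa-3d7175d11cd4" "Q:\PoolPart.2fa37324-d52b-431d-8eaa-3d7175d11cd4" /E /L /XX /FP /NS /NDL /XD ".covefs" "$RECYCLE.BIN" "System Volume Information" /LOG:"C:\compare-M-vs-Q.txt"
- robocopy "K:\PoolPart.4d2b0ebb-14f2-4d9d-a192-641de224b2cb\PoolPart.2fa37324-d52b-431d-8eaa-3d7175d11cd4" "Q:\PoolPart.2fa37324-d52b-431d-8eaa-3d7175d11cd4" /E /L /XX /FP /NS /NDL /XD ".covefs" "$RECYCLE.BIN" "System Volume Information" /LOG:"C:\compare-K-vs-Q.txt"
Anything tagged "New File" in those logs exists in M or K but not in Q.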
"And in that case, can I manually move the cache files for M: & K: to another drive to give F: some headroom?"
Unfortunately I don't know of a way to manually move a cloud drive's local cache files (I imagine/hope it's theoretically doable but I've never had to figure out how), and the GUI requires the upload cache to be empty (which we can't do for M and K being over-quota) to complete a detach before it can be safely re-attached to a different local drive. The alternatives would be detaching F (since it's the drive that's writable) instead to re-attach its cache to a different drive (one with plenty of room) or destroying the M and K cloud drives via the GUI (which I imagine you won't want to do if you're worried there's any unique files sitting in their caches).
-
P being duplicated across L and Q is great, as presuming P's duplication is up to date and Q has no issues of its own you should be able to proceed with:
- dpcmd ignore-poolpart L:\ PoolPart.a5d0f5ec-ca52-48e9-a975-af691ace6a16
- dpcmd ignore-poolpart L:\ PoolPart.4d2b0ebb-14f2-4d9d-a192-641de224b2cb
(So what you had but with a space between the L:\ and the Poolpart).
And then step 2 is unchanged: removing the "missing" drives from pool L, which should make L usable again.
And for step 3, if all your content is in P with nothing other than the nested poolpart (and any system folders) in L, M and K, then you should just be able to select pool P and Cog icon -> Troubleshooting -> Recheck Duplication, and it should proceed to fill L (and thus F) back in from Q rather than you having to manually copy your content from M and K's poolparts (which can get a little fiddly with nesting).
-
Ah, okay, L is itself being used as a drive for another pool. Going to do some testing and see if I can avoid kicking the can down the road, so to speak, or need to rewrite my 1-2-3.
Question: I understand the onedrive accounts are over-quota and thus in a read-only state, but are the M and K drives still technically writable (in the sense that they can still accept files into whatever room remains in their upload caches) or are they (and thus the pool) also read-only?
-
Hi Jabby, given covefs.sys is the BSOD suspect I'd ask if you could open a support ticket with StableBit so they can troubleshoot the bug with you?
-
Could you please post the output of the following admin prompt commands?
- dpcmd list-poolparts L:\
- dir /ah /b /s L:\poolpart.*
- dir /ah /b /s M:\poolpart.*
- dir /ah /b /s K:\poolpart.*
- dir /ah /b /s F:\poolpart.*
And a screenshot of your DrivePool GUI with the L:\ pool selected, showing its Pooled Disks?
-
Using the Ordered File Placement balancer to require that each removed drive's content goes only to the new drive might work, but I'm not sure in this case, so I think I'd suggest skipping to the bigger hammer in the toolbox:
-
Open a command prompt run as administrator; use the "dpcmd ignore-poolpart" command to disconnect each of the old poolparts from the pool (from the pool's perspective it's like you've just manually unplugged a physical drive). It also marks them with an ignore tag so they can't return to the pool unless the "dpcmd unignore-poolpart" command is used on them, but that's moot since we'll be removing them.
- usage: dpcmd ignore-poolpart p:\ foldername
- where p:\ is the pool drive root and foldername is the hidden poolpart folder on/representing the drive you want to disconnect from the pool.
- Use the Remove option in the GUI to remove the "missing" disks from the pool so that the pool is writable.
- Manually copy the content of the ignored poolparts on the old cloud drives into the pool (and thus to the new cloud drive).
(note for future readers: if you have nested pools or duplication, it's a little different, see below)
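For example, if the pool were P: and the old cloud drive were mounted as X: with a hidden poolpart folder named PoolPart.00000000-0000-0000-0000-000000000000 (a made-up name; you can see the real one with "dir /ah X:\"), the command would be:
- dpcmd ignore-poolpart P:\ PoolPart.00000000-0000-0000-0000-000000000000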
-
Re the undeletable folder, possibly an ACL corruption issue; if it happens in the future the second post of this thread may help.
Re the leftover duplication error, you could try (in order):
- Manage Pool -> Re-measure...
-
Command prompt as Administrator
- dpcmd refresh-all-poolparts
- dpcmd reset-duplication-cache
-
Cog icon -> Troubleshooting -> Recheck duplication
(to see if the above fixed it)
-
Command prompt as Administrator
- dpcmd check-pool-fileparts p:\ >p:\checkpartslog.txt
(where p:\ is your pool; this can take a very long time for a big pool. Search p:\checkpartslog.txt for problems afterwards; there'll be a summary at the bottom to let you know if the command found anything - see below for a quick way to view just that summary.)
- If you run into more problems with undeletable files, or just to see if it helps, refer to the second post of the thread I linked at the top of this post; use the fix on the problem sections of your pool or on all the poolparts.
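If checkpartslog.txt ends up too large to open comfortably in a text editor, you can view just the tail of it (where the summary is) from the same admin prompt, e.g.:
- powershell -Command "Get-Content 'p:\checkpartslog.txt' -Tail 60"
(60 lines is an arbitrary amount; increase it if the summary gets cut off.)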
-
Cog icon -> Troubleshooting -> Recheck duplication
(to see if the above fixed it)
-
If all of the above fail, consider using Cog icon -> Reset all settings...
(carefully read the warning message and decide if you want to go ahead or continue to search for another fix)
-
7 hours ago, mmortal03 said:
Thanks for the detailed information above. Regarding your TLDR, and also your testing examples, you're talking about using a unique folder with folder duplication, but I've been trying all of this with file duplication: I want duplication across the entire pool. Hopefully there wouldn't be any difference here, but maybe that's where the issue lies?
Also, you were saying to copy, but to get the speed improvement (and to maintain the best data integrity by not re-copying anything) this would involve *moving* all the files to seed the poolparts folders.
DrivePool's duplication can be set at the pool level (all content placed in the set pool inherits this) and the folder level (all content placed in the set folder inherits this). Some folks use different levels of duplication for different content (e.g. they might have their pool set to x2 but particularly important folders set to x3); if your whole pool is at x2 and that's the way you want it then you don't have to worry about that.
The copying was referring to the suggestion of using a unique folder inside the poolparts IF you've already got other stuff in the pool that you want to avoid bumping into; either way your external content would still be moved - seeded - into the pool (whether under that folder or directly under the poolpart folders).
7 hours ago, mmortal03 said:
Btw, during testing, I came across a tangential situation that also seems to be pretty dangerous: If you have a fresh install of DrivePool on a different computer that hasn't been configured yet (all defaults), and, without thinking to shut down the DrivePool service, you happen to plug into this computer some external drives that are part of a pool from a different computer, then DrivePool will detect these drives as a pool but start applying the *default settings* to these drives! Meaning, if you had file duplication set up on the other computer, and, obviously, that's not the default, then DrivePool will start destroying the duplication by removing the duplicates.
Thanks. Yes, that shouldn't happen. Can you give an exact step-by-step of how to reproduce this?
-
Try adjusting the settings under Manage Pool -> Performance to see if that makes any difference (but if you're using duplication, Real-time duplication should be ticked)?
Try going back to a previous stable release of DrivePool?
Whether or not a previous version works, I would recommend opening a support ticket with StableBit.
-
Going with a bit of an infodump here; there's a TLDR at the end. This should all be accurate as of v2.3.8.1600.
DrivePool uses NTFS alternate data streams (ADS) to tag folders with their duplication requirements where their level (or a subfolder's level) differs from their pool's level; untagged folders inherit their level from the first tagged folder found heading rootwards, up to and including the poolpart folder, and the poolpart folders themselves are tagged if the pool's base level is x2 or higher. This is, as far as I know, the only permanent record of duplication kept by DrivePool (there may be a backup in the hidden metadata or covefs folders in the pool); everything that follows on from that is done in RAM where possible, for speed.
Duplication consistency checking involves looking at each drive's NTFS records (which Windows tries to cache in RAM) to see if each folder and file has the appropriate number of instances across the poolparts (e.g. if alicefolder declares or inherits a level of x2, then alicefolder should be on at least 2 drives and alicefolder\bobfile should be on only 2 drives) and is in the correct poolparts (per any file placement rules that apply) and has matching metadata (i.e. at least size and last-modified timestamp).
If everything lines up, it leaves the files alone. If it does not, it will either ensure the correct number of instances (if that's the problem) are on the correct poolparts (if that's the problem) or warn the user (if there is a metadata mismatch).
(It doesn't, as far as I'm aware, do any content comparison - I wish it had that as an option - leaving content integrity handling up to the user e.g. via SnapRAID, RAID1+, Multipar, etc).
Duplication consistency checking can be manually initiated or performed automatically on a daily schedule.
This means that DrivePool should not be deleting either of your two sets of seeded files, unless you don't have duplication turned on for the pool [1] or for the folder [2] into which you're seeding your content, because your content will inherit the duplication level of whatever folder it is being moved into.
[1] e.g. if you are moving content directly into poolpart.string\ rather than into poolpart.string\somefolder\ then your content will inherit the pool's duplication level.
[2] e.g. if you are moving content into poolpart.string\somefolder\ rather than directly into poolpart.string\ then your content will inherit somefolder's duplication level.
Note: if you move a folder within a pool, it will keep its custom duplication level only if it has one - folders with inherited duplication will inherit their new parent's duplication. If instead you copy a folder within a pool, the copy will always inherit its new parent's duplication level.
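If you're curious, you can see those tags for yourself: from a command prompt, dir /r lists any alternate data streams attached to the files and folders in a directory (the poolpart name below is a placeholder for one of your own, and I won't guess at the exact stream name DrivePool uses, but a tagged folder will show an extra stream in the listing):
- dir /r D:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx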
Testing #0: created new pool across two drives. created identical external content on both drives, in a folder named calico.
Testing #1: pool duplication x1. opened both poolpart folders directly, seeded both with calico, started duplication consistency check. drivepool deleted one instance of calico's files, leaving the other instance untouched (as expected).
Testing #2: pool duplication x2. opened both poolpart folders directly, seeded both with calico, started duplication consistency check. drivepool left both sets of calico's files untouched (as expected).
Testing #3: created folder alice at x1 and bob at x2. opened all poolpart folders, manually created second alice folder, seeded both alice and bob on both drives with calico, started duplication consistency check. drivepool deleted one instance of calico's files in alice (as expected), leaving the other instance untouched (as expected) and did not touch calico's files in bob (as expected).
It might be possible to confuse DrivePool by manually creating ex nihilo (rather than copying) additional instances of a folder that is tagged with a non-default duplication count and seeding into those? Would have to test further. But you can (and should) avoid that by simply manually copying that folder (from the poolpart in which it exists to any poolparts in which it doesn't that you plan to seed into).
TLDR: for your scenario, create a unique folder in the pool. Ensure its duplication level is showing x2. Open the poolparts you plan to seed with your content. If the folder isn't there, copy it so it is (i.e. don't create a "New Folder" and rename it to match - make a direct copy instead). Set a file placement rule to keep that folder's content only on those two drives and tell DrivePool to respect file placement (if you want that). Seed that folder's instances in your poolparts with your content. Remeasure. It should leave them untouched.
-
The quoted part is only relevant if you're wanting to seed - for example - a folder on E: named "\Bob" into a pool P: that already has a "\Bob". And nothing stops you from creating a new folder in the pool (e.g. "\fromAlice") and then seeding your folder under that (e.g. resulting in "P:\fromAlice\Bob" which would be separate from the existing "P:\Bob").
If you're wanting to prevent your files from getting spread across the rest of the drives in the pool, you will first need to ensure that balancing is turned off and that priority is given to the File Placement rules above all else (unless you want an exception for something). Then after seeding set the File Placement rules to the effect that those files/folders must be kept only on those two drives (and if desired, that no other files/folders may be kept on those two drives) and ensure those folders are set to 2x duplication. Then you can turn balancing back on (if you're using it).
-
I'd suggest opening a support ticket with StableBit.
-
Are you using sparsely allocated files (e.g. via bittorrent)? Those can confuse DrivePool (or at least they have done so in the past; it might still affect the current version). NTFS compression is another possible culprit if it's in use. Or is the reported disk space growing without any actual drive activity?
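If you want to check whether a particular file is actually sparse or NTFS-compressed, you can do so from a command prompt (the path below is just a placeholder); fsutil reports whether the sparse flag is set, and compact run without switches reports the compression state:
- fsutil sparse queryflag "D:\somefolder\somefile.dat"
- compact "D:\somefolder\somefile.dat"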
Does the Manage Pool -> Re-measure... tool correct the problem? (permanently? temporarily?)
-
A1: At least to an extent, in my experience the more you can ensure that the files are read concurrently from different drives the more DrivePool can beat a single drive (so long as your bus is big enough). Conversely, it can do a little-to-significantly worse (depending on access patterns) than a single drive if you can't ensure that.
Using 2x (or higher, YMMV) duplication and enabling read-striping on the pool can help greatly with this (e.g. if it goes to read two files and they're on the same disk, then if you'd turned on 2x duplication and striping it could've read each file from a different disk).
Some older summing/hashing applications may encounter problems with read-striping (I suspect because they try physical calls and DrivePool isn't physical). You'll need to test before going "live".
Incidentally if you're going to have millions of files to which you need fast access then make sure you've got enough RAM to keep the file table fully loaded (e.g. my disks currently contain ~3.2M files after duplication, using up ~3.5GB for the Metafile in RAM).
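(As a very rough rule of thumb from those numbers: 3.5 GB / 3.2 million files works out to about 1.1 KB of Metafile per file, so budgeting roughly 1 GB of RAM per million pooled files - on top of whatever the OS and applications need - shouldn't be far off.)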
A2: DrivePool does not balance files while they are locked. Also, do not use DrivePool duplication with VM images unless real-time duplication is both enabled and has completed before the VM runs, to avoid consistency errors.
-
Hi CrazyHorse, pools and any duplication will be automatically recognised; however, you may need to recreate your balancer and file placement rules if you've customised them.
-
Ah, okay. Yes, I'd open a ticket with StableBit if Scanner keeps finding "bad" sectors where all other tools don't.
-
Modern drives will automatically attempt to repair bad sectors and/or replace them from a reserve built in for that purpose; since Scanner runs in the background it can trigger an alert before the drive has finished doing so. So if Scanner and manually-run tools can't find the problem now, that's most likely what happened and there's no reason for concern (unless it starts happening a lot).
What do I do with RAID? (posted in Hardware)
The big advantages of hardware RAID (levels 2+) are parity-based duplication and healing (remember to use scheduled scrubbing!) and complete transparency to your file system, and RAID 5 in particular can give great performance (especially for reads).
The big disadvantage of hardware RAID (2+) is that if more drives die than you've got parity for, you lose the entire array instead of just the failed drives.
The big advantage of DrivePool is that you don't have to care about keeping all your drives the same size and type; you can remove drives as you please without having to rebuild the whole bloody array from scratch; adding a drive doesn't require (background) parity recomputation; you can freely mix SSD and HDD and use the former as cache; you can even add non-local drives via CloudDrive.
The big disadvantage of DrivePool is that if any bitrot happens it has no self-healing capability.
Late edit/add: one other disadvantage of DrivePool is that it can't create a file larger than the free space available on any single drive in the pool (since it doesn't use striping), which is something to keep in mind if you plan to be working with extremely large files.
So if money were no object and I had a big honking bay of drives, particular requirements notwithstanding, I'd build multiple small RAID 5 sets (three to five disks per array) and pool them with DrivePool (duplication x2) to get the best of both worlds while minimising the downsides of each.
One drive dies? No problem, pool's still writable, thanks RAID 5!
An array dies? No problem, pool's still readable, thanks DrivePool!
File bitrot happens? No problem, file's healable, thanks RAID 5!