Everything posted by Shane
-
"1. pool was in good status as far as I know (but I guess maybe it wasn't?)" If it wasn't, this is something that should've been picked up by the consistency checks that DrivePool does in the background. Agreed, very strange. You can have DrivePool show you the expected duplication levels of all folders in the pool and the actual space used via the GUI -> Manage Pool -> File Protection -> Folder Duplication... It will show an expandable color-coded folder tree of your pool, and beneath that it will begin calculating the actual size used before duplication (pale blue) and the size used by duplication (dark blue); for a consistent pool with entirely x2 duplication those two sizes should finish roughly equal (I say roughly because system folders have their own special duplication levels which may throw the figure off a little). Note that if you see a plus sign after a folder's duplication level it means one or more sub-folders have a different duplication level (including possibly no duplication; "plus" here doesn't mean "or more", it means "or otherwise"). You can force DrivePool to recheck/repair duplication via the GUI -> Cog icon (top right) -> Troubleshooting -> Recheck duplication...
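For what it's worth, the arithmetic behind that "roughly equal" check can be sketched like this (a toy illustration, not DrivePool's actual code):

```python
# Toy sketch of the x2 duplication sanity check described above.
# Not DrivePool's actual code - just the arithmetic: at x2 duplication,
# the space used by duplicates should roughly equal the space used
# before duplication.

def duplication_sizes(files):
    """files: list of (size_bytes, duplication_level) tuples."""
    before = sum(size for size, _ in files)          # pale blue figure
    duplicates = sum(size * (level - 1) for size, level in files)  # dark blue figure
    return before, duplicates

# Example: a pool where everything is at x2 - the two figures match.
pool = [(100, 2), (250, 2), (50, 2)]
before, dup = duplication_sizes(pool)
print(before, dup)  # 400 400
```

If the dark blue figure finishes noticeably smaller than the pale blue one on an all-x2 pool, some files are missing their duplicates.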
-
The only ways I can think of to do that would be to #1, have a pool exclusively for just those drives and that folder (whether by itself or as part of a bigger pool), #2 use File Placement and allow only that folder to use those drives (and I'm not sure if the disk space balancing algorithm is smart enough to figure out what you want from that), or #3 turn off automatic balancing and manually spread the files in that folder across the poolparts of those disks yourself.
-
It should be. Alternatively, if both have the same version of DrivePool, you could copy the known good one across to the other machine and then restart the latter's DrivePool service.
-
@Christopher (Drashna) Any ideas?
-
Try temporarily turning off all other balancers and requesting a re-measure?
-
To maximise the likelihood of balancing occurring, ensure you have "Automatic balancing - Triggers" set to 100% with "Or at least this much data needs to be moved" ticked and set to a small value (e.g. 100 MB).
-
For various reasons I removed the drive letters of the individual poolpart volumes and instead mounted them as folders under a path (e.g. "e:\disks\d1", "e:\disks\d2", "e:\disks\d3", etc) in a small empty volume (e.g. "e:\"). In the case of Windows Defender adding that path folder (e.g. "e:\disks\") to Defender's exclusions* prevents it double-scanning everything. If I want to completely prevent it scanning a particular file or folder in the pool I can then also add the file's or folder's pool path (e.g. "p:\noscanfolder\notthisfile"). *For Windows 10, open Windows Security -> Virus & threat protection -> Manage Settings -> Add or remove exclusions -> Add an exclusion -> Folder.
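If you prefer the command line, the same exclusions can be added from an elevated PowerShell prompt using Defender's built-in Add-MpPreference cmdlet (the paths below are the examples from above; substitute your own mount folder and pool paths):

```powershell
# Add the mount-point parent folder to Defender's exclusions
# (run from an elevated PowerShell prompt).
Add-MpPreference -ExclusionPath "E:\disks"

# Optionally also exclude a specific pool path from scanning.
Add-MpPreference -ExclusionPath "P:\noscanfolder"

# Verify the exclusions took effect.
Get-MpPreference | Select-Object -ExpandProperty ExclusionPath
```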
-
It should only compute as much as it needs to so that it can meet the requirements of each balancer (sorry, I realise that's vague). But if you've still got 100% cpu going on I'd recommend opening a ticket so StableBit can help directly.
-
My DP machine is a Ryzen 5 with 16GB RAM and about 750k files (a twentieth of yours) which doesn't take very long to balance, but the vast majority of those files are balanced already; if your pool has a lot of steady churn from ssd to archive that would also multiply the problem, though I still would think a Ryzen 9 shouldn't be getting stuck at 100% for days (let alone weeks) regardless.

Yeah, I'd stick with real-time x2 duplication. If you want files to offload from SSD to Archive pretty much as soon as they land to keep the SSD from filling, set Automatic balancing to "Balance immediately" and Triggers to "1GB" or similar. You might also try "Not more often than..." if you wish to try an hourly emptying or similar. To minimise bucket list computation, disable all non-essential Balancers.

I don't know if it'd help, but maybe try having a Pool A = Pool B (SSDx2) + Pool C (HDDx10) arrangement where:
Pool A = no duplication, balance immediately / not more often than, with 1GB trigger; use SSD Optimizer (SSD = Pool B, Archive = Pool C), no other balancers.
Pool B = x2 real-time duplication, automatic balancing off, all balancers disabled (except maybe StableBit Scanner).
Pool C = x2 real-time duplication, automatic balancing nightly (StableBit Scanner plus anything else you need).

That way - in theory - pool A is where all your files "are" (from the user perspective) and only has one main job, "use B as a buffer for C", while pool C keeps the HDDs balanced as its own job scheduled separately from the cache emptying? Note that Pool B doesn't need Duplication Space Optimizer because the duplication multiplier matches the number of drives. If nothing seems to fix the CPU/bucket issue, I'd suggest opening a support ticket with StableBit.
-
The bucket lists are the files (and folders) that the balancer determines need to be moved between poolparts. Does your data set involve a very large number of files? Do you have balancing set to trigger immediately on any amount of data, so that the cache drives effectively act as a real-time-ish buffer, or is it scheduled daily so files aren't moved off until a certain time? Is your duplication real-time or nightly, and at what multiplier(s)?

If it helps, I believe you can see what files are being added to the bucket lists by opening the Service Log ( DrivePool -> Settings -> Troubleshooting -> Service log... ) and setting the File Mover trace level from Informational to Verbose ( Tracing levels -> F -> File Mover -> Verbose ).

I'd be going with the SSD Optimizer balancer to have the two 8TB drives as cache, which I'm guessing is what you're doing, but even filled they shouldn't be taking weeks to empty unless there's some kind of bottleneck going on, and 100% CPU for that length of time also seems a red flag. What CPU do you have? If you open Windows Task Manager, can you check in Performance to see if there are high kernel times (right-click the graph), and in Details can you see any culprit process(es)?
-
Hi, how do you have your pool(s) arranged and what settings are you using for the balancing?
-
Do you have Automatic Balancing turned on (from the DrivePool app, Manage Pool -> Balancing...) and what is the Triggers section set to?
-
"1. Maximise the space I have" DrivePool can take any bunch of NTFS-formatted (or ReFS, if you use that) simple disks and pool their free space as a virtual drive; existing content is unaffected and you can continue using the disks individually if you want. The only caveat is that any one incoming file can't be larger than the largest amount of free space individually available on those disks.

"2. Back up my desktop to it" I do nightly backups of my desktop and two laptops to my pool via a network share. Keep in mind the above caveat.

"3. Improve drive performance, to hopefully better utilise the 10Gbps connection and improve read/write speeds as much as possible (e.g. similar benefits to a RAID0-like setup)" Short answer: if you want RAID0-like performance, you need an actual RAID0-like array; nothing else comes close. Longer answer: DrivePool's real-time duplication writes simultaneously to the disks involved, so pools are no slower or faster to write to regardless of duplication level; in that respect it is comparable to RAID1. DrivePool will also attempt, where files exist on multiple disks within a pool, to read from the disk it decides is likely to offer better performance. It also offers a read-striping option, but the performance boost is minor due to DrivePool operating at the file level rather than the block level (and its read-striping is not compatible with some hashing utilities). Additional: if you add one or more SSDs to a pool, you can set DrivePool to use those as an incoming cache (with scheduled emptying to the rest of the pool) or for specific folders (e.g. if you want certain files to always be saved to and kept on the SSDs). Additional #2: if a drive in a pool fails, the pool becomes read-only until the failure is resolved. This is much better than RAID0 (where the array just flat dies) but not as good as RAID1 or higher (where the array usually continues to be writable).

"4. Have a local backup/duplicates on the drives of all the content to protect against drive failures (e.g. similar benefits to a RAID1 / RAID10-like setup)" DrivePool can be set to duplicate files across any number of disks in a pool, and this can be controlled down to the folder level (e.g. you might have a folder tree at x2 but certain folders within that tree at x1 and x3, or vice versa, etc).

"As this is a DAS, it is also captured by my Backblaze account and will all be backed up to the cloud for an off-site, cloud-based backup as a last resort if ever needed." There is an issue with file IDs, which some backup utilities use to decide what needs to be backed up, not having guaranteed persistence in DrivePool's pools. If Backblaze uses these without checking their validity, there are workarounds (e.g. backing up the physical disks that form the pool, rather than the pool itself) but it is something to keep in mind (e.g. it may constrain how you set up duplication).
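To illustrate the free-space caveat from point 1 (a toy sketch, not DrivePool code): a pool can report plenty of total free space while still being unable to accept a single large file, because each file must fit entirely on one disk.

```python
# Toy illustration of the free-space caveat: total pool free space
# can exceed the size of an incoming file, yet the file still won't
# fit because no single disk can hold it. (Not DrivePool code.)

def can_place(file_size, free_per_disk):
    """A single file must fit entirely on one disk in the pool."""
    return any(free >= file_size for free in free_per_disk)

free = [300, 400, 250]       # GB free on each disk
print(sum(free))             # 950 GB total free in the pool
print(can_place(500, free))  # False - no single disk has 500 GB free
print(can_place(350, free))  # True - fits on the 400 GB disk
```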
-
#1: If you used the GUI menu to Remove a present disk from the pool, then Add it to the pool again later, whatever may have been left on it won't show back up in the pool (possible exception: if the disk removed was damaged in a way that prevented DrivePool from marking it as no longer part of the pool).

#2: However, if you physically disconnected a disk from the pool, then used the GUI menu to Remove the missing disk (as a pool is read-only with a missing disk), then later re-connect the missing disk, it will (at least based on my testing) show back up in the pool. DrivePool's GUI should then warn about duplication issues (even if you have duplication turned off for your content, DrivePool itself uses duplication for its pool management data and for handling system volume information) and those issues may not be resolvable without again Removing the disk, this time while it is present.

In the case of #2, especially where you have changed the files on the pool between disconnecting and reconnecting the disk, DrivePool attempting to merge the disk back into the pool, applying any balancing/duplication/placement rules you have set and running into errors, is most likely not desirable. To prevent this happening, I recommend one of the following:

If you want to keep some or all data from the old disk:
1. Rename the hidden poolpart folder (e.g. put OLD in front of its name) on the old disk BEFORE reconnecting it to the DrivePool machine.
2. Create a new, uniquely-named folder within the hidden poolpart folder on the old disk and move all of the old files and folders within that hidden poolpart folder into the new unique folder.
3. Use DrivePool's menu to Add the old disk back into the pool.
4. Move the unique folder from the old hidden poolpart folder into the new hidden poolpart folder that DrivePool has just created on that disk.
All the old content from your old disk should now show up in the pool within the uniquely-named folder, for you to do with as you please.

If you don't want to keep data from the old disk:
1. Simply format the old disk or delete the hidden poolpart folder on it BEFORE reconnecting it to the DrivePool machine.
2. Add the old disk back to the pool.
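The keep-the-data procedure above can be sketched as file operations (a hypothetical illustration using a temp directory and made-up folder names; real poolpart folders are named PoolPart.<GUID>, and you'd do the real thing by hand in Explorer):

```python
# Sketch of the "keep some or all data" steps above, using made-up
# folder names (real poolpart folders are named PoolPart.<GUID>).
# Illustrative only - do the real thing by hand.
import shutil, tempfile
from pathlib import Path

disk = Path(tempfile.mkdtemp())  # stands in for the old disk's root

# The old disk arrives with its original hidden poolpart folder and files.
old_part = disk / "PoolPart.old-guid"
(old_part / "Movies").mkdir(parents=True)
(old_part / "Movies" / "film.mkv").write_text("data")

# Step 1: rename the old poolpart BEFORE reconnecting the disk.
renamed = disk / "OLD.PoolPart.old-guid"
old_part.rename(renamed)

# Step 2: gather the old content into one uniquely-named folder.
unique = renamed / "RecoveredFromOldDisk"
unique.mkdir()
for item in list(renamed.iterdir()):
    if item != unique:
        shutil.move(str(item), str(unique))

# Step 3: Adding the disk back makes DrivePool create a new poolpart.
new_part = disk / "PoolPart.new-guid"
new_part.mkdir()

# Step 4: move the unique folder into the new poolpart; the old
# content now shows up in the pool under that folder.
shutil.move(str(unique), str(new_part))
print((new_part / "RecoveredFromOldDisk" / "Movies" / "film.mkv").exists())  # True
```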
-
FWIW when I ran DrivePool in a MS Hyper-V VM a few times late last year, to mess with various pool setups, it worked without issue.
-
If you haven't found a solution yet, you might need to contact StableBit directly via a support request for this one?
-
Duplication at night - how to set particular time?
Shane replied to Interstellar's question in Nuts & Bolts
It's controlled by FileDuplication_DuplicateTime in C:\ProgramData\StableBit DrivePool\Service\Settings.json and defaults to start at 2am; if you wish to have a different start time, use the Override and replace the null with your preferred value (e.g. "05:00").
-
Use both SSDs and HDDs in pool, only use HDDs as duplicate/secondary?
Shane replied to TomWaitsForNoMan's question in Nuts & Bolts
I think you'd need to use hierarchical pools. Something fairly simple like: Pool A = Pool B + Pool C, where Pool B = all SSD and Pool C = all HDD; Pool A duplication x2 but not real-time; Pools B and C duplication off. DrivePool should be able to detect that B is all SSDs and prefer it for reading and writing, and duplication happens nightly (as per the Settings.json file; default 2am) when not real-time. Note that "prefer" doesn't mean "guaranteed", however.

But if you wanted to get a bit more clever, you could have something like: Pool A = Pool B + Pool C, where Pool B = all SSD and Pool C = one SSD + rest HDD; Pool A duplication x2 real-time; Pools B and C duplication off; Pool C with the SSD Optimizer enabled, the SSD as the cache, and Automatic balancing set to empty the cache to the archive HDDs nightly (or whatever). DrivePool would then be duplicating in real-time but still at SSD speeds, at least unless/until you filled the cache SSD (which would empty nightly or whatever); it should still prefer the all-SSD pool B for reads, and even if it read from pool C with HDDs it might still be reading (new files) from that pool's SSD cache.
-
While SSDs have a strictly limited (albeit these days very large) number of write cycles, HDDs are all about mechanical wear, to which (as I understand it) environmental factors (temperature changes and extremes, vibration, shocks, etc) and usage/data patterns (platter spin on/off, motor and head movement) contribute significantly.

The reason DrivePool prefers the drive with the greatest free space is that, due to the way Windows writes files, it isn't possible for the file system to guarantee the size of an incoming new file in advance - so, with no way of knowing whether a file being saved into a pool is going to be 1MB, 1GB, 1TB or whatever, the drive in the pool with the most free space is always picked to minimise the risk of being unable to complete writing the file. There is currently no option in DrivePool to change this to "lowest percentage of use"; the alternatives as far as I know are the SSD Optimizer balancer, the Ordered File Placement balancer, and the File Placement rules.

For most of your drives, given they have the same capacity, "save to drive with most free space" and "save to drive with least percentage used" would be the same anyway (which is why DATAEX8, being twice the size of all the others but one, was used first by DrivePool, and DATAEX7, being half the size of those others, presumably was used last). The equalizer should thus only need to move a small portion of your data between drives - except for DATAEX7 and DATAEX8 - and further you can adjust how aggressive the equalizer is via the "Automatic balancing" and "Automatic balancing - Triggers" sections.
All that said, based on what you've mentioned so far, I'd be inclined to leave the equalizer set to the default of "Equalize by free space remaining" and set the Automatic balancing schedule to "Balance every day at X" where X is a time you won't be using the drives for other things; scheduled file balancing is performed sequentially, one file at a time, so it won't be thrashing the drives. You can use a lower balance ratio and/or have a higher "this much data needs to be moved" if you wish, too, to find your preferred sweet spot.
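A toy sketch (not DrivePool's code) of why the two placement strategies coincide for equal-capacity drives and only diverge for the odd-sized ones:

```python
# Toy sketch (not DrivePool's code): "most free space" and "lowest
# percent used" pick the same drive when capacities are equal, and
# only diverge when capacities differ.

def most_free(drives):
    return max(drives, key=lambda d: d["free"])["name"]

def lowest_percent_used(drives):
    return min(drives, key=lambda d: 1 - d["free"] / d["size"])["name"]

equal = [  # hypothetical equal-capacity drives (sizes in GB)
    {"name": "DATAEX1", "size": 4000, "free": 1000},
    {"name": "DATAEX2", "size": 4000, "free": 1500},
]
print(most_free(equal), lowest_percent_used(equal))  # same drive both ways

# Add a drive twice the size: the strategies now disagree.
mixed = equal + [{"name": "DATAEX8", "size": 8000, "free": 1600}]
print(most_free(mixed))            # DATAEX8 - most absolute free space
print(lowest_percent_used(mixed))  # DATAEX2 - least used proportionally
```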
-
It's good you didn't post your activation ID in the forum; that's meant to be kept private between you and StableBit. Even with a deactivated license, DrivePool will still be able to read files (now if it was still able to write to the pool, that'd be odd; maybe there's a grace period). On the new box where the media app can't "find" the file in the pool, can it "find" the file if you access it directly via whichever physical drive's hidden poolpart folder? Can you still copy the file from the pool to a non-pool drive with Explorer? If not, can you still copy the file from the poolpart folder with Explorer?
-
When you say "Rebooted and DrivePool mounted the drive" do you mean the problem went away after the reboot? If so, it's possible something happened that caused the drive to briefly drop out but DrivePool wasn't able to reconnect (until the reboot)?
-
That may indicate some other piece of software (or Windows) has an update pending. If not, for that level of technical assistance I'd suggest using the Contact form to open a support ticket.
-
Unless directed otherwise by a File Placement rule or a balancer that can set file placement limits, DrivePool will always try to place incoming files on the drive with the most free space. To balance existing files across all the drives according to percentage of used space, it looks like you need to do the following:

Balancing -> Balancers: enable the "Disk Space Equalizer" balancer with its options set to "Equalize by the percent used". I'd also recommend making the "StableBit Scanner" balancer the highest priority if you're using that, and you can probably disable the "Volume Equalization" and "Drive Usage Limiter" balancers.

Balancing -> Settings: turn on Automatic balancing so that the (non-realtime, non-immediate) balancers can do their thing. For now I'd suggest selecting "Balance immediately" with the "Not more often..." option unchecked.
-
Removing a drive doesn't make the pool read-only during the removal. It does prevent removing other drives (they'll be queued for removal instead) and I believe it may prevent some other background/scheduled tasks, but one should still be able to read and write files normally to the pool. Only problem I can think of is if you're removing drive X and you've got a file placement rule that says files can only be put on X; I'd assume you'd have to change or disable that rule.