Everything posted by Shane

  1. Re the undeletable folder: possibly an ACL corruption issue; if it happens in the future, the second post of this thread may help. Re the leftover duplication error, you could try the following, in order:
     1. Manage Pool -> Re-measure...
     2. From a command prompt run as Administrator:
        dpcmd refresh-all-poolparts
        dpcmd reset-duplication-cache
     3. Cog icon -> Troubleshooting -> Recheck duplication (to see if the above fixed it).
     4. From a command prompt run as Administrator:
        dpcmd check-pool-fileparts p:\ >p:\checkpartslog.txt
        (where p:\ is your pool; this can take a very long time for a big pool. Search p:\checkpartslog.txt afterwards for problems - see the search example below - there'll be a summary at the bottom to let you know if the command found anything.)
     5. If you run into more problems with undeletable files, or just to see if it helps, refer to the second post of the thread I linked at the top of this post; use the fix on the problem sections of your pool or on all the poolparts.
     6. Cog icon -> Troubleshooting -> Recheck duplication (again, to see if the above fixed it).
     7. If all of the above fail, consider Cog icon -> Reset all settings... (carefully read the warning message and decide if you want to go ahead or continue to search for another fix).
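     For searching the log afterwards, something like this from the same prompt may help (a sketch only; the keywords dpcmd's summary actually uses may differ, so adjust the search strings to whatever it reports):
        findstr /i /c:"error" /c:"missing" p:\checkpartslog.txt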
  2. DrivePool's duplication can be set at the pool level (all content placed in the pool inherits it) and at the folder level (all content placed in that folder inherits it). Some folks use different levels of duplication for different content (e.g. they might have their pool set to x2 but particularly important folders set to x3); if your whole pool is at x2 and that's the way you want it, then you don't have to worry about that.

     The copying was referring to the suggestion of using a unique folder inside the poolparts IF you've already got other stuff in the pool that you want to avoid bumping into; either way your external content would still be moved - seeded - into the pool (whether under that folder or directly under the poolpart folders).

     Thanks. Yes, that shouldn't happen. Can you give an exact step-by-step of how to reproduce this?
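     If you'd rather check or change folder levels from a console, I believe dpcmd also has get-duplication and set-duplication commands (treat the exact verbs and arguments as an assumption and confirm against dpcmd's built-in help first):
        rem assumed syntax - run dpcmd with no arguments to list the actual commands
        dpcmd get-duplication p:\somefolder
        dpcmd set-duplication p:\somefolder 3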
  3. Try adjusting the settings under Manage Pool -> Performance to see if that makes any difference (but if you're using duplication, Real-time duplication should be ticked)? Try going back to a previous stable release of DrivePool? Whether or not a previous version works, I would recommend opening a support ticket with StableBit.
  4. Going with a bit of an infodump here; there's a TLDR at the end. This should all be accurate as of v2.3.8.1600.

     DrivePool uses ADS (alternate data streams) to tag folders with their duplication requirements where their level or a subfolder's level differs from their pool's level (untagged folders inherit their level from the first tagged folder found heading rootwards, up to and including the poolpart folder, and the poolpart folders themselves are tagged if the pool's base level is x2 or higher). This is, as far as I know, the only permanent record of duplication kept by DrivePool (there may be a backup in the hidden metadata or covefs folders in the pool); everything that follows on from that is done in RAM where possible, for speed. (There's a quick way to peek at those streams at the end of this post.)

     Duplication consistency checking involves looking at each drive's NTFS records (which Windows tries to cache in RAM) to see whether each folder and file has the appropriate number of instances across the poolparts (e.g. if alicefolder declares or inherits a level of x2, then alicefolder should be on at least 2 drives and alicefolder\bobfile should be on exactly 2 drives), is in the correct poolparts (per any file placement rules that apply), and has matching metadata (i.e. at least size and last-modified timestamp). If everything lines up, it leaves the files alone. If it does not, it will either ensure the correct number of instances are on the correct poolparts (if that's the problem) or warn the user (if there is a metadata mismatch). (It doesn't, as far as I'm aware, do any content comparison - I wish it had that as an option - leaving content integrity handling up to the user, e.g. via SnapRAID, RAID1+, Multipar, etc.) Duplication consistency checking can be manually initiated or performed automatically on a daily schedule.

     This means that DrivePool should not be deleting either of your two sets of seeded files, unless you don't have duplication turned on for the pool [1] or for the folder [2] into which you're seeding your content, because your content will inherit the duplication level of whatever folder it is being moved into.

     [1] e.g. if you are moving content directly into poolpart.string\ rather than into poolpart.string\somefolder\, your content will inherit the pool's duplication level.
     [2] e.g. if you are moving content into poolpart.string\somefolder\ rather than directly into poolpart.string\, your content will inherit somefolder's duplication level.

     Note: if you move a folder within a pool, it will keep its custom duplication level only if it has one - folders with inherited duplication will inherit their new parent's duplication. If instead you copy a folder within a pool, the copy will always inherit its new parent's duplication level.

     Testing #0: created a new pool across two drives; created identical external content on both drives, in a folder named calico.
     Testing #1: pool duplication x1. Opened both poolpart folders directly, seeded both with calico, started a duplication consistency check. DrivePool deleted one instance of calico's files, leaving the other instance untouched (as expected).
     Testing #2: pool duplication x2. Opened both poolpart folders directly, seeded both with calico, started a duplication consistency check. DrivePool left both sets of calico's files untouched (as expected).
     Testing #3: created folder alice at x1 and bob at x2. Opened all poolpart folders, manually created a second alice folder, seeded both alice and bob on both drives with calico, started a duplication consistency check. DrivePool deleted one instance of calico's files in alice (as expected), leaving the other instance untouched (as expected), and did not touch calico's files in bob (as expected).

     It might be possible to confuse DrivePool by manually creating ex nihilo (rather than copying) additional instances of a folder that is tagged with a non-default duplication count and seeding into those? Would have to test further. But you can (and should) avoid that by simply copying that folder manually (from the poolpart in which it exists to any poolparts in which it doesn't that you plan to seed into).

     TLDR, for your scenario:
     1. Create a unique folder in the pool and ensure its duplication level is showing x2.
     2. Open the poolparts you plan to seed with your content. If the folder isn't there, copy it so it is (i.e. don't create a "New Folder" and rename it to match; make a direct copy instead).
     3. Set a file placement rule to keep that folder's content only on those two drives, and tell DrivePool to respect file placement (if you want that).
     4. Seed that folder's instances in your poolparts with your content.
     5. Re-measure. It should leave them untouched.
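     If you're curious, you can peek at those tag streams yourself from a command prompt: dir /r lists each entry's alternate data streams (I'm not naming DrivePool's tag stream here, so just look for any extra :$DATA stream attached to a folder whose duplication differs from the pool's):
        rem PoolPart.xxxx is a placeholder for your real hidden poolpart folder
        cd /d d:\PoolPart.xxxx
        rem /r lists alternate data streams alongside each file and folder
        dir /r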
  5. The quoted part is only relevant if you're wanting to seed - for example - a folder on E: named "\Bob" into a pool P: that already has a "\Bob". And nothing stops you from creating a new folder in the pool (e.g. "\fromAlice") and then seeding your folder under that (e.g. resulting in "P:\fromAlice\Bob", which would be separate from the existing "P:\Bob").

     If you're wanting to prevent your files from getting spread across the rest of the drives in the pool, you will first need to ensure that balancing is turned off and that priority is given to the File Placement rules above all else (unless you want an exception for something). Then, after seeding, set the File Placement rules so that those files/folders must be kept only on those two drives (and, if desired, that no other files/folders may be kept on those two drives) and ensure those folders are set to x2 duplication. Then you can turn balancing back on (if you're using it).
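     As a sketch of the seeding step itself (the drive letters and the PoolPart name are hypothetical; /E includes subfolders and /MOVE deletes the source after a successful copy):
        rem move E:\Bob into the pool under a new \fromAlice folder
        robocopy "E:\Bob" "D:\PoolPart.xxxx\fromAlice\Bob" /E /MOVE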
  6. I'd suggest opening a support ticket with StableBit.
  7. Are you using sparsely allocated files (e.g. via bittorrent)? Those can confuse DrivePool (or at least have done so in the past; it might still affect the current version). NTFS compression is another possible culprit, if it's in use. Or is the disk space growing without any actual drive activity? Does the Manage Pool -> Re-measure... tool correct the problem? (Permanently? Temporarily?)
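     To check a suspect file or folder from a command prompt (the paths are just examples):
        rem reports whether the sparse flag is set on a file
        fsutil sparse queryflag "p:\downloads\somefile.bin"
        rem lists the NTFS compression state of files under a folder without changing them
        compact /s:"p:\downloads"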
  8. A1: At least to an extent; in my experience, the more you can ensure that files are read concurrently from different drives, the more DrivePool can beat a single drive (so long as your bus is big enough). Conversely, it can do a little-to-significantly worse (depending on access patterns) than a single drive if you can't ensure that. Using x2 (or higher, YMMV) duplication and enabling read-striping on the pool can help greatly with this (e.g. if it goes to read two files and they're on the same disk, then if you'd turned on x2 duplication and striping it could've read each file from a different disk). Some older summing/hashing applications may encounter problems with read-striping (I suspect because they try physical calls and DrivePool isn't physical). You'll need to test before going "live". Incidentally, if you're going to have millions of files to which you need fast access, make sure you've got enough RAM to keep the file table fully loaded (e.g. my disks currently contain ~3.2M files after duplication, using up ~3.5GB for the Metafile in RAM).

     A2: DrivePool does not balance files while they are locked. Also, do not use DrivePool duplication with VM images unless real-time duplication is both enabled and completed prior to running the VM, to avoid consistency errors.
  9. Hi CrazyHorse, pools and any duplication will be automatically recognised; however, you may need to recreate your balancer and file placement rules if you've customised them.
  10. Ah, okay. Yes, I'd open a ticket with StableBit if Scanner keeps finding "bad" sectors where all other tools don't.
  11. Modern drives will automatically attempt to repair bad sectors and/or replace them from a reserve built in for that purpose. Since Scanner runs in the background, it can trigger an alert before the drive has finished doing so; if Scanner and manually-run tools now can't find the problem, that's most likely what happened and no reason for concern (unless it starts happening a lot).
  12. It's a kludge but you could use Disk Management to shrink the 8TB poolpart to match the size of the 3~4TB poolparts? When your pool is full enough you could then re-expand that poolpart.
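      If you prefer a console over Disk Management, diskpart can do the same (the volume number and size here are hypothetical; shrink amounts are in MB, and you can extend again later):
         diskpart
         list volume
         select volume 5
         shrink desired=4000000
         rem later, when the pool has filled out, re-select the volume and run: extend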
  13. "1. pool was in good status as far as I know (but I guess maybe it wasn't?)" If it wasn't, this is something that should've been picked up the consistency checks that DrivePool does in the background. Agreed, very strange. You can have DrivePool show you the expected duplication levels of all folders in the pool and the actual space used via the GUI -> Manage Pool -> File Protection -> Folder Duplication... it will show an expandable color-coded folder tree of your pool, and beneath that it will begin calculating the actual size used before duplication (pale blue) and size used by duplication (dark blue); for a consistent pool with entirely x2 duplication those two sizes should finish roughly equal (I say roughly because system folders have their own special duplication levels which may throw the figure off a little). Note that if you see a plus sign after a folder's duplication level it means one or more sub-folders have a different duplication level (including possibly no duplication; "plus" here doesn't mean "or more", it means "or otherwise"). You can force DrivePool to recheck/repair duplication via the GUI -> Cog icon (top right) -> Troubleshooting -> Recheck duplication...
  14. Re "Drivepool balancing": The only ways I can think of to do that would be: #1, have a pool exclusively for just those drives and that folder (whether by itself or as part of a bigger pool); #2, use File Placement and allow only that folder to use those drives (and I'm not sure if the disk space balancing algorithm is smart enough to figure out what you want from that); or #3, turn off automatic balancing and manually spread the files in that folder across the poolparts of those disks yourself.
  15. It should be. Alternatively, if both have the same version of DrivePool, you could copy the known good one across to the other machine and then restart the latter's DrivePool service.
  16. Re "Drivepool not balancing": @Christopher (Drashna) Any ideas?
  17. Re "Drivepool not balancing": Try temporarily turning off all other balancers and requesting a re-measure?
  18. Re "Drivepool not balancing": To maximise the likelihood of balancing occurring, ensure you have "Automatic balancing - Triggers" set to 100% with "Or at least this much data needs to be moved" ticked and set to a small value (e.g. 100 MB).
  19. For various reasons I removed the drive letters of the individual poolpart volumes and instead mounted them as folders under a path (e.g. "e:\disks\d1", "e:\disks\d2", "e:\disks\d3", etc.) in a small empty volume (e.g. "e:\"). In the case of Windows Defender, adding that parent folder (e.g. "e:\disks\") to Defender's exclusions* prevents it double-scanning everything. If I want to completely prevent it scanning a particular file or folder in the pool, I can then also add the file's or folder's pool path (e.g. "p:\noscanfolder\notthisfile").

      *For Windows 10, open Windows Security -> Virus & threat protection -> Manage Settings -> Add or remove exclusions -> Add an exclusion -> Folder.
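      As a sketch of both steps from an elevated command prompt (the volume GUID is hypothetical; running mountvol with no arguments lists the real ones, and the exclusion can also be added via the GUI as described above):
         rem mount a poolpart volume as a folder (the target folder must exist and be empty)
         mountvol e:\disks\d1 \\?\Volume{12345678-0000-0000-0000-123456789abc}\
         rem add the Defender exclusion via PowerShell's Add-MpPreference cmdlet
         powershell -Command "Add-MpPreference -ExclusionPath 'e:\disks'"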
  20. It should only compute as much as it needs to so that it can meet the requirements of each balancer (sorry, I realise that's vague). But if you've still got 100% cpu going on I'd recommend opening a ticket so StableBit can help directly.
  21. My DP machine is a Ryzen 5 with 16GB RAM and about 750k files (a twentieth of yours), which doesn't take very long to balance, but the vast majority of those files are balanced already; if your pool has a lot of steady churn from SSD to archive, that would also multiply the problem, though I still would think a Ryzen 9 shouldn't be getting stuck at 100% for days (let alone weeks) regardless.

      Yeah, I'd stick with real-time x2 duplication. If you want files to offload from SSD to Archive pretty much as soon as they land, to keep the SSD from filling, set Automatic balancing to "Balance immediately" and Triggers to "1GB" or similar. You might also try "Not more often than..." if you wish to try an hourly emptying or similar. To minimise bucket-list computation, disable all non-essential balancers.

      I don't know if it'd help, but maybe try having a Pool A = Pool B (SSD x2) + Pool C (HDD x10) arrangement where:
      - Pool A: no duplication; balance immediately / not more often than, with a 1GB trigger; SSD Optimizer (SSD = Pool B, Archive = Pool C); no other balancers.
      - Pool B: x2 real-time duplication; automatic balancing off; all balancers disabled (except maybe StableBit Scanner).
      - Pool C: x2 real-time duplication; automatic balancing nightly (StableBit Scanner plus anything else you need).

      That way - in theory - pool A is where all your files "are" (from the user perspective) and only has one main job, "use B as a buffer for C", while pool C keeps the HDDs balanced as its own job, scheduled separately from the cache emptying. Note that Pool B doesn't need the Duplication Space Optimizer because the duplication multiplier matches the number of drives.

      If nothing seems to fix the CPU/bucket issue, I'd suggest opening a support ticket with StableBit.
  22. The bucket lists are the files (and folders) that the balancer determines need to be moved between poolparts.

      Does your data set involve a very large number of files? Do you have balancing set to trigger immediately on any amount of data, so that the cache drives effectively act as a real-time-ish buffer, or is it scheduled daily so files aren't moved off until a certain time? Is your duplication real-time or nightly, and at what multiplier(s)?

      If it helps, I believe you can see what files are being added to the bucket lists by opening the Service Log (DrivePool -> Settings -> Troubleshooting -> Service log...) and setting the File Mover trace level from Informational to Verbose (Tracing levels -> F -> File Mover -> Verbose).

      I'd be going with the SSD Optimizer balancer to have the two 8TB drives as cache, which I'm guessing is what you're doing, but even filled they shouldn't be taking weeks to empty unless there's some kind of bottleneck going on, and 100% CPU for that length of time also seems a red flag. What CPU do you have? If you open Windows Task Manager, can you check in Performance to see if there are high kernel times (right-click the graph) and in Details can you see any culprit process(es)?
  23. Hi, how do you have your pool(s) arranged and what settings are you using for the balancing?
  24. Ron: Do you have Automatic Balancing turned on (from the DrivePool app, Manage Pool -> Balancing...) and what is the Triggers section set to?
  25. "1. Maximise the space I have" DrivePool can take any bunch of NTFS-formatted (or REFS if you use that) simple disks and pool their free space as a virtual drive; existing content is unaffected and you can continue using the disks individually if you want. Only caveat is that any one incoming file can't be larger than the largest amount of free space individually available on those disks. "2. Back up my desktop to it" I do nightly backups of my desktop and two laptops to my pool via a network share. Keep in mind the above caveat. "3. Improve drive performance, to hopefully better utilise the 10Gbps connection and improve read/write speeds as much as possible (e.g. similar benefits to a RAID0-like setup)" Short answer: If you want RAID0-like performance, you need an actual RAID0-like array. Nothing else comes close. Longer answer: DrivePool's real-time duplication is simultaneous to the disks involved, so pools are no slower or faster to write to regardless of duplication level; in that respect it is comparable to RAID1. DrivePool will also attempt, where files exist on multiple disks within a pool, to read from the disk it decides is likely to offer better performance. It also offers a read-striping option, but the performance boost is minor due to DrivePool operating at the file level rather than the block level (and its read-striping is not compatible with some hashing utilities) Additional: if you add one or more SSDs to a pool, you can set DrivePool to use those as an incoming cache (with scheduled emptying to the rest of the pool) or for specific folders (e.g. if you want certain files to always be saved to and kept on the SSDs). Additional #2: if a drive in a pool fails, the pool becomes read-only until the failure is resolved. This is much better than RAID0 (where the array just flat dies) but not as good as RAID1 or higher (where the array usually continues to be writable). "4. Have a local backup/duplicates on the drives of all the content to protect against drive failures (e.g. similar benefits to a RAID1 / RAID10-like setup)" DrivePool can be set to duplicate files across any number of disks in a pool, and this can be controlled down to the folder level (e.g. you might have a folder tree at x2 but certain folders within that tree at x1 and x3, or vice versa, etc). "As this is a DAS, it is also captured by my Backblaze account and will all be backed up to the cloud for an off-site, cloud-based backup as a last resort if ever needed." There is an issue with file IDs, which some backup utilities use to decide what needs to be backed up, not having guaranteed persistence in DrivePool's pools. If Backblaze uses these without checking their validity, there are workarounds (e.g. backing up the physical disks that form the pool, rather than the pool itself) but it is something to keep in mind (e.g. it may constrain how you set up duplication).