Everything posted by Shane

  1. At least when testing on my own machines there is caching going on - but my opinion is that it's being done by Windows, since caching file system queries is part of a modern operating system's job description and having DrivePool do it too would seem redundant (although I know dedicated caching programs, e.g. PrimoCache, do exist). Certainly there's nothing in DrivePool's settings that mentions caching. Whether a disk gets woken by a particular directory query depends on whether that query can be answered from the cache built up by previous queries.
  2. To quote Christopher from that thread, "StableBit DrivePool pulls the directory list for the pool directly from the disks, and merges them in memory, basically. So if you have something that is scanning the folders, you may see activity." There may be some directory/file list caching going on in RAM, whether at the Windows OS and/or DrivePool level, but DrivePool itself does not keep any form of permanent (disk-based, reboot-surviving) record of directory contents.
  3. Not currently. StableBit decided against making the software that way due to the risk of data conflicts - with one or more disks missing, and in the absence of a master directory tree record, there is no way (other than the user being extremely careful) to prevent different files being created that occupy the same paths. You could submit a feature request for it as an optional setting. I know once or twice I've run into situations where I would've liked it, but I imagine if StableBit implemented it then it would be very much a Use At Own Risk thing.
  4. Note that hdparm only controls if/when the disks themselves decide to spin down; it does not control if/when Windows decides to spin the disks down, nor does it prevent them from being spun back up by Windows or an application or service accessing the disks, and the effect is (normally?) per-boot, not permanent. If you want to permanently alter the idle timer of a specific hard drive, you should consult the manufacturer. An issue here is that DrivePool does not keep a record of where files are stored, so I presume it would have to wake up (enough of?) the pool as a whole to find the disk containing the file you want if you didn't have (enough of?) the pool's directory structure cached in RAM by the OS (or other utility).
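     As a rough hdparm sketch (the device name is hypothetical, and not every drive honours these requests): for -S, values 1-240 are multiples of 5 seconds, 241-251 are multiples of 30 minutes, and 0 disables the drive's own timer.
          hdparm -S 242 /dev/sdX     (ask the drive itself to spin down after roughly an hour of idle time)
          hdparm -S 0 /dev/sdX       (turn off the drive's own idle spin-down timer)
     Either way, as noted above, Windows or anything else that touches the disk can still wake it back up.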
  5. DrivePool defaults to storing logs in C:\ProgramData\StableBit DrivePool\Service\Logs\Service if that helps any.
  6. It shouldn't cause any issues. When all the drives are unplugged the pool will no longer show up, but DrivePool will redetect the pool when the drives are plugged back in.
  7. Hi EdwardK, see this recent post and links by blunderdome - apparently it is a firmware bug that can affect MX500 drives?
  8. Enabling duplication for your entire pool will default to doubling the used space of your pool. So yes, if you had 36TB of unduplicated files in the pool, enabling the default duplication (x2) would result in those files using 72TB of space (assuming you had that much space available). To answer a later question in your post: if for example you had drives A, B, C and D and you had a file on B (inside the PoolPart folder), enabling duplication x2 will put that file also on either A or C or D.
     Adding a drive to a pool will NOT put any pre-existing data on that drive into the pool (which is why the pool still looks empty when you add a used drive to it; it's only making use of the free space of that drive, it's not automatically adding files from it). Any pre-existing data will NOT be touched, remaining where it is. You have to move that data yourself if you want it in the pool.
     It's fine to move pre-existing data on a drive directly into that drive's hidden PoolPart folder to get it quickly into the pool; you just have to be careful to avoid accidentally overlapping anything already in the pool (e.g. on other drives) that has the same folder/file naming. To avoid that happening accidentally, I suggest creating a unique folder (e.g. maybe use that drive's serial number as the folder name) inside each PoolPart and moving your data into that if you're worried; once you've done that for all the pre-used drives you're adding to the pool you can then move the files where you want in the pool normally (a rough example is sketched at the end of this post).
     If you're moving data directly into the hidden PoolPart folder, after you've finished you should also tell DrivePool to re-measure that pool (in the GUI, click Manage Pool -> Re-measure...) so that it can "see" how much data is on each drive in the pool. This helps it perform any balancing accurately.
     If you use DrivePool's "Remove" function to remove a drive from the pool, the data inside that drive's PoolPart folder will be moved from that drive to other drives in the pool as part of the operation. Any data on that drive that is outside of that drive's PoolPart folder will NOT be touched (because that data wasn't in the pool).
     DrivePool doesn't keep a record of which files are on which drives in the pool. You would need to keep your own records for that.
     Regarding using SnapRAID with DrivePool, #1 you should turn off (or be very careful with) DrivePool's balancing, #2 if you're only going to be using the drives for DrivePool with no data outside the PoolParts then I suggest using the PoolPart folders as SnapRAID's mount points as per this post. Your mileage may vary however.
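     A rough sketch of the "unique folder inside the PoolPart" approach (the paths and the PoolPart folder name are hypothetical - each pooled drive has its own PoolPart.xxxx... folder):
          robocopy "D:\Media" "D:\PoolPart.xxxxxxxx\SERIAL123\Media" /E /MOVE /DCOPY:DAT
     After that finishes for each pre-used drive, do the Manage Pool -> Re-measure... step described above, then rearrange the files within the pool as you like.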
  9. The default log location is C:\ProgramData\StableBit Scanner\Service\Logs if that helps.
  10. Quoting from the description for current pending sector counts in Scanner, "If an unstable sector is subsequently written or read successfully, this value is decreased and the sector is not remapped." So I'd guess that between Scanner detecting the event and you opening it, the drive has been able to successfully re-read the sector on its own and taken it off the list.
  11. 1. DrivePool not waiting long enough for the drives. You can increase how long DrivePool waits for drives after booting by editing some or all of the following entries in its advanced settings file (a sketch of what that edit might look like follows at the end of this post):
     CoveFs_WaitForVolumesOnMountMs
     CoveFs_WaitForKnownPoolPartsOnMountMs
     CoveFs_WaitForPoolsAndPoolPartsAfterMountMs
     2. DrivePool appears to be scanning all drives in all pools, not just the drives in a particular pool, when checking metadata for file(s) in that particular pool. ... Assuming I'm understanding correctly that the above is what's happening, I don't know why that's happening. If it's instead that you've got a pool consisting of nvme and hdd drives and it's checking the hdd drives even though you yourself know the file(s) involved are only on the nvme drives, that would be because DrivePool does not keep its own records of directory contents across drives, so it checks all the drives in the pool when looking for files. If you have the RAM space and the need is sufficient, perhaps consider dedicated disk caching software (e.g. I've heard from other DrivePool users that PrimoCache is very good at this task but have not tried it myself)?
     3. DrivePool reporting files being duplicated that aren't set to be duplicated. I suspect this is DrivePool checking whether each folder needs to be duplicated or not, not actually duplicating them. If an affected pool is supposed to have no duplication at all, check that Manage Pool -> File protection -> Pool file duplication is actually unticked for that pool? You could also try editing FileDuplication_NoConsistencyCheckOnNewPool to True in the advanced settings file to completely turn off duplication consistency checking when pools connect, but note that this will affect all pools and only applies on connection.
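     As a rough sketch of the advanced settings edit in #1 (assumptions here: recent versions keep these settings in a JSON file, typically C:\ProgramData\StableBit DrivePool\Service\Settings.json - check the StableBit wiki for your version - and the change goes in the "Override" value; both numbers below are purely illustrative):
          "CoveFs_WaitForVolumesOnMountMs": {
              "Default": 10000,
              "Override": 30000
          },
     You'd then most likely need to restart the StableBit DrivePool service (or reboot) for the change to be picked up.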
  12. If it's largely set at defaults, I've noticed it doesn't usually try to get the distribution across drives "perfect", just "good enough". Attempting a "perfect" distribution can mean more inter-drive movement, which means more activity/wear. In your case this might in part be due to the emptying limit on the Prevent Drive Overfill balancer? If that's enabled and involved, lowering the emptying limit might allow it to shift more data around.
  13. Hi Thranx, I think this might be better sent to StableBit directly via the Contact form, as it could be an undiscovered bug in the UI? Please let us know how it goes though, that's a big shark pool and I hope you don't end up needing a bigger boat! (If it does turn out to be a UI issue on max drives per pool that can't be fixed quickly, perhaps you could try a pool of pools, with the underlayer of pools being made from the drives in each bay?)
  14. I think you can leave the StableBit DrivePool Shutdown service as Automatic. Re (un)mounting the drives, it's a reference to making sure the drives don't have anything remaining in the write cache before being physically turned off (e.g. via choosing the Eject option in the Windows system tray before physically pulling a drive out of the USB socket). I don't have such a script handy, though.
  15. Uh, yes, I'd agree, it's not a task for DrivePool - because it doesn't do that. DrivePool just pools files, the specific content of those files is irrelevant to it. Post screenshot (redact specifics if necessary)?
  16. Hmm. Perhaps CrystalDiskMark? You can choose to select a folder as well as drives, so you could test drives mounted as folders that way. It won't scan them all at once, you'd have to record the results from each drive and compare them yourself.
  17. If on that PC you only use DrivePool with those drives - perhaps you could try changing the DrivePool service from Automatic to Manual, then only turn it on after you turn on the drives and turn it off before you turn off the drives. That way the service should never notice the drives are "missing" (because it wouldn't be while the service is running). Maybe put a pair of shortcuts on your desktop to start and stop the service (maybe also to mount and dismount the drives) respectively?
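     A sketch of what those shortcuts could run from an elevated prompt (assuming the service name is "DrivePoolService" - that name is an assumption, so check the Services console or "sc query" output on your machine):
          sc config DrivePoolService start= demand     (one-off: set the service to Manual)
          net start DrivePoolService                   (run after turning the drives on)
          net stop DrivePoolService                    (run before turning the drives off)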
  18. It isn't (yet?) mentioned in the changelog, but it apparently has been added in the recent beta. I also suggest reading through the rest of the linked thread in case any of the other posts in it are relevant to your situation.
  19. If a disk in the pool goes missing then the pool should automatically change to read-only until the disk returns; we can test for that with a simple batch file. For example if the app was "c:\bin\monitor.exe" and your pool's drive letter was "p" then you could create an empty file called "testreadable.txt" in the root folder of your pool and create a batch file (e.g. "launchapp.bat") containing one line: COPY /Y p:\testreadable.txt p:\testwritable.txt && "c:\bin\monitor.exe" Launching that batch file would only result in launching the app if the file p:\testreadable.txt could be copied over to p:\testwritable.txt - which would indicate the pool was writable (all disks are present). Note that this doesn't help if a drive goes missing while the app is already running. You'd need something more for that.
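     For what it's worth, a rough sketch of that "something more", using the same hypothetical paths as above (it forcibly ends the app with taskkill, so only use this kind of thing if the app tolerates being killed):
          @echo off
          copy /y p:\testreadable.txt p:\testwritable.txt >nul || exit /b
          start "" "c:\bin\monitor.exe"
          :check
          timeout /t 60 /nobreak >nul
          copy /y p:\testreadable.txt p:\testwritable.txt >nul && goto check
          taskkill /f /im monitor.exe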
  20. Correct. And:
     Try "resmon" (aka the Windows Resource Monitor) to see files that are currently being read/written. Open it, click on the "Disk" tab, find the "Disk Activity" section; you can click on a column header to sort (and toggle ascending/descending). You can also click the checkboxes in the "Processes with Disk Activity" section to filter by process.
     Try "Everything" (made by voidtools.com) to quickly see all files that are on any letter-mounted physical NTFS volume on a machine (amongst other tricks). You could for example type in "Acme\EpisodeOne" and it would immediately show you which disks in the pool (so long as those disks have drive letters) have files that match that string.
  21. Having done some looking it does appear that BypassIO is planned to be used for storage performance in general instead of just gaming (which is the main end-user impetus for it currently) and that it will eventually be expanded to interfaces beyond NVMe, with Microsoft pushing for drivers to at minimum be updated to respond to the (Windows 11) OS whether they do or do not support it. So I'd imagine Stablebit would want to add it, it'd just be a matter of priorities as to when. For anyone curious, you can check whether any drivers are preventing or only partially supporting BypassIO for a given drive/path via the command fsutil bypassio state path (e.g. "c:\"). Note that it might only tell you about one driver being an issue even if other drivers are also an issue. If you get "bypassio is an invalid parameter" then you're on an older version of Windows that doesn't have BypassIO.
  22. I have no idea whether it will fix whatever the problem is, but if you try it then I'd suggest checking that the config store folder is actually removed when the program is uninstalled before you reinstall?
  23. Correct. The rules are checked against the full path of files, which includes their folder(s). For example a rule of \TV-Shows\A* would match both \TV-Shows\Allshowslist.txt and \TV-Shows\AcmeShow\EpisodeOne.mp4 and place them accordingly. If you wanted to match the latter but not the former then you would instead use a rule of \TV-Shows\A*\* to do so.
  24. As far as I know, DrivePool only has basic wildcard support (asterisk and question mark), it doesn't have regular expression support. You'd need to make a feature request.
  25. Note that your linked site notes the feature isn't actually enabled yet in Diablo 4 - so it can't take advantage of DirectStorage even if you do install the game outside the pool - and the game still runs fine without it. So right now "an increasing number of PC games" appears to be... one game instead of zero? With just a handful in development, and we don't know whether any of those will require it to work? It is a nice tech though, if you have the high-end hardware to support it.