Posts posted by Shane

  1. It can be fiddly, but as long as they're on the same subnet then yes; see https://stablebit.com/Support/DrivePool/2.X/Manual?Section=Remote Control Details (Scanner keeps the equivalent file in C:\Program Files (x86)\StableBit\Scanner\Service instead).

    TLDR:

    1. Your user account that is connecting must also be a user on the remote computer, and that user must be part of the built-in Administrators group.
    2. Make sure both machines are running the same version of DrivePool/Scanner.
    3. Make sure the Remote Control option is ticked in the Cog icon settings menu of DrivePool on both machines, and likewise under Settings -> Scanner settings for Scanner.
    4. Make a copy of the RemoteControl.default.xml file in the relevant folder, named RemoteControl.xml, in the same folder (it shouldn't matter, but don't mix the Scanner and DrivePool files). You'll need Administrator access to do this.
    5. Restart the DrivePool and Scanner services on both machines and launch the relevant GUI.
    6. You should see your machine's name at the top of the GUI; click it to see whether the other machine has shown up. You may have to wait a little while and try again.

    If the remote machine doesn't get discovered even after waiting, you can try editing the RemoteControl.xml file in the local folder to manually add the hostname or IPv4 address of the remote machine then restart the services again (note: a machine may then show up multiple times in the connection box, I think because once it's manually visible it can become discoverable as well). It can be tricky editing the file in its usual folder; alternatively move the file to your Desktop, edit/save it there, then move it back.
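
    If you'd rather restart the services from a command line than via services.msc, something like the following should work from an elevated prompt. Note the service names below are from memory, so verify them in services.msc first:

        rem Restart both services (elevated prompt; names are assumptions - check services.msc).
        net stop "StableBit DrivePool Service" && net start "StableBit DrivePool Service"
        net stop "StableBit Scanner Service" && net start "StableBit Scanner Service"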

  2. When intending to remove multiple drives, I strongly recommend using the Drive Usage Limiter balancer (make sure it has priority over everything else, except optionally the StableBit Scanner balancer if the latter is present) to empty all of the drives you wish to get rid of before you proceed to Remove them. While Remove can queue multiple drives, that function currently operates naively and may evacuate each drive onto other drives still waiting in the queue - so it can potentially waste enormous amounts of time and bandwidth.

    As for speeding up the automatic balancing itself, there's not really a way to make it go faster other than clicking the Increase Priority double-arrows adjacent to the Pool Organization bar if they're present (IMO it makes little difference if the pool is otherwise not in use, but YMMV). For quicker results you'd have to turn balancing off and manually copy/move the data between the hidden poolparts in parallel yourself (being careful to avoid "crossing the streams", so to speak - it's safe if you know what you're doing, but you're still proceeding at your own risk), then perform a Re-measure afterwards; a sketch follows below.
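
    As an illustrative sketch of such a manual move (elevated prompt, balancing turned off first; the PoolPart folder names here are made up - substitute the actual hidden folders on your own drives):

        rem Move everything from drive D's hidden poolpart directly into drive E's.
        rem PoolPart names below are placeholders; check the real ones on your drives.
        robocopy "D:\PoolPart.aaaa-1111" "E:\PoolPart.bbbb-2222" /E /MOVE /COPYALL /DCOPY:DAT /R:1 /W:1

    Once it's finished, Re-measure, and only then use Remove on the (now empty) drive.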

  3. MitchC, first of all thank you for posting this! My (early a.m.) thoughts:

    • (summarised) "DrivePool does not properly notify the Windows FileSystem Watcher API of changes to files and folders in a Pool."

    If so, this is certainly a bug that needs fixing. Indicating "I changed a file" when what actually happened was "I read a file" could be bad or even crippling for any cohabiting software that needs to respond to changes (as per your example of Visual Studio), as could neglecting to say "this folder changed" when a file/folder inside it is changed.

    • (summarised) "DrivePool isn't keeping FileID identifiers persistent across reboots, moves or renames."

    Huh. Confirmed, and as I understand it the latter two should be persistent @Christopher (Drashna)? However, attaining persistence across reboots might be tricky given a FileID is only intended to be unique within a volume, while a DrivePool file can at any time exist across multiple volumes due to duplication and move between volumes due to balancing and drive replacement. Furthermore, as Microsoft itself states, "File IDs are not guaranteed to be unique over time, because file systems are free to reuse them". I.e., software should not rely solely on these over time, especially not backup/sync software! If OneDrive is actually relying on it so much that files are disappearing or swapping content, that would seem to be an own-goal by Microsoft. Digging further, it also appears that FileID identifiers (at least for NTFS) are not actually guaranteed to be collision-free; it's just astronomically improbable in the new 64+64 bit format, as opposed to the old but apparently still in use 16+48 bit format.

    • (quote) "the FileID numbers given out by DrivePool are incremental and very low.  This means when they do reset you almost certainly will get collisions with former numbers."

    Ouch. That's a good point. Any suggestions for mitigation until a permanent solution can be found? Perhaps initialising DrivePool's FileID counter from the system clock instead of from zero, e.g. at 100ns increments (FILETIME): even an hour's uptime between restarts would give a collision gap of roughly thirty-six billion (see the arithmetic after these bullets)?

    • (quote) "I would avoid any file backup/synchronization tools and DrivePool drives (if possible)."

    I disagree; rather, I would opine that any backup/synchronization tool that relies solely on FileID for comparisons should be discarded (if possible); a metric that's not reliable over time should ipso facto not be trusted by software that needs to be reliable over time.
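
    To put rough numbers on the clock-seeding idea above:

        1 hour of uptime = 3,600 s = 3,600 / 0.0000001 = 36,000,000,000 FILETIME ticks (100ns each)

    So a counter seeded from the clock at service start would begin roughly 3.6 x 10^10 beyond wherever the previous session's counter began; the previous session would have to have issued more than thirty-six billion FileIDs before any overlap became possible.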

    Incidentally, on the subject of file hashing, I recommend ensuring Manage Pool -> Performance -> Read striping is un-ticked, as I've found intermittent hashing errors in a few (not all) third-party tools when it's enabled. I don't know why this happens (maybe low-level disk calls that aren't compatible with non-physical volumes?), but disabling read striping removes the problem and I've found the performance hit to be minor.

  4. At least when testing on my own machines there is caching going on - but my opinion is that it's being done by Windows, since caching file system queries is part of a modern operating system's job description and having DrivePool do it too would seem redundant (although I know dedicated caching programs, e.g. PrimoCache, do exist). Certainly there's nothing in DrivePool's settings that mentions caching.

    Whether a disk gets woken by a particular directory query is going to depend on whether that query can be answered by what's in the cache from previous queries.

  5. 4 hours ago, andrewds said:

    Is that true? I asked for clarity on this exactly several days ago but never got a response.

    To quote Christopher from that thread, "StableBit DrivePool pulls the directory list for the pool directly from the disks, and merges them in memory, basically.  So if you have something that is scanning the folders, you may see activity."

    There may be some directory/file list caching going on in RAM, whether at the Windows OS and/or DrivePool level, but DrivePool itself does not keep any form of permanent (disk-based, reboot-surviving) record of directory contents.

  6. Not currently. StableBit decided against making the software work that way due to the risk of data conflicts: with one or more disks missing, and in the absence of a master directory tree record, there is no way (other than the user being extremely careful) to prevent different files being created that occupy the same paths.

    You could submit a feature request for it as an optional setting. I know once or twice I've run into situations where I would've liked it, but I imagine if StableBit implemented it then it would be very much a Use At Own Risk thing.

  7. Note that hdparm only controls if/when the disks themselves decide to spin down; it does not control if/when Windows decides to spin the disks down, nor does it prevent them from being spun back up by Windows or an application or service accessing the disks, and the effect is (normally?) per-boot, not permanent. If you want to permanently alter the idle timer of a specific hard drive, you should consult the manufacturer.
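
    On the Windows side, the spin-down idle timer is a power plan setting (global, not per-drive); it can be changed from an elevated prompt with powercfg if you don't want to dig through the Power Options GUI, e.g.:

        rem Set "turn off hard disk after" to 20 minutes on AC power and on battery respectively.
        powercfg /change disk-timeout-ac 20
        powercfg /change disk-timeout-dc 20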

    9 hours ago, Katz777 said:

    My Media server gets light use and ideally I would like for the disks to spin down when not in use then for the individual drive in use to spin up when a media file is accessed from the Plex server which runs from my Windows 11 PC.

    An issue here is that DrivePool does not keep a record of where files are stored, so I presume it would have to wake up (enough of?) the pool as a whole to find the disk containing the file you want if you didn't have (enough of?) the pool's directory structure cached in RAM by the OS (or another utility).

  8. Enabling duplication for your entire pool will default to doubling the used space of your pool. So yes, if you had 36TB of unduplicated files in the pool, enabling the default duplication (x2) would result in those files using 72TB of space (assuming you had that much space available). To answer a later question in your post: if for example you had drives A, B, C and D and you had a file on B (inside the PoolPart folder), enabling duplication x2 will also put that file on either A, C or D.

    Adding a drive to a pool will NOT put any pre-existing data on that drive into the pool (which is why the pool still looks empty when you add a used drive to it; it's only making use of the free space of that drive, it's not automatically adding files from it). Any pre-existing data will NOT be touched, remaining where it is. You have to move that data yourself if you want it in the pool.

    It's fine to move pre-existing data on a drive directly into that drive's hidden PoolPart folder to get it quickly into the pool; you just have to be careful to avoid accidentally overlapping anything already in the pool (e.g. on other drives) that has the same folder/file naming. To avoid that happening, I suggest creating a uniquely named folder (e.g. maybe use that drive's serial number as the folder name) inside each PoolPart and moving your data into that. Once you've done so for all the pre-used drives you're adding to the pool, you can then rearrange the data within the pool normally.
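
    A minimal sketch of that (elevated prompt; the PoolPart name and serial below are placeholders for your drive's actual hidden folder and serial number). Since source and destination are on the same drive, the move is near-instant - no data actually gets copied:

        rem Wrap drive D's pre-existing data in a uniquely named folder inside its poolpart.
        rem PoolPart.cccc-3333 and SERIAL123 are placeholders.
        mkdir "D:\PoolPart.cccc-3333\SERIAL123"
        move "D:\My Existing Data" "D:\PoolPart.cccc-3333\SERIAL123\My Existing Data"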

    If you're moving data directly into the hidden PoolPart folder, after you've finished you should also tell DrivePool to re-measure that pool (in the GUI, click Manage Pool -> Re-measure...) so that it can "see" how much data is on each drive in the pool. This helps it perform any balancing accurately.

    If you use DrivePool's "Remove" function to remove a drive from the pool, the data inside that drive's PoolPart folder will be moved from that drive to other drives in the Pool as part of the operation. Any data on that drive that is outside of that drive's PoolPart folder will NOT be touched (because that data wasn't in the pool).

    DrivePool doesn't keep a record of which files are on which drives in the pool. You would need to keep your own records for that.

    Regarding using SnapRAID with DrivePool, #1 you should turn off (or be very careful with) DrivePool's balancing, #2 if you're only going to be using the drives for DrivePool with no data outside the poolparts then I suggest using the PoolPart folders as SnapRAID's mount points as per this post. Your mileage may vary however.

  9. Quoting from the description for current pending sector counts in Scanner, "If an unstable sector is subsequently written or read successfully, this value is decreased and the sector is not remapped."

    So I'd guess that between Scanner detecting the event and you opening it, the drive has been able to successfully re-read the sector on its own and taken it off the list.

  10. 1. DrivePool not waiting long enough for the drives.
    You can increase how long DrivePool waits for drives after booting by editing some or all of the following entries in its advanced settings file:

    • CoveFs_WaitForVolumesOnMountMs
    • CoveFs_WaitForKnownPoolPartsOnMountMs
    • CoveFs_WaitForPoolsAndPoolPartsAfterMountMs

    2. DrivePool appears to be scanning all drives in all pools, not just the drives in a particular pool, when checking metadata for file(s) in that particular pool.
    ... Assuming I'm understanding correctly that the above is what's happening, I don't know why that's happening. If it's instead that you've got a pool consisting of NVMe and HDD drives and it's checking the HDD drives even though you yourself know the file(s) involved are only on the NVMe drives, that would be because DrivePool does not keep its own records of directory contents across drives, so it checks all the drives in the pool when looking for files. If you have the RAM space and the need is sufficient, perhaps consider dedicated disk caching software (e.g. I've heard from other DrivePool users that PrimoCache is very good at this task, but I have not tried it myself)?

    3. DrivePool reporting files being duplicated that aren't set to be duplicated.
    I suspect this is DrivePool checking whether each folder needs to be duplicated or not, rather than actually duplicating them. If an affected pool is supposed to have no duplication at all, check that Manage Pool -> File protection -> Pool file duplication is actually unticked for that pool. You could also try setting FileDuplication_NoConsistencyCheckOnNewPool to True in the advanced settings file to completely turn off duplication consistency checking when pools connect, but note that this will affect all pools and only applies on connection.
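
    For reference, the CoveFs wait settings from #1 are edited in DrivePool's advanced settings file (on recent builds this should be Settings.json under C:\ProgramData\StableBit DrivePool\Service\ - check the Advanced Settings page on the StableBit wiki to confirm the location and defaults for your version); you change an entry's Override value, along these lines (values illustrative only):

        "CoveFs_WaitForVolumesOnMountMs": {
            "Default": 10000,
            "Override": 30000
        }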

  11. If it's largely set at defaults, I've noticed it doesn't usually try to get the distribution across drives "perfect", just "good enough". Attempting a "perfect" distribution can mean more inter-drive movement, which means more activity/wear.

    In your case this might in part be due to the emptying limit on the Prevent Drive Overfill balancer? If that's enabled and involved, lowering the emptying limit might allow it to shift more data around.

  12. Hi Thranx, I think this might be better sent to StableBit directly via the Contact form, as it could be an undiscovered bug in the UI. Please let us know how it goes though; that's a big shark pool and I hope you don't end up needing a bigger boat! :)

    (if it does turn out to be a UI issue on max drives per pool that can't be fixed quickly, perhaps you could try a pool of pools, with the underlayer of pools being made from the drives in each bay?)

  13. I think you can leave the StableBit DrivePool Shutdown service as Automatic.

    Re (un)mounting the drives, it's a reference to making sure the drives don't have anything remaining in the write cache before being physically turned off (e.g. via choosing the Eject option in the Windows system tray before physically pulling a drive from the USB socket). I don't have a ready-made script for that, though; see the sketch below for the dismount step.
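
    That said, for the dismount step specifically, Windows' built-in mountvol can take a volume offline (flushing it in the process) - a bare-bones sketch, where X: is a placeholder for the drive in question (test carefully):

        rem Dismounts X: and takes the volume offline; pending writes are flushed in the process.
        rem Remounting afterwards needs Disk Management or mountvol with the volume's GUID path.
        mountvol X:\ /P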

  14. If on that PC you only use DrivePool with those drives, perhaps you could try changing the DrivePool service from Automatic to Manual, then only turn it on after you turn on the drives and turn it off before you turn off the drives. That way the service should never notice the drives are "missing" (because they'd never be missing while the service is running).

    Maybe put a pair of shortcuts on your desktop to start and stop the service respectively (and perhaps others to mount and dismount the drives)?
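
    A sketch of what those shortcuts could run (elevated; the service name below is a guess from memory - check the service's Properties in services.msc for the real one):

        rem One-time: set the service to Manual (sc requires the space after "start=").
        sc config "StableBit DrivePool Service" start= demand
        rem Shortcut 1 - start the service after the drives are powered on:
        net start "StableBit DrivePool Service"
        rem Shortcut 2 - stop the service before powering the drives off:
        net stop "StableBit DrivePool Service"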
