Posts posted by Shane

  1. If a disk in the pool goes missing then the pool should automatically change to read-only until the disk returns; we can test for that with a simple batch file. For example, if the app was "c:\bin\monitor.exe" and your pool's drive letter was "p", then you could create an empty file called "testreadable.txt" in the root folder of your pool and create a batch file (e.g. "launchapp.bat") containing one line:

    COPY /Y p:\testreadable.txt p:\testwritable.txt && "c:\bin\monitor.exe"

    Launching that batch file would only result in launching the app if the file p:\testreadable.txt could be copied over to p:\testwritable.txt - which would indicate the pool was writable (all disks are present).

    Note that this doesn't help if a drive goes missing while the app is already running. You'd need something more for that.
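    For the "something more" case - catching a drive that goes missing while the app is already running - one option is a watchdog that repeats the same copy test on a timer. Here's a rough sketch in Python rather than batch, reusing the same hypothetical file names and pool path as above; it only illustrates the check itself, not stopping the app:

```python
import os
import shutil

def pool_is_writable(pool_root):
    """Same check as the batch file: copy testreadable.txt over
    testwritable.txt in the pool root. If a pooled disk is missing,
    DrivePool switches the pool to read-only and the copy fails."""
    src = os.path.join(pool_root, "testreadable.txt")
    dst = os.path.join(pool_root, "testwritable.txt")
    try:
        shutil.copyfile(src, dst)
        return True
    except OSError:
        return False
```

    You could call this every minute or so from a loop (or a scheduled task) and alert, or stop the app, when it returns False.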

  2. Correct. And:

    On 7/5/2023 at 10:41 AM, paulml said:

    I couldn't find an activity log to test out the rule and check to see for myself.

    Try "resmon" (aka the Windows Resource Monitor) to see files that are currently being read/written. Open it, click on the "Disk" tab, find the "Disk Activity" section; you can click on a column header to sort (and toggle ascending/descending). You can also click the checkboxes in the "Processes with Disk Activity" section to filter by process.

    Try "everything" (made by voidtools.com) to quickly see all files that are on any letter-mounted physical NTFS volume on a machine (amongst other tricks). And you could for example type in "Acme\EpisodeOne" and it would immediately show you which disks in the pool (so long as those disks have drive letters) have files that match that string.

  3. Having done some looking, it does appear that BypassIO is planned to be used for storage performance in general rather than just gaming (which is the main end-user impetus for it currently), and that it will eventually be expanded to interfaces beyond NVMe, with Microsoft pushing for drivers to at minimum be updated to respond to the (Windows 11) OS whether or not they support it.

    So I'd imagine Stablebit would want to add it, it'd just be a matter of priorities as to when.

    For anyone curious, you can check whether any drivers are preventing or only partially supporting BypassIO for a given drive/path via the command fsutil bypassio state path (e.g. "c:\").

    Note that it might only tell you about one driver being an issue even if other drivers are also an issue. If you get "bypassio is an invalid parameter" then you're on an older version of Windows that doesn't have BypassIO.

  4. Correct. The rules are checked against the full path of files, which includes their folder(s).

    For example a rule of \TV-Shows\A* would match both \TV-Shows\Allshowslist.txt and \TV-Shows\AcmeShow\EpisodeOne.mp4 and place them accordingly.

    If you wanted to match the latter but not the former then you would instead use a rule of \TV-Shows\A*\* to do so.
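    The pattern semantics described above can be illustrated with Python's fnmatch module, whose * wildcard also matches across folder separators (as DrivePool's does, per the above). This is just an illustration of the matching behaviour, not DrivePool's actual matching code:

```python
from fnmatch import fnmatchcase

# A rule of \TV-Shows\A* matches both paths, because * also spans folders:
assert fnmatchcase(r"\TV-Shows\Allshowslist.txt", r"\TV-Shows\A*")
assert fnmatchcase(r"\TV-Shows\AcmeShow\EpisodeOne.mp4", r"\TV-Shows\A*")

# A rule of \TV-Shows\A*\* requires a folder level after the A..., so it
# only matches the file inside the subfolder:
assert not fnmatchcase(r"\TV-Shows\Allshowslist.txt", r"\TV-Shows\A*\*")
assert fnmatchcase(r"\TV-Shows\AcmeShow\EpisodeOne.mp4", r"\TV-Shows\A*\*")
```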

  5. 19 hours ago, virtd said:

    There is an increasing number of PC games starting to use DirectStorage, and without support for BypassIO in DrivePool they won't work. Those games (e.g. Forspoken, Diablo 4) must currently be installed outside the pool to take advantage of DirectStorage.

    Note that your linked site notes the feature isn't actually enabled yet in Diablo 4 - so it can't take advantage of DirectStorage even if you do install the game outside the pool - and the game still runs fine without it.

    So right now "an increasing number of PC games" appears to be... one game instead of zero? With just a handful in development, and we don't know whether any of those will require it to work?

    It is a nice tech though, if you have the high-end hardware to support it.

  6. Here's a FAQ on what Other etc is, with a followup post where Alex goes into more detail on Other; in your case I'd suggest it's the metadata, directory entries and slack space.

    For whatever vague comparison it's worth, my 39.1TB pool with 4KB clusters has 18.0GB of Other. Note that (64/4)x(8.4/39.1)x18 ≈ 62GB, so by those factors your pool appears to have proportionately less "Other" than mine... and as a fraction of total capacity your Other is consuming less than 0.65% of your pool. I wouldn't worry about it! B)
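    For anyone wanting to reproduce that back-of-envelope scaling (assuming, as the figures above imply, an 8.4TB pool with 64KB clusters being compared against my 39.1TB pool with 4KB clusters), the estimate is just the two ratios applied to my measured Other:

```python
my_other_gb = 18.0           # Other on my 39.1TB pool with 4KB clusters
cluster_ratio = 64 / 4       # 64KB clusters vs 4KB clusters
capacity_ratio = 8.4 / 39.1  # 8.4TB pool vs 39.1TB pool

# Scale my measured Other by both ratios to estimate the comparable figure:
estimate_gb = cluster_ratio * capacity_ratio * my_other_gb
print(round(estimate_gb))  # prints 62
```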

  7. Short answer: There's no one true answer.

    Long answer: If you want maximum performance while you're working, you want it to not (have to) balance during those times. If you want maximum longevity, you want it to move files as little as possible. But it might have to balance more often if you're dealing with sufficiently large files compared to the available space on your drives.

    For example: you might set DrivePool to only balance at midnight, to prefer to keep files within pooldrive:\work-in-progress\ duplicated on drives A and B, and to prefer to keep files within pooldrive:\completed-projects\ duplicated on any two of drives C, D, E and F; basically this would mean A and B wear faster but C, D, E and F last much longer.

    I'd also suggest getting StableBit Scanner (if you haven't already) as it can be used to keep an eye on the remaining life of your SSDs, be set to notify/email about issues, etc, and in conjunction with DrivePool it can automatically evacuate files from unhealthy drives to good ones.

  8. Duplication measurement is the total used space; e.g. if you had a 1MB file duplicated across drive A and drive B, the Duplicated measurement would be 2MB.

    For finding the Other, I suggest a program like JAM TreeSize which (when run elevated) should be able to accurately report the consumption of a drive including system folders.

  9. Uninstalling the DrivePool software will just make your pooling unavailable until it is reinstalled; the content of the pool is still there in the poolpart folder on each drive, and those poolpart folders will be found and re-used by DrivePool when it is reinstalled.

    I believe Repair just checks to see if any needed files are missing and replaces them; I don't know if it would (or how well it would) detect damaged configurations/files.

    You could certainly try uninstalling and reinstalling to see if that does better than a repair in solving the problem; it won't harm your pool.

  10. Could be a file permissions issue? Maybe check if other balancers are also not keeping changes?

    The config folder is "C:\ProgramData\StableBit DrivePool\Service\Store\Json"

    If you're unable to spot anything different, I'd suggest opening a ticket to report it.

  11. Ah, okay, you've got multiple google drives in the pool; with default settings that makes the default removal method non-viable, because normally DrivePool will move files from the drive being removed to the remaining drive(s) with the most free space - i.e. most of the time that'll be your other google drives... which you also want to remove. So we'll need to use a different method.

    Are the "Google Drive #" disks each on separate accounts each with a 5TB limit or all on one account with one 5TB limit?

    Is the "Cloud Media" the NAS or is it also a google drive?

    How fast is your average download speed from google drive directly? via CloudDrive? Is your speed high enough to empty or copy the google drives before google switches them to read-only?

    If it's fast enough (allow a margin of error), you could add your NAS to the pool and then use the Drive Usage Limiter balancer to empty the google drives into it.

    If it's not fast enough, I'd recommend switching your cloud drives to read-only now (by manually "breaking" - this is reversible - the pool if necessary) and then begin copying the cloud drives / pool parts to your NAS.

    If you'd prefer to set up a chat (e.g. discord/teams/zoom/whatever) instead of going back and forth day(s) at a time feel free to Message me (rest the cursor over my user icon, should show an option to send a Message) with your preferred chat details/time(zone)s.

  12. When you remove a drive from a pool, (normally) the data on that drive is moved to the rest of the pool first, which is limited by the speed of the connection. If you're using duplication, you can make it quicker by ticking the "Duplicate files later" checkbox at the beginning of the removal process.

    Edit: If you've got multiple drives in the cloud that are all part of the same pool, it could become even slower if it's moving data from the cloud drive being removed to the other drives that are in the cloud. To avoid that you'd need to set a balancing rule to ensure the data is moved only to your local drives.

    ... When you say "remove a pool", do you literally mean "remove a pool" or do you mean to say "remove a drive from a pool"?

     

  13. Oh, ouch. "Same used space" + "pool folders missing" usually means a corrupted file system. With both K and L missing their poolparts I'd suggest a memory test and if possible a controller test too; was the drivepool gone before or after you tried snapraid's fix commands?

    If you don't already have a backup of the pool elsewhere and need the data within, I'd also suggest making a raw disk image of those remaining drives (K, L, M), maybe on a different machine, before seeing if disk recovery software can restore it.

     

  14. 12 hours ago, RumbleDrum said:

    My main pool "Y" is now inside a new pool "E".

    Other way around, actually. I can understand the confusion but when the DrivePool interface says "this pool is part of pool (name)" it means that "this pool" is supplying free space to the named pool. So instead of "inside" one should say "under" or "supporting", as pool Y is "part of" pool E in the sense that pool Y is "supplying free space to" pool E.

    12 hours ago, RumbleDrum said:

    I don't recall doing anything other than removing the old drive from the pool, changing the drive letter of the replacement drive to match the old drive, and then adding the new drive to the existing pool. Lo and behold, I now have Pool "E" and Pool "Y". The contents of Pool "Y" (the original) shows three drives (P, Q and R) which is expected. The contents of Pool "E" shows only Pool "Y" - no individual drives. 

    Yeah, that's really weird. Only thing I can think of is that some pool structure metadata that should've been flushed with the removal of the old drive... wasn't... and then nesting somehow happened. Or the new drive already had a poolpart on it? Normally with the screenshots you've provided I'd just say "in drivepool E: (the one with all grey used space) click Remove on Y: (the single "drive" in the pooled section) and then the new pool should be gone" - the same as what VapechiK suggested in their fourth paragraph. If you're truly not worried about starting from scratch, I'd try that and see what happens.

    Otherwise - if you were worried about starting from scratch - I'd first check what "dpcmd list-poolparts y:" and "dpcmd list-poolparts e:" would return (run as administrator in a command prompt), followed by what "dir X:\poolpart.* /s /b /ad" command run on each of the drives (replacing X with P, Q and then R) would return (ditto); if it returned a normal "pool in a pool" structure then proceed as above otherwise it'd be time to either manually reconstruct the original pool or open a support ticket.

  15. Yes, assuming access to the NAS is via shares (e.g. \\nas1\share1): you'd add the share(s) you intend to store the new cloud drive on via the File Share provider in CloudDrive, then create the new cloud drive(s) on it. Note that if you plan to create a cloud drive that is initially smaller than 16TB and make it larger than 16TB later, you'll need to change the default values it initially uses (cluster size and/or file system) to be able to do so. And if you do run into that issue with an existing cloud drive, you can simply create additional cloud drives and use DrivePool to pool them, unless you actually need a single "physical" volume to be larger than 16TB for some reason.

  16. CloudDrive is pretty good at avoiding corruption from interruptions, although we did just have a case where a cloud drive became inaccessible (not corrupted, just unable to be mounted) on Dropbox after that provider set the content read-only (due to an expired account); that has apparently since been fixed in a new CloudDrive update, so...

    First, make sure you have the latest version of CloudDrive. As noted, this apparently fixes an issue where a cloud drive that becomes read-only due to an action on the provider's end can still be mounted by CloudDrive. @Christopher (Drashna) can you confirm whether my understanding is accurate?

    Make sure you complete any existing pending / in-progress uploads to your cloud drive from the local cache well before the transition date; as soon as that's done you should change the cloud drive to a read-only state. That way there's no risk of anything bad happening from google flicking the switch while CloudDrive is in the middle of writing to the cloud drive.

    Note that the manual also states that cloud drives that are part of a DrivePool pool cannot be set read-only (this way). So if your cloud drive is being used by DrivePool, you will need to remove the cloud drive from the drive pool before you can change it to read-only; if you don't, and your cloud drive becomes read-only anyway (due to Google), then your DrivePool pool will be unable to write to the cloud drive and will have trouble operating (I don't think that itself would cause corruption; I'd expect you'd just be unable to perform tasks that expect the cloud drive to be writable - which, given Windows loves to write to disk, could be problematic).

    If you want to remove the cloud drive from drive pool but you are concerned that the amount of data that would need to be moved locally is too much to fit locally and/or complete transferring normally in the month remaining, it is possible to manually split (so to speak) a drive pool; you could then manually complete moving the google-stored read-only pool content into your local-stored working pool afterwards.

    TLDR: 1. update to latest CloudDrive version. 2. remove the cloud drive from the drive pool (split the pool manually if necessary to do so in time). 3. complete any pending/current uploads from the local cache to the cloud drive. 4. change the cloud drive to read-only in CloudDrive itself. 5. manually complete moving pool content from google if you had to split it in step 2.

  17. Sometimes escalating is necessary and gets things done ASAP, but it's also important to step back for a moment and ask, "Am I taking out my frustration on someone who is at least trying to be helpful?" As I understand it StableBit is a small team, and a thing I've found dealing with customer support at many companies over decades is that they're almost always overworked. Politeness and thanking them for what they are able to do, e.g. "thank you for the explanation, did you miss my last request about the older versions?", can go a long way.

    Joining the company's community forums to (constructively) complain about a frustrating problem or experience? Sure, go for it! Emotional insults, no. As a volunteer I'm not always around but I do enjoy helping people on these forums; I've never had to officially warn someone but the button is still there (spammers don't count, that's a different button) and the opening post did get reported to the moderators for incendiary content.

    I don't know if your request for previous versions was answered in another response, but for future reference those can be found here. I hope CloudDrive works out for you now that the problem's solved.

    Please do feel free to ask the community (in a new thread) if you need further help with CloudDrive or other StableBit software.

  18. I've seen something similar happen when Windows did a major update (e.g. 10 to 11) and it ate the virtual driver along the way.

    In the Windows Device Manager, can you see any instances of Covecube Virtual Disk under Disk drives? Can you see Covecube Disk Enumerator under Storage controllers? Any yellow triangles or other alert icons on them?

    If you're in a hurry you could try renaming all the PoolPart.guidstring folders to have "X" or "Old" or whatever in front (don't change the unique identifier that comes after PoolPart) and then with any luck you should be able to add those disks to a new pool (and then just move all your data from the old poolpart to the new poolpart on each disk), but if you have the time it would be great if you could open a ticket with StableBit so they can take a look at why DrivePool is able to detect the old pool but is no longer able to load it.
