Shane

Everything posted by Shane

  1. That's quite a weird one, but so long as the free space remains the same I'd think it harmless. You might want to file a bug report anyway?
  2. That's weird. Did you turn off the SSD Optimizer altogether or does it have Ordered Placement turned on (if the latter, try turning it off)? Do you have any File Placement rules set? What other balancers do you have active?
  3. Note that it isn't possible to have DrivePool choose the destination drive(s) based on the size of the incoming file, as the latter isn't something it can obtain in advance due to the way Windows file operations work. At best, you could request a feature to have DrivePool optionally report its free space to Windows based on the current destination drive rather than the free space of the pool as a whole (although I'm not sure whether the balancers would be able to override that as-is).
     Pro: for copying single files, where Windows/applications check the file's size versus the destination drive's free space, you would know in advance whether the transfer was completable.
     Con: for copying multiple files, where Windows/applications instead check the aggregated size versus the free space, they could refuse to perform the transfer even though there'd be enough room on the pool (e.g. copying a hundred 10GB files as one Explorer operation to an empty 10TB pool with a 500GB buffer drive could be rejected due to "insufficient" space on the buffer drive, even though DrivePool would be able to handle the operation).
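To make the pro/con above concrete, here's a toy sketch (hypothetical numbers and function names, not DrivePool's actual code) of how an Explorer-style pre-flight free-space check behaves under each reporting scheme:

```python
# Toy model of a pre-flight "will it fit?" check. All sizes in GB.

def transfer_allowed(total_transfer_gb, reported_free_gb):
    """Mimic an application's up-front check: refuse if the data
    doesn't appear to fit in the reported free space."""
    return total_transfer_gb <= reported_free_gb

# A hundred 10GB files copied as one operation:
transfer = 100 * 10  # 1000 GB

pool_free = 10_000   # empty 10TB pool (free space reported pool-wide)
buffer_free = 500    # 500GB SSD buffer drive (free space reported per-drive)

# Reporting pool-wide free space: the check passes, since DrivePool can
# drain the buffer to the archive drives as the copy proceeds.
assert transfer_allowed(transfer, pool_free)

# Reporting only the buffer drive's free space: the same transfer is
# refused up front, even though the pool as a whole has plenty of room.
assert not transfer_allowed(transfer, buffer_free)
```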
  4. Your suspicion is correct; if you've told DrivePool via the SSD Optimizer to use particular drive(s) as a buffer for new files then new files are going to be limited by the free space of those drive(s). Note also that the "Fill SSD drives up to X%" option only causes a new file to bypass the buffer if the drives marked as SSD are already X% full before copying the new file.
  5. There would be some extra wear, but honestly I think not much. Maybe vaguely comparable to doing a chkdsk of every drive (without checking for bad sectors)? If you're using enterprise/NAS drives I would expect it to be a drop-in-the-bucket kind of situation. Since hotswap!.exe appears to work to eject your drives but DrivePool still does a re-measure, I'd suggest enabling CoveFs_IsDiskRemovable and then adding the DISKPART script to see if that works, e.g.:
     - check DrivePool_RunningFile to ensure nothing is running
     - stop the StableBit DrivePool service to ensure nothing begins running
     - run a DISKPART script to offline the relevant pool drive (log the errorlevel - if it's reliably zero and stable, keep using it)
     - run hotswap!.exe to eject the relevant poolpart drives
     - restart the StableBit DrivePool service
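If it helps, here's a rough batch-file sketch of that sequence (untested; the RunningFile path, service name, script filename and the hotswap! command-line switch are all placeholders you'd need to confirm against your own system and hotswap!'s documentation):

```bat
@echo off
rem Sketch only - adjust every name/path below for your setup.

rem 1. Check DrivePool isn't mid-operation (see DrivePool_RunningFile in the wiki).
if exist "C:\path\to\RunningFile" goto :busy

rem 2. Stop the DrivePool service so nothing new starts.
net stop "YourDrivePoolServiceName"

rem 3. Offline the pool drive via a DISKPART script; bail out on failure.
diskpart /s offline-pool.txt
if errorlevel 1 goto :fail

rem 4. Eject the poolpart drives (placeholder switch - check hotswap!'s docs).
"C:\path\to\hotswap!.exe" /eject:X

rem 5. Restart the service.
net start "YourDrivePoolServiceName"
goto :eof

:busy
echo DrivePool appears to be busy; try again later.
goto :eof

:fail
echo DISKPART reported an error; restarting the service and leaving the pool as-is.
net start "YourDrivePoolServiceName"
```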
  6. I'd suggest contacting Stablebit support with a copy of your error logs as described in this article.
  7. File Placement operates at a real-time level below that which would allow seeing the size of files, but if you're using CloudDrive then as I understand it (normally) its chunking system should automatically take care of the problem you're worried about, as CloudDrive drives normally exist on the remote cloud storage service as a series of small files (chunks). So for example if you had a 50GB CloudDrive drive that was created using 10MB chunks, it would normally exist on the cloud storage as a collection of up to (approximately) five thousand chunks (files), and if you copied a 10GB file to it, that 10GB file should be broken up into one thousand 10MB chunks to be uploaded to the cloud service (again, this is approximate). If you hover the mouse cursor over an existing CloudDrive drive's size in the CloudDrive UI it should show a tooltip indicating the drive's chunk size. Have you run into problems uploading files larger than 5GB via the CloudDrive? Incidentally I believe that the Balancer plugin system might be capable of doing what you wanted for existing files (i.e. see the size of files that are already in the pool and move them accordingly), but someone would have to code a new Balancer plugin to make use of that.
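The back-of-envelope chunk arithmetic above can be sketched like so (decimal units for simplicity; the function name is just for illustration):

```python
GB = 1000  # MB per GB (decimal, to keep the arithmetic simple)

def chunk_count(size_gb, chunk_mb):
    """Approximate number of fixed-size chunks a drive or file occupies."""
    return (size_gb * GB) // chunk_mb

# A 50GB CloudDrive drive with 10MB chunks -> about five thousand chunks.
assert chunk_count(50, 10) == 5_000

# A 10GB file copied to it -> about one thousand 10MB chunks uploaded.
assert chunk_count(10, 10) == 1_000
```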
  8. Windows obtains the removability status from the hardware, so I'd suggest checking the documentation for the HBA to see if there's an internal setting or manufacturer utility to mark the drives/buses as hot-pluggable. (There did indeed used to be such a registry key, and yeah, Microsoft removed it for their ineffable reasons.) You might also try this piece of software http://mt-naka.com/hotswap/index_enu.htm as it does have command line functionality - but it may not support your hardware. Also, disclaimer, I haven't tried it myself; proceed with caution.
  9. Did some looking. AFAICT there's no official DrivePool method (command-line or GUI) to gracefully offline a pool. Tagging @Christopher (Drashna) ? What I have found so far, with a bit of experimentation today, that seems to work is the following:
     1. Ensure that all relevant poolpart volumes appear to Windows as ejectable drives, then set the override for CoveFs_IsDiskRemovable to true and restart Windows; the pool drive composed from those volumes should now itself show up as an ejectable drive (but ejecting it via the Windows GUI will just have DrivePool immediately redetect and remount the pool).
     2. In your batch file the command diskpart /s scriptfile.txt should now* be usable to take the pool offline, where scriptfile.txt is the name of the file containing the following text (replacing pooldriveletter with the letter of your pool drive):
        SELECT VOLUME pooldriveletter
        OFFLINE VOLUME
     3. If the command was successful then the errorlevel for the diskpart command should be zero (if you need to check for that in your script) and the designated pool should gracefully disappear from DrivePool; you should then be able to eject / depower the relevant physical drives.
     I might still recommend using DrivePool_RunningFile to ensure that DrivePool isn't in the middle of any maintenance operations, as I don't know whether interrupting those in this manner would cause DrivePool to want to recheck the pool - but if you find it doesn't then I guess it won't be needed! How are you scripting the JBODs to power on/off?
     *I found that using diskpart to offline a non-removable pool drive would sometimes crash diskpart/DrivePool/Windows, so I don't recommend trying that.
  10. Glad to hear it went smooth. Are you using duplication instead of backups? Won't help with accidentally deleting a file but at least if a drive karks it that should keep you going. No plans to visit Tassie any time soon sorry, I'm stuck in Queensland at present. I'll just have to raise a glass in your direction I guess. The green on black does bring back memories of my uni days with the old vax/vms terminals!
  11. Scanner does monitor SMART data (e.g. every minute or so; this can be configured) and also regularly (e.g. monthly, depending on your setup choices) scans both the file system and every sector on your drives to check that they are readable, and can attempt repairs.
      The scanning as a (nice) side-effect can also prevent some bit rot, as the act of scanning ensures the drive will be regularly fully powered up and that the drive's own detection/correction features will (or at least should) automatically examine/repair/reallocate cells/sectors in the background as Scanner reads them.
      Note that SSDs are much more susceptible to charge decay than HDDs, as the former rely on a much faster but less stable storage method; it can vary widely by the type of SSD, but the general takeaway is that a SSD/HDD that's been sitting unused for X months/years respectively might not keep your data intact (the bigger the X, the smaller the trust).
      Anyway, aside from the problem mentioned with SSDs above, drive-based bit rot (as opposed to other sources and causes, e.g. bad RAM, EM spikes, faulty controllers, using non-ECC memory in a system that doesn't get cold-booted regularly, etc) is by itself quite rare, but it is yet another reason to keep backups.
      TLDRs: if you keep necessary data on a SSD, I suggest keeping it powered continuously or at least regularly. Regularly scan your SSDs and HDDs. Keep backups. If you're not using an ECC setup, consider disabling "Fast Start" in Windows 10/11 and restarting your PC occasionally (e.g. once a month) if you're not already doing so.
  12. Step 1: Right-click D:\DRIVEPOOL and choose Properties. If it reports 0 files and 0 folders then proceed to step 2, else halt.
      Step 2: Without stopping the DrivePool service, rename the D:\DRIVEPOOL folder to something else (e.g. D:\OLDPOOL). If nothing complains or breaks then proceed to delete the renamed folder, else halt.
      Trip was good, but these old bones are feeling the miles afterwards. That green is very garish on a white background btw.
      P.S. You mentioned "If I did the wrong thing, there goes decades worth of stuff" - the next step might be to plan/organise backing up your pool?
  13. Default pool behaviour is to use whichever drive has the most free space at the time; your "Lightroom" volume has the least free space so unless you change the default behaviour it will be used only after all of the others have enough files placed on them that they all have less free space remaining than it.
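That default placement rule can be sketched in a few lines (hypothetical drive names and sizes, just to illustrate the "most free space wins" heuristic):

```python
def pick_target(free_space):
    """Return the drive with the most free space - the default
    placement heuristic described above. free_space maps drive
    name -> free GB."""
    return max(free_space, key=free_space.get)

# The "Lightroom" volume has the least free space, so it loses:
assert pick_target({"Archive1": 4000, "Archive2": 3500, "Lightroom": 500}) == "Archive1"

# Only once every other drive has dropped below it does it get picked:
assert pick_target({"Archive1": 400, "Archive2": 300, "Lightroom": 500}) == "Lightroom"
```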
  14. Pools showing up as RAW to chkdsk is normal behaviour for DrivePool (because a pool is not itself a physical drive with sectors to check, even if it's made up from them). It may just be that your guess of another application grabbing the directory at the time is correct, and that xcopy and powershell copy simply have better handling of such situations (which I can believe).
  15. If you followed https://stablebit.com/Support/CloudDrive/Manual?Section=Reattaching%20your%20Drive it should have warned you that it was still attached to the old machine. If you didn't get that warning, I'd recommend contacting Stablebit.
  16. Some proprietary enclosures do weird stuff that prevents a drive from being readable if it's shucked and put in a standards-friendly enclosure, but so long as the drive can still be seen by Windows as normal then DrivePool should see it too.
  17. Pretty much as VapechiK says. Here's a how-to list based on your screenshot at the top of this topic:
      1. Create a folder, e.g. called "mounts" or "disks" or whatever, in the root of any physical drive that ISN'T your drivepool and IS going to be always present:
         - You might use your boot drive, e.g. c:\mounts
         - You might use a data drive that's always plugged in, e.g. x:\mounts (where "x" is the letter of that drive)
      2. Create empty sub-folders inside the folder you created, one for each drive you plan to "hide" (remove the drive letter of). I suggest a naming scheme that makes it easy to know which sub-folder is related to which drive:
         - You might use the drive's serial number, e.g. c:\mounts\12345
         - You might have a labeller and put your own labels on the actual drives, then use those for the names, e.g. c:\mounts\501
      3. Open up Windows Disk Management and for each of the drives:
         - Remove any existing drive letters and mount paths
         - Add a mount path to the matching empty sub-folder you created above
      4. Reboot the PC (doesn't have to be done straight away but will clear up any old file locks etc).
      That's it. The drives should now still show up in Everything, as sub-folders within the folder you created, and in a normal file explorer window the sub-folder icons should gain a small curved arrow in the bottom-left corner as if they were shortcuts.
      P.S. And speaking of shortcuts I'm now off on a road trip or four, so access is going to be intermittent at best for the next week.
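For anyone who prefers the command line over Disk Management, the same steps can be done with Windows' built-in mountvol tool (run from an elevated prompt; the folder name and volume GUID below are placeholders - running mountvol with no arguments lists your volumes and their GUIDs):

```bat
rem Create the empty sub-folder that will become the mount point.
mkdir c:\mounts\12345

rem Remove the drive's existing letter (here X:)...
mountvol X: /D

rem ...and mount the same volume at the empty sub-folder instead.
mountvol c:\mounts\12345 \\?\Volume{00000000-0000-0000-0000-000000000000}\
```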
  18. Hi Bear; you need to mount the poolpart drives to paths on an NTFS drive outside the pool structure. So in your case don't mount them inside D:\ and don't mount them inside any *:\PoolPart.* folder. That way #1 their content will still be visible in Everything (which I use too) and #2 you're not risking a recursive loop in your file system!
  19. Hi bhoard, I think it's likely the drive was able to recover from whatever the power issues did to it. If Scanner still doesn't pick anything up on the next scheduled scan, personally I'd be comfortable continuing to use the drive. Also maybe consider getting a UPS if power issues are an ongoing problem in your area?
  20. If the Windows 10 Fast Startup feature is enabled (the default), then a normal Shutdown snapshots an image of the current kernel, drivers and system state in memory to disk, and on the next boot Windows loads from that image. A Restart, however, doesn't take the snapshot; instead it performs a normal start, going through the usual process of loading the kernel, drivers and system state component-by-component from the boot drive. So if it's the Restart that makes the pool drive unavailable, that would suggest the issue is occurring during the normal full boot process. I'd try looking in DrivePool's Cog->Troubleshooting->Service Log after a Restart fails to make the drive available, to see if there are any clues there - e.g. it might just be that something is timing out the find-and-mount routine, in which case you can increase the CoveFs_WaitFor***Ms entries as described in https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings until it picks up. If you're still stuck after looking at the Log, you can request a support ticket via StableBit's contact form. EDIT: made it clearer that only the CoveFs_WaitFor***Ms entries (i.e. the ones ending in Ms) are to be changed. The wiki mentions CoveFs_WaitForVolumesOnMount without the Ms; that's a deprecated entry that isn't used by newer versions of DrivePool.
  21. It also occurs to me that some users may have the opposite issue: where they use apps that rely on the presence of folders that may be empty at times for whatever reason, so they wouldn't want those folders being automatically removed by DrivePool. So any "cleanup empty folders" function implemented by DrivePool would have to ensure that it only removes "excess" instances of such folders within the poolparts, the same as it does with files. How tricky would it be to extend DrivePool's duplication-checking to check for excess instances of empty folders in addition to excess instances of files (e.g. currently, if duplication is set to x3 and there are x4 instances of a file, DrivePool will remove the 4th instance of that file - can DrivePool be made to apply that to empty folders too)?
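The "remove only excess instances" idea above could look something like this (a hypothetical sketch, not DrivePool code - the function name and data shapes are mine):

```python
# Given the same relative folder path found across several poolparts,
# keep at most `dup_count` empty instances (mirroring how DrivePool
# trims excess file duplicates) and flag the rest for removal.

def excess_empty_instances(instances, dup_count):
    """instances: list of (poolpart_name, is_empty) for one relative
    folder path. Returns the poolparts whose empty copies exceed the
    duplication level; non-empty copies are never candidates."""
    empties = [part for part, is_empty in instances if is_empty]
    return empties[dup_count:]  # everything beyond the allowed count

# x3 duplication, but the empty folder exists in four poolparts:
instances = [("PoolPart.a", True), ("PoolPart.b", True),
             ("PoolPart.c", True), ("PoolPart.d", True)]
assert excess_empty_instances(instances, 3) == ["PoolPart.d"]

# Fewer empty instances than the duplication level: nothing to remove.
assert excess_empty_instances([("PoolPart.a", True)], 3) == []
```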
  22. You could request it as a feature via the contact form? @Christopher (Drashna) perhaps there could be an option to schedule it alongside the regular pool consistency checks, and/or as a dpcmd command which could be run manually or scheduled (e.g. via Task Scheduler) by those for whom it's a significant issue?
  23. Hi baChewie, if you'd like Stablebit to take a look at / ask for logs from your affected setup directly, you could submit a support request via the contact form to get a ticket going.
  24. Hmm. I tried adding the entry and experienced the same result as you. I've looked at the change log for DrivePool and I suspect that CoveFs_WaitForVolumesOnMount was actually replaced with CoveFs_WaitForVolumesOnMountMs when the latter was added as of version 2.2.0.891 - so if your version is that or later then I believe you should use the latter entry (and setting a value of 0 for it would be equivalent to setting a value of false for the former entry). @Christopher (Drashna) does the wiki need to be amended to indicate that the first entry has been deprecated in favor of the second entry, or is something else going on?