
Shane

Moderators
  • Posts: 905
  • Joined
  • Last visited
  • Days Won: 86

Reputation Activity

  1. Like
    Shane reacted to Bear in Running Out of Drive Letters   
    I duplicate all HDDs, except the ones that have the OSes on them; for those, I use MiniTool Partition Wizard.
    As for the 4-bay enclosures I linked to above, I have 2 of the 8-bay ones, with a total of 97.3TB, & I now have only 6.34TB free space out of that.  It works out cheaper to get the little 4-bay ones, and they take HDDs up to 18TB - sweet
    If you like the black & green, you could always get a pint of XXXX & put green food dye into it, you don't have to wait till St. Patrick's Day.  That would bring back uni days as well     🤣   🍺   👍
    " A pirate walks into a bar with a steering wheel down his daks
     He walks up to the bar & orders a drink
     The barman looks over the bar and says, "do you know you have a steering wheel down your daks?"
     The pirate goes, "aye, and it's driving me nuts""   🤣   🤣   🤣
    🍺   👍   🍺   cheers
     
  2. Like
    Shane got a reaction from JazzMan in Commandline copy poltergeists.   
    Scanner monitors SMART data (e.g. every minute or so; the frequency can be configured) and also regularly (e.g. monthly, depending on your setup choices) scans both the file system and every sector on your drives to check that they are readable, and it can attempt repairs.
    The scanning, as a (nice) side-effect, can also prevent some bit rot: the act of scanning ensures the drive is regularly fully powered up and that the drive's own detection/correction features will (or at least should) automatically examine/repair/reallocate cells/sectors in the background as Scanner reads them. Note that SSDs are much more susceptible to charge decay than HDDs, as the former relies on a much faster but less stable storage method; it can vary widely by the type of SSD, but the general takeaway is that an SSD/HDD that's been sitting unused for X months/years respectively might not keep your data intact (the bigger the X, the smaller the trust).
    Anyway, aside from the problem mentioned with SSDs above, drive-based bit rot (as opposed to other sources and causes, e.g. due to bad RAM, EM spikes, faulty controllers, using non-ECC memory in a system that doesn't get cold-booted regularly, etc) is by itself quite rare, but it is yet another reason to keep backups.
    TL;DRs: if you keep necessary data on an SSD, I suggest keeping it powered continuously or at least regularly. Regularly scan your SSDs and HDDs. Keep backups. If you're not using an ECC setup, consider disabling "Fast Startup" in Windows 10/11 and restarting your PC occasionally (e.g. once a month) if you're not already doing so.
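    For reference, a quick way to toggle Fast Startup off from an elevated command prompt (the checkbox under Control Panel > Power Options > "Choose what the power buttons do" does the same thing):
    rem Disable Fast Startup by setting HiberbootEnabled to 0 (set it back to 1 to re-enable).
    REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Power" /v HiberbootEnabled /t REG_DWORD /d 0 /f
    rem Alternatively, turning hibernation off entirely also disables Fast Startup.
    powercfg /hibernate off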
  3. Like
    Shane got a reaction from JazzMan in Added a drive to the DrivePool but it is never used?   
    Default pool behaviour is to place new files on whichever drive has the most free space at the time; your "Lightroom" volume has the least free space, so unless you change the default behaviour it will only start receiving files once the other drives have filled to the point where they each have less free space remaining than it does.
  4. Thanks
    Shane got a reaction from jkadlec in External drives to a JBOD enclosure.   
    Yes, that's correct.
  5. Thanks
    Shane got a reaction from jkadlec in External drives to a JBOD enclosure.   
    Some proprietary enclosures do weird stuff that prevents a drive from being readable if it's shucked and put in a standards-friendly enclosure, but so long as the drive can still be seen by Windows as normal then DrivePool should see it too.
  6. Like
    Shane got a reaction from Bear in Running Out of Drive Letters   
    Pretty much as VapechiK says. Here's a how-to list based on your screenshot at the top of this topic:
    1. Create a folder, e.g. called "mounts" or "disks" or whatever, in the root of any physical drive that ISN'T your drivepool and IS going to be always present:
       - You might use your boot drive, e.g. c:\mounts
       - You might use a data drive that's always plugged in, e.g. x:\mounts (where "x" is the letter of that drive)
    2. Create empty sub-folders inside the folder you created, one for each drive you plan to "hide" (remove the drive letter of). I suggest a naming scheme that makes it easy to know which sub-folder is related to which drive:
       - You might use the drive's serial number, e.g. c:\mounts\12345
       - You might have a labeller and put your own labels on the actual drives then use that for the name, e.g. c:\mounts\501
    3. Open up Windows Disk Management and for each of the drives (a command-line alternative using mountvol is sketched below):
       - Remove any existing drive letters and mount paths
       - Add a mount path to the matching empty sub-folder you created above.
    4. Reboot the PC (doesn't have to be done straight away but will clear up any old file locks etc).
    That's it. The drives should now still show up in Everything, as sub-folders within the folder you created, and in a normal file explorer window the sub-folder icons should gain a small curved arrow in the bottom-left corner as if they were shortcuts.
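    If you'd rather do step 3 from an elevated command prompt instead of Disk Management, mountvol can do the same job. A rough sketch (the volume GUID below is only a placeholder; running mountvol on its own lists the real ones):
    rem List every volume and its \\?\Volume{GUID}\ name so you can match them to your drives.
    mountvol
    rem Remove the existing drive letter from a drive (assuming it is currently X:).
    mountvol X: /D
    rem Mount that volume into the empty sub-folder created earlier (GUID is a placeholder).
    mountvol C:\mounts\12345 \\?\Volume{00000000-0000-0000-0000-000000000000}\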
    P.S. And speaking of shortcuts I'm now off on a road trip or four, so access is going to be intermittent at best for the next week.
  7. Like
    Shane reacted to VapechiK in Running Out of Drive Letters   
    hi
    yes, what DaveJ suggested is your best bet.
    and Shane is correct (as usual).  you have (in)effectively mounted your pool drives into a folder on the pool and this is causing Everything to fail and WILL cause other problems down the road.  to quote Shane:   "Late edit for future readers: DON'T mount them as folders inside the pool drive itself, nor inside a poolpart folder. That risks a recursive loop, which would be bad."
     
    1.  on your C (Bears) drive, recreate the folder structure from D:\DrivePool where you mounted your drives 301, 302, etc., so you now have C:\DrivePool with EMPTY folders for all the drives that are in the pool.  DO NOT try to drag and drop the DrivePool folder from D to C - mmm mmm bad idea.  just do this manually as you did before.
    2.  STOP the DrivePool service (win + R, type 'services.msc', find StableBit DrivePool Service and Stop it).
    3.  go to Disk Management and, as in https://wiki.covecube.com/StableBit_DrivePool_Q4822624, remount all the drives from D:\DrivePool into the drive folders in C:\DrivePool.  windows may/will throw some warnings about the change.  ignore them and remount all 16 from D:\DrivePool to C:\DrivePool. 
    4.  reboot
    now your file explorer should show Bears C:, DrivePool D:, and maybe G X and Y too, idk...
    enable show hidden files and folders and navigate to C:\DrivePool.  doubleclicking any of the drive folders will show the contents of the drive if any and a hidden PoolPart.xxxx folder.  these PoolPart folders are where the 'POOL' lives.  and this is where/how to access your data from 'outside' the pool.  be careful they are not deleted.
    5.  go to the folder DrivePool on D and delete it.  totally unnecessary after the remount from D to C and now it is just a distraction.
    6.  life is good.
     
    some advice:
    for simplicity's sake, i would rename C:\DrivePool to C:\Mounts or something similar.  having your pool and the folder where its drives are mounted share the same name WILL only confuse someone at some point and bad things could happen.
    hope this helps
    cheers
  8. Thanks
    Shane reacted to baChewie in SMB access slow (high latency) to get started when shared dir is in a Drivepool   
    I'm stubborn, so I had to figure this out myself.
    Wireshark showed:
    SMB2 131 Create Response, Error: STATUS_OBJECT_NAME_NOT_FOUND
    SMB2 131 Create Response, Error: STATUS_FILE_IS_A_DIRECTORY
    SMB2 131 GetInfo Response, Error: STATUS_ACCESS_DENIED
    SMB2 166 Lease Break Notification
    I thought it might be NTFS permissions, but even after re-applying security settings per DP's KB: https://wiki.covecube.com/StableBit_DrivePool_Q5510455 I still had issues.

    The timer is 30 seconds, plus 5 seconds for the SMB handshake to collapse. It's the oplock break via the Lease Break Ack Timer.
    This MS KB helped: Cannot access shared files or folders on a drive in Windows Server 2012 or Windows Server 2012 R2
    Per MS (above) to disable SMB2/3 leasing entirely, do this:
     
    REG ADD HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters /v DisableLeasing /t REG_DWORD /d 1 /f
    I didn't need to restart SMB2/3; the change was instant, and file lookups and even a simple right-click in Explorer came up instantly. A process that took 8+ days finished in an hour or so.
    Glad to be rid of this problem. Leases are disabled, yes, but SMB oplocks are still available.
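    If anyone wants to check or undo that change later, the same key can be queried or set back (a quick sketch):
    rem Check the current value (value not found means leasing is still enabled, the default).
    REG QUERY HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters /v DisableLeasing
    rem Re-enable SMB2/3 leasing by setting the value back to 0.
    REG ADD HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters /v DisableLeasing /t REG_DWORD /d 0 /f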
  9. Like
    Shane got a reaction from JazzMan in Pool not coming back online after reboot   
    If the Windows 10 Fast Startup feature is enabled (which is the default), a normal Shutdown snapshots an image of the current kernel, drivers, and system state in memory to disk, and on the next boot Windows loads from that image. A Restart, however, doesn't take the snapshot; instead it performs a normal start, going through the usual process of loading the kernel, drivers and system state component-by-component from the boot drive.
    So if it's the Restart that makes the pool drive unavailable, that would suggest the issue is occurring during the normal full boot process. I'd try looking in DrivePool's Cog->Troubleshooting->Service Log after a Restart fails to make the drive available, to see if there are any clues there - e.g. it might just be something is timing out the find-and-mount routine, in which case you can increase the CoveFs_WaitFor***Ms entries as described in https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings until it picks up.
    If you're still stuck after looking at the Log, you can request a support ticket via StableBit's contact form.
    EDIT: made it clearer that only the CoveFs_WaitFor***Ms entries (i.e. the ones ending in Ms) are to be changed. The wiki mentions CoveFs_WaitForVolumesOnMount without the Ms, that's a deprecated entry that isn't used by newer versions of DrivePool.
  10. Like
    Shane reacted to Christopher (Drashna) in What in the world have I messed up on my balancers?   
    If you want to use the SSD Optimizer and use the rest of the pool, the "simplest" option may be to use hierarchical pools. E.g. add the SSD/NVMe drives to one pool, add the hard drives to another pool, and then add both of these pools to a pool. Enable the SSD Optimizer on the "pool of pools", and then enable the balancers you want on the sub-pools.
  11. Like
    Shane got a reaction from gtaus in Reasonable time to evacuate a drive?   
    Likely very little difference versus plugged directly into the motherboard.
    It is possible to evacuate/move files much faster if you don't mind stopping the DrivePool service while you manually move them yourself (from hidden poolpart to hidden poolpart) then restarting the service and requesting a re-measure. Basically a tradeoff between speed and comfort, if that makes sense.
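    A rough sketch of that manual approach from an elevated command prompt - the service name and the PoolPart folder names below are placeholders (check the actual service name in services.msc or with sc query), and E: / F: stand for the source and destination pool drives:
    net stop DrivePoolService
    rem Move the folder tree from one hidden PoolPart folder to the other (names are placeholders).
    robocopy "E:\PoolPart.xxxx\FolderToMove" "F:\PoolPart.yyyy\FolderToMove" /E /MOVE /B /COPYALL /DCOPY:DAT
    net start DrivePoolService
    rem Then request a re-measure from the DrivePool UI.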
  12. Like
    Shane reacted to thestillwind in I cannot work   
    Well, this is why always-online functionality is really bad. They need to add at least a grace period or something, because my tools aren't working. Before the StableBit Cloud thing, I never had any problem with StableBit software.
  13. Thanks
    Shane reacted to fattipants2016 in Cleaning up empty folders?   
    I run double parity, also. It's 1/2 for peace of mind and 1/2 because I'm a nerd and multiple parity is seriously cool stuff.
    Worked great! I've simulated a few scenarios in the past by moving files to another disk, but this was the first time I needed it.
     
    I couldn't leave well enough alone, though, and finally found a solution that works Even with write-protected folders.
    From an administrator-level command prompt type: "Robocopy C:\Test C:\Test /B /S /Move"
    I don't understand exactly why it works, but you're basically moving a folder onto itself, but skipping (deleting) empty folders.
    /B allows Robocopy to properly set attributes and permissions for system folders like Recycle Bin and .covefs. Bad stuff happens if you don't use it.
    I wrote a batch to run this script on all 7 of my Drivepool disks, and it completes in under a minute. Good stuff.
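    For anyone wanting to do the same, a hypothetical batch sketch that runs the trick over every pool disk, assuming the disks are mounted as folders under C:\mounts (adjust the path, or list drive letters instead, to match your setup):
    @echo off
    rem Run the empty-folder cleanup over each pool disk mounted under C:\mounts.
    for /d %%D in (C:\mounts\*) do (
        echo Cleaning empty folders on %%D
        robocopy "%%D" "%%D" /B /S /MOVE
    )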
  14. Like
    Shane reacted to ZagD in Stablebit Scanner makes drives disappear ???   
    I found the reason for this strange behaviour.
    I shucked the 2TB Seagate drive that caused the issue and placed it in an external enclosure. It wasn't immediately recognised, because there was some sort of "locking" at the original drive's USB controller level, so once removed from that controller it would appear as non-initialised in Windows.
    I re-connected the shucked drive to its original controller and copied its contents to another drive, then used the clean command in diskpart, re-initialised it, copied back the poolpart folder from the intermediate drive and reconnected it to the pool. Everything went well and the drive was recognised without a glitch.
    Now that the drive is in the enclosure and not in its original case with the "locked" controller, Scanner works fine and no more strange disappearing acts occur!
    In short... the original drive's USB controller 'dunnit.
  15. Like
    Shane reacted to ZagD in clean disk in diskpart and volume ID   
    Resolving my own issue.
    After copying all poolpart folders to an intermediate drive, cleaning the shucked drives & re-initialising them, I copied back their respective poolpart folders and the pool was automatically recognised. It only needed a recalculation of duplication.
    So it appears that the poolpart folders contain all the information needed to recreate the pool, regardless of the disk's volume ID. The reason I started this thread was that I saw in a post by Christopher that DrivePool uses the volume ID to identify the disks, and I was worried that after cleaning the disks I wouldn't be able to get DrivePool to recognise them as being part of the pool. Happy ending I guess!
     
  16. Like
    Shane got a reaction from murphdogg in Drivepool+Snapraid Balancing due to Damaged Drive   
    Yes, that's correct. If you're using Snapraid to protect your pool then normally you'd want DrivePool's balancing plug-in for Scanner either turned off or at least set to not move files out.
  17. Like
    Shane reacted to GameOver in FreeFIleSync error   
    Thanx.  The receiving machine has 64GB, the sending one has 32GB RAM, and during the process it's not using all of the 32GB. 
    Still weird that it comes up with that error, but if you cancel and resync again it works until it errors on another file. 

    Also to note, I have no issues copying to the drives via other means, just FFS.
     
     
  18. Like
    Shane got a reaction from Christopher (Drashna) in iDrive e2 error   
    Perhaps you could ask StableBit, via the contact form, for an extension of the trial to test CloudDrive with iDrive again?
  19. Like
    Shane reacted to Christopher (Drashna) in Google Drive + Read-only Enforced + Allocated drive   
    For reference, the beta versions have some changes to help address these: 

     
    .1648
    * Fixed an issue where a read-only force attach would fail to mount successfully if the storage provider did not have write access and the drive was marked as mounted.
    .1647
    * Fixed an issue where read-only mounts would fail to mount drives when write access to the storage provider was not available.
    .1646
    * [Issue #28770] Added the ability to convert Google Drive cloud drives stored locally into a format compatible with the Local Disk provider.
      - Use the "CloudDrive.Convert GoogleDriveToLocalDisk" command.
  20. Like
    Shane got a reaction from red in Google Drive + Read-only Enforced + Allocated drive   
    It's the same for both local and cloud drives being removed from a pool: "Normally when you remove a drive from the pool the removal process will duplicate protected files before completion. But this can be time consuming so you can instruct it to duplicate your files later in the background."
    So normally: for each file on the drive being removed, it ensures the duplication level is maintained on the remaining drives by making copies as necessary, and only then deleting the file from the drive being removed. E.g. if you've got 2x duplication normally, any file that was on the removed drive will still have 2x duplication on your remaining drives (assuming you have at least 2 remaining drives).
    With duplicate files later: for each file on the drive being removed it only makes a copy on the remaining drives if none already exist, then deletes the file from the drive being removed. DrivePool will later perform a background duplication pass after removal completes. E.g. if you've got 2x duplication normally, any file that was on the removed drive will only be on one of your remaining drives until the background pass happens later.
    In short, DFL means "if at least one copy exists on the remaining drives, don't spend any time making more before deleting the file from the drive being removed."
    Note #1: DFL will have no effect if files are not duplicated in the first place.
    Note #2: if you don't have enough time to Remove a drive from your pool normally (even with "duplicate files later" ticked), it is possible to manually 'split' the drive off from your pool (by stopping the DrivePool service, renaming the hidden poolpart folder in the drive to be removed - e.g. from poolpart.identifier to xpoolpart.identifier - then restarting the DrivePool service) so that you should then be able to set a cloud drive read-only. This will have the side-effect of making your pool read-only as well, as the cloud drive becomes "missing" from the pool, but you could then manually copy the remaining files in the cloud poolpart into a remaining connected poolpart and then - once you're sure you've gotten everything - fix the pool by forcing removal of the missing drive. Ugly but doable if you're careful.
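    For what it's worth, a rough command-line sketch of that manual 'split' - the service name and the poolpart folder name are placeholders, so check yours first (the folder is hidden, so enable showing hidden items or use dir /a to see it):
    net stop DrivePoolService
    rem Rename the hidden poolpart folder on the drive being split off (folder name is a placeholder).
    ren "D:\PoolPart.xxxxxxxx" "xPoolPart.xxxxxxxx"
    net start DrivePoolService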
  21. Thanks
    Shane reacted to MitchC in Beware of DrivePool corruption / data leakage / file deletion / performance degradation scenarios Windows 10/11   
    To start: while I'm new to DrivePool, I love its potential, and I own multiple licenses and their full suite.  If you only use DrivePool for basic file archiving of large files, with simple applications accessing them for periodic reads, it is probably uncommon that you would hit these bugs.  This assumes you don't use any file synchronization / backup solutions.  Further, I don't know how many thousands (tens or hundreds?) of DrivePool users there are, but clearly many are not hitting these bugs, or not recognizing that they are hitting them, so this is NOT some new destructive "my files are 100% going to die" issue.  Some of the reports I have seen on the forums, though, may actually be issues due to these things without it being recognized as such. As far as I know CoveCube was not previously aware of these issues, so tickets may not have even considered this possibility.
    I started reporting these bugs to StableBit ~9 months ago, and informed them I would be putting this post together ~1 month ago.  Please see the disclaimer below as well, as some of this is based on observations rather than known facts.
    You are most likely to run into these bugs with applications that:
    *) Synchronize or back up files, including cloud-mounted drives like OneDrive or Dropbox
    *) Must handle large quantities of files or monitor them for changes, like coding applications (Visual Studio / VSCode)

    Still, these bugs can cause silent file corruption, file misplacement, deleted files, performance degradation, data leakage (a file shared with someone externally could have its contents overwritten by any sensitive file on your computer), missed file changes, and potentially other issues for a small portion of users (I have had nearly all these things occur).  They may also trigger some BSOD crashes; I had one such crash that is likely related.  Due to the subtle nature with which some of these bugs can present, it may be hard to notice they are happening even if they are.  In addition, these issues can occur even without file mirroring and without files pinned to a specific drive.  I do have some potential workarounds/suggestions at the bottom.
    More details are at the bottom but the important bug facts upfront:
    Windows has a native file-changed notification API using overlapped IO calls.  This allows an application to listen for changes on a folder, or a folder and its sub-folders, without having to constantly check every file to see if it changed.  StableBit triggers "file changed" notifications even when files are just accessed (read) in certain ways.  StableBit does NOT generate notification events on the parent folder when a file under it changes (Windows does).  StableBit does NOT generate a notification event when only a FileID changes (the next bug talks about FileIDs).  
    Windows, like Linux, has a unique ID number for each file written on the hard drive.  If there are hardlinks to the same file, they share the same unique ID (so one FileID may have multiple paths associated with it). In Linux this is called the inode number; Windows calls it the FileID.  Rather than accessing a file by its path, you can open a file by its FileID.  In addition, it is impossible for two files to share the same FileID; it is a 128-bit number that is persistent across reboots (128 bits means the number of unique values is 39 digits long, i.e. it has the uniqueness of something like an MD5 hash).  A FileID does not change when a file moves or is modified.  StableBit, by default, supports FileIDs; however, they seem to be ephemeral - they do not seem to survive across reboots or file moves.  Keep in mind FileIDs are used for directories as well, not just files. Further, if a directory is moved/renamed, not only does its FileID change but so does that of every file under it. I am not sure if there are other situations in which they may change.
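    An easy way to see this for yourself from an elevated command prompt (the path below is just an example):
    rem Print the FileID of a file; run it again after a reboot (or after renaming the parent folder) and compare.
    rem On a plain NTFS volume it stays the same; on a DrivePool drive it may not.
    fsutil file queryFileID "P:\SomeFolder\SomeFile.txt"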
    There are some other things to consider as well. DrivePool does not implement the standard Windows USN Journal (a system of tracking file changes on a drive); it specifically identifies itself as not supporting this, so applications shouldn't be trying to use it with a DrivePool drive. That does mean that applications that traditionally don't use the file change notification API or FileIDs may fall back to a combination of those to accomplish what they would otherwise use the USN Journal for (and this can exacerbate the problem).  The same is true of Volume Shadow Copy (VSS): applications that might traditionally use it cannot (DrivePool identifies that it cannot do VSS), so they may resort to the methods below that they do not traditionally use.

    Now the effects of the above bugs may not be completely apparent:
    For the overlapped IO / file change notification bug:
    This means an application monitoring for changes on a DrivePool folder or sub-folder will get erroneous notifications that files changed when anything even accesses them. Just opening something like File Explorer on a folder, or even switching between applications, can cause file accesses that trigger the notification. If an application takes actions on a notification and then checks the file at the end of the notification, this in itself may cause another notification.  Applications that rely on getting a folder-changed notification when a child changes will not get these at all with DrivePool.  If an application isn't monitoring the children at all, just the folder, this means no notifications could be generated (vs just the child), so it could miss changes.
    For FileIDs:
    It depends what the application uses the FileID for, but it may assume the FileID should stay the same when a file moves; as it doesn't with DrivePool, this might mean it reads, backs up, or syncs the entire file again if it is moved (perf issue).  An application that uses the Windows API to open a file by its ID may not get the file it is expecting, or a file that was simply moved will throw an error when opened by its old FileID, as DrivePool has changed the ID.   For example, let's say an application caches that the FileID for ImportantDoc1.docx is 12345, but then 12345 refers to ImportantDoc2.docx due to a restart.  If this application is a file sync application and ImportantDoc1.docx is changed remotely, then when it goes to write those remote changes to the local file, if it uses the OpenFileById method to do so it will actually overwrite ImportantDoc2.docx with those changes.
    I didn't spend the time to read the Windows file system requirements to know when Windows expects a FileID to potentially change (or not change).  It is important to note that even if such changes/reuse are theoretically allowed, if they are not commonplace (because Windows essentially uses a number with MD5-hash-like uniqueness in terms of repeats), applications may just assume it doesn't happen even if it is technically allowed to do so.  A backup or file sync program might assume that a file with a specific FileID is always the same file; if FileID 12345 is c:\MyDocuments\ImportantDoc1.docx one day and then c:\MyDocuments\ImportantDoc2.docx another, it may mistake document 2 for document 1, overwriting important data or restoring data to the wrong place.  If it is trying to create a whole-drive backup it may assume it has already backed up c:\MyDocuments\ImportantDoc2.docx if it now has the same FileID as ImportantDoc1.docx by the time it reaches it (at which point DrivePool would have a different FileID for Document1).

    Why might applications use FileIDs or file change notifiers? It may not seem intuitive why applications would use these, but a few major reasons are:
    *) Performance: file change notifiers are an event/push-based system, so the application is told when something changes. The common alternative is a poll-based system, where an application must scan all the files looking for changes (and may try to rely on file timestamps or even hashing the entire file to determine this), which causes a good bit more overhead/slowdown.
    *) FileIDs are nice because they already handle hardlink de-duplication (Windows may have multiple copies of a file on a drive for various reasons, but if you back up based on FileID you back up that file once rather than multiple times).  FileIDs are also great for handling renames.  Let's say you are an application that syncs files and the user backs up c:\temp\mydir with 1000 files under it.  If they rename c:\temp\mydir to c:\temp\mydir2, an application using FileIDs can say "wait, that folder is the same, it was just renamed - OK, rename that folder in our remote version too."  This is a very minimal operation on both ends.  With DrivePool, however, the FileID changes for the directory and all sub-files.  If the sync application uses this to determine changes it now re-uploads all these files, using a good bit more resources locally and remotely.  If the application also uses versioning, this may be far more likely to cause a conflict with two or more clients syncing, as mass amounts of files are seemingly being changed.
    Finally, even if an application is trying to monitor for FileIDs changing using the file change API, due to the notification bugs above it may not get any notifications when child FileIDs change, so it might assume they have not changed.

    Real Examples
    OneDrive
    This started with massive OneDrive failures.  I would find OneDrive was re-uploading hundreds of gigabytes of images and videos multiple times a week, even though they were not changing or moving.  I don't know if the issue is that OneDrive uses FileIDs to determine if a file is already uploaded, or if it is because when it scanned a directory it may have triggered a notification that all the files in that directory changed and, based on that notification, it re-uploads.  After this I noticed files were becoming deleted, both locally and in the cloud.  I don't know what caused this; it might have been that it thought the old file was deleted because its FileID was gone, and while there was a new file (actually the same file) in its place there may have been some odd race condition.   It is also possible that it queued the file for upload, the FileID changed, and when it went to open it to upload it found it was 'deleted' (as the FileID no longer pointed to a file) and queued the delete operation.
    I also found that files uploaded into the cloud in one folder were sometimes downloading to an alternate folder locally.  I am guessing this is because the folder FileID changed - it thought the 2023 folder had ID XYZ, but that now pointed to a different folder, and so it put the file in the wrong place.  The final form of corruption was finding the data from one photo or video actually in a file with a completely different name.  This is almost guaranteed to be due to the FileID bugs, and it is highly destructive, as backups make this far harder to correct: with one file's contents replaced with another's, you need to know when the good content existed and which files were affected, and depending on retention policies the file contents that replaced it may override the good backups before you notice.  I also had a BSOD with OneDrive where it was trying to set attributes on a file and the CoveFS driver corrupted some memory.  It is possible this was a race condition, as OneDrive may have been doing hundreds of files very rapidly due to the bugs.  I have not captured a second BSOD due to it, but I also stopped using OneDrive on DrivePool due to the corruption.
    Another example of this is data leakage.  Let's say you share your favourite article on kittens with a group of people.   OneDrive, believing that file has changed, goes to open it using the FileID - however, that FileID could essentially now correspond to any file on your computer, so the contents of some sensitive file are put in the place of that kitten file, and everyone you shared it with can access it.
    Visual Studio Failures
    Visual Studio is a code editor/compiler.  There are three distinct bugs that happen.  First, when compiling, if you touched one file in a folder it seemed to recompile the entire folder, likely due to the notification bug.  This is just a slowdown, but an annoying one.  Second, Visual Studio has compiler-generated code support.  This means the compiler will generate actual source code that lives next to your own source code.   Normally, once compiled, it doesn't regenerate and compile this source unless it must change, but due to the notification bugs it regenerates this code constantly, and if there is an error in other code it causes an error there, causing several other invalid errors.  When debugging, Visual Studio by default will only use symbols (debug location data) that exactly match the source; as the notifications from DrivePool happen on certain file accesses, Visual Studio constantly thinks the source has changed since it was compiled, and you will only be able to breakpoint inside source if you disable the exact-symbol-match default.  If you have multiple projects in a solution with one dependent on another, it will often rebuild the other project dependencies even when they haven't changed; for large solutions that can be crippling (perf issue).  Finally, I often had IntelliSense errors showing up even though there were no errors during compiling, and worse, IntelliSense would completely break at points.  All due to DrivePool.

    Technical details / full background & disclaimer
    I have sample code and logs to document these issues in greater detail if anyone wants to replicate it themselves.
    It is important for me to state that DrivePool is closed source and I don't have the technical details of how it works.  I also don't have the technical details on how applications like OneDrive or Visual Studio work.  So some of these things may be guesses as to why the applications fail, etc.
    The facts stated are true (to the best of my knowledge) 

    Shortly before my trial expired in October of last year I discovered some odd behavior.  I had a technical ticket filed within a week and within a month had traced down at least one of the bugs.  The issue can be seen at https://stablebit.com/Admin/IssueAnalysis/28720; it does show priority 2/important, which I would assume is the second highest (probably critical or similar above).  It is great it has priority, but as we are over 6 months since filing without updates, I figured warning others about the potential corruption was important.  

    The FileSystemWatcher API is implemented in Windows using async overlapped IO; the exact code can be seen here: https://github.com/dotnet/runtime/blob/57bfe474518ab5b7cfe6bf7424a79ce3af9d6657/src/libraries/System.IO.FileSystem.Watcher/src/System/IO/FileSystemWatcher.Win32.cs#L32-L66
    That corresponds to this kernel api:
    https://learn.microsoft.com/en-us/windows/win32/fileio/synchronous-and-asynchronous-i-o
    Newer API calls use GetFileInformationByHandleEx to get the FileID, while older stat-style calls represent it with nFileIndexHigh/nFileIndexLow.  

    In terms of the FileID bug, I wouldn't normally have even thought about it, but the advanced config (https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings) mentions this under CoveFs_OpenByFileId: "When enabled, the pool will keep track of every file ID that it gives out in pageable memory (memory that is saved to disk and loaded as necessary).".   Keeping track of files in memory is certainly very different from Windows, so I thought this may be the source of the issue.  I also don't know if there are caps on the maximum number of files it will track; if it resets FileIDs in situations other than reboots, that could be much worse. Turning this off will at least break NFS servers, as the docs mention it is "required by the NFS server".
    Finally, the FileID numbers given out by DrivePool are incremental and very low.  This means that when they do reset you almost certainly will get collisions with former numbers.   What is not clear is whether there is a chance of potential FileID corruption issues: when it is assigning these IDs in a multi-threaded scenario, with many different files at the same time, could this system fail? I have seen no proof this happens, but when incremental IDs are assigned like this for mass quantities of potential files it has a higher chance of occurring.
    Microsoft mentions this about deleting the USN Journal: "Deleting the change journal impacts the File Replication Service (FRS) and the Indexing Service, because it requires these services to perform a complete (and time-consuming) scan of the volume. This in turn negatively impacts FRS SYSVOL replication and replication between DFS link alternates while the volume is being rescanned.".  Now, DrivePool has never supported the USN Journal, so it isn't exactly the same thing, but it is clear that several core Windows services do use it for normal operations; I do not know what fallbacks they use when it is unavailable. 

    Potential Fixes
    There are advanced settings for DrivePool (https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings); beware, these changes may break other things.
    CoveFs_OpenByFileId - set it to false; by default it is true.  This will disable the OpenByFileID API.  It is clear several applications use this API.  In addition, while DrivePool may disable that function with this setting, it doesn't disable FileIDs themselves.  Any application using FileIDs as static identifiers for files may still run into problems. 
    I would avoid combining any file backup/synchronization tools with DrivePool drives (if possible).  These likely have the highest chance of lost files, misplaced files, file content being mixed up, and excess resource usage.   If you can't avoid them, consider taking file hashes of the entire DrivePool directory tree; do this again at a later point and make sure files that shouldn't have changed still have the same hash.
    If you have files that rarely change after being created then hashing each file at some point after creation and alerting if that file disappears or hash changes would easily act as an early warning to a bug here being hit.
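    A hypothetical batch sketch of that hashing idea, assuming P: is the pool drive and using certutil (any hashing tool would do); run it again later into a second snapshot file and compare the two with fc to spot unexpected changes:
    @echo off
    rem Record a SHA256 hash of every file on the pool into a snapshot file.
    set SNAPSHOT=C:\pool-hashes-snapshot1.txt
    for /r "P:\" %%F in (*) do certutil -hashfile "%%F" SHA256 | find /v "completed" >> "%SNAPSHOT%"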
  22. Like
    Shane reacted to VapechiK in All in One plug-in   
    hello
    i cannot speak to the functionality of the All In One plugin when running the latest beta, as i do not run beta software of any kind (all software is beta lol).  i am running 2.3.2.1493 release final.  i will upgrade to the latest and greatest release when it is available i suppose.  that said, here's what happens when i tinker with the AIO settings in DrivePool...
     
    i click OK and the DP UI crashes.
     
    this happens when the AIO is selected (ticked or unticked), then any other plug-in is selected (again either ticked or unticked), then finally reselecting the AIO plug-in.  sometimes it will error when 'cancel' is clicked and only AIO was selected, though not always.  it's saving UI error reports.  dark mode using the AIO is barely readable, the contrasts are way off, and only when running the light theme in the DrivePool gui does it look 'normal', but it still fails as above.  in the past i have un/reinstalled it with reboots etc. to no avail.  i will give it another go when i upgrade DP; for now the OFP & DUL plug-ins do all that i need.   
    another thing: 
    DrivePool 2.3.3.1505 gave us an update to the Disk Space Equalizer
     .1501
     * [Issue #28751] Added an option to equalize by used disk space to the Disk Space Equalizer balancer.
     &
     .1504
     * Added an explicit placement limit setting to the Disk Space Equalizer balancer.
    the AIO in its current state does not include these options.
     
    perhaps @methejuggler will update their code soon  or maybe  @Alex  will take over its maintenance? 
    i will be adding another pool of 3 or 4 ssds soon and would love to have the AIO updated and stable.
    cheers
  23. Like
    Shane reacted to PetarK in DrivePool needs Reset after restart, Windows 11 latest   
    This solution provided by Covecube is working. Posting here so it's visible to others.

    https://wiki.covecube.com/StableBit_DrivePool_F3540
  24. Like
    Shane got a reaction from Katz777 in How to have drives go to sleep while the StableBit DrivePool service is running?   
    To quote Christopher from that thread, "StableBit DrivePool pulls the directory list for the pool directly from the disks, and merges them in memory, basically.  So if you have something that is scanning the folders, you may see activity."
    There may be some directory/file list caching going on in RAM, whether at the Windows OS and/or DrivePool level, but DrivePool itself does not keep any form of permanent (disk-based, reboot-surviving) record of directory contents.
  25. Like
    Shane got a reaction from Katz777 in How to have drives go to sleep while the StableBit DrivePool service is running?   
    At least when testing on my own machines there is caching going on - but my opinion is that it's being done by Windows, since caching file system queries is part of a modern operating system's job description, and having DrivePool do it too seems like it would be redundant (although I know dedicated caching programs, e.g. PrimoCache, do exist). Certainly there's nothing in DrivePool's settings that mentions caching.
    Whether a disk gets woken by a particular directory query is going to depend on whether that query can be answered by what's in the cache from previous queries.