Shane

Moderators · Posts: 905 · Days Won: 86
Reputation Activity

  1. Like
    Shane reacted to servonix in File Placement Rules not being respected as pool is filling up   
    Each file placement rule is set to never allow files to be placed on other disks.
    Since posting this I have actually been able to correct the issue. I added another file placement rule to the list for .svg images in my media library as I noticed emby fetched one for a newly added movie. When Drivepool started balancing I noticed it placing files onto the SSD's excluded from the media file placement rules. I stopped the balancing, then started again and the issue started correcting itself. All media files on the SSD's were moved to the hard drives, and all images/metadata were moved to the correct SSD's.
    I strongly suspect that somehow the scanner balancer combined with old file placement rules from using those SSD's with the optimizer plugin were at fault.
    One of my drives had developed a couple of unreadable sectors a few weeks ago and started evacuating files onto the SSD's. At the time the SSD's were included in the file placement rules from previously being used with the ssd optimizer plugin. I had stopped the balancing, removed the culprit drive from the pool choosing to duplicate later and did a full reformat of the drive to overwrite the bad sectors. When the format completed I did a full surface scan and the drive showed as healthy, so I was comfortable re-adding it back into the pool. Drivepool reduplicated the data back onto the drive and a balancing pass was also done to correct the balancing issues from the drive being partially evacuated. The file placement rules were also changed at this time to exclude those SSD's from being used by my media folders and metadata/images. Everything was working as expected until the other day.
    For whatever reason only a small portion of newly added files were actually written to the excluded SSD's. It was literally only stuff added within the last couple of days, despite a significant amount of data being added since re-adding that drive back into the pool and adjusting the file placement rules to exclude the SSD's that were previously used by the optimizer. It's as if some kind of flag set by the scanner balancer, or by the old file placement rules, wasn't fully cleared.
    Regardless of the cause I'm glad that everything seems to be chugging along like normal now. If I start noticing any more weird behavior I intend on enabling logging to help support identify what's going on under the hood.
  2. Thanks
    Shane got a reaction from epfromer in Markers on Drive Pool disk list   
    The light blue, dark blue, orange and red triangle markers on a Disk's usage bar indicate the amounts of those types of data that DrivePool has calculated should be on that particular Disk to meet certain Balancing limits set by the user, and it will attempt to accomplish that on its next balancing run. If you hover your mouse pointer over a marker you should get a tooltip that provides information about it.
    https://stablebit.com/Support/DrivePool/2.X/Manual?Section=Disks List (the section titled Balancing Markers)
  3. Thanks
    Shane got a reaction from Fox01 in covefs.sys causing my computer to crash?   
    If you uninstall DrivePool 2.3.11.1663 and install DrivePool 2.3.8.1600 (download folder link) does the BSOD stop happening?
  4. Thanks
    Shane got a reaction from sardonicus in Lost the option to manually rebalance after adding a drive.   
    Directly under the pie graph, "Manage Pool" is clickable and it highlights when you place the mouse cursor over it; I've linked a screenshot:

  5. Like
    Shane reacted to IanR in Drive Pool 2.3.10.1661_x64 on WHS2011 - Failing to start.   
    With 1600 Installed, go into Settings -> Select Updates... -> Settings -> Disable Automatic Updates
    Mine now runs without the constant notification that an update is available.
  6. Like
    Shane reacted to IanR in Drive Pool 2.3.10.1661_x64 on WHS2011 - Failing to start.   
    Thanks Christopher, I now have Drive Pool back up and running.
    For the record, I uninstalled DrivePool, without deleting the ProgramData folder, and then installed StableBit.DrivePool_2.3.8.1600_x64_Release.exe from the above link.
    After a reboot, DrivePool started and picked up the pool config (and license) without any input from me.
    Many Thanks!
  7. Like
    Shane reacted to TomTiddler in Just got this error message, not sure what to do ...   
    Thanx for the response Shane, what followed was a slight comedy of errors (this was a disk which was mounted to avoid using a drive letter; it turns out it's quite difficult to run chkdsk on boot for a 'mounted' drive). I finally realized "Wait a minute - this is part of a drive pool, and it's replicated!!!". So I just removed the drive, and replaced it. Sorry to have bothered you 😄
  8. Thanks
    Shane got a reaction from Hamby in Markers on Drive Pool disk list   
    The light blue, dark blue, orange and red triangle markers on a Disk's usage bar indicate the amounts of those types of data that DrivePool has calculated should be on that particular Disk to meet certain Balancing limits set by the user, and it will attempt to accomplish that on its next balancing run. If you hover your mouse pointer over a marker you should get a tooltip that provides information about it.
    https://stablebit.com/Support/DrivePool/2.X/Manual?Section=Disks List (the section titled Balancing Markers)
  9. Like
    Shane got a reaction from o999 in Does DrivePool allow files to be split over different physical disks?   
    Hi David,
    DrivePool does not split any files across disks. Each file is stored entirely on a disk (or if duplication is enabled, each copy of that file is stored entirely on a separate disk to the others).
    You could use CloudDrive on top of DrivePool to take advantage of the former's block-based storage, but note that doing so would give up one of the advantages of DrivePool (that if a drive dies the rest of the pool remains intact); if you did this I would strongly recommend using either DrivePool's duplication (which uses N times as much space to allow N-1 drives to die without losing any data from the pool, though it will be read-only until the dead drive is removed) or hardware RAID5 in arrays of three to five drives each or RAID1 in pairs or similar (which allows one drive per array/pair to die without losing any data and would also allow the pool to remain writable while you replaced the dead drive).
  10. Like
    Shane reacted to SaintTDI in After removing 2 HDDs from Drivepool, I now have 42GB of duplicated files but I don't know where I can find them   
    thank you for the replies !
    The Re-measure worked perfectly  
  11. Like
    Shane got a reaction from roirraWedorehT in Beware of DrivePool corruption / data leakage / file deletion / performance degradation scenarios Windows 10/11   
    FWIW, digging through Microsoft's documentation, I found these two entries in the file system protocols specification:
    https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-fscc/2d3333fe-fc98-4a6f-98a2-4bb805aff407
    https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-fscc/98860416-1caf-4c80-a9ab-8d61e1ccf5a5
    In short, if a file system cannot provide a file ID that is both unique within a given volume and stable until deleted, then it must set the field to either zero (indicating the file system does not support file IDs) or maxint (indicating the file system cannot give a particular file a unique ID) as per the specification.
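To see the contract those specs describe in action, here's a minimal sketch using Python's standard library; `os.stat().st_ino` surfaces the platform's file identifier (the NTFS FileID on Windows, the inode number on POSIX), and on a compliant filesystem it survives a rename:

```python
import os
import tempfile

# Illustration (not DrivePool code): per MS-FSCC, a file ID must stay
# unique and stable within a volume until the file is deleted; a rename
# on the same volume must not change it.
with tempfile.TemporaryDirectory() as d:
    old_path = os.path.join(d, "before.txt")
    new_path = os.path.join(d, "after.txt")
    with open(old_path, "w") as f:
        f.write("payload")

    id_before = os.stat(old_path).st_ino
    os.rename(old_path, new_path)   # rename in place on the same volume
    id_after = os.stat(new_path).st_ino

    # On a compliant filesystem the identifier survives the rename.
    print("stable:", id_before == id_after)  # stable: True
```

A tool comparing these IDs across a rename (or a reboot) is implicitly relying on exactly the guarantee discussed in this thread.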
  12. Like
    Shane got a reaction from roirraWedorehT in Beware of DrivePool corruption / data leakage / file deletion / performance degradation scenarios Windows 10/11   
    MitchC, first of all thank you for posting this! My (early a.m.) thoughts:
    (summarised) "DrivePool does not properly notify the Windows FileSystem Watcher API of changes to files and folders in a Pool." If so, this is certainly a bug that needs fixing. Indicating "I changed a file" when what actually happened was "I read a file" could be bad or even crippling for any cohabiting software that needs to respond to changes (as per your example of Visual Studio), as could neglecting to say "this folder changed" when a file/folder inside it is changed.
    (summarised) "DrivePool isn't keeping FileID identifiers persistent across reboots, moves or renames." Huh. Confirmed, and as I understand it the latter two should be persistent @Christopher (Drashna)? However, attaining persistence across reboots might be tricky given a FileID is only intended to be unique within a volume while a DrivePool file can at any time exist across multiple volumes due to duplication and move between volumes due to balancing and drive replacement. Furthermore as Microsoft itself states "File IDs are not guaranteed to be unique over time, because file systems are free to reuse them". I.e. software should not be relying solely on these over time, especially not backup/sync software! If OneDrive is actually relying on it so much that files are disappearing or swapping content then that would seem to be an own-goal by Microsoft. Digging further, it also appears that FileID identifiers (at least for NTFS) are not actually guaranteed to be collision-free (it's just astronomically improbable in the new 64+64bit format as opposed to the old but apparently still in use 16+48bit format).
    (quote) "the FileID numbers given out by DrivePool are incremental and very low.  This means when they do reset you almost certainly will get collisions with former numbers." Ouch. That's a good point. Any suggestions for mitigation until a permanent solution can be found? Perhaps initialising DrivePool's FileID counter using the system clock instead of initialising it to zero, e.g. at 100ns increments (FILETIME) even only an hour's uptime could give us a collision gap of roughly thirty-six billion?
    (quote) "I would avoid any file backup/synchronization tools and DrivePool drives (if possible)." I disagree; rather, I would opine that any backup/synchronization tool that relies solely on FileID for comparisons should be discarded (if possible); a metric that's not reliable over time should ipso facto not be trusted by software that needs to be reliable over time. EDIT 2024-10-22: However, as MitchC has pointed out, determining whether your tools are using FileID can be difficult and the risk of finding out the hard way is substantial.
    Incidentally, on the subject of file hashing I recommend ensuring Manage Pool -> Performance -> Read striping is un-ticked as I've found intermittent hashing errors in a few (not all) third-party tools when this is enabled; I don't know why this happens (maybe low-level disk calls that aren't compatible with non-physical volumes?) but disabling read-striping removes the problem and I've found the performance hit is minor.
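The clock-seeding arithmetic in the mitigation suggestion above is easy to verify; a quick sketch (FILETIME ticks are 100 ns):

```python
# FILETIME resolution is 100 ns, i.e. ten million ticks per second.
TICKS_PER_SECOND = 10_000_000

# If the FileID counter were seeded from the clock, one hour of uptime
# before a reboot would advance the seed by this many ticks - the
# minimum gap before newly issued IDs could collide with IDs handed
# out in the previous session.
gap = TICKS_PER_SECOND * 3600
print(gap)  # 36000000000 - the "roughly thirty-six billion" above
```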
  13. Thanks
    Shane reacted to MitchC in Beware of DrivePool corruption / data leakage / file deletion / performance degradation scenarios Windows 10/11   
    Mostly. As I think you mentioned earlier in this thread, that doesn't disable FileIDs, and applications can still get the FileID of a file. Depending how that ID is used it could still cause issues. An example below is SnapRAID, which doesn't use OpenByFileID but does trust that the same FileID is the same file.
    For the biggest problems (data loss, corruption, leakage) this is correct. Of course, one generally can't know if an application is using FileIDs (especially if it's not open source); it is likely not mentioned in the documentation. It also doesn't mean your favorite app may not start to do so tomorrow, and then all of a sudden the application that worked perfectly for 4 years starts to silently corrupt random data. By far the most likely apps to do this are backup apps, data sync apps, cloud storage apps, file sharing apps - things that have some reason to try to track what files are created/moved/deleted/etc.
    The other issue (and sure, if I could go back in time I would split this thread in two), the change notification bugs in DrivePool, won't directly lead to data loss (although it can greatly speed up the process above). It will, however, have the potential for odd errors and performance issues in a wide range of applications. The file change API is used by many applications - not just the app types listed above (which often will use it if they run 24/7) but any app that interfaces with many files at once (i.e. coding IDEs/compilers, file explorers, music or video catalogs, etc). This API is common, easy for developers to use, and generally can greatly increase the performance of apps, as they no longer need to manually check every file: they can just install one event listener on a parent directory, and even if they only care about notifications for some of the files under it, they can simply ignore the change events they don't care about. It may be very hard to trace these performance issues or errors to DrivePool due to how they may present themselves. You are far more likely to think the application is buggy or at fault.
    Short Example of Disaster
    As it is a complex issue to understand I will give a short example of how FileIDs being reused can be devastating. 
    Let's say you use Google Drive or some other cloud backup / sharing application, and it relies on the fact that as long as FileID 123 is around, it always points to the same file. This is all but guaranteed with NTFS.
    You only use Google Drive to backup your photos from your phone, from your work camera, or what have you.   You have the following layout on your computer:
    c:\camera\work\2021\OfficialWiringDiagram.png with file ID 1005
    c:\camera\personal\nudes\2024Collection\VeryTasteful.png with file ID 3909
    c:\work\govt\ClassifiedSatPhotoNotToPostOnTwitter.png with file ID 6050
    You have OfficialWiringDiagram.png shared with the office, as it's an important reference anytime someone tries to figure out where the network cables are going.
    Enter DrivePool. You don't change any of these files, but DrivePool generates a file-changed notification for OfficialWiringDiagram.png. Google Drive says "OK, I know that file, I already have it backed up and it has file ID 1005." It then opens File ID 1005 locally, reads the new contents, and uploads it to the cloud, overwriting the old OfficialWiringDiagram.png. The only problem is you rebooted, so 1005 was OfficialWiringDiagram.png before, but now file 1005 is actually your nude file VeryTasteful.png. So it has just backed up your nude file to the cloud as "OfficialWiringDiagram.png", and remember that file is shared to the cloud. Next time someone goes to look at the office wiring diagram they are in for a surprise. Or, depending on the application: if 'ClassifiedSatPhotoNotToPostOnTwitter.png' became FileID 1005 instead, then even though the change notification was for the path "c:\camera\work\2021\OfficialWiringDiagram.png" (under the main folder it monitors, "c:\camera"), when it opens File 1005 it now gets a file completely outside your camera folder, reads the highly sensitive file from c:\work\govt, and a file that should never be uploaded is shared to the entire office.
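To make the failure mode concrete, here is a toy Python simulation of the scenario above (no real cloud APIs; `handle_change_notification` and both tables are invented for illustration, using the hypothetical layout from this post):

```python
# Toy simulation of a backup tool that trusts "same FileID -> same file"
# while the pool reassigns IDs after a reboot.

# Before the reboot: the tool has indexed the shared diagram by FileID.
backup_index = {1005: "c:/camera/work/2021/OfficialWiringDiagram.png"}

# After the reboot the pool hands out IDs from zero again, so 1005 now
# names a completely different file.
id_table_after_reboot = {1005: "c:/camera/personal/2024Collection/VeryTasteful.png"}

def handle_change_notification(file_id: int) -> str:
    """The tool re-reads 'the' file by ID and uploads it under the old name."""
    actual_path = id_table_after_reboot[file_id]   # what it really reads
    uploaded_as = backup_index[file_id]            # old name kept for the share link
    return f"uploaded {actual_path} as {uploaded_as}"

# The private photo is uploaded under the shared diagram's name.
print(handle_change_notification(1005))
```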
     
    Now you follow many best practices. Google Drive you restrict to only the c:\camera folder; it doesn't back up or access files anywhere else. You have a RAID 6 SSD setup in case of drive failure, and image files from prior years are never changed, so once written to the drive they are not likely to move unless the drive was defragmented, meaning a pretty low chance of conflicts or of some abrupt power failure corrupting them. You even have some photo scanner that checks for corrupt photos just to be safe. Except none of these things will save you from the above example. Even if you kept 6 months of backup archives offsite in cold storage (made perfectly and not affected by the bug) and all deleted files are kept for 5 years, if you don't reference OfficialWiringDiagram.png but once a year you might not notice it was changed and the original data overwritten until after all your backups are corrupted with the nude, and the original file might be lost forever.
    FileIDs are generally better than relying on file paths: with paths alone, renaming or moving file 123 would break the link for anyone you had previously shared the file with. If instead, when you rename "BobsChristmasPhoto.png" to "BobsHolidayPhoto.png", the application knows it is the same file being renamed because it still has File ID 123, then it can silently update the sharing data on the backend so that when people click the existing link it still loads the photo. Even if an application uses moderate de-duplication techniques like hashing the file to tell if it has just moved, if you move a file and slightly change it (say you clear out the photo location metadata that your phone put there), without File IDs it would think it is an all-new file.
    FileID collisions are not just possible but basically guaranteed with DrivePool. With the change notification bug, a sync application might think all your files are changing often, as even reading a file or browsing a directory might trigger a notification that it has changed. This means it is backing up all those files again, which might be tens of thousands of photos. Since every reboot changes the File IDs, if it syncs a file after the reboot it may upload the wrong contents (as it used the File ID), and if you had a second computer downloading that file you could put yourself in a never-ending loop of backups and downloads that overwrites one file with another at random. Since the FileID it was known by last time might not exist when it goes to back it up (which I assume would trigger many applications to fall back to path validation), only part of your catalog would get corrupted each iteration. The application might also validate that a renamed file stayed within the root directory it interacts with: if your Christmas photo's file ID now pointed to something under "c:\windows" it would fall back to file paths, as it knows that is not under the "c:\camera" directory it works with.
    This is not some hypothetical situation; these are actual occurrences and behaviors I have seen happen to files I have hosted on DrivePool. These are not two-bit applications written by a one-person dev team; these are massively used first-party applications and commercial enterprise applications.
     
    If you can, and you care about your data, I would. The convenience of DrivePool is great, and there are countless users it works fine for (at least as far as they know), but even with high technical understanding it can be quite difficult to detect which applications are affected by this.
    If you thought you were safe because you use something like SnapRAID, it won't stop this sort of corruption. As far as SnapRAID is concerned you just deleted a file and renamed another on top of it. SnapRAID may even contribute further to the problem, as it (like many) uses the Windows FileID as the Windows equivalent of an inode number: https://github.com/amadvance/snapraid/blob/e6b8c4c8a066b184b4fa7e4fdf631c2dee5f5542/cmdline/mingw.c#L512-L518 . Applications assume inodes and FileIDs that are the same as before are the same file. That is, unless you use DrivePool. Oops.
    Apps might use timestamps in addition to FileIDs, although timestamps can overlap: say you downloaded a zip archive and extracted it with Windows' native extractor (by design choice it ignores timestamps even if the zip contains them).
    SnapRAID can even use some advanced checks when syncing, but in the worst case, where a file's content has actually changed but the FileID in question has the same size/timestamp, SnapRAID assumes it is actually unmodified and leaves the parity data alone. This means if you had two files with the same size/timestamp anywhere on the drive and one of them got the FileID of the other, it would end up with incorrect parity data associated with that file. Running a snapraid fix could actually result in corruption, as SnapRAID would believe the parity data is correct but the content on disk it thinks goes with it does not. Note: I don't use SnapRAID, but I was asked this question, and reading the manual and the source above I believe this is technically correct. It is great that SnapRAID is open source and has such technical documentation; plenty of backup/sync programs don't, and you don't know what checking they do.
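The size/timestamp blind spot described above is easy to demonstrate; a toy sketch (file names and contents are made up, and `looks_unmodified` is an invented stand-in for a metadata-only check):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, "a.bin")
    b = os.path.join(d, "b.bin")
    with open(a, "wb") as f:
        f.write(b"AAAAAAAA")   # 8 bytes
    with open(b, "wb") as f:
        f.write(b"BBBBBBBB")   # 8 bytes, different content
    # Force identical timestamps on both files.
    os.utime(a, (1_700_000_000, 1_700_000_000))
    os.utime(b, (1_700_000_000, 1_700_000_000))

    def looks_unmodified(p1: str, p2: str) -> bool:
        """A metadata-only check: compares size and mtime, never content."""
        s1, s2 = os.stat(p1), os.stat(p2)
        return (s1.st_size, s1.st_mtime) == (s2.st_size, s2.st_mtime)

    # The metadata check passes even though the bytes differ - exactly
    # the case where stale parity would go uncorrected.
    assert looks_unmodified(a, b)
    with open(a, "rb") as f1, open(b, "rb") as f2:
        assert f1.read() != f2.read()
    print("metadata says unmodified, content differs")
```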
  14. Like
    Shane reacted to JC_RFC in Beware of DrivePool corruption / data leakage / file deletion / performance degradation scenarios Windows 10/11   
    To be fair to StableBit, I have used DrivePool for the past few years and have NEVER lost a single file because of it. How elaborately or simply you use DrivePool within itself is not really the concern.
    What is being warned of here, though, is that if you use any special applications that expect FileID to behave as it does on NTFS, there will be risks.
    My example is that I use FreeFileSync quite a bit to maintain files between my pool, my HTPC and another backup location. When I move files on one drive, FreeFileSync, using FileID, recognises the file was moved and so syncs a "move" on the remote filesystem as well. This saves potentially hours of copying and then deleting. It does not work on DrivePool because the FileID changes on each reboot. In this case FreeFileSync fails "SAFE": it does the copy and delete instead, so I only suffer performance issues.
    What could happen, though, is that you use another app, say one that cleans old files or moves files around, that does not fail safe if a FileID is repeated for a different file, and in doing so you suffer file loss. This will only happen if you use some third-party application that makes changes to files. It's not the type of thing a word processor or a PC game etc. are going to be doing (typically, in case someone jumps in with an "it could be possible" argument).
    So generally DrivePool is safe, and for you there is most likely nothing to worry about, but if you do use an application now or in the future that is for cleaning up or syncing etc. then be very careful, in case it uses FileID and causes data loss because of this issue.
    For day to day use, in my experience you can continue to use it as is. If you want to add to the group of us that would like this improved, feel free to add your voice to the request as otherwise I don't see any update for this in the foreseeable future.
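The fail-safe move-detection logic described above can be sketched roughly like this (a toy model, not FreeFileSync's actual code; `plan_sync` and its index format are invented for illustration):

```python
# Each index maps file_id -> path on the source side, as seen at two
# points in time. Stable IDs let a rename be synced as a cheap "move";
# if the ID vanished (e.g. reset by a reboot), the tool falls back
# safely to copy + delete.
def plan_sync(old_index: dict, new_index: dict) -> list:
    ops = []
    for fid, new_path in new_index.items():
        old_path = old_index.get(fid)
        if old_path is None:
            ops.append(("copy", new_path))             # ID unknown: full copy
        elif old_path != new_path:
            ops.append(("move", old_path, new_path))   # cheap rename on target
    for fid, old_path in old_index.items():
        if fid not in new_index:
            ops.append(("delete", old_path))
    return ops

# Stable IDs (plain NTFS): a rename is synced as a move.
print(plan_sync({7: "x/a.mkv"}, {7: "y/a.mkv"}))
# -> [('move', 'x/a.mkv', 'y/a.mkv')]

# IDs reset by a reboot (the DrivePool case): the same rename degrades
# to a slow but safe copy + delete.
print(plan_sync({7: "x/a.mkv"}, {42: "y/a.mkv"}))
# -> [('copy', 'y/a.mkv'), ('delete', 'x/a.mkv')]
```

The danger JC_RFC describes is a tool that, unlike this sketch, trusts a reused ID without checking the path at all.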
  15. Like
    Shane got a reaction from haoma in Beware of DrivePool corruption / data leakage / file deletion / performance degradation scenarios Windows 10/11   
    Hi haoma.
    The corruption isn't being caused by DrivePool's duplication feature (and while read-striping can have some issues with some older or, I'll say, "edge-case" utilities, which is why I usually just leave it off anyway, that's also not the cause here).
    The corruption risk comes in if any app relies on a file's ID number to remain unique and stable unless the file is modified or deleted, as that number is being changed by DrivePool even when the file is simply renamed or moved, or when the DrivePool machine is restarted - and in the latter case being re-used for completely different files.
    TL;DR: as far as I know, currently the most we can do is to change the Override value for CoveFs_OpenByFileId from null to false (see Advanced Settings). At least as of this post's date it doesn't fix the File ID problem, but it does mean affected apps should either safely fall back to alternative methods or at least return some kind of error so you can avoid using them with a pool drive.
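For reference, a sketch of what the edited entry might look like, assuming CoveFs_OpenByFileId lives in the 'Settings.json' Advanced Settings file and follows the same Default/Override shape as the other settings shown elsewhere in this thread (the Default value shown here is an assumption; verify against your own file before editing):

```json
"CoveFs_OpenByFileId": {
    "Default": true,
    "Override": false
}
```

As with other Advanced Settings changes, the StableBit DrivePool service would need a restart for it to take effect.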
  16. Thanks
    Shane got a reaction from ToooN in Japanese Translation does not work   
    Hi ToooN, try also editing the section of the 'Settings.json' file as below, then close the GUI, restart the StableBit DrivePool service and re-open the GUI?
    'C:\ProgramData\StableBit DrivePool\Service\Settings.json'
    change
    "DrivePool_CultureOverride": { "Default": "", "Override": null
    to
    "DrivePool_CultureOverride": { "Default": "", "Override": "ja"
    I'm not sure if "ja" needs to be in quotes or not in the Settings.json file. Hope this helps!
    If it still doesn't change you may need to open a support ticket directly so StableBit can investigate.
  17. Like
    Shane reacted to Rivers in Pool duplication enabled, unduplicated files persist   
    Hello,
    I have a 12 disk pool using identical Toshiba N300 8TB drives. This system runs on Server 2022, and for years I've had no issues. This morning, I noticed that my 89.1 TB pool has 62.3 TB of Duplicated files, 1.21 TB of Unduplicated files, and 52.7 GB of "other". I have pool duplication enabled across the entire array, to include two 1 TB SATA SSDs used as scratch discs, for a total of 14 disks in the array. I have Manage Pool>File Protection>Pool File Duplication enabled. I am also using the SSD plugin. Up to now, everything has worked perfectly. After many years of using your software, this is my first issue. Please let me know if there's any other info I can provide to assist in figuring this out.
    Thank you
     
    Edit: I am running v2.3.8.1600
    Second Edit: I have solved the problem. I forgot that I had excluded a folder from the duplication process. The software works perfectly as always... the problem was located directly behind the screen.
  18. Like
    Shane reacted to ToooN in Japanese Translation does not work   
    RESULT: Success!!
    Thank you Drashna.
    I installed StableBit.DrivePool_2.3.10.1661_x64_Release.exe. 
    Successfully converted to Japanese!
    Thanks for the quick fix.
    Index of /DrivePoolWindows/release/download URL
    https://covecube.download/DrivePoolWindows/release/download/
  19. Thanks
    Shane reacted to Christopher (Drashna) in Japanese Translation does not work   
    Yup, something broke on our end.  This should be fixed now, and updating should get the fixed version.
  20. Like
    Shane reacted to dslabbekoorn in "Duplication Inconsistant" for files that don't exist!?   
    Hard Disk Sentinel Pro found an additional three HDDs that were "bad" or failing, on top of the 2 I already replaced, all due to bad sectors and running out of spare sectors to replace them. Some were over 10 years old and all were over 5 years. Pulled out the bad ones and went shopping. Got my money's worth from them. All new drives are server-grade refurbs from the same company I bought one of the old drives from; they're online now, but still stand by their products and honor warranties, so I feel pretty secure. And as I read in another post, a server crash/issue is a great excuse to upgrade your hardware. My pool "accidentally" grew by 12Tb after I got it fixed. 😁 Best news was I turned on Pool Duplication, and when I looked at what was duplicating at the folder level I found 2 miscellaneous folders that were not duplicating, changed them manually and YAY! Measuring, Duplication and Balancing all finished OK. Amazing what a hundred bucks or so of new equipment will do. So my original issues were all equipment related after all.
  21. Like
    Shane reacted to MitchC in Beware of DrivePool corruption / data leakage / file deletion / performance degradation scenarios Windows 10/11   
    Sorry, but even in my not-exactly-mission-critical environment I am not a fan of data loss or leakage. Also, "extremely low" is an understatement. NTFS supports 2^32 possible files on a drive. The MFT file index is actually a 48-bit entry; that means you could max out your MFT records 65K times before the index needs to loop around. The sequence number (how many times that specific MFT record is updated) is an additional 16 bits on its own, so even if you could delete and realloc a file to the exact same MFT record, you would still need to do so with that specific record 65K times. If an application is monitoring for file changes, hopefully it catches one of those :)
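The arithmetic above checks out; a quick sketch:

```python
# NTFS caps a volume at 2**32 files, but the MFT record index is 48
# bits, and each record carries a further 16-bit sequence number.
MAX_FILES = 2**32
MFT_INDEX_SPACE = 2**48
SEQUENCE_SPACE = 2**16

# You could fill the volume's file limit this many times before the
# 48-bit index wraps:
print(MFT_INDEX_SPACE // MAX_FILES)  # 65536 - the "65K times" above

# ...and even a reused record must be reused this many times to wrap
# its own sequence number:
print(SEQUENCE_SPACE)                # 65536
```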
    It is nearly impossible to know how an application may use FileID, especially as it may only be used as a fallback due to other features DrivePool does not implement, and maybe they combine FileID with something else. If an application says "hey, I know file 1234" and on startup it checks file 1234: if that file exists it can be near positive it's the same file; if it is gone it simply removes file 1234 from its known files, and by the time 1234 is reused it hasn't known about it in forever.
    The problem here is not necessarily when FileIDs change; I'd wager most applications could probably handle file IDs changing even though the file has not just fine (you might get extra transfers, extra backed-up data, or temporary performance loss). It is the FileID reuse that leads to the worst effects: data loss, data leakage, and corruption. The file ID is 64 bits and the max file space is 32 bits (and realistically most people probably have far fewer than 4 billion files). DrivePool could randomly assign file IDs willy-nilly every boot and probably cause far fewer disasters. DrivePool could likely use the underlying FileIDs through some black-magic hackery. The MFT counter is 48 bits, but I doubt those last 9 bits are touched on most normal systems. If DrivePool assigned an incremental number to each drive and then overwrote those 9 bits of the FileID from the underlying system with the drive ID, you would support 512 hard drives in one pool and still have nearly the same near-zero FileID collision rate, while also having a stable file ID. It would only change the FileID if a file moved in the background from one drive to another (and not just mirrored). It could even keep it the same with a zero-byte ID file left behind in a ghost folder if so desired, but the FileID changing is probably far less of a problem. A backup restore program that deleted the old file and created it again would also change the FileID, and I doubt that causes issues.
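The bit-stamping idea above can be sketched as follows (purely illustrative; this is not anything DrivePool actually does, and `pooled_file_id` is an invented name):

```python
# Stamp a per-drive index into the top 9 bits of the 48-bit MFT index
# field of a 64-bit FileID, leaving the 16-bit sequence number
# (bits 48..63) untouched. 2**9 = 512 drives per pool.
DRIVE_BITS = 9
SHIFT = 48 - DRIVE_BITS                    # drive index occupies bits 39..47
DRIVE_MASK = (2**DRIVE_BITS - 1) << SHIFT

def pooled_file_id(drive_index: int, ntfs_file_id: int) -> int:
    """Combine a drive's index with the drive's own (stable) NTFS FileID."""
    assert 0 <= drive_index < 2**DRIVE_BITS
    return (ntfs_file_id & ~DRIVE_MASK) | (drive_index << SHIFT)

# The same underlying ID on two different drives yields two distinct,
# stable pool-level IDs:
a = pooled_file_id(3, 0x0001_0000_0000_1234)
b = pooled_file_id(4, 0x0001_0000_0000_1234)
assert a != b

# The drive can be recovered from the ID, and since the underlying
# NTFS ID is stable, reboots don't change the result:
assert (a & DRIVE_MASK) >> SHIFT == 3
print(hex(a), hex(b))
```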
    That said, it is not really my job to figure out how to solve this problem in a commercial product.
    As you mentioned back in December, it is unquestionable that DrivePool is doing the wrong thing:
    it uses MUST in caps.  
    My problem isn't that this bug exists (although that sucks). My problem is that this has been, and continues to be, handled exceptionally poorly by StableBit, even though it can pose significant risk to users without them even knowing it. I likely spent more of my time investigating their bug than they have. We are literally looking at nearly two years now since my initial notification, and users can make the same mistakes now as back then, despite the fact they could be warned or prevented from doing so.
  22. Like
    Shane reacted to MitchC in Beware of DrivePool corruption / data leakage / file deletion / performance degradation scenarios Windows 10/11   
    Shane, as always, has done a great job summarizing everything and I certainly agree with most of it.  I do want to provide some clarification, and also differ on a few things:
    *) This is not about DrivePool being required to precisely emulate NTFS and all its features; that is probably never going to happen. At best DrivePool may be able to provide a driver-level drive implementation that could allow it to be formatted in the way Shane describes CloudDrive does. This critical bug is made worse by the fact that DrivePool specifically doesn't implement VSS or similar.
    *) The two issues here are not the same, and one is not causing the other. They are distinct, but the incorrect file-changed bug makes the FileID problem potentially much worse (or, in unlucky situations, causes it to happen at all). Merely browsing a folder can cause file change notifications to fire on the files in it in certain situations. This means an application listening to the notifications would believe unmodified files have been modified. It is possible that if this bug did not exist, only written files would have the potential for corruption, rather than all files.

    These next two points are not facts but IMO:
    *) DrivePool claims to be NTFS; if it cannot support certain NTFS features, it should break them as cleanly as possible (not as compatibly as possible, as it might currently). FileID support should be as disabled as possible by DrivePool, with opening a file by its FileID clearly banned. I don't know what would happen if FileID returned 0 or claimed to be unavailable, even though it is an NTFS volume. There are things DrivePool could potentially do to minimize the fatal damage this FileID bug can cause (i.e. not resetting IDs to zero), but honestly, even then, all FileID support should be as turned off as possible. If a user wants to enable these features, DrivePool should provide a massive disclaimer about the possible damage this might cause.
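    For context on what FileID identity means to applications, here is a minimal cross-platform sketch (an illustration of the general pattern, not DrivePool's or any particular app's code). Python exposes the volume's file ID through `os.stat()` as `st_ino`, and applications commonly key on the `(device, file ID)` pair to recognize a file across renames. On a correctly behaving volume that identity survives a rename; a volume that resets or reuses IDs breaks the assumption, so an app keyed on identity may treat the wrong file as "the same".

    ```python
    import os
    import tempfile

    def file_identity(path):
        # (device, file ID) is the usual key for "is this the same file?"
        st = os.stat(path)
        return (st.st_dev, st.st_ino)

    with tempfile.TemporaryDirectory() as d:
        a = os.path.join(d, "a.txt")
        with open(a, "w") as f:
            f.write("hello")
        ident_before = file_identity(a)

        b = os.path.join(d, "b.txt")
        os.rename(a, b)  # a rename must preserve the file's identity
        ident_after = file_identity(b)

        # On a well-behaved volume these match; if the filesystem resets
        # or reuses IDs, identity-keyed applications misbehave.
        print(ident_before == ident_after)
    ```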
    *) DrivePool has an ethical responsibility to its users that it is currently violating. It has a feature that can cause massive performance problems, data loss, and data corruption, and it has other bugs that accelerate these issues. DrivePool is aware of this; they should warn users using these features that unexpected behaviors and possibly irreversible damage could occur. It annoys me how much effort I had to exert to research this bug. As a developer, if I had a file system product users were paying for and it could cause silent corruption, I would find this highly disturbing and do what I could to protect other users.
    It is critical to remember this can result in corruption of the worst kind: corruption that normal health monitoring tools would not detect (files can still be read and written), yet it can corrupt files that are not being 'changed', in the background, at random rates. It wouldn't matter if you kept daily backups for 6 months; if you didn't detect this for 9 months, you would have archived the corruption into those backups and have no way of recovering that data. It can happen slowly, and literally only validating the file contents against some known-good copy would reveal it.
    Now, StableBit may feel they skirt some of the responsibility because they don't cause the corruption directly; it is some other application, relying on DrivePool's drive acting as NTFS says it should, that performs the writes that lose the data. The problem is that DrivePool's incorrect implementation is the direct reason this corruption occurs, and the applications that can trigger it are not doing anything wrong.
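    On the point that only validating file contents against a known good copy reveals this kind of silent corruption: a minimal sketch of checksum-manifest verification, assuming a simple relative-path-to-SHA-256 mapping (the file names and manifest format are illustrative, not taken from any particular tool).

    ```python
    import hashlib
    import os
    import tempfile

    def sha256_of(path, chunk=1 << 20):
        # Hash in chunks so large media files don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def build_manifest(root):
        # Map each file's path (relative to root) to its content hash.
        return {
            os.path.relpath(os.path.join(dirpath, name), root):
                sha256_of(os.path.join(dirpath, name))
            for dirpath, _, names in os.walk(root)
            for name in names
        }

    def verify_manifest(root, manifest):
        # Return the files whose current hash differs from the known good.
        return [rel for rel, digest in manifest.items()
                if sha256_of(os.path.join(root, rel)) != digest]

    with tempfile.TemporaryDirectory() as root:
        p = os.path.join(root, "movie.nfo")  # hypothetical file name
        with open(p, "w") as f:
            f.write("good data")
        manifest = build_manifest(root)

        with open(p, "w") as f:
            f.write("bad data!")  # simulate silent background corruption

        corrupted = verify_manifest(root, manifest)
        print(corrupted)
    ```

    Run periodically against a manifest taken when the data was known good, this catches corruption that timestamps, readability checks, and SMART-style health monitoring all miss.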
  23. Like
    Shane got a reaction from Thronic in Beware of DrivePool corruption / data leakage / file deletion / performance degradation scenarios Windows 10/11   
    As I understood it, the original goal was always to aim for having the OS see DrivePool as much of a "real" NTFS volume as possible. I'm probably not impressed nearly enough that Alex got it to the point where DrivePool became forget-it's-even-there levels of reliable for basic DAS/NAS storage (or at least I personally know three businesses who've been using it without trouble for... huh, over six years now?). But as more applications exploit the fancier capabilities of NTFS (if we can consider File ID to be something fancy), I guess StableBit will have to keep up. I'd love a "DrivePool 3.0" that presents a "real" NTFS-formatted SCSI drive the way CloudDrive does, without giving up that poolpart readability/simplicity.
    On that note, while I have noticed StableBit has become less active in the "town square" (forums, blogs, etc), they're still prompt regarding support requests, and development certainly hasn't stopped, with beta and stable releases of DrivePool, Scanner and CloudDrive all still coming out.
    Dragging back on topic: if there are any beta updates re File ID, I'll certainly be posting my findings.
  24. Like
    Shane reacted to Thronic in Beware of DrivePool corruption / data leakage / file deletion / performance degradation scenarios Windows 10/11   
    As I interpreted it, the first is largely caused by the second.
    Interesting. I won't consider that critical, for me, as long as it creates a gentle userspace error and won't cause covefs to bsod.
    That's kind of my point. Hoping for and projecting what should be done doesn't help anyone or anything; correctly emulating a physical volume with exact NTFS behavior would. I strongly want to add that I mean no attitude or condescension of any kind here, but I don't want to use unclear words either; I'm just aware of how it may come across online. As a programmer who has worked with the win32 API for a few years (though never virtual drive emulation), I can appreciate how big a change this would be to make now. I assume DrivePool was originally meant only for reading and writing media files, and when a project has gotten as far as this one has, I can respect that it's a major undertaking to get to a perfect emulation, in addition to mapping out strict, proprietary NTFS behavior in the first place.
    It's just a particularly hard case of figuring out which piece of hardware is bugging out. I never overclock/overvolt in the BIOS; I'm very aware of the pitfalls, and also that some motherboards may do so by default (it's checked). If it were a kernel-space driver problem I'd be getting BSODs and minidumps, always; but as the hardware freezes and/or turns off... it smells like a hardware issue. RAM is perfect, so I'm suspecting the motherboard or PSU. First I'll try to see if I can replicate it at will; at that point I'd be able to push data inside and outside the pool to see if DrivePool matters at all. But this is my own problem... sorry for mentioning it here.
     
    Thank you for taking the time to reply on a weekend day. It is what it is, I suppose.
  25. Like
    Shane reacted to haoma in Helldivers 2 problems, when installed on a DrivePool pool   
    Is there any update on this?
    I'm interested in buying DrivePool, and I want to use it not just for storage but also for gaming.