Leaderboard

Popular Content

Showing content with the highest reputation since 03/20/24 in Posts

  1. DrivePool does not use encryption (that's CloudDrive). However, if you have used Windows BitLocker to encrypt the physical drives on which your pool is stored, you will need to ensure you have those recovery key(s) saved (BitLocker would have prompted you to save them during the initial encryption process).
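If you want to confirm those recovery keys are actually saved somewhere before reinstalling, here is a minimal sketch using the built-in BitLocker tooling (run from an elevated prompt; D: is just an example volume):

    # Show the recovery password protector(s) for one volume
    manage-bde -protectors -get D:

    # Or dump the recovery passwords for every BitLocker volume via PowerShell
    Get-BitLockerVolume |
        Select-Object -ExpandProperty KeyProtector |
        Where-Object KeyProtectorType -eq 'RecoveryPassword' |
        Select-Object KeyProtectorId, RecoveryPassword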
    2 points
  2. hello

1. Make note of, or take screenshots of, your DrivePool settings if you have changed them from the defaults in any way. If you take screenshots, step 2 is important. DrivePool saves your pool data in Alternate Data Streams on the drives themselves, but doesn't save any customized balancer/file placement rules etc. from Manage Pool ^ Balancing... under the pie chart in the GUI. Also take note of the Manage Pool ^ Performance > settings.
2. Make sure all your user data (i.e. all docs, pics, downloads, etc.) from C:\Users\[your user name]\ has been saved/backed up elsewhere.
3. Yes, deactivate your license - cogwheel with down-pointing arrow in upper right corner / Manage license / Deactivate. In fact you should do this for all licensed 3rd-party software on your machine. If you are reinstalling on the EXACT same hardware it *shouldn't* much matter, but better safe than hassled later.
4. Power OFF the machine, and unplug/detach ALL drives EXCEPT your Win10 drive from the mobo and any USB ports. In other words, ONLY the Win10 boot drive where you want to clean-install Win11 is attached.
5. Install windoze 11 and update to the latest version, all new Windows Update security patches, etc.
6. Download and install the latest version of DrivePool from https://stablebit.com/DrivePool/Download and reactivate the license.
7. Power OFF your machine, reconnect your DrivePool drives, and power ON.
8. In the DrivePool GUI, Manage Pool ^ Balancing..., ensure all is reconfigured and set up as it was before the reinstall. SAVE. Manage Pool ^ Performance > as well. If it were me, I would reboot here.
9. It is important to remeasure the pool before using it normally: Manage Pool > Remeasure... > Remeasure.

*NOTE* if you never messed with the settings and everything was left at defaults before, steps 1 and 8 can probably be omitted/ignored. My own pool is fairly customized, so I included them as part of the procedure I would follow. cheers
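As a rough illustration of step 2, one way to copy a user profile off to another drive before wiping (a minimal sketch; the names and the destination E:\UserBackup are just examples):

    # Mirror the profile to another drive, skipping junction points and logging the run
    robocopy "C:\Users\YourName" "E:\UserBackup\YourName" /E /XJ /R:1 /W:1 /LOG:"E:\UserBackup\backup.log"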
    2 points
  3. I think that's not currently possible. It does sound useful and you might wish to submit a feature request (e.g. perhaps "when setting per-folder duplication, can we please also be able to set whether duplication for a folder is real-time or scheduled") to see if it's feasible?
    1 point
  4. I'm sure you figured it out already... From the images you posted, it just looks like a simple change is needed. The pool called ORICO BOX is fine as is. The one in the first image is not correct. You should have:

- A pool that has 12TB1 & 12TB2 with NO duplication set (let's give it drive letter W:).
- A pool called ORICO BOX with NO duplication set, containing the assorted drives (let's call it drive letter X:).

Now, drive W: essentially has 24TB of storage, since anything written to W: will only be saved to ONE of the two drives. You can set the balancing plugin to make them fill up equally with new data. Drive X: essentially has 28TB of storage, since anything written to X: will only be saved to ONE of the five drives.

At this point, you make ANOTHER new pool; let's call it Z:. In it, put DrivePool W: and DrivePool X:. Set the duplication settings to 2x for the entire pool. Remember, you are only setting DrivePool Z: to 2x duplication; no other pools need to be changed.

What this should do (if I didn't make a dumb typo): any file written to drive Z: will have one copy stored on either 12TB1 OR 12TB2, AND a duplicate copy will be stored on ONE of the five ORICO BOX drives. You must read & write your files on drive Z: to make this magic happen. Draw it out as a flowchart on paper (or see the sketch below) and it is much easier to visualise.
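A rough sketch of the hierarchy described above (W:, X: and Z: are just the placeholder letters used in this post):

    Z:  top-level pool, 2x duplication
     +-- W:  pool, no duplication  ->  12TB1, 12TB2
     +-- X:  pool "ORICO BOX", no duplication  ->  the five ORICO BOX drives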
    1 point
  5. This is mainly an informational post, concerning Windows 10. I have 13 drives pooled and I have every power management function set so as to not allow Windows to control power or in any way shut down the drives or anything else. I do not allow anything on my server to sleep either. I received a security update from Windows about 5 days ago. After the update I began to receive daily notices that my drives were disconnected. Shortly after any of those notices (within 2 minutes) I received a notice that all drives had been reconnected. There were never any errors resulting from whatever triggered the notices.

I decided to check and found that one of my USB controllers had its power control status changed. I changed it back to not allowing Windows to control its power and I have not received any notices since. I do not know for sure, but I am 99% sure that the Windows update toggled that one controller's power control status to allow Windows to turn it off when not being used. I cannot be 100% sure that I have always had it turned off but, until the update, I received none of the notices I started receiving after it. I suggest, if anyone starts receiving weird notices about several drives becoming lost from the pool, that you check the power management status of your drives. Sometimes Windows updates are just not able to resist changing things. They also introduce gremlins. You just have to be careful not to feed them after midnight, and under no circumstances should you get an infested computer wet.
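If you'd rather check this from a script than click through every device in Device Manager, here is a minimal sketch using WMI; as far as I know the Enable property corresponds to the "Allow the computer to turn off this device to save power" checkbox:

    # List devices that Windows is currently allowed to power down
    Get-CimInstance -Namespace root\wmi -ClassName MSPower_DeviceEnable |
        Where-Object { $_.Enable } |
        Select-Object InstanceName, Enable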
    1 point
  6. Future reference for anyone else who runs into this issue, I fixed it with the following settings: Under file placement settings, uncheck "unless drive is being emptied", but leave "file placement rules respect real-time..." checked. This is important because the SSD optimizer empties the drive, which is why it was overriding file placement rules. In file placement rules, Folder A should have the desired archive drive checked as well as the SSD cache drive.
    1 point
  7. Thanks VapechiK, you gave me back the courage! I am not sure if I will remove my drives during the re-installation process, but I decided to re-install the OS. Also thanks Shane for double-checking and confirming that the encryption option is in StableBit CloudDrive (not DrivePool). I've never used the BitLocker option, so I think I am safe from that encryption key problem.
    1 point
  8. no need to fear... as long as you have backups all is good. DrivePool does not use encryption by default; no 'encryption keys' exist, so no worries there. it is a non-issue. CloudDrive provides encryption, but if you are not using it, there is no need to even consider it in this discussion. referring to my initial reply:

Step 1: for your convenience. nothing else.
Step 2: DUH!
Step 3: an all-around good idea. self-explanatory.
Step 4: this is best practice, mostly just to ensure no screw-ups; there is no 'real need' to do so. if you are confident you know which drive to re-install to... go for it. if you manually format your C:\ drive (or when asked during the installation process), make sure you choose a GPT partition (should be selected by default).
Step 5: self-explanatory.
Step 6: self-explanatory.
Step 7: self-explanatory.
Step 8: if you care, reset your settings the way you had them before the reinstall. if you are good with the default settings, don't worry about it. a REBOOT is a good idea here.
Step 9: anytime you reinstall/reset/change settings etc. in DrivePool, it is best practice to remeasure after you save changes. if you go with the default settings, it will probably remeasure all by itself upon reboot. if not... Manage Pool ^ Remeasure > Remeasure.

very simple procedure actually. nothing to fear
    1 point
  9. A final, I hope, follow-up to this. I have fixed the problems by simply going to the OS level, going to each drive, and making "Everyone" the owner, with full rights, of each of the PoolPart... directories on the drives. That seems to have cleared everything up and the duplication functionality is restored. I probably made this more trouble than it needed to be, but I plead age and poor health that influenced my decisions toward the more complicated side. I lost a few files, but at least I did not need to rebuild my entire pool. I wish there was a tool available that would fix this kind of issue automagically, but doing it manually is not too bad as long as the correct choice is made early on. Unfortunately some of my early decisions led me down the wrong path, so it took me several times longer than it should have. Again, I plead old brain syndrome.
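For anyone who would rather do the same from a command line than from Explorer's security dialogs, a minimal sketch (elevated prompt; D:\PoolPart.xxxx is a placeholder for the actual hidden PoolPart folder, and note this grants Everyone full control exactly as described above, which may be broader than you want long-term):

    # Take ownership of the PoolPart tree, then set the owner and grant Everyone full control recursively
    takeown /F "D:\PoolPart.xxxx" /R /D Y
    icacls "D:\PoolPart.xxxx" /setowner "Everyone" /T /C
    icacls "D:\PoolPart.xxxx" /grant "Everyone:(OI)(CI)F" /T /C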
    1 point
  10. To start, while new to DrivePool I love its potential; I own multiple licenses and their full suite. If you only use DrivePool for basic archiving of large files, with simple applications accessing them for periodic reads, it is probably uncommon that you would hit these bugs. This assumes you don't use any file synchronization / backup solutions. Further, I don't know how many thousands (tens or hundreds?) of DrivePool users there are, but clearly many are not hitting these bugs, or not recognizing that they are, so this is NOT some new destructive "my files are 100% going to die" issue. Some of the reports I have seen on the forums, though, may actually be issues due to these things without being recognized as such. As far as I know, CoveCube was not previously aware of these issues, so tickets may not have even considered this possibility. I started reporting these bugs to StableBit ~9 months ago, and informed them I would be putting this post together ~1 month ago. Please see the disclaimer below as well, as some of this is based on observations rather than known facts.

You are most likely to run into these bugs with applications that:
*) Synchronize or back up files, including cloud-mounted drives like OneDrive or Dropbox
*) Handle large quantities of files or monitor them for changes, like coding applications (Visual Studio / VSCode)

Still, these bugs can cause silent file corruption, file misplacement, deleted files, performance degradation, data leakage (a file shared with someone externally could have its contents overwritten by any sensitive file on your computer), missed file changes, and potentially other issues for a small portion of users (I have had nearly all these things occur). It may also trigger some BSOD crashes; I had one such crash that is likely related. Due to the subtle nature some of these bugs present with, it may be hard to notice they are happening even when they are. In addition, these issues can occur even without file mirroring and without files pinned to a specific drive. I do have some potential workarounds/suggestions at the bottom.

More details are at the bottom, but the important bug facts up front:

Windows has a native file-change notification API using overlapped IO calls. This allows an application to listen for changes on a folder, or a folder and sub-folders, without having to constantly check every file to see if it changed. StableBit triggers "file changed" notifications even when files are just accessed (read) in certain ways. StableBit does NOT generate notification events on the parent folder when a file under it changes (Windows does). StableBit does NOT generate a notification event when only a FileID changes (the next bug talks about FileIDs).

Windows, like Linux, has a unique ID number for each file written on the hard drive. If there are hardlinks to the same file, they share the same unique ID (so one FileID may have multiple paths associated with it). In Linux this is called the inode number; Windows calls it the FileID. Rather than accessing a file by its path, you can open a file by its FileID. In addition, it is impossible for two files to share the same FileID: it is a 128-bit number, persistent across reboots (128 bits means the number of unique values is 39 digits long, or has the uniqueness of something like an MD5 hash). A FileID does not change when a file moves or is modified. StableBit, by default, supports FileIDs, however they seem to be ephemeral: they do not seem to survive across reboots or file moves.
Keep in mind FileIDs are used for directories as well, not just files. Further, if a directory is moved/renamed, not only does its FileID change, but the FileID of every file under it changes. I am not sure if there are other situations in which they may change. In addition, if a descendant file/directory FileID changes due to something like a directory rename, StableBit does NOT generate a notification event that it has changed (the application gets the directory event notification but nothing for the children).

There are some other things to consider as well. DrivePool does not implement the standard Windows USN Journal (a system for tracking file changes on a drive). It specifically identifies itself as not supporting this, so applications shouldn't be trying to use it with a DrivePool drive. That does mean that applications which traditionally wouldn't use the file change notification API or FileIDs may fall back to a combination of those to accomplish what they would otherwise use the USN Journal for (and this can exacerbate the problem). The same is true of Volume Shadow Copy (VSS): applications that might traditionally use it cannot (DrivePool identifies that it cannot do VSS), so they may resort to the methods below that they do not traditionally use.

Now, the effects of the above bugs may not be completely apparent.

For the overlapped IO / file change notification: an application monitoring for changes on a DrivePool folder or sub-folder will get erroneous "file changed" notifications when anything even accesses them. Just opening something like File Explorer on a folder, or even switching between applications, can cause file accesses that trigger the notification. If an application takes action on a notification and then checks the file at the end of that action, this in itself may cause another notification. Applications that rely on getting a folder-changed notification when a child changes will not get these at all with DrivePool. If an application isn't monitoring the children at all, just the folder, this means no notification could be generated (versus just missing the child's), so it could miss changes.

For FileIDs: it depends what the application uses the FileID for, but it may assume the FileID stays the same when a file moves; since it doesn't with DrivePool, this might mean it reads, backs up, or syncs the entire file again if it is moved (a performance issue). An application that uses the Windows API to open a file by its ID may not get the file it is expecting, or a file that was simply moved will throw an error when opened by its old FileID, because DrivePool has changed the ID. For example, let's say an application caches that the FileID for ImportantDoc1.docx is 12345, but then 12345 refers to ImportantDoc2.docx after a restart. If this application is a file sync application and ImportantDoc1.docx is changed remotely, then when it goes to write those remote changes to the local file, if it uses the OpenFileById method it will actually overwrite ImportantDoc2.docx with those changes. I didn't spend the time to read the Windows file system requirements to know when Windows expects a FileID to potentially change (or not change). It is important to note that even if theoretical changes/reuse are allowed, if they are not commonplace (because Windows uses what is essentially a number as unique as an MD5 hash in terms of repeats), applications may just assume it doesn't happen even if it is technically allowed to.
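A quick way to observe this yourself is a rough sketch like the following (elevated prompt; the paths are examples; run it once against a plain NTFS volume and once against a pool drive and compare):

    # Note the FileID, rename the file, then query it again.
    # On plain NTFS the ID stays the same; on a pool drive it reportedly changes.
    fsutil file queryfileid "D:\test\hello.txt"
    Rename-Item "D:\test\hello.txt" "hello2.txt"
    fsutil file queryfileid "D:\test\hello2.txt"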
A backup or file sync program might assume that a file with a specific FileID is always the same file: if FileID 12345 is c:\MyDocuments\ImportantDoc1.docx one day and c:\MyDocuments\ImportantDoc2.docx another, it may mistake document 2 for document 1, overwriting important data or restoring data to the wrong place. If it is trying to create a whole-drive backup, it may assume it has already backed up c:\MyDocuments\ImportantDoc2.docx if that file now has the same FileID that ImportantDoc1.docx had by the time it reaches it (at which point DrivePool would have a different FileID for Document1).

Why might applications use FileIDs or file change notifiers? It may not seem intuitive why applications would use these, but a few major reasons are:
*) Performance: file change notifiers are an event/push-based system, so the application is told when something changes. The common alternative is a poll-based system where an application must scan all the files looking for changes (and may try to rely on file timestamps or even hash entire files to determine this); this causes a good bit more overhead / slowdown.
*) FileIDs are nice because they already handle hardlink de-duplication (Windows may have multiple links to a file on a drive for various reasons, but if you back up based on FileID you back that file up once rather than multiple times). FileIDs are also great for handling renames. Let's say you are an application that syncs files and the user backs up c:\temp\mydir with 1000 files under it. If they rename c:\temp\mydir to c:\temp\mydir2, an application using FileIDs can say "wait, that folder is the same, it was just renamed; rename that folder in our remote version too." This is a very minimal operation on both ends. With DrivePool, however, the FileID changes for the directory and all sub-files. If the sync application uses this to determine changes, it now uploads all those files again, using a good bit more resources locally and remotely. If the application also uses versioning, this is far more likely to cause a conflict with two or more clients syncing, as mass amounts of files are seemingly being changed. Finally, even if an application is trying to monitor for FileIDs changing using the file change API, due to the notification bugs above it may not get any notifications when child FileIDs change, so it might assume they have not.

Real Examples

OneDrive

This started with massive OneDrive failures. I would find OneDrive re-uploading hundreds of gigabytes of images and videos multiple times a week. These were not changing or moving. I don't know if the issue is that OneDrive uses FileIDs to determine whether a file is already uploaded, or whether, when it scanned a directory, it may have triggered a notification that all the files in that directory changed and re-uploaded based on that notification. After this I noticed files were being deleted both locally and in the cloud. I don't know what caused this; it might have been because the old file was thought deleted as its FileID was gone, and while there was a new file (actually the same file) in its place, there may have been some odd race condition. It is also possible that it queued the file for upload, the FileID changed, and when it went to open it to upload it found it was 'deleted' because the FileID no longer pointed to a file, and so it queued the delete operation. I also found that files uploaded into the cloud in one folder were sometimes downloading to an alternate folder locally. I am guessing this is because the folder FileID changed.
It thought the 2023 folder had ID XYZ, but that now pointed to a different folder, and so it put the file in the wrong place. The final form of corruption was finding the data from one photo or video actually in a file with a completely different name. This is almost guaranteed to be due to the FileID bugs. This is highly destructive, as backups make it far harder to correct: with one file's contents replaced with another's, you need to know when the good content existed and which files were affected. Depending on retention policies, the file contents that replaced it may overwrite the good backups before you notice. I also had a BSOD with OneDrive where it was trying to set attributes on a file and the CoveFS driver corrupted some memory. It is possible this was a race condition, as OneDrive may have been processing hundreds of files very rapidly due to the bugs. I have not captured a second BSOD from it, but I also stopped using OneDrive on DrivePool due to the corruption. Another example of this is data leakage. Let's say you share your favorite article on kittens with a group of people. OneDrive, believing that file has changed, goes to open it using the FileID; however, that FileID could now correspond to essentially any file on your computer, so the contents of some sensitive file are put in place of that kitten file, and everyone you shared it with can access it.

Visual Studio Failures

Visual Studio is a code editor/compiler. There are three distinct bugs that happen. First, when compiling, if you touched one file in a folder it seemed to recompile the entire folder, likely due to the notification bug. This is just a slowdown, but an annoying one. Second, Visual Studio has compiler-generated code support. This means the compiler will generate actual source code that lives next to your own source code. Normally, once compiled, it doesn't regenerate and recompile this source unless it must change, but due to the notification bugs it regenerates this code constantly, and if there is an error in other code it causes an error there too, producing several other invalid errors. When debugging, Visual Studio by default will only use symbols (debug location data) that exactly match the source; since the notifications from DrivePool happen on certain file accesses, Visual Studio constantly thinks the source has changed since it was compiled, and you will only be able to set breakpoints inside source if you disable the exact-symbol-match default. If you have multiple projects in a solution with one dependent on another, it will often rebuild the other project dependencies even when they haven't changed; for large solutions that can be crippling (a performance issue). Finally, I often had IntelliSense errors showing up even though there were no errors during compiling, and worse, IntelliSense would completely break at points. All due to DrivePool.

Technical details / full background & disclaimer

I have sample code and logs to document these issues in greater detail if anyone wants to replicate them themselves. It is important for me to state that DrivePool is closed source and I don't have the technical details of how it works. I also don't have the technical details of how applications like OneDrive or Visual Studio work. So some of these things may be guesses as to why the applications fail. The facts stated are true (to the best of my knowledge). Shortly before my trial expired in October of last year I discovered some odd behavior. I had a technical ticket filed within a week and within a month had traced down at least one of the bugs.
The issue can be seen at https://stablebit.com/Admin/IssueAnalysis/28720 . It does show priority 2/important, which I would assume is the second highest (probably critical or similar above it). It is great that it has priority, but as we are over 6 months since it was filed without updates, I figured warning others about the potential corruption was important.

The FileSystemWatcher API is implemented in Windows using async overlapped IO; the exact code can be seen here: https://github.com/dotnet/runtime/blob/57bfe474518ab5b7cfe6bf7424a79ce3af9d6657/src/libraries/System.IO.FileSystem.Watcher/src/System/IO/FileSystemWatcher.Win32.cs#L32-L66 That corresponds to this kernel API: https://learn.microsoft.com/en-us/windows/win32/fileio/synchronous-and-asynchronous-i-o Newer API calls use GetFileInformationByHandleEx to get the FileID, while older stat-style calls represent it via nFileIndexHigh/nFileIndexLow.

In terms of the FileID bug, I wouldn't normally have even thought about it, but the advanced config (https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings) mentions this under CoveFs_OpenByFileId: "When enabled, the pool will keep track of every file ID that it gives out in pageable memory (memory that is saved to disk and loaded as necessary).". Keeping track of files in memory is certainly very different from Windows, so I thought this may be the source of the issue. I also don't know if there are caps on the maximum number of files it will track; if it resets FileIDs in situations other than reboots, that could be much worse. Turning this off will at least break NFS servers, as the docs mention it is "required by the NFS server". Finally, the FileID numbers given out by DrivePool are incremental and very low. This means that when they do reset, you will almost certainly get collisions with former numbers. What is not clear is whether there is a chance of FileID corruption issues: if it is assigning these IDs in a multi-threaded scenario with many different files at the same time, could this system fail? I have seen no proof this happens, but when incremental IDs are assigned like this for mass quantities of files it has a higher chance of occurring.

Microsoft mentions this about deleting the USN Journal: "Deleting the change journal impacts the File Replication Service (FRS) and the Indexing Service, because it requires these services to perform a complete (and time-consuming) scan of the volume. This in turn negatively impacts FRS SYSVOL replication and replication between DFS link alternates while the volume is being rescanned.". Now, DrivePool never has the USN Journal supported, so it isn't exactly the same thing, but it is clear that several core Windows services do use it for normal operations; I do not know what fallbacks they use when it is unavailable.

Potential Fixes

There are advanced settings for DrivePool at https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings ; beware, these changes may break other things. CoveFs_OpenByFileId - set to false (by default it is true). This will disable the OpenByFileID API. It is clear several applications use this API. In addition, while DrivePool may disable that function with this setting, it doesn't disable FileIDs themselves; any application using FileIDs as static identifiers for files may still run into problems. I would avoid combining any file backup/synchronization tools and DrivePool drives (if possible). These likely have the highest chance of lost files, misplaced files, file content being mixed up, and excess resource usage.
If you can't avoid them, consider taking file hashes for the entire DrivePool directory tree. Do this again at a later point and make sure files that shouldn't have changed still have the same hash. If you have files that rarely change after being created, then hashing each file at some point after creation and alerting if that file disappears or its hash changes would easily act as an early warning that one of these bugs has been hit; there's a rough sketch of that below.
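A minimal sketch of that hashing idea (P: and the CSV path are just examples):

    # Take a baseline of SHA256 hashes for everything in the pool
    Get-ChildItem P:\ -Recurse -File | ForEach-Object {
        Get-FileHash -LiteralPath $_.FullName -Algorithm SHA256
    } | Export-Csv C:\hashes\baseline.csv -NoTypeInformation

    # Later: re-hash and list baseline entries that have changed or disappeared
    $old = Import-Csv C:\hashes\baseline.csv
    $new = Get-ChildItem P:\ -Recurse -File | ForEach-Object {
        Get-FileHash -LiteralPath $_.FullName -Algorithm SHA256
    }
    Compare-Object $old $new -Property Path, Hash | Where-Object SideIndicator -eq '<='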
    1 point
  11. Nice find. Makes sense to me. Hopefully Chris/Alex can make Drivepool compliant with those requirements.
    1 point
  12. FWIW, digging through Microsoft's documentation, I found these two entries in the file system protocols specification: https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-fscc/2d3333fe-fc98-4a6f-98a2-4bb805aff407 https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-fscc/98860416-1caf-4c80-a9ab-8d61e1ccf5a5 In short, if a file system cannot provide a file ID that is both unique within a given volume and stable until deleted, then it must set the field to either zero (indicating the file system does not support file IDs) or maxint (indicating the file system cannot give a particular file a unique ID), as per the specification.
    1 point
  13. Response from the FreeFileSync developer. I read through the Microsoft docs you posted earlier and others, and I agree with the FreeFileSync developer. It appears the best way to track all files on an NTFS volume is to use the FileID, which is expected to stay persistent. This requires no extra overhead or work, as the filesystem maintains FileIDs automatically. ObjectID requires extra overhead and is only really intended to track special files like shortcuts for link tracking etc. Any software that is emulating an NTFS system should therefore provide FileIDs and guarantee they stay persistent with a file on that volume. I am seeing the direct performance impact from this and agree with Mitch that there can be other adverse side effects, potentially much worse than just performance issues, if someone uses software that expects FileIDs to behave as per Microsoft's documentation. Finally, also note that ObjectID is not supported by the ReFS filesystem, whereas FileID is. https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/ntifs/ns-ntifs-_file_objectid_information "ReFS doesn't support object IDs. ReFS uses 128-bit file IDs, so can't cleanly distinguish between file ID versus object ID when processing an open by ID."
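For anyone wanting to look at the two identifiers being discussed, both can be inspected from an elevated prompt; a rough sketch (the path is just an example, and the object ID query fails unless an object ID has actually been set on the file):

    # The volume-maintained file ID that FreeFileSync and similar tools rely on
    fsutil file queryfileid "D:\test\hello.txt"

    # The optional object ID used for link tracking (not present by default)
    fsutil objectid query "D:\test\hello.txt"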
    1 point
  14. Thanks for the reply Chris. Note: the beta does not fix the FileID change on rename or copy issue. I have posted your comment on the FreeFileSync forums and will see if ObjectID is an option for consideration there. Meanwhile, I'd still think it would be better if the FileID behaved more like it does on regular NTFS volumes and stayed persistent. From the same document you referenced: it mentions that with the FAT file system it is not safe to assume a file ID will not change over time, but with NTFS it appears the file ID is indeed persistent for the life of the file.
    1 point
  15. I'll post it here too. There is a fix in the latest betas involving memory corruptions of file IDs. However, ... the issue may also be the wrong API being used:
    1 point
  16. Very well explained Mitch. I just discovered this issue as well while trying to work out why my installation of FreeFileSync wasn't behaving as expected. DrivePool indeed changes the FileID every time a file is renamed or moved, which is not correct NTFS behaviour. The result is that if I move, say, 100GB of data on my DrivePool from one folder to another (or rename a large group of files), then when I run FreeFileSync for backup, instead of mirroring the file moves or renames it needs to delete and recopy every moved or renamed file. Over the network this can take hours instead of less than a second, so the impact is substantial.
    1 point
  17. Shane

    Running Out of Drive Letters

    Pretty much as VapechiK says. Here's a how-to list based on your screenshot at the top of this topic:

1. Create a folder, e.g. called "mounts" or "disks" or whatever, in the root of any physical drive that ISN'T your drivepool and IS going to be always present:
   - You might use your boot drive, e.g. c:\mounts
   - You might use a data drive that's always plugged in, e.g. x:\mounts (where "x" is the letter of that drive)
2. Create empty sub-folders inside the folder you created, one for each drive you plan to "hide" (remove the drive letter of). I suggest a naming scheme that makes it easy to know which sub-folder is related to which drive:
   - You might use the drive's serial number, e.g. c:\mounts\12345
   - You might have a labeller and put your own labels on the actual drives, then use that for the name, e.g. c:\mounts\501
3. Open up Windows Disk Management and, for each of the drives:
   - Remove any existing drive letters and mount paths
   - Add a mount path to the matching empty sub-folder you created above
4. Reboot the PC (doesn't have to be done straight away but will clear up any old file locks etc).

That's it. The drives should now still show up in Everything, as sub-folders within the folder you created, and in a normal file explorer window the sub-folder icons should gain a small curved arrow in the bottom-left corner as if they were shortcuts. P.S. And speaking of shortcuts, I'm now off on a road trip or four, so access is going to be intermittent at best for the next week.
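The removal/mounting step can also be done from an elevated prompt instead of Disk Management; a rough sketch (the drive letter, folder and volume GUID below are placeholders; running mountvol with no arguments lists the real volume names):

    # List volumes and their \\?\Volume{GUID}\ names
    mountvol

    # Remove the old drive letter, then mount the volume into the empty sub-folder
    mountvol X:\ /D
    mountvol C:\mounts\12345 \\?\Volume{01234567-89ab-cdef-0123-456789abcdef}\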
    1 point
  18. VapechiK

    Running Out of Drive Letters

    hi yes, what DaveJ suggested is your best bet, and Shane is correct (as usual). you have (in)effectively mounted your pool drives into a folder on the pool itself, and this is causing Everything to fail and WILL cause other problems down the road. to quote Shane: "Late edit for future readers: DON'T mount them as folders inside the pool drive itself, nor inside a poolpart folder. That risks a recursive loop, which would be bad."

1. on your C (Bears) drive, recreate the D:\DrivePool folder where you mounted your drives 301, 302 etc., so you now have C:\DrivePool with EMPTY folders for all the drives that are in the pool. DO NOT try to drag and drop the DrivePool folder from D to C - mmm mmm bad idea. just do this manually as you did before.
2. STOP the DrivePool service (Win + R, type 'services.msc', find StableBit DrivePool Service and Stop it); see the sketch below for a command-line alternative.
3. go to Disk Management and, as in https://wiki.covecube.com/StableBit_DrivePool_Q4822624 , remount all the drives from D:\DrivePool into the drive folders in C:\DrivePool. windows may/will throw some warnings about the change; ignore them and remount all 16 from D:\DrivePool to C:\DrivePool.
4. reboot. now your file explorer should show Bears C:, DrivePool D:, and maybe G, X and Y too, idk... enable show hidden files and folders and navigate to C:\DrivePool. double-clicking any of the drive folders will show the contents of the drive (if any) and a hidden PoolPart.xxxx folder. these PoolPart folders are where the 'POOL' lives, and this is where/how to access your data from 'outside' the pool. be careful they are not deleted.
5. go to the DrivePool folder on D and delete it. it is totally unnecessary after the remount from D to C and now it is just a distraction.
6. life is good.

some advice: for simplicity's sake, i would rename C:\DrivePool to C:\Mounts or something similar. having your pool and the folder where its drives are mounted both share the same name WILL only confuse someone at some point and bad things could happen. hope this helps cheers
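if you prefer doing step 2 from PowerShell instead of services.msc, a minimal sketch (run as Administrator; it matches the service by the display name shown in services.msc, since the internal service name may differ):

    # Stop the StableBit DrivePool service before remounting, start it again afterwards
    Get-Service -DisplayName "StableBit DrivePool Service" | Stop-Service
    Get-Service -DisplayName "StableBit DrivePool Service" | Start-Service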
    1 point
  19. Sorry, should also mention this is confirmed by StableBit and can be easily reproduced. The attached PowerShell script is a basic example of the file monitoring API. Run it with "monitor.ps1 my_folder", where my_folder is the folder you want to monitor. Have a file, say hello.txt, inside it. Open that file in Notepad: it should instantly generate a file-changed monitoring event. Further, tab away from Notepad and tab back to it, and you will again get a changed event for that file. Run the same thing on a true NTFS volume and it will not do the same. You can also reproduce the lack of notifications for other events by changing the IncludeSubdirectories variable in it and doing some of the tests I mention above. watcher.ps1
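The attachment isn't reproduced here, but a minimal sketch of that kind of watcher script, using the same .NET FileSystemWatcher API described above (argument handling and filters simplified):

    param([string]$Folder)

    # Watch the folder for change notifications
    $watcher = New-Object System.IO.FileSystemWatcher $Folder
    $watcher.IncludeSubdirectories = $false
    $watcher.NotifyFilter = [System.IO.NotifyFilters]'FileName, DirectoryName, LastWrite, Size'

    # Print every Changed notification; on plain NTFS simply opening a file in Notepad
    # should not fire this, on a pool drive it reportedly does
    Register-ObjectEvent $watcher Changed -Action {
        Write-Host "Changed: $($Event.SourceEventArgs.FullPath)"
    } | Out-Null

    $watcher.EnableRaisingEvents = $true
    Write-Host "Watching $Folder - press Ctrl+C to stop"
    while ($true) { Start-Sleep -Seconds 1 }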
    1 point
  20. So this is correct, as the documentation you linked to states. One item I mentioned, though, is the fact that even if an ID can be re-used, if in practice it isn't, software may make the wrong assumption that it won't be. Not good on that software's part, but it may be a practical expectation that one might try to meet. Further, that documentation also states: "In the NTFS file system, a file keeps the same file ID until it is deleted." As DrivePool identifies itself as NTFS, it is breaking that expectation. I am not sure how well things work if you just disable File IDs; maybe software will fall back to a safer behavior (even if less performant).

In addition, I think the biggest issue is silent file corruption. I think that can only happen due to FileID collisions (rather than just the FileID changing). It is a 128-bit number; GUIDs are 128 bits. Just randomize the sucker the first time you assign a FileID (rather than using the current incremental behavior). Aside from being more thread-safe, as you don't have a single locked increment counter, it is highly unlikely you would hit a collision. Could you run into a duplicate? Sure. Likely? Probably not. Maybe over many reboots (or whatever else resets the IDs in DrivePool), but as long as whatever app uses the FileID has detected it is gone before it is reused, an eventual collision would likely not have much effect. Not perfect, but probably an easier solution. Granted, apps like OneDrive may still think all the files are deleted and re-upload them if the FileIDs change (although that may be more likely due to the notification bug).

Sure. Except one doesn't always know how tools work. I am only making a highly educated guess that this is what OneDrive is using, and only made it after significant file corruption and research. One would hope you don't need to have corruption before figuring out that the tool you are using relies on the FileID. In addition, FileID may not be the primary item a backup/sync tool uses; something like the USN Journal may be a much more common first choice, and it may only fall back to other options when that is not available. Is it possible the 5-6 apps I have found that run into issues are the only ones out there that use these things? Sure. I would just guess I am not that lucky, so there are likely many more that use these features.

I did see either you (or someone else) post about the file hashing issue with read striping. It is a big shame; reporting data corruption (invalid hash values, or rather returning the wrong read data, which is what would lead to that) is another fairly massive problem. Marking good data bad because of an inconsistent read can lead to someone thinking they lost data and trashing it, or restoring an older version that may cause newer data to be lost in an attempt to fix things. I would look into a more consistent read-striping repro test, but at the end of the day these other things stop me from being able to use DrivePool for most things I would like to.
    1 point
  21. MitchC, first of all thankyou for posting this! My (early a.m.) thoughts:

(summarised) "DrivePool does not properly notify the Windows FileSystem Watcher API of changes to files and folders in a Pool."

If so, this is certainly a bug that needs fixing. Indicating "I changed a file" when what actually happened was "I read a file" could be bad or even crippling for any cohabiting software that needs to respond to changes (as per your example of Visual Studio), as could neglecting to say "this folder changed" when a file/folder inside it is changed.

(summarised) "DrivePool isn't keeping FileID identifiers persistent across reboots, moves or renames."

Huh. Confirmed, and as I understand it the latter two should be persistent @Christopher (Drashna)? However, attaining persistence across reboots might be tricky given a FileID is only intended to be unique within a volume, while a DrivePool file can at any time exist across multiple volumes due to duplication and move between volumes due to balancing and drive replacement. Furthermore, as Microsoft itself states, "File IDs are not guaranteed to be unique over time, because file systems are free to reuse them". I.e. software should not be relying solely on these over time, especially not backup/sync software! If OneDrive is actually relying on it so much that files are disappearing or swapping content, then that would seem to be an own-goal by Microsoft. Digging further, it also appears that FileID identifiers (at least for NTFS) are not actually guaranteed to be collision-free (it's just astronomically improbable in the new 64+64 bit format, as opposed to the old but apparently still in use 16+48 bit format).

(quote) "the FileID numbers given out by DrivePool are incremental and very low. This means when they do reset you almost certainly will get collisions with former numbers."

Ouch. That's a good point. Any suggestions for mitigation until a permanent solution can be found? Perhaps initialising DrivePool's FileID counter using the system clock instead of initialising it to zero; e.g. at 100ns increments (FILETIME), even only an hour's uptime could give us a collision gap of roughly thirty-six billion?

(quote) "I would avoid any file backup/synchronization tools and DrivePool drives (if possible)."

I disagree; rather, I would opine that any backup/synchronization tool that relies solely on FileID for comparisons should be discarded (if possible); a metric that's not reliable over time should ipso facto not be trusted by software that needs to be reliable over time.

Incidentally, on the subject of file hashing, I recommend ensuring Manage Pool -> Performance -> Read striping is un-ticked, as I've found intermittent hashing errors in a few (not all) third-party tools when this is enabled; I don't know why this happens (maybe low-level disk calls that aren't compatible with non-physical volumes?) but disabling read striping removes the problem and I've found the performance hit is minor.
    1 point
  22. The "Other" and "Unusable" sizes displayed in the DrivePool GUI are often a source of confusion for new users. Please feel free to use this topic to ask questions about them if the following explanation doesn't help.

Unduplicated: the total size of the files in your pool that aren't duplicated (i.e. exist on only one disk in the pool). If you think this should be zero and it isn't, check whether you have folder duplication turned off for one or more of your folders (e.g. in version 2.x, via Pool Options -> File Protection -> Folder Duplication).

Duplicated: the total size of the files in your pool that are duplicated (i.e. kept on more than one disk in the pool; a 3GB file on two disks is counted as 6GB of duplicated space in the pool, since that's how much is "used up").

Other: the total size of the files that are on your pooled disks but not in your pool, plus all the standard filesystem metadata and overhead that takes up space on a formatted drive. For example, the hidden protected system folder "System Volume Information" created by Windows will report a size of zero even if you are using an Administrator account, despite possibly being many gigabytes in size (at least if you are using the built-in Explorer; other apps such as JAM's TreeSize may show the correct amount).

Unusable for duplication: the amount of space that can't be used to duplicate your files, because of a combination of the different sizes of your pooled drives, the different sizes of the files in your pool and the space consumed by the "Other" stuff. DrivePool minimises this as best it can, based on the settings and priorities of your Balancers.

More in-depth explanations can also be found elsewhere in the forums and on the Covecube blog at http://blog.covecube.com/ Details about "Other" space, as well as the bar graphs for the drives, are discussed here: http://blog.covecube.com/2013/05/stablebit-drivepool-2-0-0-230-beta/
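If you want to see what is actually contributing to "Other" on a particular pooled disk, a minimal sketch that totals everything outside the hidden PoolPart folder (run PowerShell as Administrator so hidden/system folders are included; D: is just an example):

    # Sum the size of everything on D: that is not inside the hidden PoolPart.* folder
    Get-ChildItem D:\ -Force |
        Where-Object { $_.Name -notlike 'PoolPart.*' } |
        ForEach-Object {
            $bytes = (Get-ChildItem $_.FullName -Recurse -Force -File -ErrorAction SilentlyContinue |
                      Measure-Object Length -Sum).Sum
            '{0,12:N1} MB  {1}' -f ($bytes / 1MB), $_.Name
        }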
    1 point
  23. To clarify (and make it simple to find), here is Alex's official definition of that "Other" space:
    1 point
  24. fleggett1

    Drive question.

    I think I'm gonna give up on having a pool. Maybe a computer. I was trying to clear that 18 TB Exos out using diskpart. This Exos has the label of ST1800NM. Another 20 TB Exos that I had bought a few months ago has the label ST2000NM. I was tired, bleary-eyed, more than a little frustrated with all these problems, wasn't thinking 100% straight, selected the ST2000NM in diskpart, and cleaned it. Problem is, this drive had gigabytes of data that was critical to the pool. GIGABYTES. I still can't believe I made such a simple, rookie, and yet devastating mistake. My God. I don't know if any of the file table can be salvaged, as I just did a simple "clean" and not a "clean all", but I've got a recovery request in to Rossmann Repair Group to see if they can do anything with it. I know there are some tools in the wild that I could probably run myself on the drive, but I don't trust myself to do much of anything atm. I should never have thrown money at an AM5 system. I also probably should've stayed away from the Sabrent and anything like it. Instead, I should've done what any sane person would've done and assembled a proven AM4 or Intel platform in a full tower and attached the drives directly to the motherboard. Yeah, the cable management would've been a nightmare, but literally anything would be better than this. My goal of staving off obsolescence as much as possible has instead kicked me in the teeth while I was already lying prone in a ditch. If, by some miracle, Rossmann is able to recover the data, I'm going to take a long and hard look at my PC building strategy. Hell, maybe I'll throw money at a prebuilt or one of those cute HDMI-enabled NUCs that're all the rage. I just know that I'm exhausted and am done with all of this, at least for the time being.
    0 points