
Leaderboard

Popular Content

Showing content with the highest reputation since 05/27/13 in Posts

  1. To start: while I am new to DrivePool, I love its potential, and I own multiple licenses and their full suite. If you only use DrivePool for basic archiving of large files, with simple applications accessing them for periodic reads, it is probably uncommon that you would hit these bugs (assuming you don't use any file synchronization / backup solutions). Further, I don't know how many thousands (tens or hundreds?) of DrivePool users there are, but clearly many are not hitting these bugs, or are not recognizing that they are hitting them, so this IS NOT some new destructive "my files are 100% going to die" issue. Some of the reports I have seen on the forums, though, may actually be due to these issues without being recognized as such. As far as I know, CoveCube was not previously aware of these issues, so tickets may not have even considered this possibility. I started reporting these bugs to StableBit ~9 months ago, and informed them I would be putting this post together ~1 month ago. Please see the disclaimer below as well, as some of this is based on observations rather than known facts.

You are most likely to run into these bugs with applications that:
*) Synchronize or back up files, including cloud-mounted drives like OneDrive or Dropbox
*) Handle large quantities of files or monitor them for changes, like coding applications (Visual Studio / VS Code)

Still, these bugs can cause silent file corruption, file misplacement, deleted files, performance degradation, data leakage (a file shared with someone externally could have its contents overwritten by any sensitive file on your computer), missed file changes, and potentially other issues for a small portion of users (I have had nearly all of these things occur). It may also trigger some BSOD crashes; I had one such crash that is likely related. Due to the subtle way some of these bugs present, it may be hard to notice they are happening even when they are. In addition, these issues can occur even without file mirroring and with files pinned to a specific drive. I do have some potential workarounds/suggestions at the bottom.

More details are at the bottom, but the important bug facts up front:

Windows has a native file-change notification API using overlapped IO calls. This allows an application to listen for changes on a folder, or a folder and its subfolders, without having to constantly check every file to see if it changed. StableBit triggers "file changed" notifications even when files are merely accessed (read) in certain ways. StableBit does NOT generate notification events on the parent folder when a file under it changes (Windows does). StableBit does NOT generate a notification event when only a FileID changes (the next bug covers FileIDs).

Windows, like Linux, has a unique ID number for each file written to the drive. If there are hardlinks to the same file, they share the same unique ID (so one FileID may have multiple paths associated with it). Linux calls this the inode number; Windows calls it the FileID. Rather than accessing a file by its path, you can open a file by its FileID. In addition, it is impossible for two files to share the same FileID: it is a 128-bit number, persistent across reboots (128 bits means the count of unique values is 39 digits long, with uniqueness comparable to an MD5 hash). A FileID does not change when a file moves or is modified. StableBit, by default, supports FileIDs, but they appear to be ephemeral: they do not seem to survive reboots or file moves.
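If you want to observe the FileID behavior described above for yourself, the built-in fsutil tool can read a file's ID from an elevated prompt. This is only a minimal sketch; the path is a placeholder, and on plain NTFS the ID should stay the same across reboots and moves within the volume.

    # Record a file's ID (path is an example placeholder)
    fsutil file queryfileid "D:\SomeFolder\example.txt"

    # Reboot (or move the file within the same volume), then run it again and compare.
    # On plain NTFS the ID should be identical; per the report above, on a DrivePool
    # drive it may come back different.
    fsutil file queryfileid "D:\SomeFolder\example.txt"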
Keep in mind FileIDs are used for directories as well, not just files. Further, if a directory is moved/renamed, not only does its FileID change but so does the FileID of every file under it. I am not sure if there are other situations in which they may change. In addition, if a descendant file/directory FileID changes due to something like a directory rename, StableBit does NOT generate a notification event that it has changed (the application gets the directory event notification but nothing for the children).

There are some other things to consider as well. DrivePool does not implement the standard Windows USN Journal (a system for tracking file changes on a drive). It specifically identifies itself as not supporting it, so applications shouldn't try to use it with a DrivePool drive (a quick fsutil check for this is sketched a little further down). That means applications that traditionally wouldn't use the file-change notification API or FileIDs may fall back to a combination of those to accomplish what they would otherwise use the USN Journal for (which can exacerbate the problem). The same is true of Volume Shadow Copy (VSS): applications that might traditionally use it cannot (DrivePool identifies that it cannot do VSS), so they may resort to the methods below that they do not traditionally use.

Now, the effects of the above bugs may not be immediately apparent.

For the overlapped IO / file-change notifications: an application monitoring a DrivePool folder or subfolder will get erroneous "file changed" notifications when anything merely accesses those files. Just opening something like File Explorer on a folder, or even switching between applications, can cause file accesses that trigger the notification. If an application takes action on a notification and then checks the file at the end of that action, that check may itself cause another notification. Applications that rely on getting a folder-changed notification when a child changes will not get those at all with DrivePool; if the application is monitoring only the folder and not its children, no notification is generated at all (vs. just one on the child), so it can miss changes entirely.

For FileIDs: it depends what the application uses the FileID for, but it may assume the FileID stays the same when a file moves. Since it doesn't with DrivePool, the application might read, back up, or sync the entire file again when it is moved (a performance issue). An application that uses the Windows API to open a file by its ID may not get the file it is expecting, or a file that was simply moved will throw an error when opened by its old FileID because DrivePool has changed the ID. For example, say an application caches that the FileID for ImportantDoc1.docx is 12345, but after a restart 12345 refers to ImportantDoc2.docx. If this application is a file-sync application and ImportantDoc1.docx is changed remotely, then when it goes to write those remote changes to the local file using OpenFileById, it will actually overwrite ImportantDoc2.docx with those changes.

I didn't spend the time reading the Windows file system requirements to know when Windows expects a FileID to potentially change (or not change). It is important to note that even if theoretical changes/reuse are allowed, if they are not commonplace (because Windows essentially uses a number with MD5-hash-like repeat odds), applications may just assume it doesn't happen, even if it is technically allowed to.
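As promised above, here is a quick way to check what a volume reports for the USN Journal, again using the built-in fsutil tool; the drive letters are placeholders for your own system.

    # Query the change journal on a plain NTFS volume (should print journal ID, first/next USN, etc.)
    fsutil usn queryjournal C:

    # The same query against a DrivePool drive letter; per the report above, DrivePool
    # does not implement the USN Journal, so expect an error rather than journal data.
    fsutil usn queryjournal D: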
A backup or file-sync program might assume that a file with a specific FileID is always the same file. If FileID 12345 is c:\MyDocuments\ImportantDoc1.docx one day and c:\MyDocuments\ImportantDoc2.docx another, it may mistake document 2 for document 1, overwriting important data or restoring data to the wrong place. If it is creating a whole-drive backup, it may assume it has already backed up c:\MyDocuments\ImportantDoc2.docx if that file now has the FileID that ImportantDoc1.docx had by the time it reaches it (at which point DrivePool would have given Document1 a different FileID).

Why might applications use FileIDs or file-change notifiers? It may not seem intuitive, but a few major reasons are:
*) Performance: file-change notifiers are an event/push-based system, so the application is told when something changes. The common alternative is a poll-based system where an application must scan all the files looking for changes (and may rely on file timestamps or even hashing entire files to determine this), which causes considerably more overhead and slowdown.
*) FileIDs already handle hardlink de-duplication: Windows may have multiple paths to a file on a drive for various reasons, but if you back up based on FileID you back up that file once rather than multiple times. FileIDs are also great for handling renames. Say you are an application that syncs files, and the user backs up c:\temp\mydir with 1000 files under it. If they rename c:\temp\mydir to c:\temp\mydir2, an application using FileIDs can say "wait, that folder is the same, it was just renamed; rename the folder in our remote copy too." This is a very minimal operation on both ends. With DrivePool, however, the FileID changes for the directory and all sub-files. If the sync application uses this to determine changes, it now re-uploads all of those files, using considerably more resources locally and remotely. If the application also uses versioning, this is far more likely to cause a conflict when two or more clients are syncing, as mass amounts of files appear to be changing. Finally, even if an application tries to monitor for FileID changes using the file-change API, the notification bugs above mean it may not get any notifications when child FileIDs change, so it might assume they have not.

Real examples

OneDrive: This started with massive OneDrive failures. I would find OneDrive re-uploading hundreds of gigabytes of images and videos multiple times a week. These were not changing or moving. I don't know if the issue is that OneDrive uses FileIDs to determine whether a file is already uploaded, or whether scanning a directory triggered a notification that all the files in that directory had changed and it re-uploaded based on that. After this I noticed files were being deleted both locally and in the cloud. I don't know what caused this; it might have been that the old file appeared deleted because its FileID was gone, and while there was a new file (actually the same file) in its place, some odd race condition occurred. It is also possible that it queued the file for upload, the FileID changed, and when it went to open it for upload it found it 'deleted' (the FileID no longer pointed to a file) and queued the delete operation. I also found that files uploaded to the cloud in one folder were sometimes downloading to a different folder locally. I am guessing this is because the folder's FileID changed.
It thought the 2023 folder had ID XYZ, but that ID now pointed to a different folder, so it put the file in the wrong place. The final form of corruption was finding the data from one photo or video inside a file with a completely different name. This is almost guaranteed to be due to the FileID bugs, and it is highly destructive because backups make it far harder to correct: with one file's contents replaced by another's, you need to know when the good content existed and which files were affected. Depending on retention policies, the contents that replaced it may overwrite the good backups before you notice. I also had a BSOD with OneDrive where it was trying to set attributes on a file and the CoveFS driver corrupted some memory. It is possible this was a race condition, as OneDrive may have been processing hundreds of files very rapidly due to the bugs. I have not captured a second BSOD from it, but I also stopped using OneDrive on DrivePool due to the corruption.

Another example is data leakage. Say you share your favorite article on kittens with a group of people. OneDrive, believing that file has changed, goes to open it using the FileID. That FileID could now essentially correspond to any file on your computer, so the contents of some sensitive file end up in place of the kitten file, and everyone you shared it with can access it.

Visual Studio failures: Visual Studio is a code editor/compiler, and there are three distinct bugs that happen. First, when compiling, touching one file in a folder seemed to recompile the entire folder, likely due to the notification bug. This is just a slowdown, but an annoying one. Second, Visual Studio supports compiler-generated code, meaning the compiler generates actual source code that lives next to your own. Normally, once compiled, it doesn't regenerate and recompile this source unless it must change, but due to the notification bugs it regenerates this code constantly, and an error in other code then causes an error there, cascading into several other invalid errors. When debugging, Visual Studio by default will only use symbols (debug location data) that exactly match the source; since DrivePool's notifications fire on certain file accesses, Visual Studio constantly thinks the source has changed since it was compiled, and you will only be able to set breakpoints in source if you disable the exact-symbol-match default. If you have multiple projects in a solution, with one dependent on another, it will often rebuild the other project dependencies even when they haven't changed; for large solutions that can be crippling (a performance issue). Finally, I often had IntelliSense errors showing up even though compiling produced no errors, and worse, IntelliSense would completely break at points. All due to DrivePool.

Technical details / full background & disclaimer: I have sample code and logs documenting these issues in greater detail if anyone wants to replicate them. It is important for me to state that DrivePool is closed source and I don't have the technical details of how it works. I also don't have the technical details of how applications like OneDrive or Visual Studio work, so some of this is guesswork as to why the applications fail. The facts stated are true to the best of my knowledge. Shortly before my trial expired in October of last year I discovered some odd behavior. I had a technical ticket filed within a week and within a month had traced down at least one of the bugs.
The issue can be seen at https://stablebit.com/Admin/IssueAnalysis/28720 . It does show priority 2 / important, which I assume is the second highest (probably critical or similar above it). It is great that it has priority, but as we are over 6 months since it was filed with no updates, I figured warning others about the potential corruption was important.

The FileSystemWatcher API is implemented in Windows using async overlapped IO; the exact code can be seen here: https://github.com/dotnet/runtime/blob/57bfe474518ab5b7cfe6bf7424a79ce3af9d6657/src/libraries/System.IO.FileSystem.Watcher/src/System/IO/FileSystemWatcher.Win32.cs#L32-L66 That corresponds to this kernel API: https://learn.microsoft.com/en-us/windows/win32/fileio/synchronous-and-asynchronous-i-o Newer API calls use GetFileInformationByHandleEx to get the FileID, while older stat-style calls represent it with nFileIndexHigh/nFileIndexLow.

In terms of the FileID bug, I wouldn't normally have even thought about it, but the advanced config (https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings) mentions this under CoveFs_OpenByFileId: "When enabled, the pool will keep track of every file ID that it gives out in pageable memory (memory that is saved to disk and loaded as necessary)." Keeping track of FileIDs in memory is certainly very different from Windows, so I thought this might be the source of the issue. I also don't know whether there are caps on the maximum number of files it will track; if it resets FileIDs in situations other than reboots, that could be much worse. Turning this off will at least break NFS servers, as the docs note it is "required by the NFS server". Finally, the FileID numbers given out by DrivePool are incremental and very low, which means that when they do reset you will almost certainly get collisions with previously issued numbers. What is not clear is whether there is a chance of FileID corruption: when these IDs are assigned in a multi-threaded scenario with many different files at the same time, could the system fail? I have seen no proof this happens, but when incremental IDs are handed out like this for mass quantities of files, the chance is higher.

Microsoft says this about deleting the USN Journal: "Deleting the change journal impacts the File Replication Service (FRS) and the Indexing Service, because it requires these services to perform a complete (and time-consuming) scan of the volume. This in turn negatively impacts FRS SYSVOL replication and replication between DFS link alternates while the volume is being rescanned." Now, DrivePool never supports the USN Journal, so it isn't exactly the same thing, but it is clear that several core Windows services use it for normal operations, and I do not know what fallbacks they use when it is unavailable.

Potential fixes

There are advanced settings for DrivePool at https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings ; beware that these changes may break other things.

CoveFs_OpenByFileId - Set to false (by default it is true). This disables the OpenByFileID API, which several applications clearly use. In addition, while DrivePool may disable that function with this setting, it doesn't disable FileIDs themselves, so any application using FileIDs as static identifiers for files may still run into problems.

I would avoid combining file backup/synchronization tools and DrivePool drives (if possible). These likely have the highest chance of lost files, misplaced files, mixed-up file contents, and excess resource usage.
If not avoiding them, consider taking file hashes of the entire DrivePool directory tree. Do this again at a later point and make sure files that shouldn't have changed still have the same hash. If you have files that rarely change after being created, then hashing each file at some point after creation, and alerting if that file disappears or its hash changes, would act as an early warning that this bug is being hit.
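A minimal sketch of that hashing workaround in PowerShell, assuming the pool is mounted at D:\ and the baseline CSV is kept somewhere off the pool; both paths are placeholders.

    # Build (or rebuild) a hash baseline for everything on the pool
    Get-ChildItem -Path 'D:\' -Recurse -File |
        Get-FileHash -Algorithm SHA256 |
        Select-Object Path, Hash |
        Export-Csv 'C:\Baselines\pool-hashes.csv' -NoTypeInformation

    # Later: re-hash and report files whose hash changed or that disappeared
    $old = Import-Csv 'C:\Baselines\pool-hashes.csv'
    $new = Get-ChildItem -Path 'D:\' -Recurse -File | Get-FileHash -Algorithm SHA256
    $newByPath = @{}
    foreach ($h in $new) { $newByPath[$h.Path] = $h.Hash }
    foreach ($o in $old) {
        if (-not $newByPath.ContainsKey($o.Path)) { "MISSING: $($o.Path)" }
        elseif ($newByPath[$o.Path] -ne $o.Hash)  { "CHANGED: $($o.Path)" }
    }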
    8 points
  2. malse

    WSL2 Support for drive mounting

    Hi, I'm using Windows 10 2004 with WSL2. I have 3 drives: C:\ (SSD), E:\ (NVMe), D:\ (DrivePool of 2x 4TB HDD). When the drives are mounted in Ubuntu, I can run ls -al and it shows all the files and folders on the C and E drives. This is not possible on D. When I run ls -al on D, it returns 0 results, but strangely enough I can still cd into the directories on D. Is this an issue with DrivePool being mounted? It seems like the only logical difference (aside from it being mechanical) between the other drives. They are all NTFS.
    5 points
  3. hammerit

    WSL 2 support

    I tried to access my DrivePool drive via WSL 2 and got this. Any solution? I'm using 2.3.0.1124 BETA.
    ➜ fludi cd /mnt/g
    ➜ g ls
    ls: reading directory '.': Input/output error
    Related thread: https://community.covecube.com/index.php?/topic/5207-wsl2-support-for-drive-mounting/#comment-31212
    4 points
  4. I just wanted to say that @Christopher (Drashna) has been very helpful each time I've created a topic. I have found him and the others I've worked with to be thoughtful and professional in their responses. Thanks for all the work you all do. Now we can all stop seeing that other thread previewed every time we look at the forum home.
    4 points
  5. srcrist

    Optimal settings for Plex

    If you haven't uploaded much, go ahead and change the chunk size to 20MB. You'll want the larger chunk size both for throughput and capacity. Go with these settings for Plex:
    20MB chunk size
    50+ GB expandable cache
    10 download threads
    5 upload threads, with background I/O turned off
    Upload threshold: 1MB or 5 minutes
    Minimum download size: 20MB
    20MB prefetch trigger
    175MB prefetch forward
    10 second prefetch time window
    4 points
  6. VERY IMPRESSED!
     Didn't need to create an account and password
     Same activation code covers EVERY product on EVERY computer!
     Payment information remembered so additional licenses are purchased easily
     Nice bundle and multi-license discount
     I'm in love with the Drive Pool and Scanner. Thanks for a great product and a great buying experience. -Scott
    4 points
  7. Yup, something broke on our end. This should be fixed now, and updating should get the fixed version.
    3 points
  8. I know the chance of this is near zero, but the most recent Windows screenshotting AI shenanigans is the last straw for me. The incredible suite of Stablebit tools is the ONLY thing that has kept me using Windows (seriously, it’s the best software ever - I don’t even think that’s an exaggeration). I will pay for the entire suite again, or hell Stablebit can double the price for Linux, I’ll pay it. Is there ANY chance a Linux version of DrivePool/Scanner would be developed?
    3 points
  9. I would just like to say that this is fairly disappointing to hear. Though they added limits, a lot of people are still able to work within those limits. And the corruption issue hasn't been present for many years as far as I can tell -- the article in the announcement by Alex is from April 2019, so over 5 years ago. I get the desire to focus resources though, and I'm glad it still works in the background for now. Hopefully there are no breaking changes from Google, and I hope an individual API key is going to have reasonable limits for the drive to continue working. But yeah, switching providers is not very realistic in my case, so I guess I'll use it until it just breaks.
    3 points
  10. Sorry, I should also mention this is confirmed by StableBit and can be easily reproduced. The attached PowerShell script is a basic example of the file monitoring API. Run it with "monitor.ps1 my_folder", where my_folder is the folder you want to monitor. Have a file, say hello.txt, inside it. Open that file in Notepad: it should instantly generate a file-changed monitoring event. Then tab away from Notepad and tab back to it, and you will again get a changed event for that file. Run the same thing on a true NTFS volume and it will not do the same. You can also reproduce the lack of notifications for other events by changing the IncludeSubdirectories variable in the script and doing some of the tests I mention above. watcher.ps1
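The attachment itself isn't reproduced here, but a minimal watcher along the lines described above might look like the sketch below; this is an illustration using the standard .NET FileSystemWatcher class, not the attached script, and the default path is just a placeholder.

    param([string]$Path = '.')

    # Watch a folder (and optionally its subfolders) using the Windows change-notification API
    $watcher = New-Object System.IO.FileSystemWatcher
    $watcher.Path = (Resolve-Path $Path).Path
    $watcher.IncludeSubdirectories = $false   # flip to $true to test child/parent notification behavior
    $watcher.NotifyFilter = [System.IO.NotifyFilters]'FileName, DirectoryName, LastWrite, LastAccess, Size'
    $watcher.EnableRaisingEvents = $true

    # Print every event as it arrives
    $action = { Write-Host ("{0}  {1}  {2}" -f (Get-Date -Format o), $EventArgs.ChangeType, $EventArgs.FullPath) }
    Register-ObjectEvent $watcher Changed -Action $action | Out-Null
    Register-ObjectEvent $watcher Created -Action $action | Out-Null
    Register-ObjectEvent $watcher Deleted -Action $action | Out-Null
    Register-ObjectEvent $watcher Renamed -Action $action | Out-Null

    Write-Host "Watching $($watcher.Path) - press Ctrl+C to stop"
    while ($true) { Start-Sleep -Seconds 1 }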
    3 points
  11. So this is correct, as the documentation you linked to states. One item I mentioned, though, is that even if an ID can be re-used, if in practice it isn't, software may make the wrong assumption that it won't be. Not good on that software's part, but it may be a practical expectation one might try to meet. Further, that documentation also states: "In the NTFS file system, a file keeps the same file ID until it is deleted." As DrivePool identifies itself as NTFS, it is breaking that expectation. I am not sure how well things work if you just disable FileIDs; maybe software will fall back to a safer (even if less performant) behavior.

In addition, I think the biggest issue is silent file corruption, and I think that can only happen due to FileID collisions (rather than just the FileID changing). It is a 128-bit number; GUIDs are 128 bits. Just randomize the sucker the first time you assign a FileID, rather than using the current incremental behavior (a tiny illustration of this follows at the end of this post). Aside from being more thread safe, as you don't have a single locked increment counter, it is highly unlikely you would hit a collision. Could you run into a duplicate? Sure. Likely? Probably not. Maybe over many reboots (or whatever else resets the IDs in DrivePool), but as long as whatever app uses the FileID has detected it is gone before it is reused, an eventual collision would likely not have much effect. Not perfect, but probably an easier solution. Granted, apps like OneDrive may still think all the files were deleted and re-upload them if the FileIDs change (although that may be more likely due to the notification bug).

Sure. Except one doesn't always know how tools work. I am only making a highly educated guess that this is what OneDrive is using, and only made it after significant file corruption and research. One would hope you don't need to hit corruption before figuring out that the tool you are using relies on the FileID. In addition, the FileID may not be the primary mechanism a backup/sync tool uses; something like the USN Journal may be a much more common first choice, with a fallback to other options only when that is not available. Is it possible the 5-6 apps I have found that run into issues are the only ones out there that use these things? Sure. I just would guess I am not that lucky, so there are likely many more that use these features.

I did see either you (or someone else) post about the file hashing issue with read striping. It is a big shame; reporting data corruption (invalid hash values, or rather returning the wrong read data, which is what would lead to that) is another fairly massive problem. Marking good data bad because of an inconsistent read can lead to someone thinking they lost data and trashing it, or restoring an older version in an attempt to fix it and losing newer data. I would look into a more consistent read-striping repro test, but at the end of the day these other things stop me from being able to use DrivePool for most things I would like to.
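To make the randomized-ID suggestion above concrete, here is a tiny illustration (not DrivePool code, just the idea): a randomly generated 128-bit identifier per file has a birthday-bound collision probability of roughly n^2 / 2^129 for n files, which is negligible even for billions of files.

    # Illustration only: random 128-bit identifiers instead of an incrementing counter.
    # A GUID is 128 bits, so the collision chance for n IDs is about n^2 / 2^129.
    $ids = 1..5 | ForEach-Object { [guid]::NewGuid() }
    $ids | ForEach-Object { $_.ToString('N') }   # 32 hex digits = 128 bits each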
    3 points
  12. @Shane and @VapechiK - Thank you both for the fantastic, detailed information! As it seemed like the easiest thing to start with, I followed VapechiK's instructions from paragraph 4 of their reply to simply remove the link for DP (Y:) under DP (E:), and it worked instantly and without issue! Again, thanks for taking the time to provide solutions, it really is appreciated! Thanks much!
    3 points
  13. My advice: contact support and send them Troubleshooter data. Christopher is very keen on resolving problems around the "new" Google way of handling folders and files.
    3 points
  14. Yes, it is supported.
    2 points
  15. With 1600 installed, go into Settings -> select Updates... -> Settings -> Disable Automatic Updates. Mine now runs without the constant notification that an update is available.
    2 points
  16. The light blue, dark blue, orange and red triangle markers on a disk's usage bar indicate the amounts of those types of data that DrivePool has calculated should be on that particular disk to meet the balancing limits set by the user; it will attempt to accomplish that on its next balancing run. If you hover your mouse pointer over a marker you should get a tooltip that provides information about it. https://stablebit.com/Support/DrivePool/2.X/Manual?Section=Disks List (the section titled Balancing Markers)
    2 points
  17. Mostly. As I think you mentioned earlier in this thread, that doesn't disable FileIDs, and applications can still get the FileID of a file. Depending on how that ID is used, it could still cause issues. An example below is SnapRAID, which doesn't use OpenByFileID but does trust that the same FileID is the same file. For the biggest problems (data loss, corruption, leakage) this is correct. Of course, one generally can't know if an application is using FileIDs (especially if it isn't open source); it is likely not mentioned in the documentation. It also doesn't mean your favorite app may not start to do so tomorrow, and then all of a sudden the application that worked perfectly for 4 years starts silently corrupting random data. By far the most likely apps to do this are backup apps, data-sync apps, cloud storage apps, and file-sharing apps: things that have some reason to track which files are created/moved/deleted/etc.

The other issue (and sure, if I could go back in time I would split this thread in two), the change-notification bugs in DrivePool, won't directly lead to data loss (although it can greatly speed up the process above). It will, however, cause odd errors and performance issues in a wide range of applications. The file-change API is used by many applications, not just the app types listed above (which often use it if they run 24/7) but any app that interfaces with many files at once (i.e. coding IDEs/compilers, file explorers, music or video catalogs, etc.). This API is common, easy for developers to use, and generally can greatly increase the performance of apps: they no longer need to manually check every file, they just install one event listener on a parent directory, and even if they only care about notifications for some of the files under it, they can ignore the change events they don't care about. It may be very hard to trace these performance issues or errors to DrivePool due to how they present themselves; you are far more likely to think the application is buggy or at fault.

Short example of disaster

As it is a complex issue to understand, I will give a short example of how FileIDs being reused can be devastating. Say you use Google Drive or some other cloud backup/sharing application, and it relies on the fact that as long as FileID 123 is around, it always points to the same file. This is all but guaranteed with NTFS. You only use Google Drive to back up your photos from your phone, your work camera, or what have you. You have the following layout on your computer:

c:\camera\work\2021\OfficialWiringDiagram.png with FileID 1005
c:\camera\personal\nudes\2024Collection\VeryTasteful.png with FileID 3909
c:\work\govt\ClassifiedSatPhotoNotToPostOnTwitter.png with FileID 6050

You have OfficialWiringDiagram.png shared with the office, as it's an important reference any time someone tries to figure out where the network cables are going. Enter DrivePool. You don't change any of these files, but DrivePool generates a file-changed notification for OfficialWiringDiagram.png. Google Drive says "OK, I know that file, I already have it backed up, and it has FileID 1005." It then opens FileID 1005 locally, reads the new contents, and uploads them to the cloud, overwriting the old OfficialWiringDiagram.png. The only problem is you rebooted, so 1005 was OfficialWiringDiagram.png before, but now file 1005 is actually your nude file VeryTasteful.png.
So it has just backed up your nude file into the cloud as "OfficialWiringDiagram.png", and remember, that file is shared. The next time someone goes to look at the office wiring diagram, they are in for a surprise. Depending on the application, if ClassifiedSatPhotoNotToPostOnTwitter.png became FileID 1005, then even though the change notification was for the path "c:\camera\work\2021\OfficialWiringDiagram.png" (under the main folder it monitors, "c:\camera"), when it opens file 1005 it instead gets a file completely outside your camera folder and reads the highly sensitive file from c:\work\govt, and now a file that should never be uploaded is shared with the entire office.

Now, you follow many best practices. Google Drive is restricted to the c:\camera folder only; it doesn't back up or access files anywhere else. You have a RAID-6 SSD setup in case of drive failure, and image files from prior years are never changed, so once written to the drive they are unlikely to move unless the drive is defragmented, meaning a pretty low chance of conflicts or an abrupt power failure corrupting them. You even have a photo scanner that checks for corrupt photos, just to be safe. None of these things will save you from the above example. Even if you kept 6 months of backup archives offsite in cold storage (made perfectly and not affected by the bug) and all deleted files are kept for 5 years, if you don't look at OfficialWiringDiagram.png more than once a year, you might not notice it was changed and the original data overwritten until after all your backups are corrupted with the nude and the original file is lost forever.

FileIDs are generally better than relying on file paths. If an application used file paths, then when you renamed or moved file 123 within the same folder it would break the share for anyone you had previously shared the file with. If instead, when you rename "BobsChristmasPhoto.png" to "BobsHolidayPhoto.png", the application knows it is the same file being renamed (it still has FileID 123), it can silently update the sharing data on the backend so that existing links still load the photo. Even if an application uses moderate de-duplication techniques like hashing the file to tell whether it has just moved, if you move a file and slightly change it (say you clear out the photo location metadata your phone put there), it would think it is an all-new file without FileIDs.

FileID collisions are not just possible but basically guaranteed with DrivePool. With the change-notification bug, a sync application might think all your files are changing often, as even reading a file or browsing a directory can trigger a notification that it has changed. That means it backs up all those files again, which might be tens of thousands of photos. Since the FileID changes every time you reboot, if it syncs a file after a reboot it uploads the wrong contents (as it used the FileID), and if you had a second computer it downloaded that file to, you could put yourself in a never-ending loop of backups and downloads overwriting one file with another at random. As the FileID a file was last known by might not exist when the app goes to back it up (which I assume would trigger many applications to fall back to path validation), only part of your catalog gets corrupted in each iteration. The application might also validate that a renamed file stayed within the root directory it works with.
This means that if your Christmas photo's FileID now pointed to something under "c:\windows", it would fall back to file paths, as it knows that is not under the "c:\camera" directory it works with. This is not some hypothetical situation; these are actual occurrences and behaviors I have seen happen to files hosted on DrivePool. These are not two-bit applications written by a one-person dev team; they are massively used first-party applications and commercial enterprise applications.

If you can, and you care about your data, I would. The convenience of DrivePool is great, and there are countless users it works fine for (at least as far as they know), but even with high technical understanding it can be quite difficult to detect which applications are affected by this. If you thought you were safe because you use something like SnapRAID, it won't stop this sort of corruption. As far as SnapRAID is concerned, you just deleted a file and renamed another on top of it. SnapRAID may even contribute further to the problem, as it (like many tools) uses the Windows FileID as the Windows equivalent of an inode number: https://github.com/amadvance/snapraid/blob/e6b8c4c8a066b184b4fa7e4fdf631c2dee5f5542/cmdline/mingw.c#L512-L518 . Applications assume inodes and FileIDs that are the same as before refer to the same file. That is, unless you use DrivePool, oops. Apps might use timestamps in addition to FileIDs, although timestamps can overlap, say if you downloaded a zip archive and extracted it with the Windows native extractor (which by design ignores timestamps even if the zip contained them). SnapRAID can even use some advanced checks while syncing, but in the worst case, where a file's content has actually changed but the FileID in question has the same size/timestamp, SnapRAID assumes it is unmodified and leaves the parity data alone. This means that if you had two files with the same size/timestamp anywhere on the drive, and one of them got the FileID of the other, you would end up with incorrect parity data associated with that file. Running a SnapRAID fix could then actually result in corruption, as SnapRAID would believe the parity data is correct but the on-disk content it thinks goes with it does not. Note: I don't use SnapRAID, but I was asked this question, and from reading the manual and the source above I believe this is technically correct. It is great that SnapRAID is open source and has such technical documentation; plenty of backup/sync programs don't, and you don't know what checking they do.
    2 points
  18. To be fair to StableBit, I have used DrivePool for the past few years and have NEVER lost a single file because of DrivePool. How elaborately or simply you use DrivePool within itself is not really the concern. What is being warned of here is that if you use any special applications that expect FileID to behave as it does on NTFS, there are risks. My example: I use FreeFileSync quite a bit to maintain files between my pool, my HTPC and another backup location. When I move files on one drive, FreeFileSync uses the FileID to recognise that the file was moved, and syncs a "move" on the remote filesystem as well. This saves potentially hours of copying and then deleting. It does not work on DrivePool because the FileID changes on each reboot. In this case FreeFileSync fails "safe" in that it does the copy and delete instead, so I only suffer performance issues. What could happen, though, is that you use another app, say one that cleans old files or moves files around, that does not fail safe if a FileID is reused for a different file, and in doing so you suffer file loss. This will only happen if you use some third-party application that makes changes to files. It's not the type of thing a word processor or a PC game is going to be doing (typically, in case someone jumps in with an "it could be possible" argument). So generally DrivePool is safe, and for you there is most likely nothing to worry about, but if you do use an application now or in the future that cleans up or syncs files, be very careful in case it uses FileIDs and causes data loss because of this issue. For day-to-day use, in my experience, you can continue to use it as is. If you want to add to the group of us who would like this improved, feel free to add your voice to the request, as otherwise I don't see any update for this in the foreseeable future.
    2 points
  19. Hard Disk Sentinel Pro found an additional three HDDs that were "bad" or failing, on top of the 2 I already replaced, all due to bad sectors and running out of spares to remap them. Some were over 10 years old and all were over 5 years. Pulled out the bad ones and went shopping; got my money's worth from them. All new drives are server-grade refurbs from the same company I bought one of the old drives from. They're online-only now, but still stand by their products and honor warranties, so I feel pretty secure. And as I read in another post, a server crash/issue is a great excuse to upgrade your hardware. My pool "accidentally" grew by 12TB after I got it fixed. 😁 Best news was I turned on Pool Duplication, and when I looked at what was duplicating at the folder level, I found 2 miscellaneous folders that were not duplicating; changed them manually and YAY! Measuring, duplication and balancing all finished OK. Amazing what a hundred bucks or so of new equipment will do. So my original issues were all equipment-related after all.
    2 points
  20. Thank you for the investigation! That definitely helps a lot! And there looks like there were UI fixes in 1600, so likely something broke there, accidentally. That said, I've created an issue for this: https://stablebit.com/Admin/IssueAnalysis/28956
    2 points
  21. DrivePool does not use encryption (that's CloudDrive). However, in the event that you have used Windows Bitlocker to encrypt the physical drives on which your pool is stored then you will need to ensure you have those key(s) saved (which Bitlocker would have prompted you to do during the initial encryption process).
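If you're not sure whether those keys were saved, one quick way to list a drive's BitLocker protectors (including the numerical recovery password) is the built-in BitLocker module from an elevated PowerShell prompt; the drive letter below is just an example.

    # Show the key protectors for one of the underlying pool drives (replace 'D:' with the actual letter)
    Get-BitLockerVolume -MountPoint 'D:' | Select-Object -ExpandProperty KeyProtector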
    2 points
  22. hello
1. make note of or take screenshots of your DrivePool settings if you have changed them from the default settings in any way. if you take SSs step 2 is important. DP saves your pool data in Alternate Data Streams on the drives themselves but doesn't save any customized balancer/file placement rules etc. from Manage Pool ^ Balancing... under the pie chart in the GUI. also take note of Manage Pool ^ Performance > settings as well.
2. make sure all your user data (i.e. all docs, pics, DLs, etc.) from C:\Users\[your user name]\ has been saved/backed up elsewhere.
3. yes, deactivate your license - cogwheel with down-pointing arrow in upper right corner / Manage license / deactivate. in fact you should do this for all licensed 3rd party software on your machine. if you are reinstalling on the EXACT same hardware it *shouldn't* much matter but better safe than hassled later.
4. power OFF machine, and unplug/detach ALL drives EXCEPT your win10 drive from the mobo and any USB ports. IOWs ONLY the win10 boot drive where you want to clean install win11 is attached.
5. install windoze 11 and update to latest version, all new windows update security patches, etc etc.
6. DL and install the latest version of DP from https://stablebit.com/DrivePool/Download and reactivate the license.
7. power OFF your machine, reconnect your DrivePool drives and power ON.
8. in the DP GUI Manage Pool ^ Balancing... ensure all is reconfigured and set up as it was before the reinstall. SAVE. Manage Pool ^ Performance > as well. if it were me, i would reboot here.
9. it is important to remeasure the pool before using it normally. Manage Pool > Remeasure... Remeasure.
*NOTE* if you never messed with the settings and all was left at default before, steps 1 and 8 can probably be omitted/ignored. my own pool is fairly customized, so i included them as part of the procedure I would follow. cheers
    2 points
  23. Folks, thanks to all for the ideas. I am not neglecting them, but I have some health issues that make it very hard for me to do much of anything for a while. I will try the more promising ones as my health permits, but it will take some time. From operation and the lack of errors during writing/reading, it seems that I am in no danger except that duplication won't work. I can live with that for a while. Keep the ideas coming and, maybe, I can get this mess fixed and report back which tools worked for me. I am making one pass through my pool where I remove a drive from the pool, run full diagnosis and file system diagnosis on that drive, then re-add it back to the pool. When completed, that will assure that there are no underlying drive problems and I can proceed from there. Thanks to all who tried to help this old man.
    2 points
  24. USB so-called DAS/JBOD/etc. units usually use a SATA port multiplier setup internally, and that is likely the source of your issues. A SATA port multiplier is a way of connecting multiple SATA devices to one root port, and due to the way the ATA protocol works, when I/O is performed it essentially holds the entire bus created from that root port hostage until the requested data is returned. It is also important to know that write caching will skew any write benchmark results if the enclosure uses UASP or you have explicitly enabled it. These devices perform even worse with a striping filesystem (like raidz, btrfs raid 5/6 or mdraid 5/6) and with highly fragmented data (which causes a bunch of seek commands that, again, hold the bus hostage until they complete, which with spinning media creates a substantial I/O burden). Honestly, your options are either to accept the loss of performance (it is tolerable but noticeable on a 4-drive unit; no idea how it is on your 8-drive unit), or to invest in something like a SAS JBOD which will actually have sane real-world performance. USB isn't meant for more than a disk or two, and between things like this and hidden overhead (like 8b/10b modulation, root port and chip bottlenecks, the general inability to pass SMART and other data, USB disconnects, and other issues that aren't worth getting into), it may be worth just using a more capable solution.
    2 points
  25. While chasing a different issue, I noticed that my Windows Event Log had repeated errors from the Defrag service stating "The storage optimizer couldn't complete defragmentation on DrivePool (D:) because: Incorrect function. (0x80070001)". Digging further, I found that the Optimize Drives feature had included the pool's drive letter in the list of drives to optimize. It had also listed the underlying drives in the pool, but that should be OK. To remove the drive pool from optimization, open Disk Optimizer, go to Scheduled optimization, Change Settings, then Choose drives, and un-select the pool drive. See https://imgur.com/PB2WPH0
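If you want to check whether your own event log has the same entries, here is a hedged sketch; the provider and message filters are just a guess at matching the error text quoted above.

    # Look for storage optimizer / defrag errors in the Application log
    Get-WinEvent -LogName Application -MaxEvents 5000 |
        Where-Object { $_.ProviderName -like '*Defrag*' -and $_.Message -match 'storage optimizer' } |
        Select-Object TimeCreated, Id, Message |
        Format-List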
    2 points
  26. FWIW, digging through Microsoft's documentation, I found these two entries in the file system protocols specification: https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-fscc/2d3333fe-fc98-4a6f-98a2-4bb805aff407 https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-fscc/98860416-1caf-4c80-a9ab-8d61e1ccf5a5 In short, if a file system cannot provide a file ID that is both unique within a given volume and stable until deleted, then per the specification it must set the field to either zero (indicating the file system does not support file IDs) or maxint (indicating the file system cannot give a particular file a unique ID).
    2 points
  27. Response from the FreeFileSync developer. I read through the Microsoft docs you posted earlier and others, and I agree with the FreeFileSync developer. It appears the best way to track all files on an NTFS volume is to use the FileID, which is expected to stay persistent. This requires no extra overhead or work, as the filesystem maintains FileIDs automatically. ObjectID requires extra overhead and is really only intended to track special files, like shortcuts, for link tracking. Any software emulating an NTFS filesystem should therefore provide FileIDs and guarantee they stay persistent with a file on that volume. I am seeing the direct performance impact from this, and I agree with Mitch that there can be other adverse side effects, potentially much worse than performance issues, if someone uses software that expects FileIDs to behave as per Microsoft's documentation. Finally, also note that ObjectID is not supported by the ReFS filesystem, whereas FileID is. https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/ntifs/ns-ntifs-_file_objectid_information "ReFS doesn't support object IDs. ReFS uses 128-bit file IDs, so can't cleanly distinguish between file ID versus object ID when processing an open by ID."
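To see the two mechanisms side by side on a real volume, fsutil exposes both; the path below is a placeholder, and note that the object ID query simply returns an error if the file has never been assigned one.

    # The NTFS file ID discussed above (maintained automatically by the filesystem)
    fsutil file queryfileid "C:\SomeFolder\example.txt"

    # The separate, optional object ID used for link tracking
    fsutil objectid query "C:\SomeFolder\example.txt"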
    2 points
  28. All my drives are installed inside the server chassis, so I'll just remove the HBAs, which will have the same effect. Reading back through many, many old posts on the subject, I believe I've been looking at this all wrong. The actual information DrivePool uses to build and maintain the pool is stored on the pooled drives, not on the system disk, so it's just a matter of reinstalling DrivePool and it goes looking for the PoolPart folders on the physical drives in order to rebuild the pool. I'm not sure why I've been thinking I needed to worry about what was on the C: drive, other than that's where I placed the junctions for the drives on my system, but that seems to be more of a housekeeping feature to get around the 26-drive-letter limit than anything to do with DrivePool itself. If this is correct, I believe I'm ready to go. It would seem that StableBit DrivePool is quite a feat of engineering. Thanks again Shane.
    2 points
  29. methejuggler

    Plugin Source

    I actually wrote a balancing plugin yesterday which is working pretty well now. It took a bit to figure out how to make it do what I want. There's almost no documentation for it, and it doesn't seem very intuitive in many places. So far, I've been "combining" several of the official plugins together to make them actually work together properly. I found the official plugins like to fight each other sometimes. This means I can have SSD drop drives working with equalization and disk usage limitations with no thrashing. Currently this is working, although I ended up re-writing most of the original plugins from scratch anyway simply because they wouldn't combine very well as originally coded. Plus, the disk space equalizer plugin had some bugs in a way which made it easier to rewrite than fix. I wasn't able to combine the scanner plugin - it seems to be obfuscated and baked into the main source, which made it difficult to see what it was doing. Unfortunately, the main thing I wanted to do doesn't seem possible as far as I can tell. I had wanted to have it able to move files based on their creation/modified dates, so that I could keep new/frequently edited files on faster drives and move files that aren't edited often to slower drives. I'm hoping maybe they can make this possible in the future. Another idea I had hoped to do was to create drive "groups" and have it avoid putting duplicate content on the same group. The idea behind that was that drives purchased at the same time are more likely to fail around the same time, so if I avoid putting both duplicated files on those drives, there's less likelihood of losing files in the case of multiple drive failure from the same group of drives. This also doesn't seem possible right now.
    2 points
  30. Managed to fix this today as my client was giving errors also.
Install the beta version from here: http://dl.covecube.com/CloudDriveWindows/beta/download/ (I used 1344)
Reboot. Don't start CloudDrive and/or the service.
Add the below to this file: C:\ProgramData\StableBit CloudDrive\Service\Settings.json

"GoogleDrive_ForceUpgradeChunkOrganization": {
  "Default": true,
  "Override": true
}

Start the service & CloudDrive. It should kick in straight away. I have 42TB in GDrive and it went through immediately. Back to uploading as usual now. Hope this helps.
    2 points
  31. I see this hasn't had an answer yet. Let me start off by just noting for you that the forums are really intended for user-to-user discussion and advice, and you'd get an official response from Alex and Christopher more quickly by using the contact form on the website (here: https://stablebit.com/Contact). They only occasionally check the forums when time permits. But I'll help you out with some of this.

The overview page on the web site actually has a list of the compatible services, but CloudDrive is also fully functional for 30 days to test any provider you'd like. So you can just install it and look at the list that way, if you'd like. CloudDrive does not support Teamdrives/shared drives because their API support and file limitations make them incompatible with CloudDrive's operation. Standard Google Drive and GSuite drive accounts are supported.

The primary tradeoff compared to a tool like rClone is flexibility. CloudDrive is a proprietary system using proprietary formats that have to work within this specific tool in order to do a few things that other tools do not. So if flexibility is something you're looking for, this probably just isn't the solution for you. rClone is a great tool, but its aims, while similar, are fundamentally different from CloudDrive's. It's best to think of them as two very different solutions that can sometimes accomplish similar ends, for specific use cases. rClone's entire goal/philosophy is to make it easier to access your data from a variety of locations and contexts, but that's not CloudDrive's goal, which is to make your cloud storage function as much like a physical drive as possible.

I don't work for Covecube/Stablebit, so I can't speak to any pricing they may offer if you contact them, but the posted prices are $30 and $40 individually, or $60 for the bundle with Scanner. So there is a reasonable savings in buying the bundle, if you want/need it.

There is no file-based limitation. The limitation on a CloudDrive is 1PB per drive, which I believe is related to driver functionality. Google recently introduced a per-folder file number limitation, but CloudDrive simply stores its data in multiple folders (if necessary) to avoid related limitations.

Again, I don't work for the company, but in previous conversations on the subject it's been said that CloudDrive is built on top of Windows' storage infrastructure and would require a fair amount of reinventing the wheel to port to another OS. They haven't said no, but I don't believe any ports are on the short- or even medium-term agenda.

Hope some of that helps.
    2 points
  32. To get this started, apparently: My server was kind of piecemeal constructed. I recently purchased a 42U HP rack from a local company (via Craigslist) for super cheap ($50, so I literally couldn't pass it up).

Sophos UTM (Home):
Case: Antec ISK 110 VESA
Mobo (SoC): ASRock Rack J1900D2Y
RAM: 4GB of non-ECC RAM
OS Drive: Samsung 850 Pro 120GB SSD

Storage Server:
Case: SuperMicro 847E26-R1K28LPB
OS: Windows Server 2012R2 Essentials
CPU: AMD FX-8120 Intel Xeon E3 1245v3 (link)
MoBo: ASRock 990FX Extreme3 Supermicro MBD-X10SAT-O (link)
RAM: 2x8GB Crucial ECC
GFX: nVidia GeForce 9400 Intel HD 4600 (on-processor graphics)
PSU: Built in, 2x redundant power supplies (1280W 80+ Gold)
OS Drive: Crucial MX200 256GB SSD
Storage Pool: 146TB: 4x 4TB (Seagate NAS ST4000VN000) + 8x 4TB (WD40EFRX) + 12x 8TB Seagate Archive (ST8000AS0002), 2x 8TB Seagate Barracudas (ST8000DM004), 2x 128GB OCZ Vertex 4s
Misc Storage: 500GB, used for temp files (downloads)
HDD Controller card: IBM ServeRAID M1015, cross-flashed to "IR Mode" (RAID options, used to pass through disks only), plus an Intel SAS expander card
USB: 2TB Seagate Backup Plus for Server Backup (system drive, and system files) using a WD Green EARS

NVR (Network Video Recorder, aka IP camera box) via BlueIris:
Case: Norco ITX-S4
OS: Windows 10
CPU: Intel Core i3-4130T
MoBo: ASRock Rack E3C226D2I
RAM: 2x8GB G.Skill
GFX: ASPEED 2300
PSU: 450W 1U
OS Drive: 128GB SSD, Crucial M550
Storage Pool: 2x4TB Toshiba HDD

HyperV VM Lab:
Case: Supermicro SYS-6016T-NTF (1U case)
OS: HyperV Server 2012R2
CPU: Intel Xeon 5560 (x2, hyperthreading disabled)
MoBo: Supermicro X8DTU
RAM: 64GB (8x8GB) Hynix Registered ECC (DDR3-1333)
GFX: ASPEED 2300
PSU: 560W 1U
OS Drive: 160GB HDD
Storage: 500GB Crucial MX200 SSD, using Data Deduplication for VMs

Emby Server:
Case: Unknown (1U case)
OS: Windows 10 Pro x64
CPU: Dual Intel Xeon X5660s (hardware fairy swung by)
MoBo: Supermicro X8DTi
RAM: 20GB (5x4GB) Samsung Registered ECC
GFX: Matrox (onboard)
PSU: 560W 1U
OS Drive: 64GB SSD
Storage: 128GB (cache, metadata, transcoding temp)

Netgear GS724T Smart Switch: 24 port, gigabit, managed (one port is burned out already, but it was used).
Dell 17" keyboard and monitor tray (used, damaged, propped up).

Images here: http://imgur.com/a/WRhZf Here is my network hardware. Not a great image, but that's the 24 port managed switch, a punch-down block, waaay too long cables, the cable modem and the Sophos UTM box. Misc drawers and unused spares. And my servers: the HyperV system in the 1U, and my storage server in the 4U. And the CyberPower UPS at the bottom. What you don't see is the NVR box, as it's been having issues, and I've been troubleshooting those issues.
    2 points
  33. Quinn

    [HOWTO] File Location Catalog

    I've been seeing quite a few requests about knowing which files are on which drives, in case of needing a recovery for unduplicated files. I know dpcmd.exe has some functionality for listing all files and their locations, but I wanted something that I could "tweak" a little better to my needs, so I created a PowerShell script to get me exactly what I need. I decided on PowerShell, as it allows me to do just about ANYTHING I can imagine, given enough logic. Feel free to use this, or let me know if it would be more helpful "tweaked" a different way...

Prerequisites:
You gotta know PowerShell (or be interested in learning a little bit of it, anyway)
All of your DrivePool drives need to be mounted as a path (I chose to mount all drives as C:\DrivePool\{disk name})
Details on how to mount your drives to folders can be found here: http://wiki.covecube.com/StableBit_DrivePool_Q4822624
Your computer must be able to run PowerShell scripts (I set my execution policy to 'RemoteSigned')

I have this PowerShell script set to run each day at 3am (a sketch of registering such a scheduled task follows at the end of this post), and it generates a .csv file that I can use to sort/filter all of the results. Need to know what files were on drive A? Done. Need to know which drives are holding all of the files in your Movies folder? Done. Your imagination is the limit. Here is a screenshot of the .CSV file it generates, showing the location of all of the files in a particular directory (as an example):

Here is the code I used (it's also attached in the .zip file):

# This saves the full listing of files in DrivePool
$files = Get-ChildItem -Path C:\DrivePool -Recurse -Force | where {!$_.PsIsContainer}

# This creates an empty table to store details of the files
$filelist = @()

# This goes through each file, and populates the table with the drive name, file name and directory name
foreach ($file in $files) {
    $filelist += New-Object psobject -Property @{Drive=$(($file.DirectoryName).Substring(13,5));FileName=$($file.Name);DirectoryName=$(($file.DirectoryName).Substring(64))}
}

# This saves the table to a .csv file so it can be opened later on, sorted, filtered, etc.
$filelist | Export-CSV F:\DPFileList.csv -NoTypeInformation

Let me know if there is interest in this, if you have any questions on how to get this going on your system, or if you'd like any clarification of the above. Hope it helps! -Quinn

gj80 has written a further improvement to this script: DPFileList.zip
And B00ze has further improved the script (Win7 fixes): DrivePool-Generate-CSV-Log-V1.60.zip
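As referenced above, since the post mentions running this daily at 3am, here is a hedged sketch of registering such a task with the built-in ScheduledTasks cmdlets; the script path and task name are placeholders, not part of the original post.

    # Run the catalog script every day at 3:00 AM (adjust the script path to wherever you saved it)
    $action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
               -Argument '-ExecutionPolicy RemoteSigned -File C:\Scripts\DPFileList.ps1'
    $trigger = New-ScheduledTaskTrigger -Daily -At 3am
    Register-ScheduledTask -TaskName 'DrivePool File Catalog' -Action $action -Trigger $trigger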
    2 points
  34. OK, the solution is to manually create the virtual drive in PowerShell after making the pool:
    1) Create a storage pool in the GUI, but hit Cancel when it asks to create a storage space.
    2) Rename the pool to something that identifies this RAID set.
    3) Run the following command in PowerShell (run as administrator), editing as needed:
    New-VirtualDisk -FriendlyName VirtualDriveName -StoragePoolFriendlyName NameOfPoolToUse -NumberOfColumns 2 -ResiliencySettingName simple -UseMaximumSize
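    As a follow-up (not part of the original post, just a sketch using the standard Storage cmdlets), the new virtual disk still needs to be initialized, partitioned and formatted before it appears as a usable drive; the friendly name matches the example command above, and the NTFS label is an assumption:
    # Bring the freshly created virtual disk online as an NTFS volume
    Get-VirtualDisk -FriendlyName VirtualDriveName |
        Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "RaidSet" -Confirm:$false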
    2 points
  35. I used this adapter cable for years and never had a problem. Before I bought my server case I had a regular old case, with (3) 4-in-3 hot swap cages next to the server. I ran the SATA cables out the back of my old case, and a power supply sitting on the shelf by the cages powered them. The cool thing was that I ran the power cable that usually goes to the motherboard from the second power supply into the case. I had an adapter that plugged in between the motherboard and the main power supply that the computer plugs into. The adapter had a couple of wires coming from it to a female connection, and you plug your second power supply into that. When you turn on your main computer, the second power supply comes on as well, so your computer sees all of your hard drives at once. Of course, when you turn off your server, both power supplies turn off. Here is a link to that adapter. Let me know what you think. https://www.newegg.com/Product/Product.aspx?Item=9SIA85V3DG9612
    2 points
  36. The current method of separate modules, where we can pick and choose which options to use together, gets my (very strong) vote! Jamming them all together will just create unneeded bloat for some. I would still pay a "forced" bundle price if it gave me the option to use just the modules I need... and maybe add one or more of the others later. I'm amazed at the quality of products that one (I think?) developer has produced and is offering for a low - as Chris says, almost impulse-buy - price. Keep up the good work and bug squashing, guys!
    2 points
  37. It's just exaggerated. The average URE rates of 10^14/10^15 are taken literally in those articles, while in reality most drives can survive a LOT longer. It's also implied that a URE will kill a resilver/rebuild without exception. That's only partly true: e.g. some hardware controllers and older software have very little tolerance for it. Modern, updated RAID algorithms can continue a rebuild with that particular area reported to the upper filesystem as reallocated, IIRC, and you'll likely just get a pre-fail SMART attribute status, the same as if you had hit the error on a single drive, which will act slower and hang on that area in much the same manner as a rebuild will. I'd still take striped mirrors for max performance and reliability, and parity only where max storage vs cost is important, albeit in small arrays striped together.
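    To illustrate the "taken literally" reading being criticized here (my own back-of-the-envelope check, not from the original post): a 1-in-10^14 URE rate, treated as an independent per-bit probability, implies roughly one error per ~12.5 TB read, which is where the scary rebuild-failure figures in those articles come from.
    # Naive per-bit model behind the "RAID5 is dead" articles (assumed numbers, not real-world behaviour)
    $ureRate  = 1e-14          # quoted URE rate: 1 error per 10^14 bits read
    $bitsRead = 10e12 * 8      # rebuilding/reading 10 TB = 8 x 10^13 bits
    $pNoUre   = [math]::Pow(1 - $ureRate, $bitsRead)
    "{0:P1} chance of reading 10 TB with no URE under this model" -f $pNoUre   # roughly 45%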
    2 points
  38. To clarify a couple of things (sorry, I did skim here): StableBit DrivePool's default file placement strategy is to place new files on the disks with the most available free space. This means the 1TB drives first, and then, once they're full enough, the 500GB drive. So, yes, this is normal. The Drive Space Equalizer doesn't change this; it just causes a rebalance "after the fact" so that usage is equal. So, once the 1TB drives get to about 470GB free/used, the pool should then start using the 500GB drive as well (see the sketch below). There are a couple of balancers that do change this behavior, but you'll see "real time placement limiters" on the disks when this happens (red arrows, specifically). If you don't see that, then it defaults to the "normal" behavior.
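    A minimal sketch of that default placement rule (my own illustration, with made-up free-space numbers): each new file simply goes to whichever disk currently reports the most free space, so the 500GB drive only starts receiving files once the larger drives have drained down to its level.
    # Hypothetical pool state: two 1TB drives and one 500GB drive
    $disks = @(
        [pscustomobject]@{ Name = '1TB drive A'; FreeGB = 520 },
        [pscustomobject]@{ Name = '1TB drive B'; FreeGB = 505 },
        [pscustomobject]@{ Name = '500GB drive'; FreeGB = 465 }
    )
    # Default rule: pick the disk with the most available free space for the next new file
    $target = $disks | Sort-Object FreeGB -Descending | Select-Object -First 1
    "Next new file lands on: $($target.Name)"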
    2 points
  39. Windows Server 2016 Essentials is a very good choice, actually! It's the direct successor to Windows Home Server. The caveat here is that it does want to be a domain controller (but that's 100% optional). Yeah, the Essentials Experience won't really let you delete the Users folder; there is some hard-coded functionality here, which ... is annoying. Depending on how you move the folders, "yes": e.g. it will keep the permissions from the old folder and not use the ones from the new folder. It's quite annoying, and why some of my automation stuff uses a temp drive and then moves stuff to the pool. If you're using the Essentials stuff, you should be good. But you should check out these: https://tinkertry.com/ws2012e-connector https://tinkertry.com/how-to-make-windows-server-2012-r2-essentials-client-connector-install-behave-just-like-windows-home-server
    2 points
  40. Even 60°C for an SSD isn't an issue - they don't have the same heat weaknesses that spinner drives do. I wouldn't let it go over 70°C, however - Samsung, as an example, rates many of its SSDs for 0 to 70°C as far as environmental conditions go. As they are currently one of the leaders in the SSD field, they probably have some of the stronger lines - other manufacturers may not be as robust.
    2 points
  41. Jaga

    Almost always balancing

    With the "Disk Space Equalizer" plugin turned -off-, Drivepool will still auto-balance all new files added to the Pool, even if it has to go through the SSD Cache disks first. They merely act as a temporary front-end pool that is emptied out over time. The fact that the SSD cache filled up may be why you're seeing balancing/performance oddness, coupled with the fact you had real-time re-balancing going on. Try not to let those SSDs fill up. I would recommend disabling the Disk Space Equalizer, and just leaving the SSD cache plugin on for daily use. If you need to manually re-balance the pool do a re-measure first, then temporarily turn the Disk Space Equalizer back on (it should kick off a re-balance immediately when toggled on). When the re-balance is complete, toggle the Disk Space Equalizer back off.
    2 points
  42. With most of the topics here targeting tech support questions when something isn't working right, I wanted to post a positive experience I had with DrivePool for others to benefit from. There was an issue on my server today where a USB drive went unresponsive and couldn't be dismounted. I decided to bounce the server, and when it came back up DrivePool threw up error messages and its GUI wouldn't open. I found the culprit - somehow the DrivePool service was unable to start, even though all its dependencies were running. The nice part is that even though the service wouldn't run, the Pool was still available. "Okay" I thought, and did an install repair on StableBit DrivePool through the Control Panel. Well, that didn't seem to work either - the service just flat-out refused to start. So at that point I assumed something in the software was corrupted, and decided to 1) uninstall DrivePool, 2) bounce the server again, 3) run a cleaning utility, and 4) re-install. I did just that, and DrivePool installed to the same location without complaint. After starting the DrivePool GUI I was greeted with the same Pool I had before, running under the same drive letter, with all of the same performance settings, folder duplication settings, etc. that it always had. To check things I ran a re-measure on the pool, which came up showing everything normal. It's almost as if it didn't care that its service was terminal and it was uninstalled/reinstalled. Plex Media Server was watching after the reboot, and as soon as it saw the Pool available the scanner and transcoders kicked off like nothing had happened. Total time to fix was about 30 minutes start to finish, and I didn't have to change/reset any settings for the Pool. It's back up and running normally now after a very easy fix for what might seem to be an "uh oh!" moment. That's my positive story for the day, and why I continue to recommend StableBit products.
    2 points
  43. Just wanted to give an update for those who have problems with Xfinity's new 1Gb line - I basically had them come out, showed them how the line was going in and out with PingPlotter, and they rewired everything and changed out the modem. Once they did that, everything stabilized and has been working great - thank you for all your help, guys! Long live StableBit drive! lol
    2 points
  44. 1x 128GB SSD for the OS, 1x 8TB, 2x 4TB, 2x 2TB, 1x 900GB. The 8TB and 1x 4TB + 1x 2TB are in a hierarchical duplicated Pool, all with 2TB partitions so that WHS2011 Server Backup works. The other 4TB + 2TB are spares in case some HDD fails. The 900GB is for trash from a further unnamed downloading client. So actually, a pretty small server given what many users here have.
    2 points
  45. Okay, good news everyone. Alex was able to reproduce this issue, and we may have a fix. http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3198_BETA.exe
    2 points
  46. You could do it with a combination of a VPN, DrivePool pool(s), and CloudDrive using file share(s). Here's how I think it could work:
    The VPN connects all computers on the same local net.
    Each computer has a Pool to hold data, and the Pool drive is shared so the local net can access it.
    CloudDrive has multiple file shares set up, one to each computer connected via VPN and sharing a Pool.
    Each local Pool can have duplication enabled, ensuring each local CloudDrive folder is duplicated locally X times.
    The file shares in CloudDrive are added to a new DrivePool Pool, essentially combining all of the remote computer storage you provisioned into one large volume.
    Note: this is just me brainstorming, though if I were attempting it I'd start with this type of scheme. You only need two machines with DrivePool installed and a single copy of CloudDrive to pull it off. Essentially wide-area storage.
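    A small, hedged illustration of the sharing step only (not from the original post; the P: drive letter, share name and account below are placeholders): each machine's pool drive would be exposed over the VPN'd LAN as an SMB share, and that UNC path is what CloudDrive's file-share provider would then point at.
    # On a machine whose local pool is mounted as P:, publish it for the other VPN'd boxes
    New-SmbShare -Name "Pool" -Path "P:\" -FullAccess "SERVERNAME\StorageUser"
    # The resulting \\SERVERNAME\Pool path is what would be used as the file-share target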
    2 points
  47. It's $10 off the normal price for a product you don't already own, but $15/each for products that you do.
    2 points
  48. "dpcmd remeasure-pool x:"
    2 points
  49. "Release Final" means that it's a stable release, and will be pushed out to everyone. Not that it's the final build. Besides, we have at least 7 more major features to add, before even considering a 3.0.
    2 points
  50. Sure. So DP supports pool hierarchies, i.e., a Pool can act like it is an HDD that is part of another Pool. This was done especially for me. Just kidding. It was done to make DP and CloudDrive (CD) work together well (but it helps me too). In the CD case, suppose you have two HDDs that are pooled and you use x2 duplication. You also add a CD to that Pool. What you *want* is one duplicate on either HDD and the other duplicate on the CD. But there is no guarantee it will be that way. Both duplicates could end up on one of the HDDs. Lose the system and you lose everything, as there is no duplicate on CD. To solve this, add both HDDs to Pool A. This Pool is not duplicated. You also have CD (or another Pool of a number of HDDs) and create unduplicated Pool B with that. If you then create a duplicated Pool C by adding Pool A and Pool B, then DP, through Pool C, will ensure that one duplicate ends up at (HDDs in) Pool A and the other duplicate ends up at Pool B. This is because DP will, for the purpose of Pool C, view Pool A and Pool B as single HDDs, and DP ensures that duplicates are not stored on the same "HDD". Next, for backup purposes, you would back up the underlying HDDs of Pool A, and you would be backing up only one duplicate while still being certain you have all files.
    Edit: In my case, this allows me to back up a single 4TB HDD (partitioned into 2x 2TB partitions) in WHS2011 (which only supports backups of volumes/partitions up to 2TB) and still have it duplicated onto another 4TB HDD. So, I have:
    Pool A: 1x 4TB HDD, partitioned into 2x 2TB volumes, both added, not duplicated
    Pool B: 1x 4TB HDD, partitioned into 2x 2TB volumes, both added, not duplicated
    Pool C: Pool A + Pool B, duplicated.
    So, every file in Pool C is written to Pool A and Pool B. It is therefore on both 4TB HDDs that are in the respective Pools A and B. Next, I back up both partitions of either HDD and I have only one backup, with the guarantee of having one copy of each file included in the backup.
    2 points