Showing results for tags 'FAQ'.
-
As a paying customer, I'm concerned by the complete lack of stable releases over this extended period. The latest stable version, 2.1.1.561, was released eons ago. I know there have been some subsequent betas in the 2.2.x range, but as we all know, that means nothing if nothing actually gets released within some reasonable time frame. If this software has been abandoned, please let us know so we can migrate to other solutions, as no one in their right mind should be using software that doesn't receive at least semi-regular updates. No response to this post would also be worrying, as it would indicate a lack of attention and concern on the forum and, in turn, the software itself.
-
The "Other" and "Unusable" sizes displayed in the DrivePool GUI are often a source of confusion for new users. Please feel free to use this topic to ask questions about them if the following explanation doesn't help. Unduplicated: the total size of the files in your pool that aren't duplicated (i.e. exists on only one disk in the pool). If you think this should be zero and it isn't, check whether you have folder duplication turned off for one or more of your folders (e.g. in version 2.x, via Pool Options -> File Protection -> Folder Duplication). Duplicated: the total size of the files in your pool that are duplicated (i.e. kept on more than one disk in the pool; a 3GB file on two disks is counted as 6GB of duplicated space in the pool, since that's how much is "used up"). Other: the total size of the files that are on your pooled disks but not in your pool and all the standard filesystem metadata and overhead that takes up space on a formatted drive. For example, the hidden protected system folder "System Volume Information" created by Windows will report a size of zero even if you are using an Administrator account, despite possibly being many gigabytes in size (at least if you are using the built-in Explorer; other apps such as JAM's TreeSize may show the correct amount). Unusable for duplication: the amount of space that can't be used to duplicate your files, because of a combination of the different sizes of your pooled drives, the different sizes of your files in the pool and the space consumed by the "Other" stuff. DrivePool minimises this as best it can, based on the settings and priorities of your Balancers. More in-depth explanations can also be found elsewhere in the forums and on the Covecube blog at http://blog.covecube.com/ Details about "Other" space, as well as the bar graphs for the drives, are discussed here: http://blog.covecube.com/2013/05/stablebit-drivepool-2-0-0-230-beta/
-
Parity, Duplication, FAQ - Parity and Duplication and DrivePool
Shane posted a question in Nuts & Bolts
The topic of adding RAID-style parity to DrivePool was raised several times on the old forum. I've posted this FAQ because (1) this is a new forum and (2) a new user asked about adding folder-level parity, which - to mangle a phrase - is the same fish but a different kettle. Since folks have varying levels of familiarity with parity I'm going to divide this post into three sections: (1) how parity works and the difference between parity and duplication, (2) the difference between drive-level and folder-level parity, (3) the TLDR conclusion for parity in DrivePool. I intend to update the post if anything changes or needs clarification (or if someone points out any mistakes I've made).

Disclaimer: I do not work for Covecube/Stablebit. These are my own comments. You don't know me from Jack. No warranty, express or implied, in this or any other universe.

Part 1: how parity works and the difference between parity and duplication

Duplication is fast. Every file gets simultaneously written to multiple disks, so as long as all of those disks don't die the file is still there, and by splitting reads amongst the copies you can load files faster. But to fully protect against a given number of disks dying, you need that many times the number of disks. That doesn't just add up fast, it multiplies fast.

Parity relies on the ability to generate one or more "blocks" of reversible checksums, each equal in size to the largest protected "block" of content. If you want to protect three disks, each parity block requires its own disk as big as the biggest of those three disks. For every N parity blocks you have, any N data blocks can be recovered if they are corrupted or destroyed. Have twelve data disks and want to be protected against any two of them dying simultaneously? You'll only need two parity disks.

Sounds great, right? Right. But there are tradeoffs. Whenever the content of any data block is altered, the corresponding checksums must be recalculated within the parity block, and if the content of any data block is corrupted or lost, the corresponding checksums must be combined with the remaining data blocks to rebuild the data. While duplication requires more disks, parity requires more time.

In a xN duplication system, you xN your disks; for each file it simultaneously writes the same data to N disks, and so long as at least one of the disks holding a given file survives, you replace the bad disk(s) and keep going - all of your data is immediately available. The drawback is the geometric increase in required disks and the risk of the wrong N disks dying simultaneously (e.g. with x2 duplication, if two disks die simultaneously and one happens to be a disk that was storing duplicates of the first disk's files, those files are gone for good).

In a +N parity system, you add +N disks; for each file it writes the data to one disk and calculates the parity checksums, which it then writes to the N parity disks, and if up to N disks die, you replace the bad disk(s) and wait while the computer recalculates and rebuilds the lost data - some of your data might still be available, but no data can be changed until it's finished (because parity needs to use the data on the good disks for its calculations).
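If the mechanics feel abstract, here's a minimal toy example of single parity using XOR in Python. It's only an illustration of the general idea - one parity block protecting against any one lost data block - and has nothing to do with DrivePool's or FlexRAID's internals; real parity software uses schemes such as Reed-Solomon to support multiple parity blocks.

```python
# Toy single-parity example: one parity block can rebuild any ONE lost data block.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

# Three "data disks", padded to the same block size (as parity requires)
data = [b"AAAAAAAA", b"BBBBBBBB", b"CCCCCCCC"]

# Writing: calculate the parity block from all the data blocks
parity = xor_blocks(data)

# Disaster: "disk" 1 dies
lost_index = 1
survivors = [blk for i, blk in enumerate(data) if i != lost_index]

# Rebuilding: XOR the surviving data blocks with the parity block
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data[lost_index]
print("Recovered:", rebuilt)
```

Note how the rebuild needs every surviving data block plus the parity block - that's the "parity requires more time" tradeoff in a nutshell, whereas with duplication you'd simply read the surviving copy.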
(sidenote: "snapshot"-style parity systems attempt to reduce the time cost by risking a reduction in the amount of recoverable data; the more dynamic your content, the more you risk not being able to recover) Part 2: the difference between drive-level and folder-level parity Drive-level parity, aside from the math and the effort of writing the software, can be straightforward enough for the end user: you dedicate N drives to parity that are as big as the biggest drive in your data array. If this sounds good to you, some folks (e.g. fellow forum moderator Saitoh183) use DrivePool and the FlexRAID parity module together for this sort of thing. It apparently works very well. (I'll note here that drive-level parity has two major implementation methods: striped and dedicated. In the dedicated method described above, parity and content are on separate disks, with the advantages of simplicity and readability and the disadvantage of increased wear on the parity disks and the risk that entails. In the striped method, each disk in the array contains both data and parity blocks; this spreads the wear evenly across all disks but makes the disks unreadable on other systems that don't have compatible parity software installed. There are ways to hybridise the two, but it's even more time and effort). Folder-level parity is... more complicated. Your parity block has to be as big as the biggest folder in your data array. Move a file into that folder, and your folder is now bigger than your parity block - oops. This is a solvable problem, but 'solvable' does not mean 'easy', sort of how building a skyscraper is not the same as building a two-storey home. For what it's worth, FlexRAID's parity module is (at the time of my writing this post) $39.95 and that's drive-level parity. Conclusion: the TLDR for parity in DrivePool As I see it, DrivePool's "architectural imperative" is "elegant, friendly, reliable". This means not saddling the end-user with technical details or vast arrays of options. You pop in disks, tell the pool, done; a disk dies, swap it for a new one, tell the pool, done; a dead disk doesn't bring everything to a halt and size doesn't matter, done. My impression (again, I don't speak for Covecube/Stablebit) is that parity appears to be in the category of "it would be nice to have for some users but practically it'd be a whole new subsystem, so unless a rabbit gets pulled out of a hat we're not going to see it any time soon and it might end up as a separate product even then (so that folks who just want pooling don't have to pay twice+ as much)".