Showing results for tags 'parity'.

Found 5 results

  1. Hey, I've set up a small test with three physical drives in DrivePool: one SSD and two regular 4TB drives. I'd like a setup where these three drives can be filled to the brim and their contents are duplicated only onto a fourth drive: a CloudDrive. No regular writes or reads should hit the CloudDrive; it should function only as parity for the three local drives. Am I better off making a separate CloudDrive and scheduling an rsync to mirror the DrivePool contents to it, or can this be done with a DrivePool (or DrivePools) + CloudDrive combo? I'm running the latest beta of both.

     What I tried so far didn't work too well: some files I was moving were immediately being written to the parity drive, even though I had set it to only contain duplicated content. I got that to stop by going into File Placement and unticking the parity drive for every folder, but that's an annoying thing to have to maintain whenever new folders are added.
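     For anyone weighing the scheduled-mirror route mentioned above, here is a minimal one-way mirror sketch in Python (standing in for rsync or robocopy; the pool and CloudDrive paths are made-up placeholders, and a real setup would run this from Task Scheduler rather than by hand):

     ```python
     # Minimal one-way mirror sketch: copy new/changed files from the pool
     # to the CloudDrive, delete extras on the destination.
     # Both paths below are hypothetical placeholders.
     import shutil
     from pathlib import Path

     SRC = Path(r"D:\Pool")         # DrivePool drive letter (placeholder)
     DST = Path(r"E:\CloudParity")  # CloudDrive drive letter (placeholder)

     def mirror(src: Path, dst: Path) -> None:
         dst.mkdir(parents=True, exist_ok=True)
         src_names = {p.name for p in src.iterdir()}
         # Remove anything on the destination that no longer exists in the source.
         for extra in dst.iterdir():
             if extra.name not in src_names:
                 shutil.rmtree(extra) if extra.is_dir() else extra.unlink()
         for item in src.iterdir():
             target = dst / item.name
             if item.is_dir():
                 mirror(item, target)
             # Copy if missing, or if size/mtime changed since the last run.
             elif (not target.exists()
                   or item.stat().st_mtime > target.stat().st_mtime
                   or item.stat().st_size != target.stat().st_size):
                 shutil.copy2(item, target)

     if __name__ == "__main__":
         mirror(SRC, DST)
     ```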
  2. Sorry if this makes no sense, I'm just typing thoughts. I know this has been talked about before on more than one occasion, but is it time to reanalyze the market and see how people feel about it? DrivePool + parity is an option I would pay for; the savings on storage would be well worth it. What I think would be a huge upgrade or add-in for StableBit is a parity option. The fact that you can have sub-pools and direct data to certain drives/pools allows for so many configuration options.

     What I am thinking for my own situation is this (by the way, I have never lost data with StableBit since its release):

     - Pool A: 2 x 4TB with x2 duplication, as per my current setup. This will be for recently created or modified files.
     - Pool B: the archive pool, no duplication. 4 x 6TB + 2 x 4TB = 29.2TB usable storage.
     - Pool C: the parity pool, x2 duplication. 2 x 12TB disks or more. The parity pool has to have the largest individual disk size to protect the entire pool, or smaller disks for individual folder protection. The parity could be calculated from the archive pool's individual disks, or from individual folders.

     Now this is where things get smart. Obviously all new and frequently accessed files are stored on Pool A, duplicated against failure. Simple file and folder created/modified dates can be used to track a file's readiness for archiving. Once a file is ready, StableBit can archive it on a schedule out of hours, or based on user settings, the number of files, the amount of free space on Pool A, etc. - the options are endless.

     The benefits of StableBit doing this over products already out there are in its already great user interface:
     - Simple drive removal and addition from any pool.
     - Simple failed drive replacement.
     - Simple rules and options for file placement.
     - The parity pool could be duplicating a single large parity file for all archive disks, or possibly just parity for some folders on the archive drive.
     - Less user interaction required, as StableBit does the work: set and forget, notify on problems.
     - Adding an archive drive increases capacity by the size of that drive; no loss of capacity due to mirroring.
     - Pool B capacity would be 32.7TB with a mirror of 12TB drives vs 65.4TB with archive + parity (see the capacity sketch below).

     As most storage experts will tell you, drives over 8-12TB should really be in a RAID6 array with a hot spare, allowing for multiple failures at a time; most will state that the larger the parity rebuild, the more likely a second failure will take place during it. At what point is mirroring going to be a risk? In my mind we could already be there at 12-14TB. I know I do not want to use disks larger than 12TB without having at least three copies, but I cannot afford three copies on large disks, nor can I run endless small disks - I do not want the heat/power usage and do not have the room.

     I know there are other options out there - Synology do something on their NAS boxes, FlexRAID, SnapRAID, Unraid - but none would have the ease that StableBit could create. Thoughts, anyone?
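     A quick sketch of the capacity arithmetic behind the proposal above, assuming a hypothetical archive drive set (sizes in decimal TB): x2 duplication halves usable space, while a single dedicated parity disk only costs the size of the largest drive.

     ```python
     # Hypothetical capacity comparison: x2 duplication vs one dedicated
     # parity disk, for a made-up archive set like the one described above.
     drives_tb = [6, 6, 6, 6, 4, 4]   # archive disks (decimal TB)
     raw = sum(drives_tb)             # 32 TB raw

     duplicated = raw / 2             # x2 duplication: half the space is copies
     with_parity = raw                # parity lives on a separate, extra disk
     parity_disk = max(drives_tb)     # that disk must match the largest drive

     print(f"raw: {raw} TB")
     print(f"usable with x2 duplication: {duplicated} TB")
     print(f"usable with +1 parity: {with_parity} TB "
           f"(plus a {parity_disk} TB parity disk)")
     ```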
  3. Hello everyone! I'm new here, trying to do as much research as I can before purchase. I'm liking all the information I've seen on the main site, the manual, and the forums/old forums, and I think I caught a little information off Reddit that pushed me here. I'm hoping for loads of information, and maybe this will help MANY people in the long run decide what to do.

     First off, on the topic: I would like to use StableBit's products only, so I gathered some cans and cannots. DrivePool with Scanner are a pair made to secure any deal, but I'm also worried about parity. My current pool is:
     - 5 x 4TB Seagates
     - 2 x 3TB Seagates

     The purpose of my pool is family movies, music and pictures. The music and pictures are small; the movies range from 400MB to 16GB. Here's some of the Reddit research that put me on the research run about StableBit products. I was told that:
     1. DrivePool offers redundancy via duplication.
     2. The creator of StableBit products has a YouTube vlog channel (I couldn't find it; I found StableBit.com's channel, but it only had two videos, no vlogs).
     3. One user spoke very highly of StableBit products (has owned them for 4-5 years now).
     4. DrivePool's duplication works via the client setting folders or subfolders to be duplicated x2, x3 and so on.

     I was confused by the duplication settings, and by whether there is parity for at least one HDD failure (or more, depending on settings). I really love the way these products look, the active community, and the progressiveness of the Covecube staff! I need to strongly put it out there that I would really rather use only StableBit's products: fewer programs running, and I wouldn't have to worry about which one is or isn't causing problems. This is a two-part thread, so this is the end of the first research part.

     Now for the second part of the research. I've seen in a few places people using StableBit DrivePool for pooling the drives with FlexRaid (RAID-F) for the parity setup - mostly using all the programs from StableBit while setting and "almost" forgetting the FlexRaid setup. Here's the research I've dug up, or what I could. Oddly, I found a couple of hints on the FlexRaid forums, but nothing saying where on the forums to look or what to search for. Most of it was on the old Covecube forums, which are read-only. I would post links, but I think I'll just select the little information I need so this thread doesn't get kicked.

     I read the first thread above, which talks about how this combination is possible. Saitoh183 posted a few times on that thread with more information on DrivePooling and FlexRaiding. He makes sure everyone knows that you lose one or more drives (the largest, or equal in size to the largest single drive, not all drives put together) to a parity disk, a so-called PPU. The second quote of my research is a small thread "explaining" how to set the two of them up. I know and understand that Saitoh183 said: "It doesn't matter in which order you set them up. DP just merges your drives. Your performance comes from DP and Flexraid doesn't affect it. Flexraid is only relevant for performance if you were using the pooling feature of it. Since you aren't, the only performance you will see is when your parity drive is being updated. Also dont forget to not add your PPU to the pool."

     I know from Saitoh183 that the order doesn't matter, but I figured you would set up the StableBit DrivePool drive letter first. Then, in FlexRaid:
     1. Add a new configuration.
     2. Name it, choose Cruise Control with Snapshot, and click Create.
     3. Open the new configuration to edit it, and open Drive Manager.
     4. Add the DRUs (data drives) and one or more PPUs for parity backups (snapshots). I've read a few setup guides; I've heard 1 PPU drive for every 6 DRUs, and 1 for every 10 - both are said to be fine.
     5. Initialize the RAID. If data is already on the DRUs, it will now do a parity snapshot. Then go back to the home page for the named configuration and Start Storage Pool.

     I'm not sure what else to do after that, or if that's even right. I don't think the FlexRaid pool should have a drive letter, or it would make things more confusing than it already is with two programs (a small sanity-check sketch for the drive layout follows below). Please enlighten me with any information that can help this research, and hopefully help more people who decide to do this setup too. I appreciate everyone up front for their past help with others, which got me here to this information! Thanks again. Techtonic
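     As a small sanity check of the layout rules of thumb quoted above (PPU at least as large as the largest DRU, roughly one PPU per 6-10 DRUs), here is a hypothetical sketch; the drive sizes are the poster's, but the planned PPU and the ratios are assumptions from this thread, not FlexRaid documentation:

     ```python
     # Hypothetical check of a planned DRU/PPU layout against the rules of
     # thumb from the post: PPU >= largest DRU, about 1 PPU per 6-10 DRUs.
     import math

     dru_sizes_tb = [4, 4, 4, 4, 4, 3, 3]  # the 5x4TB + 2x3TB drives above
     ppu_sizes_tb = [4]                    # one parity drive (hypothetical)

     largest_dru = max(dru_sizes_tb)
     assert all(p >= largest_dru for p in ppu_sizes_tb), \
         "each PPU must be at least as large as the largest DRU"

     recommended = math.ceil(len(dru_sizes_tb) / 6)  # conservative 1-per-6 rule
     print(f"{len(dru_sizes_tb)} DRUs -> at least {recommended} PPU(s) "
           f"recommended, {len(ppu_sizes_tb)} planned")
     ```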
  4. Hi - I've read the FAQ and various other posts about using FlexRaid with DrivePool. However, everything I've seen seems to use FlexRaid RAID-F with snapshot RAID. I've installed the trial version of this, but because of the activity in my pool it requires a few hours of updates every day. I'm wondering if it wouldn't be better to use the transparent RAID (tRAID) FlexRaid product; I'm not terribly concerned about write performance. However, there don't seem to be any posts about the combination of tRAID, DrivePool and Scanner. Has anyone put this in place, and if so, are there any issues? Or is there some reason why tRAID won't work with DrivePool? Thanks! Michael
  5. The topic of adding RAID-style parity to DrivePool was raised several times on the old forum. I've posted this FAQ because (1) this is a new forum and (2) a new user asked about adding folder-level parity, which - to mangle a phrase - is the same fish but a different kettle.

     Since folks have varying levels of familiarity with parity, I'm going to divide this post into three sections: (1) how parity works and the difference between parity and duplication, (2) the difference between drive-level and folder-level parity, (3) the TLDR conclusion for parity in DrivePool. I intend to update the post if anything changes or needs clarification (or if someone points out any mistakes I've made).

     Disclaimer: I do not work for Covecube/Stablebit. These are my own comments. You don't know me from Jack. No warranty, express or implied, in this or any other universe.

     Part 1: how parity works and the difference between parity and duplication

     Duplication is fast. Every file gets simultaneously written to multiple disks, so as long as all of those disks don't die the file is still there, and by splitting reads amongst the copies you can load files faster. But to fully protect against a given number of disks dying, you need that many times the number of disks. That doesn't just add up fast, it multiplies fast.

     Parity relies on the ability to generate one or more "blocks" of reversible checksums, each equal in size to the largest protected "block" of content. If you want to protect three disks, each parity block requires its own disk as big as the biggest of those three disks. For every N parity blocks you have, any N data blocks can be recovered if they are corrupted or destroyed. Have twelve data disks and want to be protected against any two of them dying simultaneously? You'll only need two parity disks.

     Sounds great, right? Right. But there are tradeoffs. Whenever the content of any data block is altered, the corresponding checksums must be recalculated within the parity block; and if the content of any data block is corrupted or lost, the corresponding checksums must be combined with the remaining data blocks to rebuild the data. While duplication requires more disks, parity requires more time.

     In a xN duplication system, you multiply your disks by N and each file is simultaneously written to N disks; so long as no file loses all N of its copies at once, you replace the bad disk(s) and keep going - all of your data is immediately available. The drawbacks are the steep increase in required disks and the risk of the wrong disks dying simultaneously (e.g. with x2 duplication, if two disks die at once and one happens to be storing the duplicates of the other's files, those files are gone for good).

     In a +N parity system, you add N disks; each file's data is written to one disk, and the parity checksums calculated from it are written to the N parity disks. If any N disks die, you replace the bad disk(s) and wait while the computer recalculates and rebuilds the lost data - some of your data might still be available, but no data can be changed until it's finished (because parity needs to use the data on the good disks for its calculations).
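     To make the mechanics concrete, here is a minimal sketch of the single-parity-block case using XOR, the simplest reversible checksum (real parity products use Reed-Solomon coding to get multiple parity blocks; the byte strings below are made up for illustration):

     ```python
     # Minimal single-block XOR parity: compute one parity block over
     # several data blocks, then rebuild a lost block from the survivors.

     def xor_blocks(blocks, size):
         """XOR byte blocks together, zero-padding each to `size` bytes."""
         parity = bytearray(size)
         for block in blocks:
             padded = block.ljust(size, b"\x00")
             for i in range(size):
                 parity[i] ^= padded[i]
         return bytes(parity)

     # Three "data disks"; the parity block must be as large as the largest.
     data = [b"disk one contents", b"disk two", b"the third disk, the largest"]
     size = max(len(d) for d in data)
     parity = xor_blocks(data, size)

     # Lose any single data block: XORing the parity with the survivors
     # rebuilds it (the zero padding is stripped afterwards).
     lost = data.pop(1)
     rebuilt = xor_blocks(data + [parity], size)
     assert rebuilt.rstrip(b"\x00") == lost
     print("recovered:", rebuilt.rstrip(b"\x00"))
     ```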
     (Sidenote: "snapshot"-style parity systems attempt to reduce the time cost by risking a reduction in the amount of recoverable data; the more dynamic your content, the more you risk not being able to recover.)

     Part 2: the difference between drive-level and folder-level parity

     Drive-level parity, aside from the math and the effort of writing the software, can be straightforward enough for the end user: you dedicate N drives to parity, each as big as the biggest drive in your data array. If this sounds good to you, some folks (e.g. fellow forum moderator Saitoh183) use DrivePool and the FlexRAID parity module together for this sort of thing. It apparently works very well.

     (I'll note here that drive-level parity has two major implementation methods: striped and dedicated. In the dedicated method described above, parity and content are on separate disks, with the advantages of simplicity and readability and the disadvantage of increased wear on the parity disks and the risk that entails. In the striped method, each disk in the array contains both data and parity blocks; this spreads the wear evenly across all disks but makes the disks unreadable on other systems that don't have compatible parity software installed. There are ways to hybridise the two, but it's even more time and effort.)

     Folder-level parity is... more complicated. Your parity block has to be as big as the biggest folder in your data array. Move a file into that folder, and your folder is now bigger than your parity block - oops (see the sketch after the conclusion). This is a solvable problem, but 'solvable' does not mean 'easy', sort of how building a skyscraper is not the same as building a two-storey home. For what it's worth, FlexRAID's parity module is (at the time of my writing this post) $39.95, and that's drive-level parity.

     Conclusion: the TLDR for parity in DrivePool

     As I see it, DrivePool's "architectural imperative" is "elegant, friendly, reliable". This means not saddling the end-user with technical details or vast arrays of options. You pop in disks, tell the pool, done; a disk dies, swap it for a new one, tell the pool, done; a dead disk doesn't bring everything to a halt, and size doesn't matter, done.

     My impression (again, I don't speak for Covecube/Stablebit) is that parity falls into the category of "it would be nice to have for some users, but practically it'd be a whole new subsystem, so unless a rabbit gets pulled out of a hat we're not going to see it any time soon - and it might end up as a separate product even then (so that folks who just want pooling don't have to pay twice+ as much)".
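     And the promised sketch of the folder-level sizing problem. In this hypothetical illustration (made-up folder contents, single XOR parity block again), the parity block is sized to the largest folder, so growing any folder past that size invalidates the block and forces a full recompute:

     ```python
     # Hypothetical illustration of the folder-level sizing problem: parity
     # is computed over folders padded to the size of the largest one, so
     # growing any folder past that size forces a full, larger recompute.
     folders = {"music": b"m" * 10, "photos": b"p" * 25, "docs": b"d" * 7}

     def folder_parity(folders):
         size = max(len(v) for v in folders.values())
         parity = bytearray(size)
         for content in folders.values():
             for i, byte in enumerate(content.ljust(size, b"\x00")):
                 parity[i] ^= byte
         return bytes(parity)

     p1 = folder_parity(folders)    # parity block is 25 bytes (size of photos)
     folders["docs"] += b"d" * 30   # move a file in: docs is now 37 bytes
     p2 = folder_parity(folders)    # old block too small; recompute everything
     assert len(p2) > len(p1)
     ```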