Showing results for tags 'snapraid'.

Found 8 results

  1. So I've used SnapRAID + DrivePool to great effect. However, I just attempted to upgrade a drive by removing a disk from the pool (after disconnecting it), adding in the new one, and beginning a rebuild with SnapRAID. It would have worked, but the rebuild was estimated at 111 hours (for a 12TB > 18TB swap). A rebuild would be necessary if a disk had failed, but none had, so I thought there must be a better way. Is it possible to:
     1) disconnect the current 12TB disk with data
     2) remove it from DrivePool
     3) add the new 18TB disk so a PoolPart folder is created
     4) stop the service so it doesn't pick up the old disk, and put that disk in an external dock
     5) move all the data from the 12TB PoolPart to the 18TB PoolPart
     6) restart DrivePool and profit?
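     For reference, a rough sketch of what steps 4-5 could look like from an elevated command prompt. The service name, drive letters and PoolPart GUIDs below are placeholder assumptions, not details from the post:

     rem Stop DrivePool so it doesn't touch either PoolPart folder (service name assumed to be DrivePoolService).
     net stop DrivePoolService
     rem Copy everything from the old 12TB PoolPart into the new 18TB PoolPart (placeholder paths).
     robocopy "D:\PoolPart.<old-guid>" "E:\PoolPart.<new-guid>" /E /COPYALL /DCOPY:T /XJ /R:1 /W:1
     rem Bring the pool back online once the copy has finished.
     net start DrivePoolService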
  2. I saw many posts talking about 3-6 drives, but I can't find one that talks about 100 10TB drives. I am trying to create a 1PB drive in DrivePool; does that mean I need another 1PB pool for parity, or do I set the 100 drives as data drives in SnapRAID? If I set 100 drives as data drives in SnapRAID, do I need a 1PB parity drive too, or how big should the parity drive be? Is there any way to do software RAID 5? (Not Windows RAID 5.)
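     For context, a minimal SnapRAID config sketch (drive letters and paths are placeholders, not from the post). In SnapRAID, each parity file lives on a drive that only needs to be as large as the single largest data drive, not the whole pool, and up to six parity levels are supported:

     # parity files, one drive each, each at least as big as the largest data drive
     parity   P:\snapraid.parity
     2-parity Q:\snapraid.2-parity
     # content files, kept in several places
     content  C:\snapraid\snapraid.content
     content  C:\mounts\disk001\snapraid.content
     # one data line per drive
     data d1  C:\mounts\disk001\
     data d2  C:\mounts\disk002\
     # ...and so on for the remaining drives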
  3. I run Windows Server 2016 with 10 HDDs and 4 small SSDs - a 15TB total usable DrivePool pool, with 2x 3TB HDDs as parity using SnapRAID. The really stupid thing is, I'm only sitting on 2TB of data. I plan on repurposing the drives into an old Synology NAS and keeping it as an off-site backup. So I'm just wondering if anyone has done the following and can give me advice: create 2x RAID 0 sets of 2x 1TB SSDs (4TB in total across 4 drives) pooled with StableBit DrivePool (it will only be mirroring a couple of directories), and then have 2x 3TB parity drives with SnapRAID. Goals:
     1. Reduce idle watts from spinning HDDs
     2. Saturate a 10GbE link (hopefully the RAID 0 SSDs can do this) - with a goal of being able to edit 4K off it
     3. Protect against bitrot with SnapRAID daily scrubs
     Data redundancy is not my main issue as I have a number of backups + cloud + offsite. One advantage I can see is that I can grow this NAS by adding 2 SSDs in RAID 0 at a time.
  4. I've been using DrivePool for a couple of years now and I'm very happy. Since development has somewhat slowed down, I have an idea for a feature that I would pay for again - maybe call it DrivePool NextGen or whatever. Since SnapRAID is open source, would it not be possible to integrate it into DrivePool? Imagine you set certain drives as the SnapRAID volumes with options, and then DrivePool would take care of all the things you currently need to script SnapRAID to do: creating, scrubbing, etc. Right now I use Pool Duplication, but I think having a SnapRAID option would be much better. I tried doing this manually in the past but wasn't very happy with the workflow. What do you guys think? Would that be possible?
  5. Hello everyone! I plan to integrate DrivePool with SnapRAID and CloudDrive and require your assistance regarding the setup itself. My end goal is to pool all data drives together under one drive letter for ease of use as a network share, and also have it protected from failures both locally (SnapRAID) and in the cloud (CloudDrive). I have the following disks:
     - D1 3TB, D: drive (data)
     - D2 3TB, E: drive (data)
     - D3 3TB, F: drive (data)
     - D4 3TB, G: drive (data)
     - D5 5TB, H: drive (parity)
     Action plan:
     - create LocalPool X: in DrivePool with the four data drives (D-G)
     - configure SnapRAID with disks D-G as data drives and disk H: as parity
     - do an initial sync and scrub in SnapRAID
     - use Task Scheduler (Windows Server 2016) to perform daily syncs (SnapRAID.exe sync) and weekly scrubs (SnapRAID.exe -p 25 -o 7 scrub)
     - create CloudPool Y: in CloudDrive, with a 30-50GB local cache on an SSD, to be used with G Suite
     - create HybridPool Z: in DrivePool and add X: and Y:
     - create the network share to hit X:
     Is my thought process correct in terms of protecting my data in the event of a disk failure (I will also use Scanner for disk monitoring) or disks going bad? Please let me know if I need to improve the above setup or if there is something that I am doing wrong. Looking forward to your feedback and thank you very much for your assistance!
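     For the Task Scheduler step, a hedged sketch of the two registrations from an elevated prompt; the snapraid.exe path and run times are placeholder assumptions, while the sync/scrub commands are the ones given above:

     rem Daily sync at 03:00.
     schtasks /Create /TN "SnapRAID Sync" /TR "C:\snapraid\snapraid.exe sync" /SC DAILY /ST 03:00 /RU SYSTEM
     rem Weekly scrub (25% of the array, blocks older than 7 days) on Sundays at 04:00.
     schtasks /Create /TN "SnapRAID Scrub" /TR "C:\snapraid\snapraid.exe -p 25 -o 7 scrub" /SC WEEKLY /D SUN /ST 04:00 /RU SYSTEM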
  6. Just want to run this by you guys to see if I'm missing something before I screw something up. I've been using Pool File Duplication and love it, but as you know the downside is that you can only use half of your total volume. So I just learned you can use SnapRAID for a weekly backup (or whatever time frame you choose). Here is my current system and the SnapRAID config I just created. I'm assuming that once I uncheck "Pool file duplication" it will delete the dupes? Then I can move everything off the J: drive to use it as my parity?
  7. Hi all. I'm testing out a setup under Server 2016 utilizing DrivePool and SnapRAID. I am using mount points to folders. I did not change anything in the snapraid conf file for hidden files:

     # Excludes hidden files and directories (uncomment to enable).
     #nohidden

     # Defines files and directories to exclude
     # Remember that all the paths are relative at the mount points
     # Format: "exclude FILE"
     # Format: "exclude DIR\"
     # Format: "exclude \PATH\FILE"
     # Format: "exclude \PATH\DIR\"
     exclude *.unrecoverable
     exclude Thumbs.db
     exclude \$RECYCLE.BIN
     exclude \System Volume Information
     exclude \Program Files\
     exclude \Program Files (x86)\
     exclude \Windows\

     When running snapraid sync, it outputs that it is ignoring the covefs folder:

     WARNING! Ignoring special 'system-directory' file 'C:/drives/array1disk2/PoolPart.23601e15-9e9c-49fa-91be-31b89e726079/.covefs'

     Is it important to include this folder? I'm not sure why it is excluding it in the first place, since nohidden is commented out. But my main question is if covefs should be included. Thanks.
  8. The topic of adding RAID-style parity to DrivePool was raised several times on the old forum. I've posted this FAQ because (1) this is a new forum and (2) a new user asked about adding folder-level parity, which - to mangle a phrase - is the same fish but a different kettle. Since folks have varying levels of familiarity with parity, I'm going to divide this post into three sections: (1) how parity works and the difference between parity and duplication, (2) the difference between drive-level and folder-level parity, (3) the TLDR conclusion for parity in DrivePool. I intend to update the post if anything changes or needs clarification (or if someone points out any mistakes I've made).

     Disclaimer: I do not work for Covecube/StableBit. These are my own comments. You don't know me from Jack. No warranty, express or implied, in this or any other universe.

     Part 1: how parity works and the difference between parity and duplication

     Duplication is fast. Every file gets simultaneously written to multiple disks, so as long as all of those disks don't die the file is still there, and by splitting reads amongst the copies you can load files faster. But to fully protect against a given number of disks dying, you need a corresponding multiple of the number of disks. That doesn't just add up fast, it multiplies fast.

     Parity relies on the ability to generate one or more parity "blocks" of reversible checksums, each equal in size to the largest protected "block" of content. If you want to protect three disks, each parity block requires its own disk as big as the biggest of those three disks. For every N parity blocks you have, any N data blocks can be recovered if they are corrupted or destroyed. Have twelve data disks and want to be protected against any two of them dying simultaneously? You'll only need two parity disks. Sounds great, right? Right. But there are tradeoffs. Whenever the content of any data block is altered, the corresponding checksums must be recalculated within the parity block, and if the content of any data block is corrupted or lost, the corresponding checksums must be combined with the remaining data blocks to rebuild the data. While duplication requires more disks, parity requires more time.

     In an xN duplication system, you multiply your disk count by N; for each file the same data is simultaneously written to N disks, and so long as no file loses all N of its copies (any N-1 simultaneous failures are always survivable; more may be, depending on which disks died), you replace the bad disk(s) and keep going - all of your data is immediately available. The drawback is the geometric increase in required disks and the risk of the wrong N disks dying simultaneously (e.g. with x2 duplication, if two disks die simultaneously and one happens to be storing the duplicates of the other's files, those files are gone for good).

     In a +N parity system, you add N disks; for each file the data is written to one disk and the parity checksums are calculated and written to the N parity disks, and if up to N disks die, you replace the bad disk(s) and wait while the computer recalculates and rebuilds the lost data - some of your data might still be available, but no data can be changed until it's finished (because parity needs to use the data on the good disks for its calculations).
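     (To make that concrete with the simplest single-parity case, the checksum is just XOR across equal-sized blocks; real multi-parity schemes such as SnapRAID's use fancier codes, but the recover-by-recombination idea is the same:
     parity = D1 xor D2 xor D3
     D2 = parity xor D1 xor D3, i.e. if disk 2 dies, recombining the parity with the surviving disks rebuilds it.)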
     (Sidenote: "snapshot"-style parity systems attempt to reduce the time cost by risking a reduction in the amount of recoverable data; the more dynamic your content, the more you risk not being able to recover it.)

     Part 2: the difference between drive-level and folder-level parity

     Drive-level parity, aside from the math and the effort of writing the software, can be straightforward enough for the end user: you dedicate N drives to parity, each as big as the biggest drive in your data array. If this sounds good to you, some folks (e.g. fellow forum moderator Saitoh183) use DrivePool and the FlexRAID parity module together for this sort of thing. It apparently works very well.

     (I'll note here that drive-level parity has two major implementation methods: striped and dedicated. In the dedicated method described above, parity and content are on separate disks, with the advantages of simplicity and readability and the disadvantages of increased wear on the parity disks and the risk that entails. In the striped method, each disk in the array contains both data and parity blocks; this spreads the wear evenly across all disks but makes the disks unreadable on other systems that don't have compatible parity software installed. There are ways to hybridise the two, but it's even more time and effort.)

     Folder-level parity is... more complicated. Your parity block has to be as big as the biggest folder in your data array. Move a file into that folder, and your folder is now bigger than your parity block - oops. This is a solvable problem, but 'solvable' does not mean 'easy', sort of how building a skyscraper is not the same as building a two-storey home. For what it's worth, FlexRAID's parity module is (at the time of my writing this post) $39.95, and that's for drive-level parity.

     Conclusion: the TLDR for parity in DrivePool

     As I see it, DrivePool's "architectural imperative" is "elegant, friendly, reliable". This means not saddling the end user with technical details or vast arrays of options. You pop in disks, tell the pool, done; a disk dies, swap it for a new one, tell the pool, done; a dead disk doesn't bring everything to a halt, and size doesn't matter, done. My impression (again, I don't speak for Covecube/StableBit) is that parity falls into the category of "it would be nice to have for some users, but practically it'd be a whole new subsystem, so unless a rabbit gets pulled out of a hat we're not going to see it any time soon, and it might end up as a separate product even then (so that folks who just want pooling don't have to pay twice or more as much)".