
Future Drivepool + Parity - More storage for less cost


browned

Question

Sorry if this makes no sense; I'm just typing out thoughts. I know this has been discussed before on more than one occasion, but is it time to take another look at the market and see how people feel about it?

DrivePool + parity is an option I would pay for; the savings on storage would be well worth it.

What I think would be a huge upgrade or add-in for StableBit is a parity option. The fact that you can have sub-pools and direct data to certain drives/pools allows for so many configuration options. What I am thinking for my own situation is this (by the way, I have never lost data with StableBit since its release):

Pool A: 2 x 4TB with 2x duplication, as per my current setup. This will be for recently created or modified files.

Pool B is the archive pool, no duplication: 4 x 6TB + 2 x 4TB = 29.2TB usable storage.

Pool C is the parity pool, 2x duplication, 2 x 12TB disks or more. The parity pool has to have the largest individual disk size to protect the entire pool, or smaller disks for individual folder protection. The parity could be calculated from the archive pool's individual disks, or from individual folders.

Now this is where things get smart. All new and frequently accessed files are stored on Pool A and duplicated against failure. Simple file and folder created/modified dates can be used to track a file's readiness for archiving. Once a file is ready, StableBit can archive it on a schedule out of hours, or based on user settings, the number of files, the amount of free storage on Pool A, etc. The options are endless.
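As a rough illustration of that selection step (not anything DrivePool does today), the logic could be as simple as the sketch below; the pool paths and the 180-day threshold are made-up values, and a real implementation would live inside DrivePool rather than an external script.

```python
import shutil
import time
from pathlib import Path

POOL_A = Path(r"D:\PoolA")   # hypothetical mount point of the active pool
POOL_B = Path(r"E:\PoolB")   # hypothetical mount point of the archive pool
AGE_DAYS = 180               # example threshold for "ready for archiving"

cutoff = time.time() - AGE_DAYS * 86400

for f in POOL_A.rglob("*"):
    if not f.is_file():
        continue
    stat = f.stat()
    # A file is a candidate only if it has not been created or modified recently.
    if max(stat.st_mtime, stat.st_ctime) < cutoff:
        dest = POOL_B / f.relative_to(POOL_A)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), str(dest))   # move the aged file to the archive pool
```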

The benefit of StableBit doing this over products already out there is its already great user interface.

- Simple drive removal and addition from any pool.
- Simple failed drive replacement.
- Simple rules and options for file placements.
- The parity pool could hold a duplicated single large parity file covering all archive disks, or possibly just parity for some folders on the archive drives.
- Less user interaction required as Stablebit does the work, set and forget, notify for problems.
- Archive drive addition will increase capacity by the size of that added drive, no loss of capacity due to mirroring.
- Pool B capacity would be 32.7TB with a mirror of 12TB drives vs 65.4TB with archive + parity (e.g. six 12TB drives: mirrored, only three hold unique data; all six as data drives with parity on separate disks gives roughly double the usable space).

As most storage experts will tell you, drives over 8TB-12TB should really be in a RAID 6 array with a hot spare, allowing for multiple failures at a time; most will state that the larger the parity rebuild, the more likely a second failure will take place during that window. At what point does mirroring become a risk? In my mind we could already be there at 12-14TB. I know I do not want to use disks larger than 12TB without having at least 3 copies. I cannot afford to have 3 copies on large disks, nor can I have endless small disks, as I do not want the heat/power usage and do not have the room.
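To put a very rough number on that rebuild-window risk, here is a back-of-the-envelope sketch; the 3% annualized failure rate, 30-hour rebuild, and drive count are all assumptions, and it ignores unrecoverable read errors and correlated failures, which often matter more in practice.

```python
# Rough estimate only, not a real reliability model.
AFR = 0.03            # assumed annualized failure rate per drive
rebuild_hours = 30    # e.g. a ~14TB drive rebuilt at ~130 MB/s sustained
drives_left = 5       # surviving drives that must all stay healthy

p_one = 1 - (1 - AFR) ** (rebuild_hours / (24 * 365))   # one drive failing in the window
p_any = 1 - (1 - p_one) ** drives_left                  # at least one survivor failing

print(f"~{p_any * 100:.3f}% chance of a second failure during the rebuild")
```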

I know there are other options out there: Synology do something on their NAS boxes, and there are FlexRAID, SnapRAID, and Unraid. But none would have the ease of use that StableBit could create.

Thoughts anyone?

7 answers to this question


I'd like to have real-time parity stored on a dedicated parity disk. SnapRAID exists, but that doesn't do real-time parity, and if a disk goes you lose availability of those files until you can get a new disk and rebuild from parity. DrivePool with built-in parity beats RAID, as if a disk goes along with the parity one, you have still only lost the files on the original disk. While the rebuild goes on, you still have availability of the pool and of the files on the faulty disk.


I am not sure real-time parity in an archive/parity pool setup would be an option without killing performance or having a custom ASIC to do the calculations on the fly, like most RAID controllers do. Hence having an active pool for changed files and an archive pool/parity pool for older, non-changing files.

The difficult part would be managing an archive file being modified: somehow saving the new file/data to the active pool while keeping the archive pool and parity data from needing to change. You would need a way of presenting the active file to users instead of the older archive copy, until the parity data can be recalculated with the old archive file removed.
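A minimal sketch of the kind of lookup that would be needed, assuming two hypothetical pool mount points: the active copy, if one exists, always wins, and the stale archive copy only disappears at the next parity recalculation.

```python
from pathlib import Path

ACTIVE = Path(r"D:\PoolA")    # hypothetical active pool mount
ARCHIVE = Path(r"E:\PoolB")   # hypothetical archive pool mount

def resolve(relative: str) -> Path | None:
    """Return the copy of a pooled file that should be shown to the user.

    A modified archive file is rewritten into the active pool, so the active
    copy (if present) takes precedence; the stale archive copy stays untouched
    until parity is recalculated and the old file is removed.
    """
    active_copy = ACTIVE / relative
    if active_copy.exists():
        return active_copy
    archive_copy = ARCHIVE / relative
    return archive_copy if archive_copy.exists() else None
```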

The active pool effectively is your real-time protection for your active data, as it can be duplicated 2, 3, or more times according to the pool's settings.


I agree with @browned. Recently, I was extremely close to moving my server to an alternative Linux-based OS that supports drive pooling/merging and parity natively. The only reason I ultimately decided against it (sticking with a Windows OS and Covecube products) was completely unrelated to my current storage needs.

I would buy a new product or extension to my existing Drivepool license to have this functionality!

IMO, at the enthusiast level, without lots of money for extra drives or homelab-level hardware, being a datahoarder gets truly complicated as you fill up the machine's chassis.  At 8 spinners in my full-size tower, I only really have room for 1 more without modding the chassis or buying somewhat expensive "patches" to the root problem.  I view the root problem as inefficient fault-tolerant data archival.  There is no way I would ever run a storage solution without a level of fault tolerance.  The only things on my pool that aren't duplicated are items that are easily accessible online for re-download (i.e. OS ISO images, digitally distributed games, etc.).  Everything else is duplicated at least at a 1:1 level ... and the truly important stuff at 1:2+.  I'm almost at the end of my rope in terms of physical capacity, and something, possibly drastic, has to change in my data retention strategy.  What @browned spoke to is where I find myself going ... and currently, this means cobbling together a less-than-ideal patchwork of software.

Currently, some of this can be done using the SSD Optimizer plug-in (though I have never used it), even if the drives aren't SSDs.  Though without parity as the fault-tolerant protection, doing this with HDDs and duplication as the protection plan is pretty silly.  DrivePool's balancer is pretty great!  Kicking in when it needs to ... minimizing intrusion into other, possibly more important tasks ... I've never had any issues!  Similarly, Scanner's influence on what to do with data when a drive is being wonky has saved me at least 3 times that I can remember!  I'm not really wanting to give those up, especially as Scanner's functionality doesn't seem to have an equivalent in the Linux world!  Incorporating a parity strategy as an alternative to a duplication strategy seems like it would fit right in!

I could go into details on my personal setup and options I was toying with, but I don't want to dilute the message here and take the conversation off on a tangent.

TL;DR: Parity please, and I will pay for it! ;)

-JesterEE

Well, while it's not officially supported, a lot of users are using SnapRAID in conjunction with StableBit DrivePool to get parity and pooling.

However, we don't currently have any plans to add parity support to StableBit DrivePool, though it has been requested in the past.


@Christopher (Drashna) I understand StableBit's stance on this; you have a great product, and developing parity would be time-consuming and costly. Unfortunately, the parity options out there are just not user friendly and lack the possibility of integrating into DrivePool properly.

This is something I think about every time I need to add drives to my server; the cost of mirroring data with current large-capacity drives is getting a bit much. I cannot see my data shrinking, but perhaps my only option is less data.


I have been thinking about this more over the last month or so, having looked seriously at FreeNAS (new hardware, expensive) and Unraid (fewer new parts, but performance issues when parity scanning, frequent hardware freezes for me, and not virtual).

Here is what I have come up with.

Drive Pool A - New data landing zone
Balancer = SSD Optimiser
2 x Duplication
2 x SSD as write cache
2 x HDD as Archive (maybe if SSD space is not enough)

Drive Pool B - Long-term data
Balancer = Prevent Drive Overfill or None; do not want data moving often.
No Duplication
SnapRAID enabled.
6 x HDD data drives

SnapRAID Parity (not a DrivePool disk)
External to the drive pool, for SnapRAID parity data only
2 x HDD parity drives (example snapraid.conf below)

Drive Pool C - The drive presented to clients.
Balancer = ??? custom/new
No Duplication
- Includes Pool A as the initial write dump
- Includes Pool B, with data moved here from Pool A after x days/months/years or when drive space on Pool A runs low
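For reference, a SnapRAID config matching that layout might look roughly like this; the drive letters and file locations are placeholders, and the two parity files are what give the six data drives dual-parity protection.

```
# Hypothetical snapraid.conf for the layout above (drive letters are placeholders)
parity P:\snapraid.parity
2-parity Q:\snapraid.2-parity

# Keep copies of the content file on several disks
content C:\snapraid\snapraid.content
content E:\snapraid.content
content F:\snapraid.content

# The six Pool B data drives
data d1 E:\
data d2 F:\
data d3 G:\
data d4 H:\
data d5 I:\
data d6 J:\

# Don't waste parity on Windows clutter
exclude Thumbs.db
exclude \$RECYCLE.BIN
exclude \System Volume Information
```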

The balancer on Drive Pool C could be aware of SnapRAID, even running the parity scans after files are moved, and maybe helping with SnapRAID config, drive replacements, and rebuilds?
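As an illustration of that last point, the step a SnapRAID-aware balancer (or just a scheduled task) would run after each archiving pass could be as simple as shelling out to SnapRAID; the executable path below is a placeholder.

```python
import subprocess

SNAPRAID = r"C:\Tools\snapraid\snapraid.exe"  # placeholder install path

def update_parity() -> None:
    # Bring parity up to date with the files just moved into Pool B.
    subprocess.run([SNAPRAID, "sync"], check=True)
    # Optionally spot-check a small percentage of the array for silent errors.
    subprocess.run([SNAPRAID, "scrub", "-p", "5"], check=True)

if __name__ == "__main__":
    update_parity()
```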

So the questions are:
1. What are the possibilities of a balancer that would work with SnapRAID?
2. Would the above configuration work?
3. Would multiple Pool A files be able to be dumped to different disks on Pool B to speed up archiving?
4. Would the Prevent Drive Overfill balancer move files between disks on Pool B, causing parity issues?


The pros of this potential setup:
- Write speed, due to the SSD landing zone.
- Lower cost for disk space.
- Integration allows more config options for DrivePool and SnapRAID.
- Less work for StableBit to enable parity and enhance their product.
- Disks are still NTFS and can be put into any system and the data read.


The cons of this setup:
- Difficult initial setup.
- Single-disk performance on data Pool B.
- Parity calcs always slow the data pool down, especially if they take a day or so.
- Drive restores could take longer.
- During a disk failure/restore, archiving is disabled.
- During a disk failure on Pool B, files from the missing disk are not available.

19 hours ago, browned said:

1. What are the possibilities of a balancer that would work with SnapRAID?

It's possible.  We do have a driver plugin framework, but you'd basically have to implement that yourself.

19 hours ago, browned said:

2. Would the above configuration work?

From the looks of it, yeah, it should. 

19 hours ago, browned said:

3. Would multiple Pool A files be able to be dumped to different disks on Pool B to speed up archiving?

Possible, but I don't think that StableBit DrivePool does that.  An external program could, though. 

19 hours ago, browned said:

4. Would the Prevent Drive Overfill balancer move files between disks on Pool B, causing parity issues?

No, it shouldn't, in theory. However, IIRC, if too much data is moved, it can cause issues with SnapRAID's parity. 

However, I think the "Ordered File Placement" balancer may make more sense here, as it will fill up one (or more) disks at a time.
