
Recommendations for SAS internal storage controller and configuration


TAdams

Question

Hello,

I want to add hot-swap capability and get my storage devices off of my current motherboard storage controller. I have done some searching and come across a few controllers, but having little experience with these cards, I thought I could learn from other people's experiences. The two cards in my shopping cart are the LSI SAS 9211-8i and the LSI SAS 9207-8i, at roughly the same price. I don't mind spending more if need be, and RAID may be a good option in the future, but at this time I have no plans for a RAID array. I have also seen that some of these cards get flashed with different firmware, but I am not clear on what the advantage(s) are or whether that would apply to my intended usage.

My current system is as follows:

  • Case: Silverstone Technology CS380B
  • PSU: PC Power & Cooling Silencer Mk II 850 Watt
  • Windows Server 2016 Standard
  • Intel Xeon W3670
  • 24 GB quad-channel memory
  • Gigabyte G1 Sniper (using a PCIe Intel Gigabit NIC)
  • 4X Seagate 8 TB Barracudas
  • 2X Corsair GT 90 GB SSD (OS)
  • 1X Mushkin Reactor 1 TB SSD (cache)
  • 1X Intel 250 GB SSD (cache)
  • 2X Seagate 2 TB Barracudas (old system being phased out)
  • 2X WD Black 2 TB (old system being phased out)

Future plans include four more Seagate 8 TB Barracudas (or similar) and two 500 GB SSDs to replace the two cache drives (one of which belongs in my main system). My intended use is document mirroring (for redundancy), video, general storage, a large collection of photographs, CAD data, system backups, and movies.

If I have folder duplication set to 3, would three cache drives be required for on-the-fly duplication? Between the motherboard's storage controller and the SAS controller, which would allow the SSDs to perform more efficiently? I would really appreciate your input, thank you!

Regards,
Tom

Pardon, I meant to post this in the Hardware section!


11 answers to this question

2 hours ago, TAdams said:

If I have folder duplication set to 3, would three cache drives be required for on-the-fly duplication? Between the motherboard's storage controller and the SAS controller, which would allow the SSDs to perform more efficiently? I would really appreciate your input, thank you!

Regards,
Tom

 

If you want 3x pool duplication, you definitely need 3 cache volumes.  They don't necessarily need to be separate physical drives, but if you partition one SSD into 3 logical volumes and use those for the SSD cache, your read/write speeds are going to suffer (not to mention space).  If you can afford to, keep them separate physical drives.

I just recently (~2 months ago) purchased and installed an LSI 9201-16e card for a new array of 9x 8 TB drives. The cables loop back in through a motherboard slot cover at this point, and can be used with external equipment later if I want to transition to it, so I get good expandability options. Most people flash the LSIs to the P20 firmware - make sure you get a more recent firmware version, like 20.00.07.00, if you go with an LSI 92xx card, as it fixes some older CRC error issues. Flashing the card into "IT mode" (a function of the firmware image you flash) is desirable for HBA use. The 92xx are great cards, and I'm quite happy with mine for the price.
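
For reference, a typical P20 IT-mode flash from a UEFI (or DOS) shell looks roughly like the sketch below. This is only an outline: the 2118it.bin / mptsas2.rom filenames are the 9211-8i images and will differ for other 92xx models, so download the exact P20.00.07.00 package for your card from Broadcom/Avago first.

```
# List the installed LSI controllers; note the adapter index and SAS address
sas2flash.efi -listall

# Erase the existing firmware (do NOT reboot between this step and the next)
sas2flash.efi -o -e 6

# Flash the P20 IT firmware and, optionally, the boot BIOS
# (2118it.bin / mptsas2.rom are the 9211-8i images; other cards use different files)
sas2flash.efi -o -f 2118it.bin -b mptsas2.rom

# If the SAS address was cleared, restore it from the sticker on the card
sas2flash.efi -o -sasadd 500605bxxxxxxxxx

# Confirm the new firmware/BIOS versions
sas2flash.efi -listall
```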

I opted not to go the hardware RAID route for a bunch of different reasons, which is why the HBA controller worked so well for me. I can do software RAID 0 or 1 stripes/mirrors in the OS if needed, and do nightly parity calculations with SnapRAID, stored on some older separate drives. The 9207-8i/e don't do RAID either, but honestly I think hardware RAID is on its way out due to inherent failure rates when trying to rebuild a replaced drive in an array of significant size.
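
If it helps to picture the SnapRAID side, a minimal setup is just a small config file plus a scheduled sync. The paths below are placeholders for my kind of layout, not anything specific to your hardware.

```
# snapraid.conf (sketch) - parity lives on an older spare drive, data disks listed individually
parity  /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/

# Nightly job (cron, or Task Scheduler on Windows):
snapraid sync          # bring parity up to date with the current data
snapraid scrub -p 5    # optionally verify a few percent of the array each run
```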

I don't think you'll saturate your Marvell 88SE9182 controller with just 3 SATA drives for the cache. But if you acquire one of the LSI cards and connect the drives to it instead of the motherboard, you definitely won't have issues with throughput. Here's a bit more info on your motherboard's controller, which has a two-lane PCIe interface:

Quote

The 88SE9182 device offers the same Dual SATA interface as the 88SE9170, but has a two-lane PCIe interface for additional host bandwidth.

While the Marvell controller is capable, I certainly wouldn't run an entire pool plus the SSD cache drives on it. You're definitely going to want an add-in card.

3 hours ago, Jaga said:

 

If you want 3x pool duplication, you definitely need 3 cache volumes.  They don't necessarily need to be separate physical drives, but if you partition one SSD into 3 logical volumes and use those for the SSD cache, your read/write speeds are going to suffer (not to mention space).  If you can afford to, keep them separate physical drives.

I would hope, and am pretty sure, that this won't work. DP checks, in the case of duplication, whether files are placed on different *physical* devices.

@OP: I recently bought what I think is an LSI 9220-8i, flashed to IT mode (the P20...07 version). The label was Dell PERC H310; it should be the same as the IBM M1015. I am not sure, but as far as I understand it, the 9xxx number also relates to the kind of BIOS/firmware it is flashed with. In any case, it works like a charm. One thing though: these controllers run HOT, and it is advisable to mount a fan on top of them (just use a 40 mm fan and mount it with screws running into the upstanding bars of the heatsink).


I've set a child pool as the SSD target (as a test a day ago) with 2 volumes on the same physical disk inside it. Drivepool took that just fine. I'm not entirely certain about full pool duplication and whether or not it requires different physical disks for each cache target. I don't have a separate pool setup to test with. Perhaps I'll go give that a try with a bunch of volumes all on the same disk.

 

Update: It turns out that the native behavior of Drivepool, when adding raw volumes to a pool, is to allow only one of them as the SSD cache target. However, you can trick it by making two separate child pools, each with a single volume from the same physical SSD, adding those to the pool you want to cache, and setting both as SSD cache targets. Of course this bypasses the desired hardware redundancy of two SSDs, but if you have a large SSD and don't care as much about that (or are short on funds for a second), this method works fine. The child pools are each identified as separate disks and become valid SSD cache targets.

[Screenshot: the "TinyPool" test pool layout described below]

My new test pool "TinyPool", shown above, has two 200 GB volumes (both from the same spinner drive) set for 2x pool duplication. It also has two child pools, each of which has a single volume from the same SSD. I set the cache targets in TinyPool to both child pools, and told it not to fill past 50% (hence the red arrows halfway across both 100 GB child pools). Using only two drives, I have full 2x pool duplication and full SSD cache functionality. There's no hardware redundancy in this case for either the pool or the SSD cache, but it demonstrates that it's doable.

If you're short on funds or have extra SSD space and want to reuse the same drive for the SSD cache, it is possible.


What do you mean by a child pool? Are you using hierarchical pools here?

I wonder whether, when you write files to this pool E, DP complains about not being able to comply with the duplication settings _or_ whether one duplicate ends up on the spinner and the other remains on the SSD.

At one time I had a pool of HDDs partitioned into 2 TB volumes (for WHS2011 Server Backup purposes) and some duplicates ended up on the same physical HDD. This was acknowledged to be a bug, or at least undesirable behaviour, but Drashna/Alex were not able to reproduce it.

12 hours ago, Jaga said:

 

... I think hardware RAID is on its way out due to inherent failure rates when trying to rebuild a replaced drive in an array of significant size.

Parity-based RAID, yes (I dislike parity in general, except parchive/par2 for cold backups), at least for HDDs (SSDs/NVMe are taking a liking to it, since the typical URE danger is practically nonexistent there); striped mirrors are still fine with a proper controller and spares available. Motherboard RAID is just software RAID pretending to be hardware RAID, and a bad lock-in to be tied to. Most cards are not a "HW" option either if they are entirely software based, with no BBU or writeable flash cache and no RAM. They are, however, excellent HBAs when flashed to the version you suggest; I run a few 9211-based cards myself.

While HW RAID still has a place in enterprise, I would not really consider real HW RAID anymore for private projects, mostly because the price isn't worth it compared to the power of software these days, when UPS- and ECC-protected. HW RAID can support blind swapping, which is very practical in big enterprise, but SAS HBAs still give you hot swap, which is fine for most.

While I love the individual and mobile NTFS JBOD aspect of DP, I often ponder the complicated OS layer it deals with and the potential driver/kernel/memory problems that may occur. I'm always careful when using the UI not to challenge it too much; I've experienced some weird filesystem race conditions and the like since I started using it in 2014, but most were fixed along the way. I sometimes wonder if I would be better off growing a simple MD RAID in pairs instead, since I run 2x duplication anyway, and depending on the number of mirrors I would be able to actually lose a bunch of drives simultaneously. I wouldn't even want LVM, so it would just be simple ext3/4 on top, with gparted to extend when needed, on a clean Debian install with automated smartmontools monitoring in the background.
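
A minimal sketch of that MD idea, assuming a pair of blank disks at /dev/sdb and /dev/sdc (device names and mount points are placeholders):

```
# Mirror the first pair of drives
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Plain ext4 straight on the md device, no LVM in between
mkfs.ext4 /dev/md0
mount /dev/md0 /srv/storage

# Persist the array definition and turn on background SMART monitoring
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
systemctl enable --now smartmontools   # smartd, configured via /etc/smartd.conf

# Additional pairs can become /dev/md1, /dev/md2, ... mounted alongside
```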

E.g. a backup server in a workspace running this suddenly dies, and Mr. X needs to restore his workstation. Getting the array back up should be as simple as moving the drives to any computer, booting off a live CD, and running mdadm --assemble --scan to copy out the needed data. I can't really think of any negatives outside of having to run a GNU/Linux installation for it, as well as a much higher risk of making accidental catastrophic administration mistakes (human error is the #1 cause of complete array failures).
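
The recovery path really is about that short. From any live environment with mdadm available, roughly (the auto-assembled md device number will vary; check /proc/mdstat):

```
# Detect and assemble every md array found on the attached drives
mdadm --assemble --scan
cat /proc/mdstat

# Mount read-only and copy the data off
mount -o ro /dev/md127 /mnt      # device number may differ on your system
rsync -a /mnt/ /path/to/restore/target/
```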

Maybe I should have already focused on this direction, but I have been lazy (mildly tired of IT) and blinded by the excellent simplicity that Stablebit offers, not to mention their support. When I already have the proficiency to do this, and have for many years, it's hard to view it as someone who doesn't and just wants a simple solution - I can't quite work out whether that is the real DP market, or whether it makes sense that I use it as well. Then I start thinking of the developers of DP: why did they bother making this when they could have done the same thing? Is it just to fill the demand after the Drive Extender exit? Would they have done it today with Storage Spaces on the rise (even if it is heavily PowerShell dependent), if staying on Windows is the reason? I'm also curious about the business direction Stablebit ultimately wants to move in: just storage-hoarding consumers, or enterprise as well? And whether they're supporting any big enterprises already.

Time to feed the kids before bed here in Norway... Sorry for the wall of text (and perhaps the bad grammar); I've had storage on my mind a lot lately, as I'm itching to do another project soon. Feel free to ignore at will.

4 hours ago, Umfriend said:

What do you mean by a child pool? Are you using hierarchical pools here?

I wonder whether, when you write files to this pool E, DP complains about not being able to comply with the duplication settings _or_ whether one duplicate ends up on the spinner and the other remains on the SSD.

At one time I had a pool of HDDs partitioned into 2 TB volumes (for WHS2011 Server Backup purposes) and some duplicates ended up on the same physical HDD. This was acknowledged to be a bug, or at least undesirable behaviour, but Drashna/Alex were not able to reproduce it.

Yes, hierarchical pools can be the SSD cache target too. Make a simple one-volume pool; the volume can be from any disk. Add it to the main pool, then make it the SSD cache target. When it's a sub (child) pool, Drivepool no longer cares about the hardware considerations. It may not truly be how it was designed to work, but the flexibility it offers makes a lot more things possible in the big picture. You can make a two-volume SSD into two separate child pools, each an SSD target. Or three or more volumes on the same drive, each a separate child pool and cache target - the possibilities are endless. Just remember that it foils the hardware-redundancy reason for using multiple physical drives.

Whether it's an actual bug or not I don't know, but I'd prefer to keep the functionality if given the choice. Most people would use raw drives for pool membership, which this wouldn't impact. More advanced users could move to the volume and child-pool level to get better flexibility and a wider range of possibilities. It does have implications for the evacuate plugin for Scanner (and possibly others), which is why I wouldn't normally recommend it.

 

1 hour ago, Thronic said:

...lots of stuff...

I admit I've also wondered how many enterprise-level customers Stablebit has for Drivepool and Scanner - i.e. installations with more than 50 seats, a serious storage solution (1 PB+), or multiple server farms with large storage needs.

You're right about hardware RAID mirrors - they're still useful, though I do try to steer away from hardware-based implementations since you never quite know if the storage will be forever tied to them. The portability of a Drivepool member drive is fantastic, and one of the things I love about it. I don't have to question whether or not a drive will be visible in another machine; I can just trust that it will. And all the typical file system repair tools always work, guaranteed.

Looking forward to hearing what your next project may be - I'm curious.

 

14 hours ago, TAdams said:

Regards,
Tom

Let us know your thoughts on our feedback, Tom - we've gotten a little sidetracked (as often happens - sorry!) with the range of possibilities when someone asks "how do I do this" with Stablebit products. It's a testament, I think, to their good design when you can come up with different solutions to the same problem. ;)

14 hours ago, TAdams said:

The two cards in my shopping cart are the LSI SAS 9211-8i and the LSI SAS 9207-8i, at roughly the same price. I don't mind spending more if need be, and RAID may be a good option in the future, but at this time I have no plans for a RAID array.

For these two cards, you want the 9207-8i. It's PCIe 3.0, whereas the 9211-8i is PCIe 2.0. That means more bandwidth. Not that it actually matters, since the 2.0 version can handle 15-20 disks at max speed before bottlenecking.
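
Rough numbers behind that, assuming something like 200 MB/s sustained per spinning disk (my assumption, not a spec):

\[
\text{PCIe 2.0 x8} \approx 8 \times 500\ \text{MB/s} = 4\ \text{GB/s}
\quad\Rightarrow\quad
\frac{4000\ \text{MB/s}}{200\ \text{MB/s per disk}} \approx 20\ \text{disks}
\]

PCIe 3.0 x8 roughly doubles that, which is why it only really starts to matter with very large drive counts or SSDs behind the HBA.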

11 hours ago, Umfriend said:

One thing though: these controllers run HOT, and it is advisable to mount a fan on top of them (just use a 40 mm fan and mount it with screws running into the upstanding bars of the heatsink).

They're meant to be run in a server chassis, where the fan setup in the case takes care of that.   So if you're not doing that, then yes, you want to add some cooling, too. 

14 hours ago, Jaga said:

Those articles are wrong. I can dig up the links, but there are a number of flawed assumptions that these articles are making that undermine their entire argument.

14 hours ago, Jaga said:

I just recently (~2 months ago) purchased and installed an LSI 9201-16e card for a new array of 9x 8 TB drives

You can go that route too. But there are also SAS expanders (like the Intel RES2SV240) which connect to one of the SAS ports and let you hook up 20 more drives.

This is basically what I do; I have about 30 drives hooked up to an M1115 (flashed with the 9211-8i firmware).

 

45 minutes ago, Jaga said:

You're right about hardware RAID mirrors - they're still useful, though I do try to steer away from hardware-based implementations since you never quite know if the storage will be forever tied to them. The portability of a Drivepool member drive is fantastic, and one of the things I love about it. I don't have to question whether or not a drive will be visible in another machine; I can just trust that it will. And all the typical file system repair tools always work, guaranteed.

That's why my personal recommendation is to use hardware solutions for the system drive and for cache/temp drives. If you have to move to new hardware, you're most likely going to have to reinstall anyway (which is good practice), so nothing is really lost.

 

1 hour ago, Christopher (Drashna) said:

Those articles are wrong. I can dig up the links, but there are a number of flawed assumptions that these articles are making that undermine their entire argument.

That's interesting, as the premise/logic/numbers do seem to add up. If you ever have spare time and think about it, shoot me the links in a PM. I've never had a failure rebuilding an array member, but heretofore my RAID 5/6 arrays have all been under 20 TB. I didn't want to risk hammering an array of nine-plus large drives every time the RAID got out of sync (which I -have- had happen in the past, many many times).

1 hour ago, Christopher (Drashna) said:

For these two cards, you want the 9207-8i. It's PCIe 3.0, whereas the 9211-8i is PCIe 2.0. That means more bandwidth. Not that it actually matters, since the 2.0 version can handle 15-20 disks at max speed before bottlenecking.

They're meant to be run in a server chassis, where the fan setup in the case takes care of that. So if you're not doing that, then yes, you want to add some cooling, too.

Those articles are wrong. I can dig up the links, but there are a number of flawed assumptions that these articles are making that undermine their entire argument.

You can go that route too. But there are also SAS expanders (like the Intel RES2SV240) which connect to one of the SAS ports and let you hook up 20 more drives.

This is basically what I do; I have about 30 drives hooked up to an M1115 (flashed with the 9211-8i firmware).

 

I can attest that the above works. A year ago I purchased a Norco RPC-4216, which is a 16-bay case, along with the LSI SAS 9207-8i card. I have it hooked up to the Intel RES2SV240 expander card, and all 16 hard drives run at full speed with this setup. It is a little slow to boot (the card takes a few minutes to initialize), but once the machine boots up it is a screamer.

13 hours ago, Jaga said:

Yes, hierarchical pools can be the SSD cache target too. Make a simple one-volume pool; the volume can be from any disk. Add it to the main pool, then make it the SSD cache target. When it's a sub (child) pool, Drivepool no longer cares about the hardware considerations. It may not truly be how it was designed to work, but the flexibility it offers makes a lot more things possible in the big picture. You can make a two-volume SSD into two separate child pools, each an SSD target. Or three or more volumes on the same drive, each a separate child pool and cache target - the possibilities are endless. Just remember that it foils the hardware-redundancy reason for using multiple physical drives.

Whether it's an actual bug or not I don't know, but I'd prefer to keep the functionality if given the choice. Most people would use raw drives for pool membership, which this wouldn't impact. More advanced users could move to the volume and child-pool level to get better flexibility and a wider range of possibilities. It does have implications for the evacuate plugin for Scanner (and possibly others), which is why I wouldn't normally recommend it.

Ah yes, that figures. And this may not be a bug but a feature indeed.

On 9/6/2018 at 9:56 PM, Jaga said:

That's interesting, as the premise/logic/numbers do seem to add up. If you ever have spare time and think about it, shoot me the links in a PM. I've never had a failure rebuilding an array member, but heretofore my RAID 5/6 arrays have all been under 20 TB. I didn't want to risk hammering an array of nine-plus large drives every time the RAID got out of sync (which I -have- had happen in the past, many many times).

It's just exaggerated. The average URE rates of 10^14/10^15 are taken literally in those articles, while in reality most drives can survive a LOT longer. It's also implied that a URE will kill a resilver/rebuild without exception. That's only partly true, as e.g. some hardware controllers and older software have very little tolerance for it. Modern, updated RAID implementations can continue a rebuild, with that particular area reported as a reallocated area to the upper filesystem IIRC, and you'll likely just get a pre-fail SMART attribute status - the same as if you had hit it on a single drive, which would act slower and hang on that area in much the same manner as a rebuild will.
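
To show where those articles get their scary numbers, taking the 10^14 spec at face value and assuming independent bit errors (an idealized model, not a claim about real drives): a full read of an 8 TB drive is about \(6.4 \times 10^{13}\) bits, so

\[
P(\text{at least one URE}) \approx 1 - \left(1 - 10^{-14}\right)^{6.4 \times 10^{13}} \approx 1 - e^{-0.64} \approx 47\%.
\]

At 10^15 the same full read only expects about 0.064 UREs (roughly a 6% chance), and as noted above, real drives routinely beat the spec-sheet figure anyway.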

I'd still take striped mirrors for maximum performance and reliability, and parity only where maximum storage versus cost is important, albeit in small arrays striped together.
