SSD Pool


Hakuren

Question

Hi guys. First post here.

 

I wonder if going with a full SSD pool is a good idea. While it makes sense with HDDs, with SSDs it seems more beneficial to RAID0 a bunch of them and then copy everything to an HDD-only pool. Or abandon the pool and go the big-array way (like a 16-drive RAID60).

 

I would love to [sllooowwwwllllyyy] rid myself of the HDD ballast. Noisy, clunky, and reliability is iffy with some and better with others. The problem is size and price. I can order 8x 4TB HDDs in an instant; you can't do that with SSDs unless you have an Arab sheikh as an uncle :D.

 

Right now I've gone through some modifications, as my Adaptec 71605 didn't initialize at all on my X99 board. After a long wait I finally changed it to an 8405 + Intel RES3FV288 expander (Intel only by name, since it's a re-branded Adaptec 82885T for the equivalent of ~50 USD less, which is nice). For now I have a mixed setup: 2 arrays of HGST 3TBs and 5TBs, plus 4x 4TB HGST drives configured into a pool with triple duplication. The 3 & 5 TB part will be removed in the near future and moved to the old X58 system.

 

How does a fully SSD pool affect performance and the life of the SSDs? Of course the advantage of a pool is obvious: no need to buy drives wholesale as in the case of a RAID array. But will frequent duplication wear the NAND much faster than, for example, RAID10 or 60? I know that SSDs take a massive write hit when RAIDed into any redundant kind of array (10/5/6/50/60). Tested that first hand, and I'm really interested in the input of folks more knowledgeable than myself.

 

The only gripe I have with StableBit DrivePool is the Scanner part. I reinstalled it again and with some tweaks finally got SMART for the HDDs, but not for the SSDs or the NVMe 750s (for now only smartmontools returns any data). Still, it takes days to scan anything (HDDs). For now I've disabled scanning of the arrays because it's pointless: I would have to prevent any operations on both arrays for Scanner to work, which is simply impossible. I prefer to configure the controller to scrub the drives on its own, and I'm beginning to think I should abandon scanning on the newly created pool as well.

The controller reports SMART OK without any dark magic, and when I order verification on a drive it just resumes where it ended the previous day. StableBit Scanner instead restarts and grinds the surface for 12-15 hours, and can't finish before the workstation is powered down. At the moment it's slowly crawling past 64.8% of the first drive; it will never finish today. This is probably my last attempt at running Pool & Scanner together. Will see in the next 5 hours or so if at least 1 drive is done.


2 answers to this question


That's a lot of info here.

As for SSDs vs HDDs... yeah, from a cost perspective, it's not worth going full SSD. It also depends on how you're accessing the data: a gigabit network caps out at 125 MB/s (not counting overhead), and most HDDs can hit that fairly reliably...

As for SSDs and StableBit DrivePool, that depends on a number of factors. However, for the most part, the data shouldn't be moved around too much, and since writes are what affect an SSD's lifespan... it should be fine. With x2 duplication, a file is simply written once to each of two disks, which is the same up-front write load as a RAID1/10, without the block-level read-modify-write that parity RAID adds. Additionally, you could use the "Ordered File Placement" balancer plugin to fill up one drive at a time (or two at a time, with duplication), to minimize this and help keep the data "in place".

As for write performance on RAID... it's a block-based solution, so there are many more random writes to the drives. Additionally, most configurations don't support TRIM, so the drive is relying on active garbage collection at best (which isn't anywhere near as good as TRIM). And any redundant array writes to multiple disks at the same time and waits for all of them to complete, rather than writing every other block (or every Xth block) to a single disk. Even with a good controller, that causes a delay, and it will increase as you use the SSDs (again, TRIM).
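
If you want to double-check whether Windows is even issuing TRIM on a given box, here's a minimal sketch (it just wraps the built-in "fsutil" query via Python; note that a 0 only means Windows sends TRIM, a hardware RAID controller can still drop it on the floor):

import subprocess

# Query Windows' global delete-notify (TRIM) flag via the built-in fsutil tool.
# DisableDeleteNotify = 0 -> Windows issues TRIM; 1 -> it does not.
# A RAID controller can still discard TRIM commands even when this reports 0.
result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())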

 

 

As for StableBit Scanner, I'm not surprised that the NVMe drives don't work with it.... It's possible that the interface simply doesn't support SMART, but since smartmontools returns data, it sounds like it does.

You can test this with the DirectIO Test utility we have:

http://community.covecube.com/index.php?/topic/36-how-to-contribute-test-data/

Select the drive in question. See if it lights up the "SMART Attributes" option in either the WMI or Direct IO section. If it does, click on the ellipsis button and make sure it's reporting information there.

If not in either case, check the "Unsafe Direct IO" option, and double check. 
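
As a quick independent cross-check, you can also ask Windows' WMI layer directly whether it exposes SMART for a drive at all. A rough sketch, assuming the third-party Python "wmi" package (pip install wmi) and possibly an elevated prompt:

import wmi  # third-party package, assumed installed: pip install wmi

# List drives that expose SMART failure prediction through WMI (root\wmi).
# Drives behind RAID controllers or unusual NVMe drivers often won't appear,
# which roughly mirrors what the WMI path in the tools above can see.
c = wmi.WMI(namespace="root\\wmi")
for drive in c.MSStorageDriver_FailurePredictStatus():
    print(drive.InstanceName, "-> PredictFailure:", drive.PredictFailure)

If a drive shows up there, the WMI route should work; if not, the Direct IO (or Unsafe Direct IO) route is the one to chase.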

 

If you need the Unsafe option, you'll need to enable that in StableBit Scanner:

http://wiki.covecube.com/StableBit_Scanner_Advanced_Settings

You'll want to look for the "Direct IO" section, check the "Unsafe" option and reboot the system (or restart the StableBit Scanner Service).

 

If this doesn't work, check the "Specific Method" option and go through the list until you find methods that do work. 

If none work, let us know exactly what model you're using.

 

 

As for the slow scans in StableBit Scanner... I'm assuming that it's scanning one disk at a time, as this is the default behavior. Scanning SSDs this way is absolutely fine. However, if you have a large number of disks (especially if they're all on the same controller)... there is an advanced option.

I use this personally, as I'm running the SFF-8087 version of your hardware (an LSI card... well, an IBM M1015 in IR/RAID mode, plus an Intel expander).

 

1. Open StableBit Scanner, click the "Settings" button and select "Scanner settings".
2. On the main tab, check the "Show advanced settings and troubleshooting" option.
3. Open the "Throttling" tab and uncheck the "Do not interfere with other disks on the same controller" option. Hit "OK" to close the window.
4. Click the "Settings" button again and select "Advanced settings and troubleshooting".
5. Open the "Configuration Properties" tab, find the "Scanner" section, and look for "ScanMaximumConcurrent".
6. Set the value to "0" for "unlimited", or to a specific number (such as 8).
7. Click "OK" to close the window.
8. Restart the system, or restart the StableBit Scanner Service.

 

What this will do is allow StableBit Scanner to scan multiple drives concurrently. Setting it to 8 or the like will limit it to that many drives being scanned on the same controller at once; setting it to 0 will allow it to scan ALL drives at once.

It's awesome watching it scan all 17 drives on my LSI card at one time, each hitting 80-120 MB/s read speeds. Just awesome. (I have an image of this posted somewhere on the forums, not sure where though.)

 

 

 

And if you're still having issues scanning, try changing the throttling settings, such as decreasing the disk activity sensitivity.



Thanks. Implemented; will see tomorrow how it performs. 1 drive finished, which is a success in itself compared to the default settings. I really don't want to move past 4 HDDs now. I'm probably terminally addicted to flash and just can't stand spinning media anymore. LOL. Quite annoying that HAMR technology (announced like 50 times already) is still in the doldrums. With 2 drives of like 200TB each, I wouldn't even need a pool ;)

 

A word (or more) of clarification: I don't use any network storage. DP is not running on a server but on my primary workstation (CaseLabs TH10A, it holds a loooot of stuff). Never used a NAS, never will. Everything external is DAS cold storage, which as the name suggests is used only when I acquire a sufficient amount of new data to move something more substantial. BTW: DP works perfectly well with my 8-bay Fantec USB3 QB-XU8S3-6G enclosures (a clone of a clone of the MediaSonic Probox and many others). I was thinking about Thunderbolt, but with the whole certification circus and the often truly ridiculous pricing I decided against it.

 

As for SSDs, I'm really arriving at the conclusion that it's better to bunch like 16x 250 GB drives into a RAID60 than to toy with a pool (these ToughArmor 8x 2.5" enclosures are really cute and tiny and hold bags of drives; I already obtained one and will get quite a few more, plus another expander, if cash flow permits).

With regard to TRIM: well, it's not that catastrophic. I think its relevance is vastly over-hyped (except in critical applications). I experimented with SSDs connected in the old X58 system, first via the motherboard, then a 6805, then a 71605, as normal volumes and RAIDed. The same drives were moved around and I never took care of the TRIM thingy. After ~4 years of 12-15 hours' use a day, I dismantled the arrays and took all the drives in for a "service", and discovered that only 1 out of 4 showed a 3% drop in life expectancy (that one was used as the system boot drive for about 2 years), with the rest in tip-top shape. Because of SSDs' superior transfers, it's not the end of the world if writes are delayed a bit. I have 2 SSD arrays and both perform quite well: reads are blazing fast (talking RAID10), with a write hit of around a 30% penalty. My fingers are itching for more SSDs in my case, but I'm trying to resist and see what X-Point, a.k.a. Optane (what a stupid name), will bring to the table. The 750s are worth every [put your currency in here]; can't wait for 2016.

 

I've mentioned the 750 above; my only concern was operating temperature. With smartmontools I'm now calm. I was worried a bit about it getting too hot, but after a few runs of monitoring it never passed 35°C, so no problems there.
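
For the record, this is roughly how I pull the temperature out of smartmontools (just a sketch: "/dev/nvme0" is a placeholder for whatever "smartctl --scan" reports for the 750, and the -j JSON flag assumes a recent enough smartctl build):

import json
import subprocess

# Read NVMe health data via smartmontools; -j (JSON output) needs smartctl 7.0+.
# "/dev/nvme0" is a placeholder; use the device that "smartctl --scan" reports.
out = subprocess.run(
    ["smartctl", "-j", "-a", "/dev/nvme0"],
    capture_output=True, text=True, check=True,
).stdout
data = json.loads(out)
print("Drive temperature (C):", data["temperature"]["current"])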
