Going to DrivePool coming from RAID 0+1


propergol

Question

Hello,

I am thinking of converting my current home server setup to DrivePool.

My server is currently running Windows Server 2012 R2 and has the following hardware setup:

 

- 1 x 256 GB SSD on the motherboard's Intel Z77 SATA 3 port: for the system and some apps

 

- 4 x 4 TB WD Red on the motherboard's Intel Z77 SATA 2 ports in RAID 10 (which I believe is RAID 1+0, striping across mirrors): for backups coming from 3 home PCs and the server system itself, photos and videos coming from 4 Android devices, and the movie library

 

- 1 x 5 TB Toshiba on a discrete Marvell 9210 dual-port SATA card: for backups of some of the RAID 10 folders above

 

- 2 x 4 TB WD Green that are not used yet.

 

- 3 x Intel GbE I212-T1 NICs teamed via LACP for 3 Gbit/s aggregate, plugged into an 8-port Netgear GS108T switch that supports LACP and jumbo frames.

 

 

I am thinking of breaking up my RAID (after a backup, of course...) in order to use the 4 WD Reds (and the other disks too) with DrivePool.

 

If I am not mistaken, with DrivePool, if I enable at least x2 duplication, then I would have a kind of mirroring but would also be able to read from 2 disks at the same time. Please correct me if I am wrong.

So for reads it should be about the same speed as my current RAID 10, or even faster with x3 duplication, shouldn't it?

 

Now for writes, again if my understanding is correct, the speed should be about half of my current RAID's write speed.

 

But if I use an SSD with DrivePool's plugin, then I should get faster write speeds than my current RAID, since all data would be cached on the SSD before being written to the pool?

 

So if I choose x3 duplication + SSD caching, then everything (read, write, IO) would be faster than my current setup.
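
To make my assumptions concrete, here is a back-of-envelope sketch of what I expect (the per-disk speeds are guesses for illustration, not benchmarks):

```python
# Back-of-envelope model of my expectations (per-disk speeds are
# rough guesses for illustration, not benchmarks).
HDD_READ = 150   # MB/s, sequential read of one WD Red
HDD_WRITE = 140  # MB/s, sequential write of one WD Red

def pool_read(duplication):
    # With read striping, the pool can pull from every disk that
    # holds a copy, so the read ceiling scales with duplication.
    return HDD_READ * duplication

def pool_write(copy_target_speeds):
    # Duplicated writes go to all copies in parallel, so the pool
    # is gated by the slowest target.
    return min(copy_target_speeds)

print(pool_read(2))                 # ~300 MB/s, comparable to RAID 10 reads
print(pool_read(3))                 # ~450 MB/s with x3 duplication
print(pool_write([HDD_WRITE] * 3))  # ~140 MB/s, vs ~280 MB/s striped
print(pool_write([370]))            # with an SSD cache in front: ~370 MB/s
```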

 

Please let me know if I am correct, and also whether you think I am OK going from RAID to a DrivePool setup.

 

Thanks for reading this long post. :)


10 answers to this question


Well, first, I'm glad to hear that you're considering converting!

 

As for converting, you could add the RAID array and the other drives to the pool, "seed" the pool, and then use the "Disk Usage Limiter" balancer to clear the data off of the RAID array before breaking it up. It may take a bit longer this way, but it is more automated.

http://wiki.covecube.com/StableBit_DrivePool_Q4142489
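
If you're wondering what "seeding" actually involves: each disk added to the pool gets a hidden PoolPart.* folder at its root, and seeding is just moving your existing data into that folder and re-measuring. A rough sketch (the paths are hypothetical; the wiki article above has the authoritative steps):

```python
import shutil
from pathlib import Path

# Hypothetical layout: the existing data lives in D:\Shares, and D:
# has already been added to the pool (which creates the hidden
# PoolPart.* folder at the drive root).
source = Path("D:/Shares")
poolpart = next(Path("D:/").glob("PoolPart.*"))

# A move within the same NTFS volume only rewrites directory
# entries, so this is near-instant even for terabytes.
for item in source.iterdir():
    shutil.move(str(item), str(poolpart / item.name))

# Afterwards, re-measure the pool from the StableBit DrivePool UI.
```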

 

 

As for performance, I assume this is the onboard (ICH/RST) RAID controller. If that's the case, then you may not see much of a difference.

It is possible that the pool may be slower than the RAID array. However, we do implement a read striping feature designed to boost read performance, and the higher the duplication level, the better speeds you should see (as it can read from more sources).

 

 

As for writes, that depends. All the writes are done in parallel (so with x3 duplication, it's writing to all three disks), which means throughput is limited by the slowest disk. That may not be as fast as the striped array.

 

But if you use the SSD Optimizer balancer plugin, that will definitely help. The caveat is that you'd want the same number of SSDs as your duplication level. So if you set duplication for everything to x3, you'd want three SSDs (see above).

But this definitely does improve the write speed.

 

 

And have you taken a look at the manual?

http://stablebit.com/Support/DrivePool/2.X/Manual

 

http://stablebit.com/Support/DrivePool/2.X/Manual?Section=Performance%20Options



Thanks a lot for your help.

Yes, I have read most of the manual, but I guess I should experiment virtually inside a VM before jumping to a real-life scenario. :ph34r:

 

Regarding the number of SSDs (for x3 duplication): can't we cheat the plugin by creating 3 partitions on a single SSD?



You are very welcome.

 

And yeah, experimenting in a VM isn't a bad idea. :)

 

 

As for cheating... nope, StableBit DrivePool is smarter than that. We check for physical disks when determining where the different copies for duplication go (this is part of why we don't support dynamic disks as well; they add a lot of overhead). So if you have three partitions on one SSD, it will see that they're all part of one disk and... well, invalidate two of the partitions and look for other suitable destinations.

 

The reasoning for this (if it's not clear) is to ensure that if a single physical disk fails, you won't lose all copies of a file because they were on the same physical disk.
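
If you're curious, Windows exposes the same disk-to-partition mapping through WMI; a minimal sketch, assuming the third-party Python wmi package (pip install wmi):

```python
import wmi  # third-party package: pip install wmi (Windows only)

c = wmi.WMI()
# Walk physical disk -> partition -> drive letter. Partitions that
# map back to the same Win32_DiskDrive share one point of failure,
# which is exactly what the duplication check guards against.
for disk in c.Win32_DiskDrive():
    for part in disk.associators("Win32_DiskDriveToDiskPartition"):
        for vol in part.associators("Win32_LogicalDiskToPartition"):
            print(f"{vol.DeviceID} -> {disk.Model} ({disk.DeviceID})")
```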



About the SSDs for caching, what are the most important specs?

 

 

Let me know if this is correct, and/or if I am missing things:

 

- an MLC-based SSD, since caching may involve a lot of write/delete cycles and MLC is less prone to wear than TLC

 

- in my case, the worst case is 3 clients uploading their backups at the same time to the server running DrivePool. Since each client has a 1 Gbit Ethernet link: 125 MB/s x 3 = 375 MB/s, so each SSD should be capable of at least 375 MB/s sequential write.



Hmm, WS2012R2 allows for concurrent Client Backups? Nice.

 

Of course, even a slower write would not matter as backups would simply take a bit longer. If it were a technical requirement then someone running 20 clients would have a serious issue, no?

 

I used to be scared of SSD wear, but it might be worthwhile to investigate the actual I/O that Client Backups cause. Even assuming you wrote 1 TB a day for Client Backups, which seems like a lot to me, even TLC should still last you years, no?
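
A quick sanity check is to divide a drive's rated endurance by the daily write load. A rough sketch (the TBW figure is an assumption for a ~250 GB TLC drive; check the spec sheet, and keep in mind rated TBW is a conservative warranty floor that drives often far outlast):

```python
def lifetime_years(endurance_tb, daily_writes_tb):
    # Years until the rated write budget is spent at a constant load.
    return endurance_tb / daily_writes_tb / 365.0

RATED_TBW = 75  # assumed warranty rating for a ~250 GB TLC drive

print(f"{lifetime_years(RATED_TBW, 1.0):.1f} years at 1 TB/day")    # ~0.2
print(f"{lifetime_years(RATED_TBW, 0.15):.1f} years at 150 GB/day") # ~1.4
```

So whether "years" holds really depends on the actual backup volume, which is exactly why the real I/O is worth investigating.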

 

Also, I seriously doubt clients would actually deliver 125 MB/s. Thinking about it, I would say that Client Backups are probably one of the less obvious reasons to go for the SSD Optimiser.

You are right: client backups (Acronis Backup, not Windows backup) won't run at 125 MB/s, since in file copy tests it capped at 118 MB/s.

Also, I wouldn't schedule client backups to run at the same time... but IF this happened, or in another scenario with concurrent file transfers, then I'd want each client to run uncapped at 118 MB/s.

 

So... 118 MB/s x 3 = 354 MB/s :P

Also, imagine that tomorrow I add one more 1 Gbit link, or maybe a single 10 Gbit link for a new PC; then the theoretical write load would be higher again.

 

I am simply wondering if sequential write is the only spec I should look at, or whether there are other important things; I don't know, since caching is something special.



Ah, no clue what kind of I/O Acronis causes. Does it do incremental backups and, if so, how does that work?

 

The thing is, IF you run concurrent backups AND get to about 350 MB/s AND each backup runs around 200 GB, then you'd need a 600 GB SSD (that is, _three_ of them) to ensure you don't run into a bottleneck because the SSDs fill up and need to offload to the HDDs anyway...

 

But it really depends, I think, on the backup mechanics (I/O speeds, total sizes etc.).
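
To put rough numbers on that bottleneck (all figures assumed): while a burst of backups arrives faster than the pool can drain to the HDDs, each cache SSD has to absorb the difference:

```python
# All figures are assumptions for illustration.
ingest_mb_s = 350    # three clients at ~118 MB/s each
drain_mb_s = 140     # what one duplication target (an HDD) can absorb
burst_gb = 3 * 200   # three concurrent 200 GB backups

# The backlog grows at (ingest - drain) for the duration of the burst.
backlog_gb = burst_gb * (1 - drain_mb_s / ingest_mb_s)
print(f"each cache SSD should hold ~{backlog_gb:.0f} GB")  # ~360 GB
```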



Well, each client's backup won't be bigger than 50 GB, since most of their data is already on the server's SMB shares. The second biggest file transfer would probably be movies, which also won't exceed that size.

 

But you are right regarding the SSD size: it is very important not to undersize.

 

I was thinking of adding 3 x Crucial BX100 250 GB SSDs because they are not expensive (less than €85 each), they are MLC-based, their write speed is 370 MB/s, and they are extremely power efficient.

 

But again, nothing is chosen yet.



Well, if you've got €250 to waste, sure, it seems like a nice penis-enlarger! I seriously doubt you'll experience a performance benefit IRL, unless you are one of those who actually monitor performance while it is doing something (as I am prone to do). With, on average, around 200 GB a day though, I would seriously consider going TLC if that is cheaper, or allows for faster performance or larger SSDs (which, if used as a cache, will prolong longevity as well).



Personally, I like having SSDs for the pool because they help even out the writes. Spinning drives can sometimes write at pretty slow speeds (even the better ones); SSDs help ensure that it's always fast. :)

 

So for me, it's worth the money. 

 

And as most SSDs should last a good long while (and report how much "life", i.e. remaining writes, is left), it should be fine to "trash" them like this.

