Covecube Inc.

Moving from a RAID10. Please advise


Question

Hi everyone. 

I have a desktop running Windows 10 Enterprise Edition and a motherboard that accepts up to 6 HDDs. I initially installed 4 HDDs (1x3TB and 3x4TB) in RAID10. The RAID was created in the BIOS and then set up with Intel RST. I know I'm restricted by my 3TB HDD, but it's an old drive from the initial setup, and I'm waiting for it to fail before buying a new 4TB one to replace it. Anyway, my RAID is almost full, and I need to expand it. I bought two new 4TB drives and installed them in my system, but when I tried to grow my RAID with these two new drives, I found out that I can't. If I want to increase my space, I have to delete my RAID and create it again using all 6 HDDs. I don't have a problem deleting my RAID; I already backed up all my data. But since I'm going to start fresh, I was wondering if building the RAID10 again is a good decision. So, I have a few questions to help me decide what to do:

1) How does DrivePool compare to a RAID10 in terms of performance and data protection?
2) What is the difference between DP and Windows Storage Spaces? Should I consider the Windows option instead?
3) I'm planning to have my entire pool duplicated (2x) and one specific folder duplicated 3x. How much space will it use for my entire Virtual Drive? Can I see it?
4) Regarding the previous question: I'm asking because I have already installed DP to try it. I created a pool with my two new 4TB drives, and after creation, DP showed me 7.28TB of free space, and this number won't change even if I enable duplication. How can I know how much space I have left with duplication enabled?
5) I'm planning to copy all files from my RAID10 (5.6TB) to my newly created DP drive. After that, I'm planning to delete my RAID and add those drives to the pool and then enable duplication. Is it a good move?

Sorry for all the questions, but I'm pretty lost with all possibilities.

Thanks,
Glauco


15 answers to this question

3 hours ago, Glauco said:

I bought two new 4TB drives and installed them on my system, but when I tried to increase my RAID with these two new drivers, I found out that I can't. If I want to increase my space, I have to delete my RAID and create it again, using my 6 HDDs. I don't have a problem deleting my RAID. I already backed up all my data. But as I'm going to start fresh, I was wondering if building the RAID10 again is a good decision.

Given that you currently have 15TB of raw drives in your RAID10 and only about 6TB of usable storage, I'd be looking for alternatives too. That is a lot of storage space given over to data recovery in case of HDD failure. If you have already backed up all your data, do you really need a RAID10 fault protection scheme? Unless you are running a business and have those kinds of never-down concerns, I think you could do better than RAID10 for a home storage solution. With that in mind, here are my thoughts.

 

3 hours ago, Glauco said:

1) How does DrivePool compare to a RAID10 in terms of performance and data protection?

In the past, I have run RAID (0, 1, 5) systems at home. A dedicated RAID controller and matched HDDs can give outstanding performance, no doubt. Striping data across multiple HDDs at the same time can really speed up your transfer rates. Your RAID10 offers great data protection and fast recovery options in the event of an HDD failure. The cost of your RAID10 setup, as you mentioned, is that you have 15TB online and are limited to about 6TB of storage.

DrivePool only writes to one disk at a time, so the performance in that respect is limited to the speed of that HDD. Certainly not as fast as striping data across 2 HDDs at the same time. However, I recently added an SSD to the front end of my DrivePool, so now my writes to the pool are as fast as my SSD can handle. My SSD is rated at about 480 MB/s, so, in theory, that would be my maximum write speed to my DrivePool. DrivePool writes the file transfer to my SSD cache, and then later, when it hits my trigger point (100 GB on my system), it flushes the SSD cache and writes to my archive HDDs in the background. Essentially, with the SSD cache on my DrivePool, I get faster real-life performance on my system than I ever got with my RAID systems.

As far as data protection, there would only be one file copy on the SSD cache until it flushed out to the archive HDDs. I can live with that, but my system is mainly a home media server for Plex/Kodi. Most of my data is not that essential, so I do not even have duplication set on my pool. With DrivePool, you can set duplication on the folder level, not just on the entire pool. I have a couple folders set for 2X duplication and one folder set for 3X duplication. Almost all of my DrivePool is set to store media files at 1X duplication, because I have my original files backed up on HDDs stored in the closet. If I lost a HDD in the pool, I could rebuild the lost files from my backups.

Unlike RAID systems where a loss of a HDD can mess up everything on the pool because the files are striped out to various drives, DrivePool writes a complete file to a single HDD. I had a HDD start to fail, DrivePool noticed it, and I was able to offload all but 2 corrupt files on a 5TB HDD. With my RAID systems, a failing HDD easily meant all data loss. So far, DrivePool is just more reliable and easier to recover from a HDD failure than other systems I have used.

3 hours ago, Glauco said:

2) What is the difference between DP and Windows Storage Spaces? Should I consider Windows option instead?

I ran Windows Storage Spaces for ~7 years. When it works, it is great. But I had one small HDD fail in my Storage Spaces pool and it crashed my entire 26-drive setup, despite Storage Spaces being set up (in theory) to recover from a 2-drive failure. In practice, for whatever reason, Storage Spaces was not able to recover from even a 1-drive failure, and I lost a massive amount of online data. I had to delete the entire Storage Spaces pool and rebuild from my backups - it took forever. This happened not once, not twice, but three times, and then I gave up and switched to DrivePool. Life is better.

If you have problems with Storage Spaces, good luck. You will find MS does not really support Storage Spaces. I was not the only person reporting this problem of a single HDD failure crashing an entire pool; MS does not have an answer. I used to spend hours and hours in PowerShell trying to manually recover from problems in Storage Spaces. It was just not worth it to me. DrivePool just works better for me, and I am enjoying life in other ways than sitting in front of my computer for hours trying to recover data from Storage Spaces.

3 hours ago, Glauco said:

3) I'm planning to have my entire pool duplicated (2x) and one specific folder duplicated 3x. How much space will it use for my entire Virtual Drive? Can I see it?

The DrivePool GUI shows you the stats of your data in the pie chart: Unduplicated, Duplicated, and Other. As I stated, almost my entire 64.8 TB DrivePool is Unduplicated (56.9 TB), but I also have Duplicated data in the 2X folders and one 3X folder (2.26 TB), and Other data (61.9 GB). Free space is currently 5.62 TB. So DrivePool gives me much better storage options for my needs and still provides data duplication for my more important files.

If you have 2X duplication set for your entire DrivePool, then I believe the DrivePool GUI will show all your data as Duplicated. You would only have half of your reported Free space (i.e. 6TB of Free space on your 2X pool would mean you could add only 3TB of new data).
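To make that arithmetic concrete, here is a tiny sketch (a hypothetical helper, not anything DrivePool itself exposes) of how a uniform duplication level divides reported free space:

```python
def max_new_data(free_space, duplication_level):
    """How much new data fits in a pool, given its reported free space
    and a uniform duplication level (each file stored that many times)."""
    return free_space / duplication_level

# 6 TB free on a 2X pool leaves room for 3 TB of new files
print(max_new_data(6, 2))  # → 3.0
# the same 6 TB free at 3X would hold only 2 TB of new files
print(max_new_data(6, 3))  # → 2.0
```

Of course this only holds if the whole pool shares one duplication level; with per-folder duplication the usable amount depends on where the new files land.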

4 hours ago, Glauco said:

4) Regarding the previous question: I'm asking because I have already installed DP to try it. I created a pool with my two new 4TB drives, and after creation, DP showed me 7.28TB of free space, and this number won't change even if I enable duplication. How can I know how much space I have left with duplication enabled?

There are options in the DrivePool balancers to use real-time duplication, or to defer duplication to a more convenient time, like midnight. If you did not have duplication set on the pool when you initially transferred files to DrivePool, and later set it to 2X duplication, it is probably waiting for midnight to check which files need to be duplicated. You can trigger this manually via Settings > Troubleshooting > Recheck duplication. It will recheck your duplication and perform the needed tasks in the background. When the duplication is done, your DrivePool GUI should reflect your current stats. Again, if you have 2X duplication set on the entire pool and your Free space reports 6TB, then you could add 3TB of data to be duplicated.

4 hours ago, Glauco said:

5) I'm planning to copy all files from my RAID10 (5.6TB) to my newly created DP drive. After that, I'm planning to delete my RAID and add those drives to the pool and then enable duplication. Is it a good move?

Exactly what I would do. You can let DrivePool automatically check for duplication later, or trigger a manual duplication check via Settings.

Although I came from a RAID background, and then Storage Spaces, when I moved to DrivePool I decided not to have 2X duplication on my entire pool. Like you, I have separate backups if I ever need them. If you use an HDD monitoring program like StableBit Scanner, then you have a good chance of detecting a failing HDD before it completely dies, and DrivePool can be set up to automatically start moving data off the failing HDD. In my experience, DrivePool just performs better for me than my former RAID systems or Storage Spaces, and data recovery is possible with DrivePool - but not with crashed RAID or Storage Spaces systems.

Hope this was helpful. Good luck.

3 hours ago, gtaus said:

As far as data protection, there would only be one file copy on the SSD cache until it flushed out to the archive HDDs. I can live with that, but my system is mainly a home media server for Plex/Kodi. Most of my data is not that essential, so I do not even have duplication set on my pool. With DrivePool, you can set duplication on the folder level, not just on the entire pool. I have a couple folders set for 2X duplication and one folder set for 3X duplication. Almost all of my DrivePool is set to store media files at 1X duplication, because I have my original files backed up on HDDs stored in the closet. If I lost a HDD in the pool, I could rebuild the lost files from my backups.

Unlike RAID systems where a loss of a HDD can mess up everything on the pool because the files are striped out to various drives, DrivePool writes a complete file to a single HDD. I had a HDD start to fail, DrivePool noticed it, and I was able to offload all but 2 corrupt files on a 5TB HDD. With my RAID systems, a failing HDD easily meant all data loss. So far, DrivePool is just more reliable and easier to recover from a HDD failure than other systems I have used.

gtaus did a good, elaborate job, but there is one thing that is not entirely correct: if you use the SSD Optimizer plugin, where you assign an SSD as a cache / first landing zone, that will only work for files that are not to be duplicated. If you write a file to a folder that has x2 duplication, DP will write one copy to an HDD right away. If you have little duplicated data, it won't matter much though.

If you do have duplication and insist that one SSD suffices then there is a way to trick this using Hierarchical Pools. Two copies would be written to the SSD initially. I consider this inadvisable (and probably a bug).

Overall, I have been using DP since early 2013 (or 2012?). It's a small pool, more for redundancy/uptime than backup (which is dealt with otherwise - duplication != backup). Never an issue, and the fact that it writes files in plain NTFS format is really great for transferring pools to other PCs, recovery, etc. Just not the RAID10 performance.

59 minutes ago, Umfriend said:

If you do have duplication and insist that one SSD suffices then there is a way to trick this using Hierarchical Pools. Two copies would be written to the SSD initially. I consider this inadvisable (and probably a bug).

If you wish to use the SSD Optimizer plugin to have files cached by a single "SSD", for a pool that has duplication enabled, the method would be to un-check the "Real-time duplication" option under Manage Pool -> Performance. Duplication would then occur later as a background task. This does somewhat defeat the point of ensuring your files are always duplicated, of course.


Thank you all for your replies. 
Even though I'm using it at home, I do have concerns about losing my data. I'm a wedding photographer, and I can't afford to lose my clients' photos. That's why I have a RAID10 and my backups (external HDDs and CrashPlan to back up everything).
Now, I have more questions about speeding things up using an SSD. I do have a spare 128GB SSD lying around. I don't have any more ports on my motherboard to connect it, though. I have an HDD dock that can connect up to two HDDs via USB. So, I have a few options, but I need to understand how it'll work before making the changes.
First, my scenario. When I shoot a wedding, I come home with around 200GB of data. I copy everything to my RAID and then I start editing my photos. When I copy to my RAID, I have peace of mind (at least I had) that my photos are safe, as the RAID stores them all redundantly. Meanwhile, in the background, CrashPlan is backing up everything to the cloud, and I only format my memory cards after CrashPlan has backed up everything AND I have delivered all the photos to my client (yes, I'm pretty obsessed about losing my photos before delivering them).

Now, here come my questions:
1) What happens if I have a 128GB SSD as a cache and try to write 200GB of data at once?
2) I still want to have my photos written at least 2x, better yet 3x, so my risk of losing them is smaller. What would be the best strategy for that? I can afford to ask DP to duplicate my files once a day, for example - that is, it doesn't have to be on the fly.
3) Will my SSD speed up read AND write speeds?
4) How do I set up my SSD as a cache? Any documentation?
5) As I said, I don't have any SATA connections left on my motherboard. All six ports are being used with my "backup" HDDs. What option would be better:
a) Connect my SSD to my USB dock and leave all 6 HDDs connected to my motherboard;
b) Remove one HDD from my motherboard, connect my SSD in its place, and keep this HDD disconnected as a spare to replace one HDD when it fails;
c) Remove one HDD from my motherboard, connect my SSD in its place, but connect this spare HDD to my dock to increase my DP space;
d) Don't use my SSD and just leave all my 6 HDDs connected to the pool?

Once again, thank you all very much for all the help. It's very appreciated. 

6 hours ago, Umfriend said:

If you use the SSD Optimizer plugin, where you assign an SSD as a cache / first landing zone, then that will only work for files that are not to be duplicated. If you write a file to a folder that has x2 duplication, DP will write one copy to an HDD right away.

Thank you for the clarification. As for my system, I really don't have much data set for 2X duplication. So I created a TEMP directory in DrivePool set for 2X real-time duplication and transferred files to it. What I see in my DrivePool GUI is that the files are written to my SSD cache, as shown under Disk Performance. When the files are done writing to the SSD cache, I see file activity on an archive HDD under Disk Performance for only a second or two, and then it is done.

I don't see the files being written to both the SSD cache and the archive HDD at the same time under the Disk Performance in the DrivePool GUI. Is the second copy of the file being written directly from the SSD cache to the archive HDD in the background at the same time? That would explain the HDD not showing up under Disk Performance during the file transfer until the SSD cache has completed. If I open Task Manager, I can clearly see data being written to both the SSD cache and the archive HDD at the same time for a real time 2X folder.

3 hours ago, Glauco said:

Now, here come my questions:
1) What happens if I have a 128GB SSD as a cache and try to write 200GB of data at once?

You can set your SSD cache at a lower trigger point than your max storage on the SSD. I have a 224 GB SSD and have my trigger point set to flush the cache at 100 GB. There is a separate Prevent Drive Overfill optimizer under the Balancing plugins. I have that set to the default 90% full.

What I have seen on my system is that my SSD cache will fill up to 100 GB, hit the trigger point, and then start flushing data off to the archive HDDs. If I am transferring larger amounts of data, the SSD will continue to fill beyond the trigger point, but it does reach a point where it starts writing directly to an archive HDD. I think this happens when the SSD reaches 90% full, which triggers the Prevent Drive Overfill balancer to start directing writes to an HDD. Since DrivePool writes a single file to a single drive, such a file would be written entirely to the archive HDD, which is slower than the SSD cache. By the time that file is written to the HDD, the SSD cache would be more than ready to take more data.

I don't know what would happen if your single file was 200GB. That is certainly larger than your 128GB SSD and I doubt if DrivePool would even bother attempting to write to the SSD cache in that case. I suspect it would just write the 200GB file directly to an archive HDD at that time. 
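As an illustration only, the landing-zone behavior described above could be modeled roughly like this (a toy simulation; the threshold names and routing logic are my assumptions, not DrivePool's actual code):

```python
def route_write(file_gb, ssd_used_gb, ssd_capacity_gb, overfill_pct=0.90):
    """Toy model of an SSD landing-zone decision: accept a file on the
    SSD only if it fits under the overfill limit; otherwise send it
    straight to an archive HDD."""
    limit = ssd_capacity_gb * overfill_pct
    if ssd_used_gb + file_gb <= limit:
        return "ssd"
    return "hdd"

# a 30 GB file lands on a 224 GB SSD already holding 100 GB
print(route_write(30, 100, 224))   # → ssd
# but not when the SSD is already near its 90% limit
print(route_write(30, 180, 224))   # → hdd
# and a single 200 GB file can never fit on a 128 GB SSD at all
print(route_write(200, 0, 128))    # → hdd
```

In the toy model the decision is made up front because the file size is known; as noted further down the thread, a real file copy's final size isn't known in advance, which is exactly why overfill can be a problem in practice.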

3 hours ago, Glauco said:

2) I still want to have my photos written at least 2x, better yet 3x, so my risk of losing them is smaller. What would be the best strategy for that? I can afford to ask DP to duplicate my files once a day, for example - that is, it doesn't have to be on the fly.

Since you are running a business, I would suggest using only real-time duplication. In my test with a real-time 2X TEMP folder, I found that a 60 GB file transfer completed to my SSD and then showed activity on my archive HDD for just a few seconds more. If your data is important enough for 2X or 3X duplication, the extra few seconds per file transfer is worth it, IMHO.

4 hours ago, Glauco said:

3) Will my SSD speed up read AND write speeds?

If you add an SSD cache to the front end of your DrivePool, it only speeds up your write speeds. Once your data has been flushed to the archive HDDs, reads come from those HDDs; the SSD cache does not fill up with read data.

Having said that, I set my SSD cache to 100 GB and use it as a fast cache for both reads and writes for some of my temp files. Until the data is flushed to the archive HDDs, the data remains in SSD cache, and would therefore speed up both reads and writes. If I need to flush the SSD cache before using it as a fast temp directory, I can manually do it by selecting a re-balance in the Pool Organization toolbar. That strategy works well for me.

4 hours ago, Glauco said:

4) How do I set up my SSD as a cache? Any documentation?

I simply added the SSD to my DrivePool, used the SSD Optimizer plugin to designate that drive as "SSD", and then, under Balancing > Settings > Automatic balancing, checked "Balance immediately". On the same screen, under Automatic balancing - Triggers, I set "at least this much data to be moved" to 100 GB. That gives me my 100 GB SSD cache before DrivePool flushes the cache.

DrivePool does have documentation on the website, and that can be helpful. I just got my system working the way I wanted and stopped messing with it. I am sure that there are many other ways you could custom setup your DrivePool based on your needs.

There is a program called Primo Cache that is worth looking into if you want to use your SSD as cache for all reads and writes on your system. Primo Cache can use both system RAM and your SSD for caching reads and writes. Primo Cache would be totally independent of DrivePool, but would work on all your programs, including DrivePool. I believe they still offer a free trial period for evaluation.

4 hours ago, Glauco said:

5) As I said, I don't have any SATA connections left on my motherboard. All six ports are being used with my "backup" HDDs. What option would be better:
a) Connect my SSD to my USB dock and leave all 6 HDDs connected to my motherboard;
b) Remove one HDD from my motherboard, connect my SSD in its place, and keep this HDD disconnected as a spare to replace one HDD when it fails;
c) Remove one HDD from my motherboard, connect my SSD in its place, but connect this spare HDD to my dock to increase my DP space;
d) Don't use my SSD and just leave all my 6 HDDs connected to the pool?

I currently have my DrivePool using 16 external USB 3.0 HDDs connected to my computer via hubs, and my one SSD is connected internally to the motherboard. The motherboard-connected SSD gives me the speed, and the external USB 3.0 HDDs give me cheap storage for my home media center. I would think that attaching an SSD to a USB dock would slow down the SSD considerably.

Of your options, I would choose c): remove one HDD from the motherboard, connect the SSD in its place, and connect the spare HDD to your dock to increase your DP space.

One of the great advantages of using DrivePool is that you can add storage with any size of HDD. You don't have to match drives as in RAID systems. Most of my drives are standalone external USB 3.0 HDDs, but I also have two 4-bay MediaSonic ProBoxes. The advantage there is that I can take any internal 3.5" HDD and slap it in a ProBox. My ProBoxes are set to connect via USB 3.0, but there is also an option to connect via eSATA.

Although you don't need an SSD for DrivePool, it will certainly speed up your file transfers. I would definitely recommend taking the time to install an SSD at the front end of your DrivePool if speed is important to you. Coming from a RAID10 background, you might not be happy with DrivePool's speed writing a single file to a single HDD; it would be slower than what you are used to. The SSD cache on the front end of DrivePool changes everything, at least it did for me, and I get write speeds faster than what I got with either my older RAID systems or Windows Storage Spaces.

 


gtaus, thank you very much for all your explanation and time.
I don't have a single file that is 200GB. My files are around 70MB each, but all the photos together add up to ~200GB. I'll definitely install my SSD and set it up as a cache.
Regarding Primo Cache, I already use it. As a matter of fact, I used to use it with my RAID10, so I could increase my performance a little bit. Good to know I can use it with DP. I'll do it, for sure!
Thank you again.

4 hours ago, gtaus said:

I don't know what would happen if your single file was 200GB. That is certainly larger than your 128GB SSD and I doubt if DrivePool would even bother attempting to write to the SSD cache in that case. I suspect it would just write the 200GB file directly to an archive HDD at that time. 

The file would still be written to the SSD until the drive ran out of space, at which point you would get an error that the file was unable to be copied.

This is because - for various reasons including the fundamental nature of file operations - DrivePool cannot measure the size of an incoming file until the file is completely written to the pool, which is also the reason why DrivePool's default setting is to write each incoming file to whichever drive has the most free space remaining at the time.

FYI: while the total free space of your pool is the sum of the free space of the drives in it, the momentary free space of your pool is the free space of whichever drive DrivePool chooses for an incoming file at that moment (it's slightly more complicated if real-time duplication is enabled, but the concept is similar).

So it is important when using the SSD Optimizer (or any other balancer which overrides that default choice) to consider your likely use-cases and choose your "SSD" drives and balancing settings so that you don't create a risk of running out of momentary free space.
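A small sketch of that total-vs-momentary distinction (illustrative only; `total_free` and `momentary_free` are made-up helpers, and the placement rule shown is the most-free-space default described above):

```python
def total_free(drive_free_list):
    """Total pool free space: the sum across all member drives."""
    return sum(drive_free_list)

def momentary_free(drive_free_list):
    """Momentary free space: what one incoming file can actually use,
    i.e. the free space of the drive the pool would pick - here the
    most-free drive, per the default placement rule."""
    return max(drive_free_list)

free_gb = [500, 120, 80]         # hypothetical per-drive free space
print(total_free(free_gb))       # → 700
print(momentary_free(free_gb))   # → 500  (a 600 GB file would NOT fit)
```

So a pool can report plenty of total free space while still rejecting a single large file, which is the risk the SSD Optimizer settings need to account for.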

3 hours ago, Glauco said:

I don't have a single file that is 200GB. My files are around 70MB each, but all the photos together add up to ~200GB. I'll definitely install my SSD and set it up as a cache.

A basic rule of thumb would be to look at the size of the largest file you'd ever be likely to copy to the pool, and set your "Fill SSD drives up to" value to an amount that leaves enough room for it. Note that actual SSDs should not be filled beyond 70% to 90% anyway, to maximise their performance (due to the nature of SSDs), which on a 128GB drive should give you at least ten gigabytes of leeway.
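That rule of thumb could be written out like this (a hypothetical helper, not a DrivePool setting; the 90% headroom figure is just the upper end of the range mentioned above):

```python
def fill_limit_pct(ssd_gb, largest_file_gb, wear_headroom_pct=0.90):
    """Suggest a 'Fill SSD drives up to' percentage: stay under the
    SSD's performance headroom AND leave room for the largest file
    you expect to copy in one go."""
    room_for_file = (ssd_gb - largest_file_gb) / ssd_gb
    return round(min(wear_headroom_pct, room_for_file) * 100)

# 128 GB SSD, largest single file ~10 GB: the 90% cap is the binding limit
print(fill_limit_pct(128, 10))   # → 90
# 128 GB SSD, largest single file ~50 GB: the file size becomes binding
print(fill_limit_pct(128, 50))   # → 61
```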

6 hours ago, Glauco said:

Regarding Primo Cache, I already use it. As a matter of fact, I used to use it with my RAID10, so I could increase my performance a little bit. Good to know I can use it with DP. I'll do it, for sure!

If you already have Primo Cache, you might want to consider adding your SSD to your computer and letting Primo Cache use both your system RAM and the SSD as cache for both reads and writes on all programs on your computer. You would not have to limit the SSD as a front end write cache for DrivePool only.

I tried out Primo Cache a few years ago, and at the time I did not have an SSD. So I set aside about 6 GB of system RAM for my cache, and it worked fine. However, most of my media file transfers are about 100 GB, so they filled up the 6 GB RAM cache lightning fast and then the transfer slowed back down to the turtle speed of the HDD. I did not want to buy Primo Cache just to save about 1 minute on my transfers. However, Primo Cache can also use an SSD (if you have one) for cache, and with that I think I might have stayed with it.

Either way, using Primo Cache or DrivePool to control your SSD cache, you should see a big boost in performance.

10 hours ago, Shane said:

A basic rule of thumb would be to look at the size of the largest file you'd ever be likely to copy to the pool, and set your "Fill SSD drives up to" value to an amount that leaves enough room for it. Note that actual SSDs should not be filled beyond 70% to 90% anyway, to maximise their performance (due to the nature of SSDs), which on a 128GB drive should give you at least ten gigabytes of leeway.

Thank you, Shane! I'll do that!

 

7 hours ago, gtaus said:

If you already have Primo Cache, you might want to consider adding your SSD to your computer and letting Primo Cache use both your system RAM and the SSD as cache for both reads and writes on all programs on your computer. You would not have to limit the SSD as a front end write cache for DrivePool only.

I already have an M.2 drive in my computer as my C: drive. It has my OS and my applications on it. It's pretty fast already, and on top of that I already have Primo Cache. In this scenario, I don't think it would be good to add my SSD, which is much slower than my M.2, to my system and use it as a cache for my applications. I think it would slow down my system. Would you agree?

I guess my best shot is to add the SSD, make it a cache for my DP (taking into account everything that has been said), and use PrimoCache to cache reads/writes to my DP, right?

Once again, thank you all very much! 

19 hours ago, Glauco said:

I don't think it would be good to add my SSD, which is much slower than my M.2, to my system and use it as a cache for my applications. I think it would slow down my system. Would you agree?

Yes, I would agree - using your fastest drive makes the most sense. My old DrivePool computer does not support M.2, but my SSD made everything a lot faster. Your M.2 drive is even better.

19 hours ago, Glauco said:

I guess my best shot is to add the SSD, make it a cache for my DP (taking into account everything that has been said), and use PrimoCache to cache reads/writes to my DP, right?

Sounds like a plan. Primo Cache would use your system RAM and your M.2 as a buffer to read/write to DrivePool. Primo Cache has many more options for using fast drives like your M.2 and SSD than the SSD Optimizer plugin for DrivePool.

It has been a few years since I used Primo Cache, but it seems to me that you can prioritize your cache - first would be RAM, then secondary cache would be your M.2. If you added your SSD to Primo Cache after your M.2, I think it would not slow down your system as the SSD would still be much faster than your archive HDDs. I would find a way to also use that SSD.

Also wanted to mention that @Shane warned about the possibility of overfilling the SSD cache front end for DrivePool via DrivePool's balancers. IIRC, Primo Cache treats its cache more as a buffer and therefore cannot overfill it. If you ever transferred more data to Primo Cache than you set aside on your RAM, M.2, and SSD, it would just slow down to the write speed of the archive HDD at that point, as long as the archive HDD had room for the files.

In the real world, if you have enough RAM, M.2, and SSD buffer set aside in Primo Cache, you might never see the buffer maxed out and the slowdown to the write speed of the archive HDD. Even on my old system, with a 100GB SSD cache front-ending my DrivePool writes, I cannot overfill my SSD because it still writes to the archive HDDs pretty fast. I transferred 2TB of media data last night to my archive HDDs and DrivePool never overfilled my 224GB SSD with the 100GB cache. Again, DrivePool just works for me, and the more I use it, the more I like it.

On 12/23/2020 at 4:45 AM, Shane said:

If you wish to use the SSD Optimizer plugin to have files cached by a single "SSD", for a pool that has duplication enabled, the method would be to un-check the "Real-time duplication" option under Manage Pool -> Performance. Duplication would then occur later as a background task. This does somewhat defeat the point of ensuring your files are always duplicated, of course.

I'm new to the product and didn't see this answered in the thread, but does this mean that the SSD cache can be configured with multiple SSD drives in order to speed up writes? I.e., if my maximum degree of duplication is 2x and I am willing to install two SSDs as a cache, will writes of duplicated files be optimized?

Also, how does space utilization work for SSD cache drives? Does the space still get used for general storage in the pool? Is there a way to configure the amount of drive space that should be kept free for write optimization?


Yes; if you have Nx duplication enabled and you have N (or more) drives selected as your SSD cache, DrivePool will default to duplication in parallel to those drives.

You can configure how often the cache drives are flushed to the rest of the pool based on time (manual / immediately / not more often than every X / every day at Y) and size (balance ratio or amount of data in cache), you can set how much of the cache can be used before new files will go straight to the rest of the pool (e.g. to avoid filling your SSD), and the cache is otherwise treated as part of the pool as far as total storage is concerned (i.e. it's all the same virtual drive letter).
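A rough sketch of that parallel-placement rule (illustrative pseudologic only, not DrivePool's actual implementation; `assign_duplicates` is a made-up name):

```python
def assign_duplicates(n_copies, ssd_drives):
    """If there are at least as many 'SSD' cache drives as duplication
    copies, each copy can land on a different SSD in parallel.
    Otherwise return None: at least one copy cannot go to a cache drive."""
    if len(ssd_drives) >= n_copies:
        return ssd_drives[:n_copies]
    return None

# 2x duplication with two cache SSDs: both copies land on cache in parallel
print(assign_duplicates(2, ["ssd1", "ssd2"]))  # → ['ssd1', 'ssd2']
# 3x duplication with only two cache SSDs: parallel cache placement fails
print(assign_duplicates(3, ["ssd1", "ssd2"]))  # → None
```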

9 hours ago, Shane said:

Yes; if you have Nx duplication enabled and you have N (or more) drives selected as your SSD cache, DrivePool will default to duplication in parallel to those drives.

If you partition an SSD into 2 or more partitions, would DrivePool consider them separate drives for your Nx duplication scenario? And yes, I understand that might not be a good idea, as the loss of the one SSD would mean the loss of all Nx duplicates in the SSD caches on that same drive.

 

14 hours ago, Jason Simone said:

Also, how does space utilization work for SSD cache drives? Does the space still get used for general storage in the pool? Is there a way to configure the amount of drive space that should be kept free for write optimization?

I have a 224GB SSD as the front end cache for my DrivePool. I set the cache to a 100GB flush trigger point. Files in the 100GB SSD cache show up in the general storage pool like any other drive. When I fill up the 100GB SSD cache, DrivePool will flush the SSD cache and write the data to my archive HDDs. You can set the trigger point for flushing your SSD cache to any amount that works for you. I reserved 124GB on my 224GB SSD for potential overflow on large file transfers.

 
