
gtaus · Members · Posts: 285 · Days Won: 20

Everything posted by gtaus

  1. Thanks for your help. I started a new thread on how to plan for future HDD failures and recovery options. Other than duplication of the pool/folder, DrivePool does not seem to have a built-in recovery plan. But, like you said, I lost nothing critical and my important folders were duplicated.
  2. Background: DrivePool is used as my unduplicated home media storage server for Plex/Kodi. Original media files are backed up and stored on HDDs in the closet. No duplication is used on DrivePool for the media files. I recently lost a 5TB HDD in a pool of 17 drives, all of different brands, sizes, and ages.

Problem: My disk monitoring software warned me about an imminent HDD failure. I immediately attempted to remove the disk from DrivePool, but all attempts to move data off that disk failed and the drive completely crashed in less than 12 hours. Now, like many before me, I am wondering what was actually on that failed drive and whether I want to recover the lost pool files from my backups.

Solutions considered: SnapRAID might work for some people as a parity solution added to the pool for recovery, but that means dedicating additional disks to the pool just to hold the parity data. Also, from what I understand, SnapRAID accesses drives via drive letters. With 17 drives in my pool, I have turned off the drive letters because DrivePool does not need them. It appears that there are ways to mount drives in empty folders so SnapRAID can bypass the drive letter requirement, but it looks to me like the SnapRAID solution gets more and more complicated as you add pool drives to DrivePool. Duplication on DrivePool seems to be the easiest solution to recover from a HDD failure within DrivePool, but who can afford a 1:1 solution on a 70TB+ pool? I already have backups of the data on HDDs stored in the closet. I just don't have the budget to both back up files in offline storage and then again duplicate the files in DrivePool. WinCatalog 2020 is what I started to use just before my HDD failed. In theory, if you have DrivePool cataloged in WinCatalog 2020 and you suffer a HDD failure, you can run an update on the DrivePool volume and WinCatalog 2020 will report the "deleted" files compared to its last update.
From what I understand, this still would require me to manually look up and recover lost files from my backups in the closet. However, at that point, I may or may not decide to recover some or all of those files from my backups. The problem is, with a 5TB drive failure, that gets to be a lot of lost files. As I said, I just started using WinCatalog 2020, have not learned to use it well yet, and had not even gotten to the point of cataloging my DrivePool before I had a HDD failure in the pool.

Solution I am searching for: I would like some kind of cataloging program, like WinCatalog 2020, that I could use to catalog my DrivePool automatically, maybe every night. If I had a HDD failure in DrivePool and lost an entire 5TB HDD of unduplicated media files, I would like the cataloging program to tell me which files were lost and prompt me to see which files I would want to recover from my storage backups in the closet. I could select the files I wanted to recover, and the program would tell me which storage HDD(s) I needed to get online (HDD caddy in my case) and then it would automatically restore those files to DrivePool from my original storage backups, prompting me to get each backup HDD as needed for the recovery task. It would be great if DrivePool had such a cataloging feature built into it, but from what I have read in past threads on this forum, there does not seem to be much interest in working in that direction. Over the years my media collection has grown and I currently have 70TB of files in DrivePool. As HDDs continue to increase in size, the loss of even one pool HDD means lots of unduplicated files are potentially lost to the pool. I don't expect DrivePool to do everything, but I can see HDDs at 12TB+ becoming more common, and the loss of just one 12TB+ pool HDD could have a significant negative impact on unduplicated media files.
If anyone has addressed this issue and developed a working solution to recover from a HDD loss in an unduplicated media storage pool, please let me know. I have no desire to reinvent the wheel if a solution exists. Thanks for any comments.
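The nightly catalog-and-diff workflow described above can be sketched in a few lines of Python. This is only a rough illustration, not a DrivePool feature: the pool drive letter, the catalog file name, and the backup index (a mapping from each pool file to the closet HDD holding its backup copy) are all assumptions.

```python
# Sketch: snapshot the pool's file listing each night, then diff against the
# previous snapshot to see which files a dead drive took with it.
# POOL_ROOT, CATALOG, and the backup_index mapping are hypothetical names.
import json
import os

POOL_ROOT = "J:\\"              # assumed pool drive letter
CATALOG = "pool_catalog.json"   # previous night's snapshot

def snapshot(root):
    """Record every file in the pool as a relative path -> size mapping."""
    files = {}
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            full = os.path.join(dirpath, name)
            files[os.path.relpath(full, root)] = os.path.getsize(full)
    return files

def lost_files(previous, current):
    """Files present in the old snapshot but gone now."""
    return sorted(set(previous) - set(current))

def group_by_backup_disk(lost, backup_index):
    """Group lost files by the offline backup HDD that holds each one."""
    by_disk = {}
    for rel in lost:
        disk = backup_index.get(rel, "NOT BACKED UP")
        by_disk.setdefault(disk, []).append(rel)
    return by_disk

def nightly_update(root=POOL_ROOT, catalog=CATALOG):
    """Diff the pool against last night's catalog, then overwrite the catalog."""
    current = snapshot(root)
    previous = {}
    if os.path.exists(catalog):
        with open(catalog) as f:
            previous = json.load(f)
    with open(catalog, "w") as f:
        json.dump(current, f)
    return lost_files(previous, current)
```

Scheduled nightly (e.g. via Windows Task Scheduler), the catalog stays at most a day stale, so after a drive failure the diff lists roughly what was lost and which backup HDDs to pull from the closet.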
  3. After a couple of reboots, the DrivePool GUI gave me the option to remove the missing DP07 HDD. This time it removed the missing disk from the listing and is currently Measuring the pool. So it looks like everything will be back online and all remaining files will be accounted for in my remaining 65.7TB pool in ~2 hours. Just want to mention that the drive that went bad (DP07) went from 100% health to completely dead within 12 hours. Not much time to move data off a large 5TB drive. Also, in this case, DrivePool was unable to remove the drive successfully before it died. The bad drive went offline every time DrivePool attempted to move files off it and onto good disks in the pool. In this case, DrivePool failed to move any files off this failing drive. I have had one other experience with a failing drive in DrivePool. In that case, DrivePool was able to move all but 2 corrupt files off a 4TB failing HDD before it completely died. But that drive lasted almost 3 days from the warning.
  4. Quoting myself: "I did find some free recovery software via YouTube called iCare Data Recovery Pro, which states it might be able to recover lost data on a RAW drive so I can copy it over to another good drive. I have that running right now but the estimated time to complete is over 2 hours, so I'll just let it run overnight and hope for something in the morning. That might save me some time in rebuilding the deleted data. One can only hope." Turns out that the "free" version only lets you recover 1GB of data; if you want to recover more than that, you need the "premium" version at $70.00. They did not state that up front, of course, but I just found it in a review of the product. It may be worth the price for someone who has important data that is not backed up elsewhere, but that is not my case. I am turning my attention to rebuilding the DrivePool with the existing drives, if possible. Any help appreciated.
  5. Thanks, @Shane. My third attempt, ticking all the boxes, also failed. So I stopped the DrivePool service and attempted to manually move the contents. Windows immediately gave me an error message to run chkdsk on the drive because it found errors. Unfortunately, I ran chkdsk as suggested and it wiped out my drive. I am now looking at a RAW drive in my Disk Management and Windows will not access it at all, stating the directory file is corrupt and unreadable. That has never happened to me before with chkdsk, but as much as I would like to fault MS, I do know that this drive was failing fast and maybe there was too much damage on it before I ran chkdsk. There is nothing on the drive so important that I cannot live without it. All my original media files are backed up on HDDs sitting in the closet, and my important working files are backed up on a cloud service. I did find some free recovery software via YouTube called iCare Data Recovery Pro, which states it might be able to recover lost data on a RAW drive so I can copy it over to another good drive. I have that running right now but the estimated time to complete is over 2 hours, so I'll just let it run overnight and hope for something in the morning. That might save me some time in rebuilding the deleted data. One can only hope. In a worst-case scenario, where this bad drive is completely gone, how do I get DrivePool back into normal service? Currently my DrivePool drives are all greyed out, like it wants to Measure the pool, but it will not because the failed drive (DP07) is still listed there and cannot be removed. I would think there must be a way to recover from a sudden death of a HDD that is no longer readable. For example, if you physically remove the drive from the pool, will DrivePool just list the drive as missing and give you the option of rebuilding the pool without that drive?
I still have access to my virtual DrivePool on drive letter J:, but everything is greyed out on the DrivePool GUI and it appears to be in a fault state (drives not measured, pie chart all greyed out, etc.). Assuming my DP07 drive is totally gone, I really only care about getting DrivePool back online and working normally. Any help appreciated. Thanks.
  6. OK, first problem I have really encountered with DrivePool. Last night I had a 5TB HDD start to fail, hard. I got a notice from my drive monitoring program that the HDD's health was not good. So I went into the DrivePool GUI and selected to remove the failing HDD. Unfortunately, the HDD is not removing normally and I get an error "Error Removing Drive" with the Error Details "The system cannot find the path specified." At that point, DrivePool goes into a full pool Measuring task on all pool drives, which on my 70TB DrivePool takes ~2 hours. I let DrivePool remeasure everything overnight, and this morning I tried once again to remove the failing drive from the DrivePool GUI. Again, after a few minutes, the failing drive errored out and DrivePool is once again measuring all drives on my 70TB pool, which will take another ~2 hours to complete. Is there a better way to remove a failing drive from DrivePool? When I select to remove the drive, a Remove Options checkbox list comes up and none of the 3 options are selected by default. Should I check any/all of those options, and if so, which boxes should I select? After my DrivePool has remeasured all the drives, I am planning on selecting the "Force damaged drive removal (unreadable files will be left on the disk)" option, but the other 2 options do not really seem to address my current problem. Also, can I attempt another drive removal while DrivePool is "Measuring" the drives, or should I wait until the Measuring task (~2 hours) is complete? Thanks for any help.
  7. If you partition a SSD into 2 or more partitions, would DrivePool consider them separate drives for your Nx duplication enabled scenario? And yes, I understand that might not be a good idea as the loss of the one SSD would result in the loss of all Nx duplications in the SSD caches on that same drive. I have a 224GB SSD as the front end cache for my DrivePool. I set the cache to a 100GB flush trigger point. Files in the 100GB SSD cache show up in the general storage pool like any other drive. When I fill up the 100GB SSD cache, DrivePool will flush the SSD cache and write the data to my archive HDDs. You can set the trigger point for flushing your SSD cache to any amount that works for you. I reserved 124GB on my 224GB SSD for potential overflow on large file transfers.
  8. Interesting thread. I think I'll just leave my drives on. Seems the risk of a drive failure from stopping/starting exceeds the cost of electricity saved.
  9. If your hardware supports the newer x265 encoding, you can get a much higher quality picture at a much smaller file size. My system, unfortunately, does not support x265, so I have to convert those files to x264. Storage is so much cheaper today than years ago, so I will save both the large original files for future use and my smaller converted files for use now. Given a choice, I usually opt to download a smaller file that I can play on my current system. ShanaEncoder, which I use, does a good job, but it takes 1:1 processing time on my old computer. In other words, my computer will convert the files overnight and I can watch them the next day. I would still encourage you to post your question on a Plex forum, if there is one, and maybe they would be able to help you more specifically with your local cache question for Plex streaming.
  10. I can remember when movies would fit on a CD (~700MB). Then the files were split across 2 CDs (~1400MB). Then DVDs came in at 4.37GB, and double-sided DVDs at 9.4GB. Then Blu-ray discs at ~25GB, though some Blu-ray disc formats go up to 100GB. It's just a constant moving target, I guess. A 5000 MB prefetch should be good for about half a 1080p movie these days. Most of my current 1080p movies are between 8GB and 14GB. If I get a movie >15GB, I start to have buffering issues on my Amazon Fire TV Stick. However, I have seen a number of new 4K movies uploaded at ~40GB to ~70GB. I don't have a 4K TV, so I don't even bother with those large 4K movie files. If I find a movie I really want that is only available in a large file, then I will run it through the free utility ShanaEncoder and reduce it down to 1080p or 720p. I usually reduce to 720p. At 720p resolution, most files are reduced to ~3GB in size, so the entire file would fit in your 5000 MB prefetch forward buffer.
  11. You might have better luck asking this question in a Plex forum, if one exists. From my experience with Plex, I could never get it streaming a cloud-based movie smoothly to my Amazon Fire TV Stick. My brother-in-law was a big fan of Plex and had me try it out. So I would log into his account and pull down a movie to watch. But what I got was constant buffering, which really turned me off. His explanation was that the Amazon Fire TV Stick had limited memory and was not able to buffer enough data to provide a good experience. At that time, he told me, there were game machines that allowed you to download, or preload, the entire movie file from the internet to your local device and play it back later with no buffering. I did not have those game machines, so I gave up on Plex. Recently, after installing DrivePool on my local computer, I installed Plex and now Plex plays local DrivePool files on my Amazon Fire TV Stick without buffering. I don't know if Plex has improved downloading/streaming over the internet because my brother-in-law got frustrated with Plex's lack of customer support and moved on to other platforms. If you are having buffering issues with Plex over the internet, then I think you would want to set up some kind of preload option on your local computer, if that option is available. Then, after the file(s) have completely downloaded from the internet, you can watch the movie from your local device and avoid the internet buffering. Again, I was told that was possible with certain game machines, but I did not have them, so I never actually got to see this feature work. FWIW, I mainly use Kodi as my media platform these days. Unless I am trying to play a movie file >15GB, I seldom experience any buffering with Kodi on my home network WiFi. But the weak point in my system is my Amazon Fire TV Stick streaming movies over WiFi. My next upgrade would be to get a device directly connected to my 1-gigabit home Ethernet network.
I don't know if any of this was helpful, but I thought I would respond with what I know on this Plex issue since no one else has jumped in on your question. Good luck.
  12. I just wanted to add that if you have a folder compare utility, you could eliminate duplicates before transferring files. I use a free file manager program called FreeCommander. There is a tool in FreeCommander to sync folders, and part of that program allows you to delete duplicates. Once you have eliminated all possible duplicates, you could just transfer the data on your new DrivePool drive into the hidden PoolPart directory of that same drive, saving you potentially massive amounts of time on a large drive. I did that when I initially added 5TB HDDs full of movies to my DrivePool. Worked great for me.
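For anyone scripting the same pre-seed cleanup, the duplicate check can also be done with content hashes rather than a file manager. A rough Python sketch, assuming plain files on the drive you are about to seed into the pool (the chunked SHA-256 read is just so large media files don't need to fit in RAM):

```python
# Sketch: find byte-identical duplicates before seeding files into a drive's
# hidden PoolPart folder. The paths involved are up to you; nothing here is
# DrivePool-specific.
import hashlib
import os

def file_hash(path, chunk=1 << 20):
    """SHA-256 of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def find_duplicates(paths):
    """Map each content hash to the files that share it; keep only collisions."""
    seen = {}
    for p in paths:
        seen.setdefault(file_hash(p), []).append(p)
    return {h: ps for h, ps in seen.items() if len(ps) > 1}
```

Once the duplicates are cleared, moving the remaining folders into the drive's hidden PoolPart folder and letting DrivePool re-measure is the same seeding step described above.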
  13. Well, shoot, I have had great success with my ~7-year-old ProBoxes. The newer boxes have upgraded firmware and can hold larger capacity HDDs than the models I have. I believe the new ProBoxes are USB 3.1 and can support 12TB HDDs. It may still be an option to consider, especially if you have something like Amazon Prime and can return the box if it does not work for you. I will say that, in general, my external USB 3.0 HDDs connected via USB hubs are just as effective as the ProBoxes, or maybe even more so, because if a single external USB HDD goes offline, you don't have 4 HDDs offline at one time. And if you want to save money, I have found that external USB HDDs with cases and power supplies are often cheaper than standalone 3.5" internal HDDs. For my DrivePool media storage server, I find the cheapest external USB drives hold up just as well as my more expensive internal HDDs. But, again, my DrivePool media storage server does not ask too much of the HDDs. If someone has a better solution for JBOD storage, I'd like to know, too, because I have maxed out my ProBoxes and will need to buy something new in the future for further expansion. Just one final thought: I used to have disconnects with my ProBoxes and external USB HDDs until I put them all on surge-protected battery backup supplies. But I live out in the country and stable electricity is not guaranteed. The battery backup supplies have saved my equipment from many problems.
  14. I have two 4-bay MediaSonic ProBoxes and another 9 external USB HDDs, all connected via USB 3.0 to my DrivePool server. DrivePool works just fine on my system. The MediaSonic ProBoxes have a feature to auto-reconnect upon boot. I make sure that is always on. I can't say the ProBoxes never drop out, because it just happened to me last night, but that was the first time in about 3 months. I did not have to reboot the computer because when the ProBoxes came back online, DrivePool went through the re-measure task, and everything was working fine this morning. I have only run DrivePool with external drives and it works fine. I do not have the TB3 JBOD you are asking about specifically, but the MediaSonic ProBoxes I have can be connected via USB 3.0 or eSATA. The 4-bay ProBoxes I have recently increased in price to around $120 new, but you can also buy renewed boxes at Amazon at a current price of ~$89. The ProBoxes I have do not have any special JBOD software, which might be an advantage, because I have no problems to speak of with DrivePool running everything. With the 4-bay ProBox, the 4 HDDs all show up as separate HDDs in Disk Management, but they are all contained in one box. It's an option to consider if you continue to have problems with your TB3 JBOD box.
  15. Thanks for the replies. I downloaded the All In One plugin and reported it as safe to MS. Hope that helps.
  16. @methejuggler or @Shane, Is this the All In One plugin by C. J. Manca, which is on the official DrivePool plugin page? If I install the All In One plugin, should I uninstall the other plugins as no longer necessary? Am I correct in thinking that the Stablebit Scanner plugin is still needed? Also, MS Defender is blocking the All In One plugin download, stating "it may harm your device." The other plugins on the page did not have that warning and downloaded without any problems. Any reason why the All In One .msi plugin is being blocked while the other .msi plugins download normally? Of course, I can manually override the warning, but others who have not been following this thread may be hesitant to download the file given the MS Defender warning.
  17. Signing on to follow the thread...
  18. FWIW, when I had a drive that was failing, Windows would take forever to read/reread the directory. If that HDD was part of DrivePool, then I would think DrivePool would probably just sit there attempting to measure/remeasure the pool. It might have nothing to do with DrivePool itself, but rather Windows trying to read the drive. Normally, my DrivePool does not remeasure my pool (NTFS HDDs) upon reboot. I have had a few instances when one, or both, of my ProBoxes did not auto-reconnect upon reboot and the HDDs were listed as missing in DrivePool. After reconnecting the ProBoxes, DrivePool would then find the HDDs and start a remeasure of the pool. My DrivePool is currently ~65TB, and I estimate it takes ~45 minutes to remeasure. Is that a long time? I guess it depends. However, I am still able to use my system while DrivePool is remeasuring and I don't notice much of a performance hit. My DrivePool is mainly used as a media storage system, so my demands on it are not too high. I don't know what HDD monitoring program(s) you use, but Stablebit Scanner is a good choice and may detect early warning signs of drive failure. I had a problem with a pool HDD which was detected by Hard Disk Sentinel, and that gave me time to remove almost all files off the drive before it completely died. Hard Disk Sentinel has a free version that will monitor your drives, but the paid version has extra features such as advanced testing and repair. SMART is a great feature on drives, but sometimes you get a failure that may not be reported in SMART. In those cases, I find the programs that actively test the drives are better at detecting problems. If you are able to find the cause of your DrivePool slow startups/remeasuring, I hope you come back and let us know. If you already use Stablebit Scanner, I would still encourage you to download the free version of Hard Disk Sentinel and let it run the basic diagnostics overnight on your pool drives to see if it can detect anything abnormal.
It won't cost you anything but a little time and it may detect something that you could fix. Good luck.
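The kind of early warning described above boils down to watching a few raw SMART counters. A minimal sketch, assuming the attribute values have already been parsed out of a tool like smartctl; IDs 5, 197, and 198 are standard SMART attributes, but treating any nonzero raw value as a warning is an illustrative threshold of mine, not a vendor specification:

```python
# Sketch: flag a drive as suspect from a few raw SMART counters. The warning
# threshold here is an illustrative guess, not a vendor specification.
WATCHED = {
    5: "Reallocated Sectors Count",
    197: "Current Pending Sector Count",
    198: "Offline Uncorrectable Sector Count",
}

def smart_warnings(attributes, threshold=0):
    """Return warnings for any watched attribute above threshold.

    attributes: dict mapping SMART attribute ID -> raw value, e.g. as parsed
    from the output of a tool like smartctl.
    """
    warnings = []
    for attr_id, name in WATCHED.items():
        raw = attributes.get(attr_id, 0)
        if raw > threshold:
            warnings.append(f"{name} (ID {attr_id}) raw value {raw}")
    return warnings
```

As the post notes, a clean SMART report does not guarantee a healthy drive, so active surface tests remain worthwhile alongside a check like this.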
  19. Yes, I would agree, using your fastest drive makes the most sense. My old DrivePool computer does not support M.2, but my SSD made everything a lot faster. Your M.2 drive is even better. Sounds like a plan. Primo Cache would use your system RAM and your M.2 as a buffer to read/write to DrivePool. Primo Cache has many more options for using fast drives like your M.2 and SSD than the SSD Optimizer plugin for DrivePool. It has been a few years since I used Primo Cache, but it seems to me that you can prioritize your cache: first would be RAM, then the secondary cache would be your M.2. If you added your SSD to Primo Cache after your M.2, I think it would not slow down your system, as the SSD would still be much faster than your archive HDDs. I would find a way to also use that SSD. I also wanted to mention that @Shane warned about the possibility of overfilling the SSD cache as a front end for DrivePool using DrivePool's balancers. IIRC, Primo Cache treats its cache more as a buffer and therefore would not be able to overfill the cache (buffer). If you ever transferred more data to the Primo Cache than you set aside on your RAM, M.2, and SSD, it would just slow down to the write speed of the archive HDD at that point, as long as the archive HDD had room for the files. In the real world, if you have enough RAM, M.2, and SSD buffer set aside in Primo Cache, you might never see the buffer maxed out and the slowdown to the write speed of the archive HDD. Even on my old system, with a 100GB SSD cache front-end write to DrivePool, I cannot overfill my SSD because it still writes to the archive HDDs pretty fast. I transferred 2TB of media data last night to my archive HDDs and DrivePool never overfilled my 224GB SSD with the 100GB cache. Again, DrivePool just works for me, and the more I use it, the more I like it.
  20. If you already have Primo Cache, you might want to consider adding your SSD to your computer and letting Primo Cache use both your system RAM and the SSD as cache for both reads and writes for all programs on your computer. You would not have to limit the SSD to being a front-end write cache for DrivePool only. I tried out Primo Cache a few years ago, and at the time I did not have a SSD. So I set aside ~6 GB of system RAM for my cache and it worked fine. However, most of my media file transfers are about 100 GB, so they filled up the 6 GB RAM cache lightning fast and then the file transfer slowed back to the turtle speed of the HDD. So I did not want to buy Primo Cache just to save about 1 minute on my transfers. However, Primo Cache can also use a SSD (if you have one) for cache, and with one I think I might have stayed with it. Either way, using Primo Cache or DrivePool to control your SSD cache, you should see a big boost in performance.
  21. Thanks @Shane. That is good to know.
  22. You can set your SSD cache at a lower trigger point than the max storage on the SSD. I have a 224 GB SSD and have my trigger point set to flush the cache at 100 GB. There is a separate Prevent Drive Overfill optimizer under the Balancing plugins. I have that set to the default 90% full. What I have seen on my system is that my SSD cache will fill up to 100 GB, hit the trigger point, and then start flushing data off to the archive HDDs. If I am transferring larger amounts of data, the SSD will continue to fill beyond the trigger point, but it does reach a point where it starts writing directly to an archive HDD. I think this is perhaps when the SSD reaches 90% full and that triggers the Prevent Drive Overfill optimizer to start direct writing to a HDD. Since DrivePool writes a single file to a single drive, a file would be completely written to the archive HDD, which would be slower than to the SSD cache. By the time that file is written to the HDD, the SSD cache would be more than ready to take more data. I don't know what would happen if your single file was 200GB. That is certainly larger than your 128GB SSD and I doubt DrivePool would even bother attempting to write to the SSD cache in that case. I suspect it would just write the 200GB file directly to an archive HDD. Since you are running a business, I would only suggest using real-time duplication. In my test using a real-time 2X TEMP folder, I discovered that a 60 GB file transfer completed to my SSD and then showed activity on my archive HDD for just a few seconds more. If your data is important enough for 2X or 3X duplication, the extra few seconds for the file transfer is worth it, IMHO. If you add a SSD cache to the front end of your DrivePool, it only speeds up your write speeds. If your data has been flushed to archive HDDs, then you will not see the SSD cache filling up with read data.
Having said that, I set my SSD cache to 100 GB and use it as a fast cache for both reads and writes for some of my temp files. Until the data is flushed to the archive HDDs, the data remains in the SSD cache, which therefore speeds up both reads and writes. If I need to flush the SSD cache before using it as a fast temp directory, I can do it manually by selecting a re-balance in the Pool Organization toolbar. That strategy works well for me. I simply added the SSD to my DrivePool, used the SSD Optimizer plugin to designate that drive as "SSD", and then, under Balancing>Settings>Automatic balancing, checked "Balance immediately". On the same screen, under Automatic balancing - Triggers, I set "at least this much data to be moved" to 100 GB. That gives me my 100 GB SSD cache before DrivePool flushes it. DrivePool does have documentation on its website, and that can be helpful. I just got my system working the way I wanted and stopped messing with it. I am sure there are many other ways you could custom-tailor your DrivePool setup based on your needs. There is a program called Primo Cache that is worth looking into if you want to use your SSD as cache for all reads and writes on your system. Primo Cache can use both system RAM and your SSD for caching reads and writes. Primo Cache is totally independent of DrivePool, but works with all your programs, including DrivePool. I believe they still offer a free trial period for evaluation. I currently have my DrivePool using 16 external USB 3.0 HDDs connected to my computer via hubs, and my one SSD is connected internally to the motherboard. The direct motherboard-connected SSD gives me the speed, and the external USB 3.0 HDDs give me the cheap storage for my home media center. I would think that attaching a SSD to a USB docker would slow down the SSD considerably.
Of your options, I would choose c) Remove one HDD from my motherboard and connect my SSD to it, but connect this spare HDD to my docker to increase my DP space. One of the great advantages of using DrivePool is that you can add storage with any sized HDD. You don't have to match drives as in RAID systems. Most of my drives are standalone external USB 3.0 HDDs, but I also have two 4-bay MediaSonic ProBoxes. The advantage there is that I can use any internal 3.5" HDD and slap it in the ProBox. My ProBoxes are set to connect via USB 3.0, but there is also an option to connect via eSATA. Although you don't need a SSD for DrivePool, it will certainly speed up your file transfers. I would definitely recommend taking the time to install a SSD on the front end of your DrivePool if speed is important to you. Coming from a RAID10 background, you might not be happy with DrivePool's speed writing a single file to a single HDD. It would be slower than what you are used to. The SSD cache on the front end of DrivePool changes everything, at least it did for me, and I get write speeds faster than what I got with either my older RAID systems or Windows Storage Spaces.
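The fill-then-spill cache behavior described in the post above can be condensed into a couple of toy functions. This is only a model of the behavior observed on one setup, not DrivePool's actual placement algorithm; the 224 GB size, 100 GB trigger, and 90% overfill limit mirror the settings discussed above:

```python
# Toy model of the observed cache behavior: a new file lands on the SSD cache
# unless it would push the SSD past the overfill limit, in which case the
# write appears to go straight to an archive HDD. Sizes are in GB; the
# defaults mirror the settings discussed in the post, not DrivePool internals.
def placement(file_gb, ssd_used_gb, ssd_total_gb=224, overfill=0.90):
    """Where a new file lands under this simplified model."""
    if ssd_used_gb + file_gb <= ssd_total_gb * overfill:
        return "SSD"
    return "HDD"

def needs_flush(ssd_used_gb, trigger_gb=100):
    """Balancing starts once the cache holds at least the trigger amount."""
    return ssd_used_gb >= trigger_gb
```

Under this model, a 200 GB single file would never fit within the 90% limit of a 224 GB SSD, so it would go straight to an archive HDD, which matches the guess in the post.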
  23. Thank you for the clarification. As for my system, I really don't have much data set for 2X duplication. So I created a TEMP directory in DrivePool set for 2X real time duplication, and transferred files to it. What I see on my DrivePool GUI, is that the files are written to my SSD cache as shown under Disk Performance. When the files are done writing to the SSD cache, then I see file activity written to an archive HDD under Disk Performance for only a second or two, and then it is done. I don't see the files being written to both the SSD cache and the archive HDD at the same time under the Disk Performance in the DrivePool GUI. Is the second copy of the file being written directly from the SSD cache to the archive HDD in the background at the same time? That would explain the HDD not showing up under Disk Performance during the file transfer until the SSD cache has completed. If I open Task Manager, I can clearly see data being written to both the SSD cache and the archive HDD at the same time for a real time 2X folder.
  24. Given that you currently have 15TB in your RAID10 and only have 5.6TB of storage, I'd be looking for alternatives too. That seems to be a lot of storage space being used for data recovery in case of HDD failure. If you have already backed up all your data, do you really need a RAID10 fault protection scheme? Unless you are running a business and have those kinds of never-down concerns, I think you could do better than RAID10 for a home storage solution. With that in mind, I would offer my thoughts. In the past, I have run RAID (0, 1, 5) systems at home. A dedicated RAID controller and matched HDDs can give outstanding performance, no doubt. Striping data to multiple HDDs at the same time can really speed up your transfer rates. Your RAID10 offers great data protection and fast recovery options in the event of a HDD failure. The cost of your RAID10 setup, as you mentioned, is that you have 15TB online and are limited to about 6TB of storage. DrivePool only writes to one disk at a time, so the performance in that respect is limited to the speed of that HDD. Certainly not as fast as striping data out to 2 HDDs at the same time. However, I recently added a SSD to the front end of my DrivePool, so now my writes to the pool are as fast as my SSD can handle. My SSD is rated at about 480 MB/s, so, in theory, that would be my maximum write speed to my DrivePool. DrivePool writes the file transfer to my SSD cache, and then later, when it hits my trigger point (100 GB on my system), it will flush the SSD cache and write to my archive HDDs in the background. Essentially, with the SSD cache on my DrivePool, I get faster real-life performance on my system than I ever got with my RAID systems. As far as data protection goes, there would only be one file copy on the SSD cache until it is flushed out to the archive HDDs. I can live with that, but my system is mainly a home media server for Plex/Kodi.
Most of my data is not that essential, so I do not even have duplication set on my pool. With DrivePool, you can set duplication at the folder level, not just on the entire pool. I have a couple of folders set for 2X duplication and one folder set for 3X duplication. Almost all of my DrivePool stores media files at 1X duplication, because I have my original files backed up on HDDs stored in the closet. If I lost a HDD in the pool, I could rebuild the lost files from my backups.

Unlike RAID systems, where the loss of a HDD can mess up everything in the pool because files are striped across various drives, DrivePool writes each complete file to a single HDD. I had a HDD start to fail, DrivePool noticed it, and I was able to offload all but 2 corrupt files from a 5TB HDD. With my RAID systems, a failing HDD easily meant total data loss. So far, DrivePool has been more reliable and easier to recover from a HDD failure than the other systems I have used.

I ran Windows Storage Spaces for ~7 years. When it works, it is great. But one small HDD failed in my Storage Spaces pool and crashed my entire 26-drive setup - despite Storage Spaces being set up to survive a 2-drive failure (in theory). In practice, for whatever reason, Storage Spaces could not recover from a 1-drive failure and I lost massive amounts of online data. I had to delete the entire Storage Spaces pool and rebuild from my backups, which took forever. This happened not once, not twice, but three times, and then I gave up and switched to DrivePool. Life is better.

If you have problems with Storage Spaces, good luck. You will find MS does not really support Storage Spaces. I was not the only person reporting a single HDD failure crashing an entire pool, and MS does not have an answer. I used to spend hours and hours in PowerShell trying to manually repair problems in Storage Spaces. It was just not worth it to me.
DrivePool just works better for me, and I am enjoying life in other ways than sitting in front of my computer for hours trying to recover data from Storage Spaces.

The DrivePool GUI shows the breakdown of your data in the pie chart: Unduplicated, Duplicated, and Other. As I said, almost my entire 64.8 TB DrivePool is Unduplicated at 56.9 TB, but I also have Duplicated data in the 2X folders and one 3X folder at 2.26 TB, and Other data at 61.9 GB. Free space is currently 5.62 TB. So DrivePool gives me much better storage options for my needs and still provides duplication for my more important files.

If you have 2X duplication set for your entire DrivePool, then I believe the GUI will show all your data as Duplicated, and you effectively have only half of your reported Free space (i.e. 6TB of Free space on a 2X pool means you can add only 3TB of new data). There are options in the DrivePool balancers to use real-time duplication, or to defer duplication to a more convenient time, like midnight. If you did not have duplication set when you initially transferred files to DrivePool and later set it to 2X, it is probably waiting until midnight to check which files need to be duplicated. You can trigger this manually via Settings > Troubleshooting > Recheck duplication. DrivePool will recheck your duplication and perform the needed tasks in the background; when it is done, the GUI should reflect your current stats.

Again, if you have 2X duplication set on the entire pool and your Free space reports 6TB, then you could add 3TB of data to be duplicated. Exactly what I would do. You can let DrivePool automatically check for duplication later, or trigger a manual duplication check via Settings. Although I came from a RAID background, and then Storage Spaces, when I moved to DrivePool I decided not to use 2X duplication on my entire pool. Like you, I have separate backups if I ever need them.
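The free-space arithmetic for a uniformly duplicated pool is simple enough to write down. A minimal sketch (illustrative Python; the function name is mine, and it assumes the same duplication level across the whole pool):

```python
def usable_space(free_bytes, duplication_factor):
    """How much new data can be added to a pool, given its reported
    free space and a uniform duplication level (each file is stored
    duplication_factor times)."""
    return free_bytes // duplication_factor

TB = 10**12
# 6 TB free on a 2X-duplicated pool leaves room for 3 TB of new data.
print(usable_space(6 * TB, 2) / TB)  # -> 3.0
```

The same formula explains a 3X folder: 6 TB of free space there would only hold 2 TB of new files.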
If you use HDD monitoring programs like DrivePool Scanner, then you have a good chance of detecting a failing HDD before it completely dies, and DrivePool can be set up to automatically start moving data off the failing drive. In my experience, DrivePool simply performs better than my former RAID systems or Storage Spaces, and data recovery is possible with DrivePool - but not with a crashed RAID or Storage Spaces system. Hope this was helpful. Good luck.
  25. DrivePool only uses part of the SSD in the hidden PoolPart directory, so you can still use the SSD outside of the DrivePool environment. Have you tried the preallocation on the SSD itself - outside of DrivePool - and compared that to the preallocation speed of the same SSD within DrivePool? I don't use qBittorrent on my system, but the programs I do use have shown no difference between writing to DrivePool's SSD cache and writing to the SSD by itself. For myself, I was happy to verify that the programs I use were not slowed down by going through DrivePool. In fact, in all the testing I have done on file transfers to DrivePool or directly to the SSD/HDDs, I have not seen any difference in speed.
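One quick way to run that comparison yourself is a small timing test against both paths: the pooled drive and the bare SSD. A rough sketch (illustrative Python; the drive letters in the comments are placeholders for your own mount points, and this measures sequential writes only):

```python
import os
import time

def timed_write(path, size_mb=256):
    """Write size_mb of data to path and return the rough speed in MB/s."""
    chunk = os.urandom(1024 * 1024)  # 1 MiB of random data per write
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the data to disk, not just the OS cache
    elapsed = time.perf_counter() - start
    os.remove(path)           # clean up the test file
    return size_mb / elapsed

# Compare the pooled path against the SSD directly (example paths):
# print(timed_write(r"P:\test.bin"))  # DrivePool drive letter
# print(timed_write(r"S:\test.bin"))  # the SSD on its own
```

The `os.fsync` call matters: without it you mostly measure the operating system's RAM cache rather than the drive underneath.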