Question
cryodream
A couple of days ago, I tried DrivePool and Scanner for the second time. About a year ago, Scanner was not able to read SMART data from my HBA and enclosures, so that was a bust, and I had trouble with a bug in a release version of DrivePool that broke saving settings when running multiple pools. So I decided not to bother...
I have never used any RAID or parity solution before to protect my data. I tried unRAID from time to time years ago and did not like it. Recently I tried tRAID for the 3rd time and did not like it either. Then I looked into SnapRAID + DrivePool and decided, nah.
I was always categorically against backup solutions because of the cost, but I finally decided: enough. DrivePool and duplication, let's try this out.
After a couple of days, I have some questions:
1. How safe is 2x duplication, with Scanner running for safety? Over the past 10 years I've been reading again and again about all kinds of data protection, from hardware RAID to parity to ZFS, etc... 99% of the time you come to the same warning: RAID/parity/etc. is not safety, backup is. Hence the question: is Scanner + DrivePool's 2x duplication enough to be sure I won't lose data the next time I lose a drive?
2. Is DrivePool's 2x duplication more secure (because of Scanner) or less secure (bugs, whatever) than running 2 pools and syncing between them?
3. I was copying TBs of data to the pool from unpooled drives and noticed problems with small files. While copying big folders full of tiny files, like images, mp3s or even smaller, the speed would climb to an impossible 300-400 MB/s and then suddenly stop and hang, sometimes for a couple of minutes. Then copying would resume, the speed would again climb impossibly fast (I was copying from a single spinner, so the real speed should be ~100 MB/s max), and then it would hang again. Here are the possible reasons that come to mind:
a) I am running StableBit.DrivePool_2.2.0.639_x64_BETA. I looked through the forums extensively to find which beta is considered "stable", and iirc this one was said to be. But still, it's a beta...
b) I was copying from a drive with SMART sector warnings.
c) I am using Directory Opus for all my file management, not Windows Explorer. DOpus is the best thing since sliced bread, imho. I hope there are no problems using DOpus and DrivePool together. If it comes down to DOpus vs. DrivePool, DOpus wins hands down, sorry. It's too powerful and time-saving for me to go back to Explorer.
d) Maybe there are some settings I should enable/disable to avoid this behavior?
e) Edit #1. Very important! A similar thing happens with big files as well. I was copying large files, ~1.5GB each, from the pool to outside the pool; note that this time the direction is pool to outside. For the first 5-10 minutes DOpus was copying at about the normal rate of ~100 MB/s+, but it was stopping and completely hanging every 5-10 seconds or so. I mean, it showed "not responding" while the copy hung, then resumed again like nothing happened. And again and again, for ~10 minutes. Then it stopped hanging, but the copying speed dropped to about half of normal, ~50 MB/s. The copy went from a WD Red 3TB to a Hitachi 4TB drive, and the only other activity on either of them was Kodi streaming an episode from the WD Red (source) drive, which is insignificant, imho... Both drives are quite new and A-OK.
Is this normal DrivePool behavior? Because it does not look good and is annoying. Hanging the whole of DOpus is bad enough by itself. But getting only half the drives' speed while reading from the pool? Not good at all. I hope it's something on my system that I can fix; please let me know.
Edit #2. 15 minutes later. Copying from the same place to the same place (from the pool, which happens to be same WD 3TB drive to the same Hitachi 4TB) and everything looks ok. Full speed ~100MB+/s and no stops and hangs.
Edit #3. Nah, after a couple of minutes, the speed drops to about half and stays there, again.
WTF is going on? Should I stop using the beta version of DrivePool, or is it DOpus (it shouldn't be), or is it something else?
4. I was considering adding an SSD to the pool and using file placement rules to put all the metadata for my movies and TV shows on it, using filters like "*.jpg", "*.nfo", etc. My thinking: the media files are big and pretty much never change, but the meta files are small and get updated regularly, so separating them out could help with fragmentation. Would having the meta files on an SSD be better? Or possibly worse, since it would create an enormous amount of additional "merging work" for the pooling driver?
5. Let's say I put all meta onto SSD with file placement filters. If I enable duplication, that probably means I need 2 SSDs to be able to duplicate the meta and keep the SSD bonuses while using the meta files in the pool?
6. My order of new 6TB drives arrived today. My first 6TBs ever. I was wondering how much free space I should leave on these so as not to get NTFS into trouble. I have 4TB, 3TB & 2TB drives in the pool atm, and my current setting leaves 200GB of free space. Would 200GB be enough for a 6TB drive? Or, basically, the age-old question: how much free space should be left on the drives? Should I leave the same amount in GB on drives of all sizes, or should I set the option to leave, e.g., 5%? I would not like to leave 10-15%, like some people say, because I'd hate to lose that much space. Especially considering how much space is already lost to duplication. Jeez.
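To make the fixed-GB vs. percent trade-off concrete, here's a small sketch (just arithmetic, nothing DrivePool-specific; the drive sizes are the nominal ones from my pool) comparing what each policy reserves per drive:

```python
# Compare a fixed 200 GB reserve vs. a 5% reserve across the drive
# sizes mentioned above (values in GB, nominal manufacturer sizes).
drive_sizes_gb = {"2TB": 2000, "3TB": 3000, "4TB": 4000, "6TB": 6000}

def reserve_fixed(size_gb, fixed_gb=200):
    """A flat reserve, the same for every drive size."""
    return fixed_gb

def reserve_percent(size_gb, pct=5):
    """A proportional reserve that grows with the drive."""
    return size_gb * pct / 100

for name, size in drive_sizes_gb.items():
    print(f"{name}: fixed = {reserve_fixed(size)} GB, 5% = {reserve_percent(size):.0f} GB")
```

Notably, the two policies cross over at 4TB: on a 6TB drive a 5% reserve (300 GB) is already bigger than the flat 200 GB, while on the 2TB oldies it is only 100 GB.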
7. There are 2 balancers that have an option to set free space to be left on the drives:
Drive Usage Limiter, which only has a percent setting. I have this balancer disabled atm.
Prevent Drive Overfill, which I have set to: try not to fill above 90% or 170GB, and empty down to 85% or 200GB. From the description of how it works, I gather my settings ignore the percent values and use the 170GB and 200GB values. Am I correct?
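To see why I'm asking, here's a quick sketch translating both limits into free-space amounts per drive size. This is only my arithmetic for comparing the two thresholds, not a claim about which one DrivePool actually honors (that's the question):

```python
# Express "fill above 90%" and "170 GB free" as free-space thresholds,
# so it's visible which limit is the more restrictive on each drive.
def free_space_thresholds(size_gb, pct_full=90, fixed_free_gb=170):
    # Free space remaining when the drive is pct_full percent full.
    pct_free_gb = size_gb * (100 - pct_full) / 100
    return pct_free_gb, fixed_free_gb

for size in (2000, 3000, 4000, 6000):
    pct_gb, fixed_gb = free_space_thresholds(size)
    print(f"{size // 1000}TB drive: 90% full leaves {pct_gb:.0f} GB free; "
          f"fixed limit is {fixed_gb} GB free")
```

On every drive 3TB and up, "90% full" leaves more free space than the 170GB figure, so which value wins actually changes the outcome.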
8. If I want to enable the Drive Usage Limiter but keep a size-based free space setting, I should move Prevent Drive Overfill above the Drive Usage Limiter, right?
9. Long story short: I have 4 "groups" of drives:
oldest: a mix of 10+ old 2TB drives
older: 8x Seagate 3TB drives
newer: 16x WD Red 3TB drives
brand new: 6x WD Red 6TB drives
What I would like to achieve is to "control" how the duplication works. I would like to set up "drive groups":
a) a group that holds non-duplicated files. That would be the newest drives (my thinking: the newer the drive, the smaller the chance it fails). That should be easy enough, I guess: just use the Drive Usage Limiter and uncheck Duplicated for those drives?
b) This second scenario I gather is not possible atm, please confirm. I would like to set 2 groups for duplication: one group of newer drives for the "primary copy" and a second group for the "duplicates". This way I could be sure that every file has one copy on the newer drives. As it stands atm, I guess DrivePool duplicates files onto random drives, so I would end up with some files having both copies on newer drives and others with both copies on the oldies...?
c) I would very much like to ask for this drive-groups feature to be added to DrivePool in the future, most probably in the form of a balancer. From what I gather about how DrivePool works, that should not be difficult, methinks. Additionally, you could allow marking groups as "primary" and "copies". This would have an additional positive effect: you could disable real-time duplication and always use the "primary" group of newer, faster drives for real-time everyday file management, while files duplicate to the slower drives in the background...
10. I would like to ask for a feature in DrivePool's UI to display an additional "free space" statistic. Maybe it's just me, still not used to pooled drives, but it's kinda confusing how much free space I really have in the pool. My pool shows 17.6 TB of free space, but considering that I have 32 drives in the pool atm, all keeping 200GB free, I always need to remember to do the math: FreeSpace − NumberOfDrives × 200GB = RealFreeSpace. In my case that's 17.6 TB − 6.4 TB reserved = 11.2 TB atm. But as I'm going to constantly add drives and sometimes take some out, this number will constantly change and I'll constantly need to redo the math.
I propose an additional pie chart section, something like "Reserved Free Space", and then DrivePool could show Free Space as it does now (the full number) plus the calculated real free space: FreeSpace − ReservedFreeSpace = how much space I have left on the pool for realz.
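The proposed statistic is trivial to compute; a minimal sketch with the numbers from my pool above (nominal decimal units, 1 TB = 1000 GB):

```python
# Sketch of the proposed "real free space" statistic: pool free space
# minus the per-drive reserved free space.
TB = 1000  # GB per TB, nominal decimal units

pool_free_gb = 17_600        # 17.6 TB as reported by the pool
num_drives = 32
reserve_per_drive_gb = 200

reserved_gb = num_drives * reserve_per_drive_gb   # total reserved
real_free_gb = pool_free_gb - reserved_gb         # actually usable

print(f"Reserved: {reserved_gb / TB:.1f} TB, "
      f"real free: {real_free_gb / TB:.1f} TB")
```

That's 6.4 TB tied up in reserves and 11.2 TB genuinely usable, which is exactly the number I'd like the UI to show without me doing the subtraction.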
11. I've read somewhere on the forums that exporting and importing of DrivePool settings is coming. Very nice to hear, as this feature imho should be in every piece of software, period. Also, why not save the pool settings on the pool drives themselves? Duplicated for safety. Then, after a Windows reinstall, or after moving to a new machine, just pop in the drives and the pool spins up with all its settings...
I was really surprised when I first realized it doesn't already work this way. I mean that as a compliment, actually. DrivePool and Scanner just have this quality feel, a whiff of some very good thinking and work put into them. Hence, I'm seriously baffled by the annoying inability to quickly carry the settings over.
Oh, and CloudDrive? Seriously? One programmer doing this? All I can say is: f**k you, Bitcasa. Anyone who knows what Bitcasa was? is? who cares..., knows what I mean. F**k you, Bitcasa, eat your heart out...