
My first true rant with Drivepool.


Dave Hobson

Question

OK. As the title suggests I'm not happy.

After 10 days of carefully defragging my 70TB of media (largely on 8TB drives), I decided to reinstall my server and have a fresh start on my optimized storage. All neatly organized, with complete shows that have ended filling up three 8TB archive drives.

What happens? As someone who has zero interest in the built-in balancers and only uses the "Ordered File Placement" plugin, what I didn't expect after reinstalling the OS and then DrivePool was that every default balancer is enabled by default, and, ludicrously, automatic balancing itself is enabled by default too. Why would anyone think that's a good idea? By the time it's even possible to set a single pool to MY required settings, it has already ripped plenty of files from a full 8TB hard drive, because hey, I guess the whole world wants their drives "leveled out." In which case, just remove the "Ordered File Placement" plugin from being available and I will know that DriveBender is the way to go. Like I said, all this was with the first pool, so by the time I get to the 3rd pool?

I guess it's my own fault for reinstalling my server....not!!

Sorry but I'm pissed off right now!

...(mutters to himself)

 

 


I'm sorry to hear about that, and we're sorry it happened!
Though... this happening would depend heavily on the layout of your pool.

My guess is that one or more of the drives were above 90% full? If so, the "Prevent Drive Overfill" balancer will rebalance the content on those drives so that they're only 85% full.
The other possibility is that you had "unusable for duplication" data (Duplication Space Optimizer), or StableBit Scanner detected a read error on one of the drives.

Otherwise, it shouldn't balance like this.   And even if it does, it should have to measure, and then check for duplication first, giving you plenty of time to change these settings. 

 

That said... I need to bug Alex about storing the balancing settings on the pool itself, as we have a spot for it, and we already store some info there. That way, it should at least "default" to all balancers disabled on a new installation, which would have prevented this issue!

 

Again, we are sorry! 


Thanks for the reply, Christopher.

Yep, I had the drives set at 95%. I have duplication, but not with DrivePool itself. As for Scanner, I hadn't got as far as installing it. "Duplication Space Optimizer" looks like one of the six that load by default, so I definitely did have it enabled, but NOT by choice, and the same goes for "Prevent Drive Overfill"; that's the point I'm making. I didn't ask them to run, nor want them to run, but it seems we currently have no choice after a reinstall.

So no duplication and no Scanner installed (at the time) probably severely cuts down the time to react?

Going back to the 95% (or anything above 90%) fill level: if it's such a dangerous option... then maybe a warning when taking it above 90%?

It's good to hear changes are in the pipeline. Personally, I think balancing disabled by default would equally do the job... but yes all balancers off would I guess also work. 

All I know is that either one of those options would have avoided this situation. I guess, with hindsight, an option would have been to move all the data outside the poolpart folders BEFORE the reinstall, but it's pretty easy to think of hacks after the event.
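(Roughly what I mean, sketched with made-up paths; the real hidden folder on each pooled drive is named "PoolPart." followed by a long GUID, so the exact name differs per drive:

    rem Example only: move everything out of the hidden PoolPart folder to a plain folder on the same drive
    robocopy "D:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" "D:\Media" /E /MOVE

Then after the reinstall you'd add the drives to a fresh pool and move the data back in. Whether that's actually the supported way to re-seed a pool I'm not sure, so don't take it as gospel.)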

Anyway, apologies for my tone... I'm over it now, honestly. :D You were, after all, the one who pointed me to directory junctions, which, as a media hoarder, have totally changed the game for me when using Sonarr. Even more so when I discovered they seem to be resilient to OS reinstalls.

BTW... I always sign in here with Google and I couldn't, so I signed in with Microsoft instead. It seems to be a common issue, and this is the 4th forum I've had the problem with. This whole typing the PRODUCT name for authentication seems broken for many people on other forums.

 

 

 

2 hours ago, Dave Hobson said:

Yep, I had the drives set at 95%

That would do it, actually.  The default threshold for the Prevent Drive Overfill is 90%.  

As for the warning, for the most part it wouldn't be needed; it's only an issue when reinstalling or resetting settings.

2 hours ago, Dave Hobson said:

It's good to hear changes are in the pipeline. Personally, I think balancing disabled by default would equally do the job... but yes all balancers off would I guess also work. 

Yeah, just talked to Alex (the Developer) about this, and hopefully we can get this changed sooner, rather than later. 

And it wouldn't turn off... but ideally, we should store all of the balancer settings on the pool.  And either read them from there, or as a backup to be read when it's a "new" pool to the system.  

I mean, we store duplication settings directly in the folder structure, and we store reparse point and other info on the pool as well.  No reason we couldn't store balancing settings (or a copy of it) on the pool, as well.

 

2 hours ago, Dave Hobson said:

Anyway, apologies for my tone... I'm over it now, honestly. :D You were, after all, the one who pointed me to directory junctions, which, as a media hoarder, have totally changed the game for me when using Sonarr. Even more so when I discovered they seem to be resilient to OS reinstalls.

And no worries. It's a legitimate complaint, and one that we should/need to address. 

And glad that Junctions have been awesome for you! :) 
(junctions on the pool are .. stored on the pool already... :) )

2 hours ago, Dave Hobson said:

BTW... I always sign in here with Google and I couldn't, so I signed in with Microsoft instead. It seems to be a common issue, and this is the 4th forum I've had the problem with. This whole typing the PRODUCT name for authentication seems broken for many people on other forums.

Always something, isn't it? 

I'll pass this along and see if we can do something about it. 


Sounds like a good candidate for a custom installation script. You could either accept DP's defaults during install, or (if you know you want something else) choose 'custom' and pick which balancers/features/etc. you want enabled/disabled before the install actually kicks off.

 

6 hours ago, Christopher (Drashna) said:

Yeah, just talked to Alex (the Developer) about this, and hopefully we can get this changed sooner, rather than later. 

And it wouldn't turn off... but ideally, we should store all of the balancer settings on the pool.  And either read them from there, or as a backup to be read when it's a "new" pool to the system.  

I mean, we store duplication settings directly in the folder structure, and we store reparse point and other info on the pool as well.  No reason we couldn't store balancing settings (or a copy of it) on the pool, as well.

 


That would be awesome. I actually only changed to Ordered File Placement because everything was so micromanaged previously that, come reinstall time, it just got far too much trying to manually set everything up as it was.

That's where your mention of directory junctions piqued my interest in what they were. The true copy of the Blu-ray Remux of seasons 1-4 of The Blacklist sits on my defragged archive drive.

A directory junction to it sits on my day-to-day active download drive alongside season 05, and that's what Plex, Sonarr etc. are pointed to.
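Something like this, with made-up drive letters and folder names, is the idea (a plain cmd junction; the link path must not already exist):

    rem The archive drive (E:) holds the real files; the active drive (D:) gets a junction pointing at them
    mklink /J "D:\Active\TV\The Blacklist" "E:\Archive\TV\The Blacklist"

Plex and Sonarr just see a normal folder on D: and never need to know the data actually lives on the archive drive.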

Anyway thanks again for your time and I look forward to the changes that Alex makes.

:)


Ah, something else I meant to ask about. 

Obviously, as I mentioned, I'm kind of OCD about defragmentation, especially on the 8TB archive drives that I'm filling. Now, because the nature of those drives is that they're filled once and done with, ReFS kind of makes sense just for bit-rot prevention. (Beyond that feature I don't really know much about ReFS at all.)

As a PerfectDisk user, when I fill a new drive the defrag map looks clean and reports that no files are fragmented, as expected, or at least that's the case with NTFS. With ReFS, though, the exact same procedure shows EVERY single file as fragmented (either 2 or 3 fragments for each movie or episode)... and so my OCD kicked in and I went back to NTFS.

I've recently started thinking this is more to do with my lack of understanding of how ReFS works, though, or possibly it's just that PerfectDisk isn't fully ReFS-friendly.

I know it's not really anything to do with StableBit, but I do know you use ReFS, so any brief answer from your knowledge would be a great help to ease my OCD. :) It kinda sucks that there is hardly any info on ReFS and fragmentation on any of the Microsoft sites I have looked at.

So, in a nutshell, would you recommend ReFS (with 64K clusters) and just disabling monitoring in PerfectDisk to eliminate my OCD? :lol: I'm thinking 64K clusters because on these drives the file sizes are 5GB minimum, up to probably 80GB. More than happy with a yes or no. :)

 

1 hour ago, Dave Hobson said:

So, in a nutshell, would you recommend ReFS (with 64K clusters) and just disabling monitoring in PerfectDisk to eliminate my OCD? :lol: I'm thinking 64K clusters because on these drives the file sizes are 5GB minimum, up to probably 80GB. More than happy with a yes or no. :)

 

No.  ReFS isn't ready for primetime yet.  Tried it for a year on a pair of drives in software RAID 0.  Did a lot of testing, and a lot of reading.  Too many people running into issues on ReFS for me to fully trust it.  And many of the usual utilities that work well with NTFS aren't updated for ReFS yet.  I combat bit rot by using SnapRAID and frequent syncs/scrubs on my data.  I'd rather have full control, but that's me.
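In practice the "syncs/scrubs" are just the two stock SnapRAID commands run on a schedule, roughly like this (assuming snapraid is on the PATH and the config file is already set up; the exact flags are in the SnapRAID manual):

    rem Update parity to cover files added/changed since the last run
    snapraid sync
    rem Re-read a portion of the array and verify it against the stored checksums/parity
    snapraid scrub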

6 hours ago, Jaga said:

No.  ReFS isn't ready for primetime yet.  Tried it for a year on a pair of drives in software RAID 0.  Did a lot of testing, and a lot of reading.  Too many people running into issues on ReFS for me to fully trust it.  And many of the usual utilities that work well with NTFS aren't updated for ReFS yet.  I combat bit rot by using SnapRAID and frequent syncs/scrubs on my data.  I'd rather have full control, but that's me.

Cool, I have wondered about SnapRAID and recall looking into it way back. I probably need to revisit it. That said, redundancy isn't really an issue, as all 50-60TB is mirrored directly to Gdrive.

But if it's a means to an end, I may delve back into some research and refresh my memory. That said, I'm at 13 drives I think, so that's gonna be 2 x 8TB drives to achieve this, and being in rip-off Britain that's around £400. Granted, it gives me parity also, but that's not really my main concern. I will, however, look into it more on my next days off work. Thanks for the info. :)

 

 

1 hour ago, daveyboy37 said:

Lol. I have no idea what that is. Will Google it when I get home. 

EDIT...I do know what Shucking is now and have actually done this in the past, without actually knowing there's a name for it. 

Well, that's odd. I had been having issues signing in via Google for weeks and signed in with Microsoft instead, and this last reply has signed me in under the previous account on a different device.

 

 

 
 

 


Just an FYI, in case anyone is not aware: my father taught me long ago not to put things on HDDs and then forget about them, expecting the data to remain intact 'forever'. He told me magnetic media change over long periods of time (i.e. years): the bits 'drift' and weaken, and that can lead to unreadable data eventually. He said that, if the drives are in a system, I should run a utility like Scanner in the background that will periodically read the data (and re-write it if needed), to keep the recorded data intact and strong. For drives not in a system he said I should connect them to a system every year or so and run something like SpinRite to refresh the data signals.

He never said anything about SSDs. I'm not sure whether there's any 'magnetic drift' issue with them, although I suspect not.

On 6/1/2018 at 1:25 PM, Dave Hobson said:

So, in a nutshell, would you recommend ReFS (with 64K clusters) and just disabling monitoring in PerfectDisk to eliminate my OCD? :lol: I'm thinking 64K clusters because on these drives the file sizes are 5GB minimum, up to probably 80GB. More than happy with a yes or no. :)

Honestly, I'd say use NTFS.  

22 hours ago, Jaga said:

No.  ReFS isn't ready for primetime yet.  Tried it for a year on a pair of drives in software RAID 0.  Did a lot of testing, and a lot of reading.  Too many people running into issues on ReFS for me to fully trust it.  And many of the usual utilities that work well with NTFS aren't updated for ReFS yet.  I combat bit rot by using SnapRAID and frequent syncs/scrubs on my data.  I'd rather have full control, but that's me.

In Server 2012R2, and up, I'd say it's pretty much ready. 

The biggest issue is that most disk tools are NOT compatible with ReFS yet. Which is the real problem. 

As for bit rot, that's a fun subject.

12 hours ago, Jaga said:

Yep, you'd want at least 2 parity drives for 13 data.  If cost is an issue, look into "shucking drives" to save some money.

That saves you cost up front, and dumps the cost on the warranty. Eg, you have none. So if the drive fails, you may be SOL, if you didn't save the enclosures. 

But seriously, a 3-5 year warranty is worth the difference, IMO. 

6 hours ago, ikon said:

He never said anything about SSDs. I'm not sure whether there's any 'magnetic drift' issue with them, although I suspect not.

Actually, it does affect SSDs too. It's not the same effect, but the outcome is the same: the charge bleeds. It's part of why there's a limited number of writes, IIRC.

 


Thanks for all the awesome input everyone. 

I think I'm gonna stay with NTFS, especially as the SnapRAID site seemingly throws up some suggestions, linking to this article: https://en.wikipedia.org/wiki/Comparison_of_file_verification_software

With regards to shucking... although, as mentioned, I have done this in the past (with 4TB drives, when 4TB was the latest thing), the cost difference is negligible, especially bearing in mind the reasons Christopher mentions, so it's not an approach I want to return to. Though cost isn't really the issue, my current aim is to get rid of/repurpose some of those 4TB drives and replace them with another couple of 8TB drives. Maybe when that's done I will look again at SnapRAID and its parity. If Google ever backtracks on unlimited storage at a stupidly low price in the same way Amazon did, then it may scale higher on my priorities, but for now...

 

EDIT

Now I'm even more curious, as I have just read a post on /r/snapraid suggesting that it's possible to RAID 0 a couple of pairs of 4TB drives and use them as 8TB parity drives. Though the parity would possibly be less stable, it would give me parity (even though that's not the priority), would allow for data scrubbing (my main aim), and would mean that those 4TB drives wouldn't sit in a drawer gathering dust. So if any of you SnapRAID users have any thoughts on this, I would be glad for any feedback/input.

 

 

 

9 hours ago, Christopher (Drashna) said:

In Server 2012R2, and up, I'd say it's pretty much ready. 

The biggest issue is that most disk tools are NOT compatible with ReFS yet. Which is the real problem. 

 

That saves you cost up front, and dumps the cost on the warranty. Eg, you have none. So if the drive fails, you may be SOL, if you didn't save the enclosures. 

But seriously, a 3-5 year warranty is worth the difference, IMO. 

I haven't messed with the server implementation of ReFS, though I assumed it used the same core.  I ditched it ~2 years ago after having some issues working on the drives with utilities.  Just wasn't worth the headache.  I never had actual problems with data on the volume, but just felt unsafe being that "out there" without utilities I normally relied on.  When the utilities catch back up, I'd say it's probably safe to go with it, for a home enthusiast.  Just my .02 - I'm not a ReFS expert.

Shucking has positives and negatives, to be sure.  There's one 8TB drive widely available in the US that normally retails for $300, and is on sale regularly for $169.  For a reduction in warranty (knowing it's the same exact hardware in the case), I'm more than happy to save 44% per drive if all I need to do is shuck it.  They usually die at the beginning or end of their lifespan anyway, so you know fairly early on if it's going to have issues.  That's my plan for the new array this July/Aug - shuck 6-10 drives and put them through their paces early, in case any are weak.

 

6 hours ago, Dave Hobson said:

Now I'm even more curious, as I have just read a post on /r/snapraid suggesting that it's possible to RAID 0 a couple of pairs of 4TB drives and use them as 8TB parity drives. Though the parity would possibly be less stable, it would give me parity (even though that's not the priority), would allow for data scrubbing (my main aim), and would mean that those 4TB drives wouldn't sit in a drawer gathering dust. So if any of you SnapRAID users have any thoughts on this, I would be glad for any feedback/input.

No need to RAID them just for SnapRAID's parity.  It fully supports split parity across smaller drives - you can have a single "parity set" on multiple drives.  You just have to configure it using commas in the parity list in the config.  There's documentation showing how to do it.  I am also doing that with my old 4TB WD Reds when I add new 8TB data drives.  I'll split parity across 2 Reds, so that my 4 total Reds cover the necessary 2 parity "drives".  It'll save me having to fork out for another 2 8TB's, which is great.
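As a rough sketch (drive letters and paths made up; check the SnapRAID manual for the exact syntax), the relevant part of snapraid.conf with split parity looks something like this:

    # First parity level, split across two smaller disks with a comma-separated list
    parity E:\snapraid.parity,F:\snapraid.parity
    # Second parity level, also split across two disks
    2-parity G:\snapraid.2-parity,H:\snapraid.2-parity
    # Content file(s) and data disks as usual
    content C:\snapraid\snapraid.content
    data d1 D:\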

2 hours ago, Jaga said:

I haven't messed with the server implementation of ReFS, though I assumed it used the same core.  I ditched it ~2 years ago after having some issues working on the drives with utilities.  Just wasn't worth the headache.  I never had actual problems with data on the volume, but just felt unsafe being that "out there" without utilities I normally relied on.  When the utilities catch back up, I'd say it's probably safe to go with it, for a home enthusiast.  Just my .02 - I'm not a ReFS expert.

 

That's exactly it, so I think I will stick with NTFS for now.

 

2 hours ago, Jaga said:

Shucking has positives and negatives, to be sure.  There's one 8TB drive widely available in the US that normally retails for $300, and is on sale regularly for $169.  For a reduction in warranty (knowing it's the same exact hardware in the case), I'm more than happy to save 44% per drive if all I need to do is shuck it.  They usually die at the beginning or end of their lifespan anyway, so you know fairly early on if it's going to have issues.  That's my plan for the new array this July/Aug - shuck 6-10 drives and put them through their paces early, in case any are weak.

 

$169, so that's about £125... Wow, like I said, rip-off Britain. That said, the difference in prices between 8TB internal and external drives has diminished over here lately, and both can be picked up for around £180.

 

2 hours ago, Jaga said:

No need to RAID them just for SnapRAID's parity.  It fully supports split parity across smaller drives - you can have a single "parity set" on multiple drives.  You just have to configure it using commas in the parity list in the config.  There's documentation showing how to do it.  I am also doing that with my old 4TB WD Reds when I add new 8TB data drives.  I'll split parity across 2 Reds, so that my 4 total Reds cover the necessary 2 parity "drives".  It'll save me having to fork out for another 2 8TB's, which is great.

 

Oh, that's interesting. It's a while since I looked at SnapRAID, and the ability to split parity across drives has definitely arrived since then. Not that I'm doubting you; I have just read several threads on various sites saying the same thing. That's just awesome.

It's probably stupid NOT going down that route as I know I will have 4-6 4TB drives spare over the next few months.

Thanks so much for your help everybody. :)

14 hours ago, ikon said:

I get around the data loss problem by having multiple backup drive sets. I have 4 to 5 complete copies of all my data, depending on whether my current off-site drive set is fully caught up.

Yeah, I hear you; however, that's not gonna be possible in my case. I'm already up at close to 50-60TB of data, and I only deal in Remux as a minimum quality as soon as the Blu-ray is released. Game of Thrones Season 1 has just hit my main tracker in UHD HDR, so that's an immediate jump of an extra 100GB for a single season. I still have plenty of series that need to be upgraded to 1080p Remux yet, let alone if they later become available in UHD HDR.

I simply use Gdrive for redundancy, but either way, redundancy isn't my concern; data integrity is. So it's more the data-scrubbing element of SnapRAID that interests me. That said, now I know I can use my soon-to-be-redundant 4TB drives for parity, it's a no-brainer.
