
Dave Hobson

Members
  • Posts: 30
  • Days Won: 2

Reputation Activity

  1. Like
    Dave Hobson reacted to Jaga in My first true rant with Drivepool.   
    I haven't messed with the server implementation of ReFS, though I assumed it used the same core.  I ditched it ~2 years ago after having some issues working on the drives with utilities.  Just wasn't worth the headache.  I never had actual problems with data on the volume, but just felt unsafe being that "out there" without utilities I normally relied on.  When the utilities catch back up, I'd say it's probably safe to go with it, for a home enthusiast.  Just my .02 - I'm not a ReFS expert.
    Shucking has positives and negatives, to be sure.  There's one 8TB drive widely available in the US that normally retails for $300, and is on sale regularly for $169.  For a reduction in warranty (knowing it's the same exact hardware in the case), I'm more than happy to save 44% per drive if all I need to do is shuck it.  They usually die at the beginning or end of their lifespan anyway, so you know fairly early on if it's going to have issues.  That's my plan for the new array this July/Aug - shuck 6-10 drives and put them through their paces early, in case any are weak.
     
    No need to RAID them just for SnapRAID's parity.  It fully supports split parity across smaller drives - you can have a single "parity set" on multiple drives.  You just list multiple parity files, separated by commas, on the parity line in the config (see the sketch below).  There's documentation showing how to do it.  I am also doing that with my old 4TB WD Reds when I add new 8TB data drives.  I'll split parity across two Reds, so that my four Reds in total cover the necessary two parity "drives".  It'll save me having to fork out for another two 8TBs, which is great.
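    As an illustration, a minimal snapraid.conf sketch of that split-parity setup (the drive letters and paths here are made-up examples, and split parity needs SnapRAID 11.0 or later):
     
      # One parity level spread across two 4TB drives, comma-separated
      parity F:\snapraid.parity,G:\snapraid.parity
      # Second parity level split across the other two 4TB Reds
      2-parity H:\snapraid.2-parity,I:\snapraid.2-parity
      # Content files (keep copies on more than one drive)
      content C:\snapraid\snapraid.content
      content D:\array1\snapraid.content
      # Data drives
      data d1 D:\array1\
      data d2 E:\array2\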
  2. Like
    Dave Hobson got a reaction from Jaga in My first true rant with Drivepool.   
    Thanks for all the awesome input everyone. 
    I think I'm gonna stay with NTFS, especially as the SnapRAID site throws up some suggestions linking to this article: https://en.wikipedia.org/wiki/Comparison_of_file_verification_software 
    With regards to shucking... although, as mentioned, I have done this in the past (with 4TB drives, when 4TB was the latest thing), the cost difference is negligible, especially bearing in mind the reasons Christopher mentions, so it's not an approach I want to return to. Cost isn't really the issue though; my current aim is to get rid of/repurpose some of those 4TB drives and replace them with another couple of 8TB drives. Maybe when that's done I will look again at SnapRAID and its parity. If Google ever backtrack on unlimited storage at a stupidly low price in the same way Amazon did, then it may climb higher up my priorities, but for now... 
     
    EDIT
    Now I'm even more curious, as I have just read a post on /r/snapraid suggesting that it's possible to RAID 0 a couple of pairs of 4TB drives and use them as 8TB parity drives. Though the parity would possibly be less stable, it would give me parity (even though that's not the priority), would allow for data scrubbing (my main aim), and would mean that those 4TB drives wouldn't sit in a drawer gathering dust. So if any of you SnapRAID users have any thoughts on this, I would be glad for any feedback/input. 
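    If I do go down that road, the scrubbing side at least looks straightforward - it's all driven from SnapRAID's command line (a sketch, assuming SnapRAID is already configured; the -p/-o numbers are just example values):
     
      rem Build/refresh parity after files are added or changed
      snapraid sync
      rem Verify ~12% of the array, only blocks not checked in the last 10 days
      snapraid scrub -p 12 -o 10
      rem Report array health and how recently blocks were scrubbed
      snapraid status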
     
     
     
  3. Like
    Dave Hobson reacted to Christopher (Drashna) in My first true rant with Drivepool.   
    Yeah, it would be nice.  
     
    And yeah, some of the advanced tricks that you can do make management of media much easier.
  4. Like
    Dave Hobson reacted to johnnj in So far so good...   
    Wow, thanks for responding to my post!  
    I had actually seen that script before I started my migration, but wanted to start from scratch duplicate-wise.  DB had a dupe-to-primary ratio of greater than 1:1 for a long time and I didn't want to bring that along for the ride.  I had nuked the duplicates on my DB pool a year or so ago because it was really bad, but it's been creeping back up since then.  
    I don't want to trash DB too much because for years it did its job for me, and I know that for a couple of years now it's been at varying levels of being orphaned.  It's just that in the last few weeks, in the course of doing my server upgrade, it caused me to waste a LOT of time, and it was irritating.  Had I never upgraded to 2016 I could have somewhat happily continued running my years-old version.  But I upgraded my motherboard and CPU to a new 8th-gen i7 and wanted to take advantage of the iGPU hardware decoding, and no matter how much .inf wrangling I tried I couldn't get the Intel drivers to work under 2012r2, so an upgrade to 2016 was needed.  
    The server has already been up longer than DB used to last before acting up, so I'm optimistic that I can just let it run and get on with my life.  
    Thanks again for the response and for being so engaged in the community.  
    John
  5. Like
    Dave Hobson reacted to johnnj in So far so good...   
    I've been using Drive Bender since MS EOL'ed Drive Extender, but have never been able to use any of the 2.x versions of it due to random drive disconnects under 2012/2012r2.
    A couple of weeks ago I upgraded the MB in my server and put a fresh install of 2016 Essentials on, but the mount point on my trusty old DB 1.9.5 wasn't working so I had to go up to 2.8.
    To be honest, it's been nothing but trouble.  Drives dropping off (but still showing up under device manager) and system lockups.  It got better when I disabled the DB SMART monitoring service, but whenever DB would start an automatic health check the pool would freeze and eventually the system would lock up, not even responding to RDP.  
    I've been aware of Drive Pool for some time, but assumed (incorrectly) that migrating the pool would be a pain.  This morning I finally had it with DB and decided to check out Drive Pool and found the migration post on the forum.  The part that took the longest was removing the duplication on the DB pool before moving the file structure on each drive.
    I migrated a 19-drive, 95TB pool in about 2 hours total, and the pool came right up in DP; so far it's very responsive.  I like how lightweight it is, and I got the license package with Scanner, which seems like a big improvement over HD Sentinel (which had its own issues).  It's only about 25% of the way through creating the duplicates, but even with that going on it seems to perform better than DB did when it was just sitting there.
    I feel like I should have switched a long time ago....
    Thanks, StableBit, and thanks to the community for having all the info I needed on this forum to make an informed purchasing decision (warts and all) and to do the migration itself.
    John
  6. Like
    Dave Hobson reacted to Christopher (Drashna) in My first true rant with Drivepool.   
    That would do it, actually.  The default threshold for the Prevent Drive Overfill is 90%.  
    As for the warning, for the most part it wouldn't be needed. It's an issue when reinstalling or resetting settings only. 
    Yeah, just talked to Alex (the Developer) about this, and hopefully we can get this changed sooner, rather than later. 
    And it wouldn't turn off... but ideally, we should store all of the balancer settings on the pool.  And either read them from there, or as a backup to be read when it's a "new" pool to the system.  
    I mean, we store duplication settings directly in the folder structure, and we store reparse point and other info on the pool as well.  No reason we couldn't store balancing settings (or a copy of it) on the pool, as well.
     
    And no worries. It's a legitimate complaint, and one that we should/need to address. 
    And glad that Junctions have been awesome for you!  
    (junctions on the pool are .. stored on the pool already...  )
    Always something, isn't it? 
    I'll pass this along and see if we can do something about it. 
  7. Thanks
    Dave Hobson got a reaction from Christopher (Drashna) in My first true rant with Drivepool.   
    OK. As the title suggests I'm not happy.
    After 10 days of carefully defragging my 70TB of media (largely 8TB drives), I decided to reinstall my server and have a fresh start on my optimized storage. All neatly organized with full shows that have ended, filling up three 8TB archive drives.
    What happens? As someone who has zero interest in the inbuilt balancers and only uses the "Ordered File Placement" plugin, what I didn't expect after reinstalling the OS and then DrivePool was that every default balancer is enabled by default, and, ludicrously, balancing itself is enabled by default. Why would anyone think that's a good idea? By the time it's even possible to set a single pool to MY required settings, it's already ripped plenty of files from a full 8TB hard drive, because, hey, I guess the whole world wants their drives "leveled out." In which case just remove the "Ordered File Placement" plugin from being available and I'll know that DriveBender is the way to go. Like I said, all this was with the first pool, so by the time I get to the third pool? 
    I guess it's my own fault for reinstalling my server....not!!
    Sorry but I'm pissed off right now!
    ...(mutters to himself)
     
     
  8. Like
    Dave Hobson got a reaction from Christopher (Drashna) in Micro-management Of Drivepool - Solved by using junctions.   
    Even more useful now.
    I have all my data more or less duplicated on gdrive. As my penchant for higher-quality files grows, local space becomes more of an issue (but not in any way terminal).
    As a result, some shows that friends/family have asked for, and that aren't to my personal taste, reside solely on gdrive (which has a separate Plex server attached to it). I tried NetDrive and Drive File Stream for a while, pointing my local server at those. This led to one of two evils: all the shows I have locally would show as duplicates, OR in Plex I would have to point to the individual show directories for stuff that is only on Google (I'm far too lazy to separate the shows on Google itself).
    On a whim I decided to see what would happen if I selected some show folders on Drive File Stream and made directory junctions for them. I really didn't expect it to work, and yet it did. I went to my Google API screen to monitor the hits while Plex scanned the files in, and everything was fine. 
    So now I'm gonna set up a separate folder consisting only of these junction points and point Plex to that. If I do ever get around to watching these shows, they are there without switching to the gdrive-mounted server or having to download terabytes of data locally for shows I may never bother with. 
     
    I have no idea about Directory Junctions' inner workings... but wow, this is game-changing for me. 
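    For reference, the command-prompt equivalent is one command per show - a minimal sketch, with hypothetical paths and Drive File Stream assumed to be mounted as G:
     
      rem mklink /J <junction> <target> creates a directory junction (no admin rights needed)
      mklink /J "D:\PlexLinks\TV\Some Show" "G:\My Drive\TV\Some Show"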
     
  9. Like
    Dave Hobson got a reaction from Christopher (Drashna) in Micro-management Of Drivepool - Solved by using junctions.   
    Yep. I'm not really a command line type of guy.
    But this really helped: http://schinagl.priv.at/nt/hardlinkshellext/linkshellextension.html
  10. Like
    Dave Hobson reacted to TomTiddler in Is DrivePool abandoned software?   
    Just my 10 cents here ... as someone who used to design and implement custom file systems, I'm well aware of a) how much work is involved, and b) what a truly splendid job you guys (guy?) have done with DrivePool. I have two fairly large pools running (8 drives/30TB in one, 4 drives/16TB in the other) and have to say: zero loss of data ever, and remarkably few problems.
     
    Given how cheap the product is, I would have absolutely NO problem with paying an annual support fee. I suspect that you might see a similar response from the many, many customers who have NOT contributed to this thread. Keep up the good work guys, and don't be put off by the haters/whiners.
     
    IT