Covecube Inc.

Leaderboard


Popular Content

Showing content with the highest reputation since 05/19/18 in all areas

  1. 1 point
    Jaga

    DrivePool + Primocache

    I recently found out these two products were compatible, so I wanted to check the performance characteristics of a pool with a cache assigned to its underlying drives. Pleasantly, I found there was a huge increase in pool drive throughput using Primocache and a good-sized Level-1 RAM cache. This pool uses a simple configuration: 3 WD 4TB Reds with a 64KB block size (both volume and DrivePool). Here are the raw tests on the DrivePool volume, without any caching going on yet. After configuring and enabling a sizable Level-1 read/write cache in Primocache on the actual drives (Z:, Y: and X:), I re-ran the test on the DrivePool volume and got these results: As you can see, not only do both pieces of software work well with each other, the speed increase on all DrivePool operations (the D: in the benchmarks was my DrivePool letter) was vastly greater. For anyone looking to speed up their pool, Primocache is a viable and effective means of doing so. It would even work well with the SSD Cache feature in DrivePool - simply cache the SSD with Primocache, and boost read (and write, if you use a UPS) speeds. Network speeds are, of course, still limited by bandwidth, but any local pool operations will run much, much faster. I can also verify this setup works well with SnapRAID, especially if you also cache the parity drive(s). I honestly wasn't certain this was going to work when I started thinking about it, but I'm very pleased with the results. If anyone else would like to give it a spin, Primocache has a 60-day trial on their software.
  2. 1 point
    jak64950

    Request -- Better Duplication Settings

    Hello, So I'm running a 100TB plex server with only 60TB of local storage, and I would like to duplicate only certain folders from my clouddrives to local. So, for example, let's say I want 1000 of my 3000 movies on the local storage. As far as I can tell, the only way (and what I've been doing) is to go one by one through each folder and set duplication. Even just being able to select multiple folders, or having a config file, would be a great improvement to the current system.
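    As a hedged sketch of a stopgap, in case it helps: DrivePool's command-line tool, dpcmd.exe, can set duplication per folder, so a short PowerShell loop over a folder list avoids the one-by-one clicking. The file name and paths below are hypothetical placeholders, and it's worth confirming the set-duplication syntax against your dpcmd version first.

    # Read the list of pool folders that should be duplicated (one full path per line).
    # 'C:\Scripts\duplicate-these.txt' is a made-up example file.
    $folders = Get-Content -Path 'C:\Scripts\duplicate-these.txt'

    foreach ($folder in $folders) {
        # Set x2 duplication on each listed folder via dpcmd.
        & dpcmd set-duplication "$folder" 2
    }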
  3. 1 point
    Awesome! And yeah, once the cache gets filled, it dumps what is in there to make room for the new data. So a small cache means constant downloading. So I'm glad to hear that this was the issue here.
  4. 1 point
    The root folder is called "StableBit CloudDrive". If you only renamed that folder, just give it that default name again. The entire (Google Drive) StableBit folder structure looks like this:

    \StableBit CloudDrive\CloudPart-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-CONTENT (or -ATTACHMENT, -METADATA)

    The "xxxx..." string is a placeholder for the globally unique ID in hex (e.g. abcdef01-abc1-def2-1234-12345abcd67f). All files inside the folders on Google Drive have that ID in their name, and you will also find it in the cache folder's name on your local machine (e.g. \CloudPart-abcdef01-abc1-def2-1234-12345abcd67f).
  5. 1 point
    Scanner also pings the SMART status of each drive on a short timer. You will want to go into Settings, and to the S.M.A.R.T. tab to enable "Throttle queries". It's your choice how long to set it, and you also have the option to limit queries to the work window (active disk scanning) only, or to not query the drive if it has spun down (sleep mode on the drive). In your case, I would simply throttle queries to every 5-10 minutes, and enable "Only query during the work window or if scanning". Then under the General tab set the hours it's allowed to access the drives to perform scans, and check "Wake up to scan". It should limit drive activity and allow the machine to sleep normally.
  6. 1 point
    OK. As the title suggests, I'm not happy. After 10 days of carefully defragging my 70TB of media (largely 8TB drives), I decided to reinstall my server and have a fresh start on my optimized storage. All neatly organized, with full shows that have ended filling up three 8TB archive drives. What happens? As someone who has zero interest in the built-in balancers and only uses the "Ordered File Placement" plugin, what I didn't expect after reinstalling the OS and then DrivePool was that every default balancer is enabled by default and, ludicrously, balancing itself is enabled by default. Why would anyone think that's a good idea? By the time it's even possible to set a single pool to MY required settings, it's already ripped plenty of files from a full 8TB hard drive, because, well hey, I guess the whole world wants their drives "leveled out." In which case, just remove the "Ordered File Placement" plugin from being available and I will know that DriveBender is the way to go. Like I said, all this was with the first pool, so by the time I get to the 3rd pool? I guess it's my own fault for reinstalling my server... not!! Sorry, but I'm pissed off right now! ...(mutters to himself)
  7. 1 point
    Jaga

    My first true rant with Drivepool.

    I haven't messed with the server implementation of ReFS, though I assumed it used the same core. I ditched it ~2 years ago after having some issues working on the drives with utilities. It just wasn't worth the headache. I never had actual problems with data on the volume, but I felt unsafe being that "out there" without the utilities I normally relied on. When the utilities catch back up, I'd say it's probably safe to go with it for a home enthusiast. Just my .02 - I'm not a ReFS expert.

    Shucking has positives and negatives, to be sure. There's one 8TB drive widely available in the US that normally retails for $300 and is on sale regularly for $169. For a reduction in warranty (knowing it's the exact same hardware in the case), I'm more than happy to save 44% per drive if all I need to do is shuck it. Drives usually die at the beginning or end of their lifespan anyway, so you know fairly early on if one is going to have issues. That's my plan for the new array this July/Aug - shuck 6-10 drives and put them through their paces early, in case any are weak.

    No need to RAID them just for SnapRAID's parity. SnapRAID fully supports split parity across smaller drives - you can have a single "parity set" on multiple drives. You just have to configure it using commas in the parity list in the config; there's documentation showing how to do it. I am also doing that with my old 4TB WD Reds when I add new 8TB data drives. I'll split parity across 2 Reds, so that my 4 total Reds cover the necessary 2 parity "drives". It'll save me having to fork out for another two 8TBs, which is great.
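    For anyone curious, here's a minimal sketch of what that split-parity setup looks like in snapraid.conf - the drive letters and paths below are hypothetical placeholders, so adapt them to your own layout:

    # Two 4TB drives acting together as one 8TB parity "drive".
    # SnapRAID splits the parity file across the comma-separated paths.
    parity Q:\snapraid\snapraid.parity,R:\snapraid\snapraid.parity

    # The second parity level, split across the other two 4TB Reds.
    2-parity S:\snapraid\snapraid.2-parity,T:\snapraid\snapraid.2-parity

    SnapRAID treats each comma-separated list as a single logical parity drive, which is what lets a pair of 4TB Reds stand in for one 8TB parity disk.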
  8. 1 point
    Dave Hobson

    My first true rant with Drivepool.

    Thanks for all the awesome input, everyone. I think I'm gonna stay with NTFS, especially as the SnapRAID site seemingly throws up some suggestions linking to this article: https://en.wikipedia.org/wiki/Comparison_of_file_verification_software

    With regards to shucking... Although, as mentioned, I have done this in the past (with 4TB drives, when 4TB was the latest thing), the cost difference is negligible, especially bearing in mind the reasons Christopher mentions, and it's not an approach I want to return to. Though the cost isn't really the issue; my current aim is to get rid of/repurpose some of those 4TB drives and replace them with another couple of 8TB drives. Maybe when that's done I will look again at SnapRAID and its parity. If Google ever backtracks on unlimited storage at a stupidly low price in the same way Amazon did, then it may scale higher on my priorities, but for now...

    EDIT: Now I'm even more curious, as I have just read a post on /r/snapraid suggesting that it's possible to RAID 0 a couple of pairs of 4TB drives and use them as 8TB parity drives. Though the parity would possibly be less stable, it would give me parity (even though it's not the priority), would allow for data scrubbing (my main aim), and would mean that those 4TB drives wouldn't sit in a drawer gathering dust. So if any of you SnapRAID users have any thoughts on this, I would be glad for any feedback/input.
  9. 1 point
    Honestly, I'd say use NTFS. In Server 2012R2 and up, I'd say it's pretty much ready. The biggest issue is that most disk tools are NOT compatible with ReFS yet, which is the real problem. As for bit rot, that's a fun subject.

    That saves you cost up front and dumps the cost on the warranty. E.g., you have none. So if the drive fails, you may be SOL if you didn't save the enclosures. But seriously, a 3-5 year warranty is worth the difference, IMO.

    Actually, it does. It's not the same effect, but the outcome is the same: bleeding. It's part of why there are a limited number of writes, IIRC.
  10. 1 point
    johnnj

    So far so good...

    I've been using Drive Bender since MS EOL'ed Drive Extender, but have never been able to use any of the 2.x versions of it due to random drive disconnects under 2012/2012r2. A couple of weeks ago I upgraded the MB in my server and put a fresh install of 2016 Essentials on, but the mount point on my trusty old DB 1.9.5 wasn't working, so I had to go up to 2.8. To be honest, it's been nothing but trouble: drives dropping off (but still showing up under Device Manager) and system lockups. It got better when I disabled the DB SMART monitoring service, but whenever DB would start an automatic health check, the pool would freeze and eventually the system would lock up, not even responding to RDP.

    I've been aware of Drive Pool for some time, but assumed (incorrectly) that migrating the pool would be a pain. This morning I finally had it with DB, decided to check out Drive Pool, and found the migration post on the forum. The part that took the longest was removing the duplication on the DB pool before moving the file structure on each drive. I migrated a 19-drive, 95TB pool in about 2 hours total, and the pool came right up in DP; so far it's very responsive. I like how lightweight it is, and I got the license package with Scanner, which seems like a big improvement over HD Sentinel (which had its own issues). It's only about 25% of the way through creating the duplicates, but even with that going on it seems to perform better than DB did when it was just sitting there. I feel like I should have switched a long time ago...

    Thanks, StableBit, and thanks to the community for having all the info I needed on this forum to make an informed purchasing decision (warts and all) and to do the migration itself.

    John
  11. 1 point
    johnnj

    So far so good...

    Wow, thanks for responding to my post! I had actually seen that script before I started my migration, but wanted to start from scratch duplicate-wise. DB had a dupe-to-primary ratio of greater than 1:1 for a long time, and I didn't want to bring that along for the ride. I had actually nuked the duplicates on my DB pool a year or so ago because it was really bad, but it's been creeping back up since then.

    I don't want to trash DB too much, because for years it did its job for me, and I know that for a couple of years now it's been at varying levels of being orphaned. It's just that in the last few weeks, in the course of doing my server upgrade, it caused me to waste a LOT of time, and it was irritating. Had I never upgraded to 2016, I could have somewhat happily continued running my years-old version. But I upgraded my MB and CPU to a new 8th-gen i7 and wanted to take advantage of the iGPU hardware decoding, and no matter how much .inf wrangling I tried, I couldn't get the Intel drivers to work under 2012r2, so an upgrade to 2016 was needed.

    I've already surpassed the amount of time that DB took before it would act up, so I'm optimistic that I can let the server just run and I can go on with my life.

    Thanks again for the response and for being so engaged in the community.

    John
  12. 1 point
    bzowk

    Pool Activity Monitoring

    Hey Guys - I've been using DrivePool & Scanner for a few years now, and overall it's been great. My home pool currently consists of 12 disks (11 SATA + 1 SSD for caching) totalling over 43.7TB, which is assigned to my D: drive. Being a big fan of monitoring resources, I'd love to be able to monitor the overall disk performance in some sort of desktop gadget or widget. This is easy to do for the pool's individual disks (if drive letters are assigned) or within Scanner, but not for the pool as a whole. Since the pool isn't a standard disk, most applications that do this unfortunately show D:\ as having no activity, ever. One of the many examples of what I'd like is an older Windows gadget, "Drive Activity." Does anyone know of an application or workaround where I could get the pool's activity to be shown in typical monitoring applications? All I really want is something simple that would show (or trick applications into showing) either the combined read/write totals or the highest value of the disks comprising the pool. Thanks!
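    One possible workaround, sketched here with hypothetical disk numbers: the pool drive itself exposes no activity counters, but you can sum the Windows performance counters of the physical disks behind it. This PowerShell sketch assumes the pool members show up as the PhysicalDisk instances below - run (Get-Counter -ListSet PhysicalDisk).PathsWithInstances to find your actual instance names:

    # Sum read throughput across the physical disks backing the pool.
    # The instance names are placeholders for your actual pool member disks.
    $readPaths = '\PhysicalDisk(1 E:)\Disk Read Bytes/sec',
                 '\PhysicalDisk(2 F:)\Disk Read Bytes/sec',
                 '\PhysicalDisk(3 G:)\Disk Read Bytes/sec'

    while ($true) {
        $samples = (Get-Counter -Counter $readPaths).CounterSamples
        $totalMBps = ($samples | Measure-Object -Property CookedValue -Sum).Sum / 1MB
        Write-Host ("Pool read: {0:N1} MB/s" -f $totalMBps)
        Start-Sleep -Seconds 1
    }

    The same pattern works with '\PhysicalDisk(...)\Disk Write Bytes/sec' if you want combined read/write totals.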
  13. 1 point
    Yeah, it would be nice. And yeah, some of the advanced tricks that you can do make management of media much easier.
  14. 1 point
    Thank you Chris and Jaga. Ended up just copying everything over so that everything would be in 1 pool. Forced me to do some much needed digital housekeeping.
  15. 1 point
    That would do it, actually. The default threshold for the "Prevent Drive Overfill" balancer is 90%.

    As for the warning, for the most part it wouldn't be needed; it's an issue only when reinstalling or resetting settings. Yeah, I just talked to Alex (the developer) about this, and hopefully we can get this changed sooner rather than later. And it wouldn't turn off... but ideally, we should store all of the balancer settings on the pool, and either read them from there, or use them as a backup to be read when it's a "new" pool to the system. I mean, we store duplication settings directly in the folder structure, and we store reparse point and other info on the pool as well. There's no reason we couldn't store the balancing settings (or a copy of them) on the pool, too.

    And no worries. It's a legitimate complaint, and one that we should/need to address. And glad that junctions have been awesome for you! (Junctions on the pool are... stored on the pool already...) Always something, isn't it? I'll pass this along and see if we can do something about it.
  16. 1 point
    GDogTech

    Fantastic Deal on 4TB HDD at Newegg!

    Hello everyone: Not sure if this is the correct forum for this, but I just wanted to let you all know about what I consider to be a fantastic deal on some 4TB drives at Newegg. Drashna, you can move it if you need to. To save everyone's time, I'll just give you the stats:

    · Brand: Toshiba
    · Model: MD04ACA400 (Bare Drive)
    · Capacity: 4TB
    · Size: 3.5"
    · Speed: 7200RPM
    · Connection: SATA 6.0Gb/s
    · Cache: 128MB
    · Warranty: 5 years from Toshiba
    · Seller: Newegg
    · Item Number: N82E16822149644 (Newegg)
    · Shipping: Free, 3-day FedEx
    · Price: $93.49 (that's $23.37 per TB, delivered!)

    IMPORTANT: When you do a search for the drive on Newegg's site, do so using only the Newegg item number. The drive does not come up with an ordinary search, but the same model from GoHardDrive.com DOES come up for $93.00 or so with a 3-year warranty. If you want, you can just try this direct link to the Newegg offering: https://www.newegg.com/Product/Product.aspx?Item=N82E16822149644 Newegg says they are discounting the drive from $205.99 (55%).

    I found the Newegg offering quite by chance, after I had to return BOTH of the drives I purchased from GoHardDrive.com due to numerous bad sectors right out of the package. ALSO, the GoHardDrive units have manufacturing dates 18-24 months ago, and GoHardDrive is providing the warranty themselves. The Newegg drives I purchased were made in Feb 2018. I just finished running them through extensive testing, and they are both in perfect 100% health with no errors at all. I have verified the 5-year warranty by serial number directly on Toshiba's website. Here's the link for that (not easy to find): https://myapps.taec.toshiba.com/myapps/admin/jsp/webrma/addRequest1NoLogin.jsp?Action=NEW

    These seem to be good quality drives. When I bought them, Newegg was advertising them as enterprise drives. I already knew they weren't, however, and I bought them anyway. 5 years is a SUPERB warranty! Now, in the new listing, they dropped "Enterprise" and just say "Good for Servers".

    Hope I'm not wasting your time. I did do a search on the forum to make sure no one had already said something, and I didn't see anything. If you're interested, don't dally on it. Enough people already know about them that they are selling like crazy. Newegg told me they are already on their second order of 10,000 drives!

    Hope this helps someone,
    GDog
  17. 1 point
    Christopher (Drashna)

    Not even after re-adding?

    Yup, what Jaga said. Specifically, the software doesn't automatically rebalance the data like this unless there is a specific reason to do so (such as one or more disks being too full). Over time, it will accomplish a balanced layout, because new files are added to the disk with the most available free space. However, as Jaga indicated, the "Disk Space Equalizer" balancer plugin aggressively rebalances the pool, so that each disk has the same percentage used, or the same amount of free space (your choice).
  18. 1 point
    Jaga

    Not even after re-adding?

    You'll want to get the "Disk Space Equalizer" plugin for DrivePool from this page, and enable it to force an immediate re-balance. When it's done, turn it off again and let DrivePool do automatic balancing from then on.
  19. 1 point
    msq

    Yay - B2 provider added :)

    In the very latest beta (1.1.0.991), the B2 provider has been added. Downloaded, installed, set up, and already in use. Thank you guys!
  20. 1 point
    Christopher (Drashna)

    DrivePool + Primocache

    Very nice!
  21. 1 point
    Jaga

    Switch from DriveBender

    A - Yes, it is - any OS that can read the file system you formatted the drive with (assuming NTFS in this case) can see all of its files. The files are under a hidden "PoolPart..." folder on each drive, fully readable by Windows.

    B - Yes, it will work with BitLocker. This is a quote directly from Christopher on these forums: "You cannot encrypt the DrivePool drive, but you CAN encrypt the disks in the pool." (Link)
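    If you want to see those hidden folders for yourself, here's a quick PowerShell sketch (E: is just a placeholder for any disk that's part of the pool):

    # Hidden folders are skipped by default; -Force includes them.
    Get-ChildItem -Path 'E:\' -Force -Directory | Where-Object Name -like 'PoolPart*'

    Browsing into that folder shows your files laid out exactly as they appear in the pool.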
  22. 1 point
    Yes. The "Remove Disk" option will cause StableBit DrivePool to actively move all of the data from the drive in question to the other drives in the pool. You may want to use the "Force damaged disk removal" option, too. Normally, the removal aborts if it runs into disk errors, and there is a decent chance that you will... so the "force" option will continue on with the removal and leave the "problem files" on the disk, if any.
  23. 1 point
    nauip

    Damaged status will not go

    I have to search for and look up the fix for this every time. Maybe the UI element to reset the file system status could be made easier to find and understand? It seems like I have to manually set it to Good, and then, if I'd like Scanner to re-check it, I set it to Unchecked and it should get checked on the next automatic scan.
  24. 1 point
    Ah ha! Triggered it! I've updated the ticket, and hopefully, we'll be able to fix this soon!
  25. 1 point
    Quinn

    [HOWTO] File Location Catalog

    I've been seeing quite a few requests about knowing which files are on which drives, in case you need to recover unduplicated files. I know dpcmd.exe has some functionality for listing all files and their locations, but I wanted something that I could "tweak" a little better to my needs, so I created a PowerShell script to get me exactly what I need. I decided on PowerShell, as it allows me to do just about ANYTHING I can imagine, given enough logic. Feel free to use this, or let me know if it would be more helpful "tweaked" a different way...

    Prerequisites:

    · You gotta know PowerShell (or be interested in learning a little bit of it, anyway).
    · All of your DrivePool drives need to be mounted as a path (I chose to mount all drives as C:\DrivePool\{disk name}). Details on how to mount your drives to folders can be found here: http://wiki.covecube.com/StableBit_DrivePool_Q4822624
    · Your computer must be able to run PowerShell scripts (I set my execution policy to 'RemoteSigned').

    I have this PowerShell script set to run each day at 3am, and it generates a .csv file that I can use to sort/filter all of the results. Need to know what files were on drive A? Done. Need to know which drives are holding all of the files in your Movies folder? Done. Your imagination is the limit. Here is a screenshot of the .csv file it generates, showing the location of all of the files in a particular directory (as an example).

    Here is the code I used (it's also attached in the .zip file):

    # This saves the full listing of files in DrivePool
    $files = Get-ChildItem -Path C:\DrivePool -Recurse -Force | Where-Object { !$_.PsIsContainer }

    # This creates an empty table to store details of the files
    $filelist = @()

    # This goes through each file and populates the table with the drive name, file name, and directory name.
    # Note: the Substring offsets assume the C:\DrivePool\{disk name}\PoolPart.{guid}\ layout described above;
    # adjust them if your mount paths are different lengths.
    foreach ($file in $files) {
        $filelist += New-Object psobject -Property @{
            Drive         = $file.DirectoryName.Substring(13, 5)
            FileName      = $file.Name
            DirectoryName = $file.DirectoryName.Substring(64)
        }
    }

    # This saves the table to a .csv file so it can be opened later on, sorted, filtered, etc.
    $filelist | Export-Csv F:\DPFileList.csv -NoTypeInformation

    Let me know if there is interest in this, if you have any questions on how to get this going on your system, or if you'd like any clarification of the above. Hope it helps!

    -Quinn

    gj80 has written a further improvement to this script: DPFileList.zip
    And B00ze has further improved the script (Win7 fixes): DrivePool-Generate-CSV-Log-V1.60.zip
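    On the "run each day at 3am" part: one way to schedule it is with the built-in Task Scheduler, sketched here with hypothetical names and paths (adjust the script location to wherever you saved it):

    # Creates a daily 3am scheduled task that runs the script.
    # "DPFileList" and C:\Scripts\DPFileList.ps1 are placeholder names.
    schtasks /Create /TN "DPFileList" /SC DAILY /ST 03:00 /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\DPFileList.ps1"

    Any scheduling mechanism works; this is just the one that ships with Windows.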
