Everything posted by Jaga

  1. Okay cool. So we know it's not the SMB multichannel, especially if the old one never had that and you still get the striping slowdown there. It is odd, however, that with duplication enabled on the pool(s), the read striping performance bar changes between dark/light blue, when it *should* be green under ideal conditions while read striping is actively using all drives. If both servers are delivering reduced speeds with read striping on, the hardware is less likely the culprit. And it isn't the network layer. It points more squarely at either configuration, or DrivePool's current install package on both. Let's see what Christopher/Alex have on it - it's starting to get above my DP pay grade.
  2. Going to try to help dig at the problem until Christopher shows up... What's your Pool architecture on both servers? You may already know - read striping only helps when you have Pool/folder/file duplication on separate physical drives on the server you copy files from, though I'm unsure why it would decrease performance as much as you're seeing even without duplication. Have you disabled SMB multichannel on your PC and the old server, then re-tried read striping on/off copy tests (server -> PC) to see if the multichannel is playing a part in the problem? Going to assume that your PC -> new server transfer (@220MB/s) is a PC -> Pool copy, not a Pool -> Pool or Pool -> PC copy (where you see the issues). Are both copies of DrivePool on the release channel and not the beta channel? Are you seeing any read striping bar color changes in the performance UI (scroll down to Read Striping) of DrivePool when you have striping enabled on the source server during a copy to the PC?
  3. Sounds good - let us know if you have any questions on either DrivePool or the SnapRAID command line/script.
  4. I chose DrivePool and SnapRAID, as the two work very well together. The Windows Scheduler runs a nightly "new" SnapRAID Sync (parity calculations on new additions) and then a new-item Scrub (re-checking parity to ensure it was calculated properly), based on the motto "Trust, but verify". It also does a weekly scrub on the oldest 8% of the array so that the entire array is re-verified every 3 months. There's a very flexible and well-designed PowerShell script available for it, which even emails status/logs after completion. There's no GUI, but with the easy command line in PowerShell and the configurable script, it's a non-issue. While SnapRAID isn't real-time parity calculation (you schedule it when you want), I have full confidence in its ability to help replace downed drives, repair corrupted files on damaged drives, etc. Add to that the fact that it's flexible enough to adjust to any changes made to the Pool on the fly (adding drives, removing drives, etc.), can be expanded to whatever RAID level you want (all the way up to hexa - 6 parity drives), can do split-parity drive pairing, and is completely free... and it's a solid winner in my book. It brings the ultimate in flexibility and reliability, which is why I chose it. There's a handy comparison chart from the maker of SnapRAID that helps determine if it's right for you. I'd recommend it as the parity software to go with in conjunction with DrivePool, which is very active in development. Alex & Christopher here show no signs of slowing down.
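The "oldest 8% weekly, full array every 3 months" cadence is just arithmetic; here's a minimal sketch of it (the 8% figure and the 3-month target come from the post above; the function name is illustrative):

```python
# Sketch: how a weekly scrub of the oldest N% of a SnapRAID array
# translates into full re-verification time.

def weeks_to_full_scrub(percent_per_week: float) -> float:
    """Weeks needed for a rolling oldest-N% scrub to touch 100% of the array."""
    return 100.0 / percent_per_week

weeks = weeks_to_full_scrub(8.0)   # 12.5 weeks
months = weeks / 4.345             # average weeks per month
print(f"{weeks:.1f} weeks = about {months:.1f} months")  # 12.5 weeks = about 2.9 months
```

So 8% per week re-verifies everything in roughly 12-13 weeks, which lines up with the "every 3 months" claim.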
  5. Well, I'll agree with you there, considering the slowdown the (archive/shingled?) HDDs introduce. They're among the slowest ones for writes you can get today I think. I had no idea they were shingled drives. It would probably be easier to use a single Pool (consisting of just the SSDs) and perform a backup to one of the 8TB HDDs, then mirror that to the other HDD with something like file sync software. It would give triple redundancy and keep the pool as fast as possible. It would even serve to protect against accidental file deletions, as well as using differentials/incrementals to give a recoverable file history. Using backup compression would keep writes to a minimum on those archive drives, essentially increasing throughput and extending lifespan.
  6. That is good info for Christopher and Alex. In the case of child pools, it may be a nice future feature to add in some type of priority selection, or prediction, such that faster sub-pools/drives are read from first. But it sounds like you have the structure all setup and working, which is great.
  7. In DrivePool, click the small gear in the upper-right, choose Troubleshooting, then Recheck Duplication. All pools duplicate the Metadata when there's more than one drive, even if you have file/folder and pool duplication turned off. If you go into Manage Pool, File Protection, Folder Duplication, you can see all of the folders in the pool, and the hidden Metadata folder (which should have 3x duplication on it by default). It may be that DP decided to use the SSD to temporarily hold some of the duplicated Metadata since the HDD went missing. If so, and the old HDD is re-attached, then forcing a re-check on the duplication should iron out any issues with it. After that check is done, I'd force a re-measure on the pool also.
  8. Yep - the only way, however, to get 4 copies on two drives was to divide it logically. You did, after all, request 5 total copies, 4 of which were on 2 physical disks. You still have multiple HDD copies even in the event of one 8TB HDD failure - the other HDD would have two copies on its two volumes, so file redundancy would continue, just on the single working HDD. It could protect against things like file system corruption on its 2nd volume, leaving you with a single good copy on the first volume of the non-failed drive. It's very redundant, and normally you wouldn't partition the drive that way, but your Pool scheme required it to spread things out equally and work well. Two reasons I structured it that way for him: 1) That was the only way I saw to get >2 copies on just 2 drives with duplication working. 2) You still get redundancy in the event one volume is unusable due to physical sector damage, file system corruption, full drive failure, etc. Imagine a scenario with this setup where one of the HDDs holding Pool 2 fails completely. It leaves child Pools 2 and 3 degraded, each with one remaining copy of the pool contents on it. Then imagine that the surviving volume of Pool 2 is also damaged, but at the file level (or even damaged disk sectors affecting the volume). A file he needs isn't readable. We *still* have the remaining volume on Pool 3 with another copy. If the physical disk is still good, and the file system isn't corrupt/damaged, it's retrievable over there and still serves as a backup for the SSD Pool 1. Highly redundant, I agree, but that's what was asked for.
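The failure scenario walked through above can be sketched by enumerating where the five copies live and counting what survives. The volume and pool names below are illustrative labels, not DrivePool terminology:

```python
# Five copies of each file: one on the SSD pool, and one per 4TB volume,
# with the four HDD copies spread across two physical disks.
copies = {
    "SSD pool":  {"disk": "SSDs"},
    "HDD1-vol1": {"disk": "HDD1"},
    "HDD1-vol2": {"disk": "HDD1"},
    "HDD2-vol1": {"disk": "HDD2"},
    "HDD2-vol2": {"disk": "HDD2"},
}

def surviving(failed_disks, damaged_volumes=()):
    """Copies still readable after whole-disk failures and volume-level damage."""
    return [name for name, c in copies.items()
            if c["disk"] not in failed_disks and name not in damaged_volumes]

# Scenario from the post: HDD1 fails completely AND one of HDD2's volumes
# suffers file-level damage. One good HDD copy remains, plus the SSD copy.
print(surviving({"HDD1"}, damaged_volumes={"HDD2-vol1"}))
```

Even with a whole disk gone and a second volume corrupted, two independent copies remain, which is the redundancy argument being made.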
  9. There's always going to be -some- duplication on a Pool with multiple disks. Usually that's the Metadata, and the larger the structure/files you have, the larger the Metadata. If you pull up Properties on your folder structure, you can get an idea how many folders/files total are on the pool. DrivePool tries to duplicate the info on these (Metadata) three times, if there are three available drives in the pool. If not, it'll try for 2x duplication on them. However, if you want to re-calc the Pool's duplication, click on the little gear in the upper right, choose Troubleshooting, then "Recheck duplication...". If DrivePool finds any invalid dupes, it should clean them up itself.
  10. Hierarchical child pools would probably serve you best. But you'll need to partition each 8TB drive into 2 logical 4TB volumes each. To my knowledge it isn't possible to run duplication (either pool or folder) on a single drive listed in a pool. Pool 1: All SSDs are member disks in this pool. No pool duplication. Pool 2: A 4TB volume from HDD1 + a 4TB volume from HDD2. Enable 2x pool duplication, Real Time Duplication, and Read Striping. Pool 3: The other 4TB volume from HDD1 + the other 4TB volume from HDD2. Enable 2x pool duplication, Real Time Duplication, and Read Striping. Main Pool: Made up by adding in Pools 1/2/3. Enable 3x pool duplication, Real Time Duplication, and Read Striping. Then add whatever drive letter you want to the Main Pool and start dumping files into it. It'll distribute a copy to each of the three child pools (1 2 and 3). Each HDD's first 4TB volume will get a copy and then duplicate it, and Pool 1's SSDs should distribute the files equally among them. If you run Anti-Virus on that machine, also enable Bypass File System Filters on all pools to enhance performance.
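The copy math behind that layout can be sketched as follows: 3x duplication on the main pool puts one copy in each child pool, and the 2x duplication on Pools 2 and 3 doubles theirs. The pool/volume names are illustrative labels, not DrivePool API:

```python
# Main pool: 3x duplication across three child pools -> one copy lands in each.
# Pools 2 and 3 then each duplicate their copy 2x across two HDD volumes.
child_pools = {
    "Pool 1 (SSDs)":       {"duplication": 1, "volumes": ["SSD-A", "SSD-B"]},
    "Pool 2 (HDD vol 1s)": {"duplication": 2, "volumes": ["HDD1-vol1", "HDD2-vol1"]},
    "Pool 3 (HDD vol 2s)": {"duplication": 2, "volumes": ["HDD1-vol2", "HDD2-vol2"]},
}

main_pool_duplication = 3  # must equal the number of child pools here

# Total physical copies of each file dropped into the main pool:
total_copies = sum(p["duplication"] for p in child_pools.values())
print(total_copies)  # 5
```

That's the "5 total copies, 4 of which are on 2 physical disks" arrangement referenced elsewhere in this thread.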
  11. Windows 7 shouldn't be natively creating the thumbs.db files though. The last Windows OS to do that and rely on them was Windows XP. A bunch of helpful information on thumbnails in Windows is here: https://www.ghacks.net/2014/03/12/thumbnail-cache-files-windows/ Starting with Vista, they were moved to a central location for Windows management: %userprofile%\AppData\Local\Microsoft\Windows\Explorer\thumbcache_xxx.db If they are being re-created on your system PetaBytes, I'm not sure why. I triple-checked my Windows 7 Ultimate x64 media server, and none of the movie/picture folders have the files in them (visible OR hidden). This procedure (and a reboot after) might help your issue: https://www.sitepoint.com/switch-off-thumbs-db-in-windows/ I still maintain that a 3rd-party utility like Icaros will help the most. What it does (in a nutshell) is maintain its own cache of thumbnails, so that *if* Windows loses them for a folder, Icaros will supply them back to Windows instead of it having to re-generate them slowly.
  12. Have a look here to run the Internet Health Test. Check here for info on how to test BitTorrent traffic shaping, among other things. You may want to test your ISP's line quality. Download PingPlotter, install and run it against your server with 5 second pings for an hour or more. Watch for spikes in round trip times, and dropped packets. Don't have other large traffic going on your line at the same time. I'd attempt to help with CloudDrive performance settings/tests, but I'm not the expert in that regard.
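The PingPlotter-style check described above boils down to watching a long run of pings for RTT spikes and dropped packets. Here's a minimal sketch of that analysis over sample data (the 3x-median spike threshold is my illustrative choice, not a PingPlotter setting):

```python
# Line-quality check: scan ping round-trip times (ms; None = dropped packet)
# for latency spikes and packet loss.
from statistics import median

def analyze(rtts):
    replies = [r for r in rtts if r is not None]
    base = median(replies)
    spikes = [r for r in replies if r > 3 * base]   # illustrative threshold
    dropped = sum(1 for r in rtts if r is None)
    return {"median_ms": base, "spikes": spikes, "dropped": dropped}

# Hypothetical hour of 5-second pings, condensed: mostly ~12ms with
# two spikes and one lost packet.
samples = [12, 11, 13, 95, 12, None, 14, 12, 120, 13]
print(analyze(samples))  # {'median_ms': 13, 'spikes': [95, 120], 'dropped': 1}
```

On a healthy line you'd expect an empty spike list and zero drops over the test window; repeated spikes or losses point at the ISP/line rather than CloudDrive settings.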
  13. Yeah, sorry, the UNC name (I'm old-school and still refer to windows shares with NetBIOS terms). I've gotten used to replacing the UNC/NetBIOS name with direct IP addresses to circumvent some issues I find. i.e. \\\Backups Good to hear about EaseUS. I'll take a peek at their software and see how well it fulfills the role. I suspect it may work well, given that I'm considering WSE 2016 inside a Hyper-V environment.
  14. Was going to add the "LSI Logic LSI00208 MegaRAID SAS 9260-16i 16 Ports 512MB 6Gb/s Controller" to the list of possible cards (found one for $200), but from what I read further it is RAID mode only, no JBOD / HBA.
  15. The first of those two (the LSI00276) isn't a bad card for the specs vs. price. Its date of introduction on Amazon (the only date I could find on any marketplace) was Oct '13, meaning it's been around for 5 years or more. Well-received technology with good enough capacity, but it might be nearing EOL, especially since Broadcom acquired LSI 4 years ago. I suspect Server Supply is clearing them off the shelves with good pricing to make room for other product. Amazon's lack of stock kinda supports that. The second one is a refurb, which I probably wouldn't get unless I was swapping out a damaged card. New builds to me are... new builds. I'm definitely split on the cost vs. lifespan tradeoff of products that have been on shelves 3+ years. Sure, you can get them cheaper, but is the risk to the data worth a card with a reduced lifespan? I haven't put enough older cards through their paces to know. I do know I look at manufacture dates on spinner drives when I buy them new, and anything over a year goes right back to the seller. They are, after all, just spinning rust - they decay whether used or not. (Any thoughts on that premise are welcome - it'd be nice to hear other people's experiences.) Apparently it's harder to find new (<2 year old) controller cards. That, or the newer cards that handle this number of ports are all over $300. Probably going to have to do a comprehensive website search/review with introduction/manufacture dates to know for certain. Thanks for the investigation and pricing @nauip! Edit (update): Did some reading online and the 16e with breakouts routed back in may be a good solution, but I found so many horror stories associated with ServerSupply that I'll be steering clear of them. The same card is still within my budget on Amazon, so it's clearly in the lead right now. Especially since I can later adapt it to external backplanes/enclosures or internal expander(s) as needed. Still haven't found an affordable 16i-style card. They all hover around the $400+ mark.
  16. Macrium Reflect (Free or Paid). If you don't need the server edition (which is paid software to run on Windows Server), it does a fantastic job with all kinds of backups, custom settings, schedules, recovery media, etc. Even after I do my next server upgrade to WSE 2016, I'll probably -still- keep all my workstations/laptops on Macrium Reflect. For me it's largely fire-and-forget, and I check backups by simply looking at the Backup Folder repository to ensure they actually kicked off and completed correctly. No management console required for that. They do have a "Site Manager" package, which is paid software with a 30-day trial, in case you want to investigate that portion of their suite. I've never used it, but it appears to be a central management and license distribution package, so that you can install/uninstall/manage/etc on connected workstations. As far as Veeam's ability to see network shares goes - have you tried mapping to the specific IP\share instead of using NetBIOS naming? That fixes the issue in some software.
  17. That is a really good price, going to bookmark it for later. Thanks for the confirmation (and the information) Christopher! That expander is already EOL, unfortunately. Still - great price. Down the line I may end up expanding the array and put them all into a rackmount chassis. That's long-term thinking though - ~48+ TB will suit my needs for some time yet. Edit: Dug up this compatibility info for the expander. Looks like mostly Intel, and some LSI.
  18. Correct me if I'm mistaken, but I thought the PoolPart folders could be read from / written to freely (like with the trick of installing games directly to them). It would throw off pool measurements, but that's easily corrected. If we're just moving contents of one PoolPart folder to another, wouldn't that be the same?
  19. Scanner will poll SMART on all connected drives at a pretty high frequency, unless you restrict it with a schedule and/or turn off its ability to wake a drive to poll it. I highly suspect it is Scanner that's waking the drives, and not DrivePool. I'm unsure about duplication checks on Metadata, or when they are scheduled on a pool without duplication otherwise set, but I don't think they're the culprit here. Try disabling "Wake up to scan" in Scanner's general settings, and restrict it to a work window (mine is 5am to 11am). Then change the S.M.A.R.T. settings and check "Do not query if the disk has spun down". I additionally throttle queries to once every hour (60 minutes), just personal preference.
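The throttle behavior those two settings produce (skip spun-down disks, and query each disk at most once per interval) can be sketched like this. This is an illustrative model of the described behavior, not Scanner's actual code:

```python
# Sketch of a SMART-query throttle: poll a disk only if it's spun up
# and at least `interval_s` seconds have passed since its last query.
import time

class SmartPoller:
    def __init__(self, interval_s=3600):       # 60-minute throttle, as in the post
        self.interval_s = interval_s
        self.last_query = {}

    def should_query(self, disk, spun_down, now=None):
        now = time.time() if now is None else now
        if spun_down:
            return False                        # "Do not query if the disk has spun down"
        if now - self.last_query.get(disk, float("-inf")) < self.interval_s:
            return False                        # throttled: queried too recently
        self.last_query[disk] = now
        return True

p = SmartPoller(interval_s=3600)
print(p.should_query("disk0", spun_down=False, now=0))     # True  (first query)
print(p.should_query("disk0", spun_down=False, now=1800))  # False (throttled)
print(p.should_query("disk0", spun_down=False, now=3600))  # True  (interval elapsed)
print(p.should_query("disk1", spun_down=True,  now=0))     # False (spun down)
```

The key point is that the spun-down check wins over everything else, which is what lets the drives stay asleep.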
  20. That may depend on your automatic balancing settings. If you have it set to run once every 24 hours, it should not poll the drives to balance (or to check duplication). If there's no activity on the pool drives, then it shouldn't be reading them outside of whatever schedule you have set and they should continue to sleep soundly. I can't say mine is the definitive word on this however - just logical speculation. Christopher and Alex know far better how Metadata duplication checking works, and when it runs. Are yours waking up with duplication off and no drive activity happening? That can also be Scanner waking them up (if it's installed) since it likes to poll SMART regularly.
  21. Because Metadata is always 3x duplicated for safety. You can't ever have a Pool without some type of duplication.
  22. Going to start on my server upgrade project soon, and need suggestions as to which internal PCIe SATA/SAS controller to use. The server is going to have ~12-16 internal 3.5" and 2.5" SATA drives in it (it's a large Mountain Mods case), so I won't be using a specific drive enclosure/backplane/etc. Just SATA or SAS connectors internally, drives all configured in JBOD mode. I may create a 2xSSD RAID 1 volume for caching purposes, but I can use the on-board RAID for that if necessary. It will probably be running WSE 2016. I like the idea of controllers+expanders, but I really don't want to dump a ton of money into this portion of the build. I do care about performance, but won't be streaming 8+ UHD files at once at any time, so it doesn't have to be a beast. On-board cache would be nice, but not totally necessary. All of the 8-10 data drives are going to be shucked 8TB or larger, probably a mix of Reds and Whites (no Archive drives). I have four older 4TB Reds I'm going to be using initially for SnapRAID parity, that will probably sit on the motherboard SATA controller connections, and later be moved into external USB enclosures (or a small 4x enclosure). So the PCIe card(s) need to support around 8 drives at a minimum, later on up to 16. Breakout cables or expanders.. whatever gets the job done well without burning a ton of money. I don't see $500+ as feasible - $300 or less is ideal. I was originally considering cards like these, but that was a cursory lookup around 6 months ago, and may or may not be ideal anymore: https://www.amazon.com/dp/B005B0Z2I4 https://www.amazon.com/dp/B002RL8I7M/ I'd also like to get recent tech if possible. I understand completely that some older generation tech is well received and works great, but this build is for longevity and functionality. Any ideas are greatly appreciated.
  23. It won't break anything, but you'll want to force a re-measure on the pool afterwards, especially if you changed locations/added/deleted any files during the copy.
  24. Yep, top level are no problem. DP passes volume commands to the underlying drives, so operations are as "compliant" as possible. And.. that's rather humorous.