Covecube Inc.

toxic

Members
  • Posts: 6
  1. What gtaus has suggested is good advice. In my case I did remove one questionable drive from the system (it hadn't shown much under S.M.A.R.T. / StableBit Scanner, but the Windows event logs showed occasional failed writes on it). Removing it brought some small improvements, but the re-measuring persisted; it was switching REFS -> NTFS that fixed the performance issues.
  2. Not really, no. Despite a lot of back and forth working with Drashna & Co., we never did get to the bottom of the issue. I even deleted the pool entirely and recreated a fresh one (and moved all the data back into the new pool folders). One of the drives (WD Black) did turn out to be failing, and removing it from the system improved the measure times a tiny bit since that disk was performing slowly, but the overall problem remained. What did help was converting the pool back to NTFS (one disk at a time: format & copy). I haven't looked lately, but I believe it still measures on each boot. However, under NTFS the measuring is very quick and doesn't have any real impact on performance. I guess it only needs to read the NTFS journal during a measure now, whereas on REFS it was having to walk the entire directory tree (and I have millions of files in some directories). Additionally, I bought a 2TB SSD for my games and other data I don't care about losing, and moved most of the performance-sensitive stuff to that, so pool performance is no longer much of a concern. Long term I'm thinking of moving to a dedicated Linux NAS system with mergerfs and snapraid, but for now my current setup is fine - under NTFS it's not really problematic anymore.
  3. I definitely suggest configuring snapraid so it points to the drivepool folder with the GUID inside the config file, so that it's much easier to restore. Snapraid's data folders don't have to be the root of the drive; they can be anywhere you like (as long as they are on different physical drives). So instead of doing:

     data d1 z:\
     data d2 y:\
     data d3 x:\

     You do:

     data d1 z:\drivepool.{guid}\
     data d2 y:\drivepool.{guid}\
     data d3 x:\drivepool.{guid}\

     That way, after a failure (e.g. d2 dies), you drop your new drive in, add it to the pool, get the new GUID from the new drive, and edit your snapraid conf to comment out the old drive and add the new one, changing d2 y:\drivepool.{guid}\ to d2 y:\drivepool.{newguid}\ like so:

     data d1 z:\drivepool.{guid}\
     #data d2 y:\drivepool.{guid}\
     data d2 y:\drivepool.{newguid}\
     data d3 x:\drivepool.{guid}\

     Then run your fix and it all just works - and you don't have to move your files around.
  4. I still have a support job open about this, and I haven't yet been able to fix it. I've just been keeping reboots to a minimum and doing them as I go to bed, when the measuring won't impact me. It's still a massive pain and I'd love to get it sorted. What filesystem are you using, out of curiosity? I'm in the process of re-evaluating some things and may end up changing my setup a bit. Another issue is that Microsoft has blocked formatting a drive as REFS without the Enterprise or 'Pro for Workstations' license, neither of which I have. As such, it's actually pretty difficult for me to format a new drive to add to the pool at the moment (run a VM trial of Enterprise just to format the drive?). And if Microsoft is trying this hard to stop me using REFS, I'm a little hesitant to fight them on it and invest further. I'm wondering what to do next: maybe redo the drives as NTFS one by one and swap them back in (I have some juggle room to do this), or maybe recreate the pool from scratch as NTFS and migrate things over. I've also started using Snapraid for my largest and mostly static data, which has given me a lot of extra space to play with - and Snapraid doesn't recommend using REFS either: https://www.snapraid.it/faq#fs. I've also heard that REFS's failure mode on a single drive is to silently delete any damaged file (not likely an issue in a two-way mirror in drivepool, though), which would then be gone forever after the next snapraid sync. I'd much rather have a corrupt file so I can restore it.
  5. Thanks Drashna, I've done as requested, the SR with attached logs and troubleshooter uploads (under suggested BETA version) should be in now.
  6. I've noticed that every single time I reboot, the whole pool remeasures every disk, which takes ~3 hours to complete. During this time, the pool is borderline unusable. Launching a Steam game installed to the pool may take 5 minutes or more, and then it stutters for a few seconds every 5 seconds or so (completely I/O starved). Even video playback can stutter; it seems the pool just has no leftover I/O to serve requests over the measuring. I understand that measuring has a high impact, but it doesn't seem like it should be this bad - outside I/O should have priority over the measuring. Secondly, I understand the pool shouldn't need to remeasure on every reboot. Any way to troubleshoot why this is happening? It makes the machine pretty much unusable for hours after rebooting, which I need to do periodically for updates etc. The one slightly unusual thing is that the pool is made entirely of REFS disks. Is this possibly the culprit? Obviously moving back to NTFS would be quite an undertaking, though with Microsoft trying to drop REFS support from the desktop versions of Win 10, it's certainly something I'm open to.

     Disks in the pool are:
     1x 8TB WD Red
     2x 4TB WD Red
     1x 4TB WD Black

     EDIT: forgot to add that when the measuring is finished, performance settles down and becomes pretty reasonable, comparable to a normal disk. So I've been trying to avoid reboots as much as possible so as not to render the machine unusable for hours at a time. I've also completely disabled the Windows Search service (indexing), as I have no need for it and know it can cause performance impacts.