About measurement - again


Sergey Klepov

Question

I have a pool that isn't very big (~8 TB), but it holds many millions of small files. Measurement takes a long time, which is expected, but I wonder why my system becomes so slow while it runs. For example, when I try to get the list of drives in Explorer, or open a disk that isn't even in the pool, it takes several minutes to complete. Is it possible to stop (or postpone) measurement for a while? I'm not interested in the statistics, and I noticed that even if I stop the DrivePool service, my pool is still there and the files are accessible. Or perhaps add a "decrease priority" button alongside the existing "increase priority" one? On my system (Windows 10 x64, DrivePool v2.2.4.1162, the latest version) the machine becomes almost unusable. Are there any hints on how to make it more responsive?

Thanks

5 answers to this question

Pool measurement shouldn't affect your ability to list drives in Explorer, or to open drives that aren't in the pool (or even ones that are), as it runs at background idle priority by default. Even with the priority manually increased via the GUI, it shouldn't be particularly noticeable. I would be concerned that some other factor is at play.

FWIW, I just forced a Remeasure on my own main pool - currently six internal SATA HDDs, roughly 591k files (not including their duplicates; I'm using a mix of x2 and x3). It took twelve minutes to complete with the increased priority button selected, with about 35% CPU usage sustained on a six-core Ryzen 5 with 16GB of RAM.

Well, it takes almost a full day for me to complete re-measuring, though I have tens of millions of small files (maybe 20 or 30 million, mostly around 100 KB each). The thing is that it affects the server: getting the list of drives, or changing directory on a drive that isn't even in the pool, all takes time. Plex (my movies are in the DrivePool) just can't play any movie; it tries to access the file for several minutes and then times out. Disk usage during measurement sits at 100%, but CPU usage is almost 0. The disks are three 7200 RPM SATA drives. So why could that happen, what could be in play here, and how can I check? Maybe fragmentation? Resource Monitor showed the system was very busy reading the MFT of the disk being measured. Something like the MFT fragmentation described here: https://flylib.com/books/en/4.435.1.68/1/. The disks are almost 100% full, and millions of files make the MFT very large...
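
As a rough sanity check of how big that could be (back-of-the-envelope only, assuming the default 1 KB NTFS file record size):

# Back-of-the-envelope only; NTFS allocates roughly 1 KB per file record by default.
files_total = 30_000_000          # upper end of my estimate
mft_bytes = files_total * 1024    # one record per file, ignoring extra attribute records
print(f"~{mft_bytes / 1024**3:.0f} GB of MFT spread across the three drives")
# prints: ~29 GB of MFT spread across the three drives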

Assuming a linear trend and performance similar to mine, 591k files taking 12 minutes at increased priority means 30 million files would take about ten hours at the same priority, so that sounds about right. Of course, you wouldn't want the increased priority if the pool is busy doing other things at the time (like being accessed by Plex).
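
To show the arithmetic (a quick Python back-of-the-envelope using the figures from my remeasure above; the 30 million is just the upper end of your estimate):

files_measured = 591_000      # files in my pool
minutes_taken = 12            # my remeasure time at increased priority
files_estimated = 30_000_000  # upper end of your estimate

est_minutes = minutes_taken * files_estimated / files_measured
print(f"~{est_minutes:.0f} minutes, about {est_minutes / 60:.1f} hours")
# prints: ~609 minutes, about 10.2 hours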

It certainly could be fragmentation, given what you describe; running NTFS drives that close to capacity is not recommended for pretty much that reason, and 20-30 million files on just three(?) drives is a LOT of index space, yeah. Any decent defrag utility should be able to tell you the size and fragmentation level of your MFT; the downloadable Microsoft Contig utility can also do so, e.g. "contig -a -v d:\$mft" from an administrator command line, where d is each drive in your pool, NOT the pool itself.
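
If you want to check every pool member in one go, here's a quick Python sketch that just loops Contig over each drive (assuming contig.exe is on your PATH and you run it from an elevated prompt; the drive letters below are placeholders, substitute your own):

import subprocess

# Placeholder letters: your pool MEMBER drives, not the pool's own drive letter.
member_drives = ["d", "e", "f"]

for letter in member_drives:
    target = rf"{letter}:\$mft"
    print(f"--- {target} ---")
    # -a analyzes fragmentation, -v gives verbose output (same switches as above)
    subprocess.run(["contig", "-a", "-v", target], check=False)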
