Posts posted by bobloadmire
-
2 questions.
1. Can I keep the most recent files on the SSD cache for faster reads, while also duplicating them to the archive disks during the nightly schedule? Essentially, files would be duplicated at night during downtime, but not deleted from the SSD cache. The cache would use first-in, first-out eviction, so the most recent files stay on the cache for fast access; when the oldest file needs to make room for a newer one, it would simply be deleted from the cache, since it has already been placed on an archive disk.
2. Can I pin certain folders so they always stay on the SSD cache? For instance, it would be great to have my Plex database permanently in the SSD cache. I have no need to duplicate that database, since it can always be rebuilt easily.
Thanks!
-
On 6/17/2022 at 12:31 PM, Christopher (Drashna) said:
Yeah, you want to run it on both systems, and you want to reboot after running it to ensure that it takes effect.
Unfortunately, it didn't make a difference; still limited to about 700 Mbps.
-
It would be great to allocate some disk space (say 10 TB per disk) as a dedicated folder, or even a drive letter, for striped writes. This would be great for Usenet/torrent downloads, for when your bandwidth exceeds a single disk's performance, or for unpacking PAR files. That way I could unpack directly in the striped pool space and then just move the files to the standard pool space.
Another option would be to make all writes striped, and then have the duplication / movement of files to a single disk happen in the background, so you'd get immediate access to the files with the advantage of having them unpacked faster.
-
19 hours ago, Christopher (Drashna) said:
Have you messed with the autotuning? If not, that is the usual culprit. Setting it to highlyrestricted can help, a lot.
Try running this on any system involved:
netsh interface tcp set global autotuninglevel=highlyrestricted
And reboot the system.
OK, did that; CMD output "OK", so I assume it took effect. Still capping out at around 730 Mbps. Do I need to run that on the client machine as well?
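Per the earlier reply, the command needs to be run on every system involved (server and client), followed by a reboot. One way to confirm the current setting on each machine is to list the TCP global parameters:

```
netsh interface tcp show global
```

The "Receive Window Auto-Tuning Level" line in the output should read highlyrestricted after the change.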
-
16 hours ago, Christopher (Drashna) said:
Shouldn't be.
I actively game off of a pool, with no issues.
Would there be any benefit to duplicating game data so you can do striped reads? If so, what's the optimal number of duplicates?
-
Basically an optimization question. I tested all 4 disks in my pool with Scanner; they all did about 250-350 MB/s, so my USB controllers are good. I enabled jumbo frames on both the host and clients. The internal SSD has fantastic performance: 600 MB/s to the buffers, sustaining 250-300 MB/s after that. I have network I/O boost on, but I cannot get more than about 600 Mb/s over the gigabit network, and I'm expecting 900+ with overhead. Any tips on things I should look at?
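For context on the 900+ expectation: gigabit Ethernet carries 1000 Mb/s raw, and after Ethernet/IP/TCP framing overhead roughly 94% of that is usable (a common rule of thumb, not a measured figure), so about 940 Mb/s, or roughly 117 MB/s, is the realistic ceiling. The observed 600 Mb/s is only about 75 MB/s, well below what any single disk here can sustain, so the disks are not the bottleneck. A quick sanity check:

```python
LINK_MBPS = 1000     # gigabit Ethernet, raw line rate in Mb/s
EFFICIENCY = 0.94    # rule-of-thumb TCP/IP/Ethernet framing overhead

usable_mbps = LINK_MBPS * EFFICIENCY
usable_mb_per_s = usable_mbps / 8   # bits -> bytes

print(f"usable:   ~{usable_mbps:.0f} Mb/s  = ~{usable_mb_per_s:.1f} MB/s")
print(f"observed:  600 Mb/s = ~{600 / 8:.1f} MB/s")
```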
Optimizing SSD cache with drive pool, trying to keep most recent files in cache for faster reads.
in General
Thank you, clear and concise. Re #1: it seems like it shouldn't be too hard to implement, and it would be a very powerful feature.