Showing results for tags 'speed'.
Found 3 results
Dear true humans at Covecube,

I have a question that has been haunting me for weeks now. I currently have 11 physical HDD/SSD drives that are all part of just ONE pool (and hence represented by one drive letter). Assuming I have lots of RAM free and lots of CPU cores doing very little, does this setup make sense performance-wise, or is it faster or otherwise smarter to create more than one pool, say for different directories or with different policies?

I have a lot of placement rules active on this one pool, and many levels of duplication set for different folders, so it doesn't feel like I *need* to separate or divide things across more pools. I do notice that Measuring takes a long time. Would it go faster with more than one pool, with the same data (virtually) separated?

Perhaps this is more or less the same question as "how many file streams can be copied at once from one drive to another?", where I often found the sweet spot was 3 streams, at least for USB 3 or faster external drives. Perhaps with SSDs or virtual drives that sweet spot is a higher number.

I'm hoping my heroes at Covecube know best. TIA!
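For what it's worth, the "sweet spot" stream count can be measured rather than guessed. Below is a minimal Python sketch (file counts, sizes, and paths are placeholders, not anything DrivePool-specific) that copies the same set of files with varying numbers of parallel streams and times each run:

```python
import shutil
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def benchmark_copy_streams(src_dir, dst_dir, stream_counts=(1, 3, 6)):
    """Copy every file in src_dir to dst_dir with N parallel streams
    and return {N: elapsed_seconds} so the sweet spot is visible."""
    files = [p for p in Path(src_dir).iterdir() if p.is_file()]
    results = {}
    for n in stream_counts:
        dst = Path(dst_dir) / f"streams_{n}"
        dst.mkdir(parents=True, exist_ok=True)
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=n) as pool:
            # Each worker copies one file at a time; n workers = n streams.
            list(pool.map(lambda p: shutil.copy2(p, dst / p.name), files))
        results[n] = time.perf_counter() - start
        shutil.rmtree(dst)  # clean up so each run starts fresh
    return results

if __name__ == "__main__":
    # Demo with throwaway 1 MB files; point src/dst at real drives
    # (e.g. a pooled folder vs. an external disk) for a meaningful test.
    with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
        for i in range(20):
            (Path(src) / f"file_{i}.bin").write_bytes(b"x" * 1_000_000)
        for n, secs in benchmark_copy_streams(src, dst).items():
            print(f"{n} stream(s): {secs:.2f}s")
```

Run it with the source on one physical disk and the destination on another; temp-to-temp numbers mostly measure RAM, not the drives.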
In Windows, deleting something like 100,000 files is quick and completes in a few seconds; it's a high-level delete operation. IO operations through DrivePool all take much longer than in plain Windows. Things such as:

1. In Explorer: right-click folder > Properties (to view the number of files, folders, and total size). OK, I'm not sure this one is drastically slower (if at all).
2. Discovery takes a long time before a delete, like #1 above.
3. The delete operation itself takes a long time for a large number of files. I'm talking like 50-150 IOPS, which makes me think this is related to disk IOPS.

Can these operations be affected by any settings for my pool, or is this just part of the territory for what DrivePool does beneath the surface? Correct me if I'm wrong: operations in DrivePool seem to be limited by the IOPS of the disk(s) in question.
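That 50-150 figure can be checked directly. Here's a small Python sketch (file count and directory are placeholders) that creates a batch of tiny files and times unlinking them, giving an effective deletes-per-second number you can compare between a pooled folder and a folder on one of the underlying disks:

```python
import tempfile
import time
from pathlib import Path

def measure_delete_iops(target_dir, n_files=1000, payload=b"x"):
    """Create n_files small files in target_dir, then time how long
    unlinking them all takes; returns deletes per second."""
    paths = []
    for i in range(n_files):
        p = Path(target_dir) / f"del_test_{i}.tmp"
        p.write_bytes(payload)
        paths.append(p)
    start = time.perf_counter()
    for p in paths:
        p.unlink()
    elapsed = time.perf_counter() - start
    return n_files / elapsed

if __name__ == "__main__":
    # Run once against a folder on the pool and once against a folder
    # on a raw member disk to see how much overhead the pool adds.
    with tempfile.TemporaryDirectory() as d:
        print(f"{measure_delete_iops(d):.0f} deletes/sec")
```

If the pooled number is far below the raw-disk number, the overhead is in the pooling layer rather than the disks themselves.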
I'm just starting to experiment with DrivePool, and I'm wondering how best to use it to optimize for speed. I've got the following disks:

* SSD 256GB - boot/OS
* SSD 512GB - working drive - photography 1st tier
* HD 4TB - fastest - photography 2nd tier
* HD 1TB - OK - replication / read cache
* HD 1TB - OK - replication / read cache
* HD 2TB - slow - archive
* HD 2TB - slow - archive

For the moment I'm just using DrivePool for the spinning-rust disks, and will use Samsung RAPID for my working drive - so that's out of scope for now. The rest can be in the pool, but by default:

* When I save something into /pictures in the pool, I want it to hit the fastest 4TB drive first (then replicate in the background or overnight to one of the other drives - I guess I don't mind which one, but would prefer to keep it to the 1TB OK-speed disks).
* When I store something into /videos, this system is only a long-term archive (the files are AeroFS-copied to my media server), so here I want it to drop to one of the slow HDs.

I think DrivePool optimizes for space first, and doesn't know that the new 4TB disk is the fastest... which is a pity. So should I simply set balancing so that:

* Pictures use ONLY the fast 4TB drive (unless I run out of space)
* Videos use ONLY the slow 2TB drives (unless I run out of space)

Thoughts? Andy.
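The "ONLY this drive, unless I run out of space" behaviour described above can be sketched as a toy model. This is not DrivePool's actual placement engine, just an illustration of the rule, with made-up drive names and free-space numbers:

```python
# Toy model of folder placement rules: each folder maps to an ordered
# list of preferred drives, and a file only falls through to another
# drive when every preferred one is out of space.
RULES = {
    "pictures": ["HD_4TB_fast", "HD_1TB_a", "HD_1TB_b"],
    "videos":   ["HD_2TB_slow_a", "HD_2TB_slow_b"],
}

def pick_drive(folder, size_needed, free_space):
    """Return the first preferred drive with room, else any drive with
    room (the 'unless I run out of space' fallback)."""
    for drive in RULES.get(folder, []):
        if free_space.get(drive, 0) >= size_needed:
            return drive
    for drive, free in free_space.items():
        if free >= size_needed:
            return drive
    raise OSError("pool is full")

if __name__ == "__main__":
    free = {"HD_4TB_fast": 3_000, "HD_1TB_a": 800, "HD_2TB_slow_a": 1_900}
    print(pick_drive("pictures", 500, free))  # → HD_4TB_fast
    print(pick_drive("videos", 500, free))    # → HD_2TB_slow_a
```

In DrivePool itself this is configured through the File Placement rules and balancer settings rather than code, but the fall-through logic is the same idea.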