
Is speed of copying to a pool reduced with pool duplication set to on?


MrArtist

Question

Hi, firstly thanks for a great set of StableBit programs, nicely designed and explained. I've just started using DrivePool properly after buying the bundle two years ago, and all is working nicely with two 4TB NVMe pools set up so far (I'm putting a lot of trust in DrivePool as I finally move all my data around to sort it out once and for all!).

My question is whether the speed of copying files to a pool is reduced because everything has to be written twice (for duplication). I ask because I just added another 4-way NVMe PCIe card and at the same time turned on pool duplication to better secure my data. However, I noticed that my copying speeds seem to have roughly halved when copying from an NVMe drive (outside the pool) to a pool made up of NVMe drives.

Before enabling duplication I was seeing copy speeds to that pool averaging roughly 1.2 GB/s; now it's more like 600 MB/s (with 2x duplication on).

So my thought is that either duplication makes the initial copy to the pool take twice as long, or it's something to do with adding the new NVMe card, which may have slowed things down. (I searched the forum and the manual but didn't spot anything that answers this.)

My system should cope with the extra card; it has enough PCIe lanes and x16 PCIe slots (a Dell Precision T7910, 2x Xeon, 128GB RAM, Win 10).

Would be grateful for your thoughts (anyone).


3 answers to this question

Recommended Posts


Real-time duplication xN writes to N drives in parallel, so while there is a little overhead, seeing the speed effectively halve like you're describing would have to be because of a bottleneck between the source and destination.

Note that the total speed you're getting is roughly what the 7910's integrated/optional storage controllers can manage: 12 Gb/s per Dell's datasheet. Also, while any given slot may be mechanically x16, it may be electrically narrower, and in any case it will not run any faster than the xN of the card plugged into it (so a x1 card will still run at x1 even in a x16 slot). So I'm suspecting something like that is the issue; what's the model of the NVMe cards you've installed?

EDIT: It may also be possible to adjust the PCIe bandwidth in your BIOS/UEFI, if it supports that.
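For a rough sense of the ceilings involved, here's a quick back-of-envelope sketch in Python. It assumes the commonly quoted ~985 MB/s of usable bandwidth per PCIe 3.0 lane and the 12 Gb/s figure from the Dell datasheet; real-world throughput will be lower once protocol, filesystem and drive overheads are taken into account.

```python
# Back-of-envelope bandwidth ceilings. Assumptions: PCIe 3.0 at roughly
# 985 MB/s usable per lane, and Dell's quoted 12 Gb/s controller figure.

PCIE3_LANE_MB_S = 985  # approx. usable bandwidth of one PCIe 3.0 lane, in MB/s

def slot_ceiling(lanes: int) -> int:
    """Theoretical ceiling for a slot/card running at the given electrical width."""
    return lanes * PCIE3_LANE_MB_S

print("x1 card in any slot :", slot_ceiling(1), "MB/s")    # ~985 MB/s
print("x4 (one NVMe drive) :", slot_ceiling(4), "MB/s")    # ~3,940 MB/s
print("x16 (full quad card):", slot_ceiling(16), "MB/s")   # ~15,760 MB/s
print("12 Gb/s controller  :", 12 * 1000 // 8, "MB/s")     # ~1,500 MB/s shared
```

The point being that a single observed stream of ~600-1,200 MB/s is nowhere near the x4/x16 ceilings, which is why a narrower or shared link somewhere in the path is the more likely suspect.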



Hi Shane, thanks for your observations, most useful.

The dual-CPU Dell T7910 has four usable full-fat PCIe Gen 3 x16 slots, and before the changes I made yesterday I had three official Dell NVMe carrier cards in them as follows:

2 x Quad NVMe (4 SSDs in each)
1 x Dual NVMe (2 SSDs)

With that configuration I swear I was seeing speeds more like 1.2 GB/s (and above) as I tested and moved things around. I was getting ready to rearrange components and swap in an 'ASUS Hyper M.2 X16 Card V2' in place of the dual Dell card, so that I could use its 4 NVMe slots (for 12 NVMe drives in total instead of 10).

At that point I hadn't tried DrivePool duplication; I only turned on pool duplication after adding the new card. But I take your point about DrivePool writing in parallel, so it shouldn't slow writes/copying down.

All the NVMe cards support bifurcation, so each x16 slot (they are all electrically x16) provides independent 4 x 4 lanes to the drives.

I have done some more testing today between drives to and from the two pools that I now have. One pool is made up of 4 x 1TB drives (different brands, but all on one of the Dell quad cards); that arrangement was also my initial configuration yesterday, where I saw the 1.2 GB/s speeds, so no difference there. The other (new) pool is made up of two identical 2TB drives (WD SN550 Blue) on the other Dell quad card, which also holds two more identical 2TB drives.

My tests involved a 60GB folder containing 18 large files ranging from 1.41GB to 8.2GB, and I ran repeated copies to and from the non-pooled and pooled drives. What I observe is that copying to either of the pools is a consistent ~675 MB/s, whereas copying from the pools to a non-pooled SSD rises to about 1 GB/s (one test was as high as 1.32 GB/s). Copying between the pools I saw roughly a 10% speed increase. I believe the increases when copying from a pool to a non-pooled drive are down to DrivePool's Read Striping optimising how it pulls the data, so that is to be expected.

My main system drive (C:) and some other drives are on the ASUS card, and that card has nothing to do with the pools at the moment, so I'm inclined to rule it out as the cause of the slowdown. I guess the only way to really fathom this out is to swap cards/drives around again and try different arrangements, but that's a bit of a chore to try right now.

Copy speeds to the pooled WD SN550s and to the disparate bunch of pooled 1TB drives showed no significant increase or decrease for the duration of the test. Copying the same 60GB test folder to the Samsung 980 I have as my main system drive was different: a massive initial spike up to 2 GB/s (due to buffers, I guess) that dropped to around 200 MB/s for most of the test, averaging 407 MB/s overall. My copy speeds and timings are as reported by Directory Opus's (DOpus) copy process, and DrivePool shows similar figures.

The 12 Gb/s of throughput that the Dell datasheet quotes equates to 1.5 GB/s, so that does seem to indicate that yesterday's rough average of 1.2 GB/s (or variably above) before I changed things was possible. I just wonder why speeds have now roughly halved.

I have not tweaked any of the settings in DrivePool; everything is at its default, except that I did activate 'Bypass File System Filters' before all these changes and tests, because I run Bitdefender Internet Security and I read in the manual that AV scanners can double/triple/etc. scanning times (because of the duplication).
              
Any more thoughts before I pull everything apart again for more tests? I haven't got around to checking the BIOS yet, but I can't recall anything in there that might help; the Dell BIOS doesn't have much to tweak. In any case, things were faster yesterday, so it's still a puzzle.



The 12 Gb/s throughput for the Dell controllers mentioned on the datasheet is the total per controller, so if you operate on multiple drives on the same controller simultaneously I'd expect it to be divided between the drives.
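As a quick illustration of that arithmetic (the numbers are just the datasheet figure and the speeds already mentioned in this thread, not new measurements):

```python
# Hypothetical illustration: how a shared ~12 Gb/s link divides between
# simultaneous write streams. Figures are assumptions, not measurements.

link_gb_s = 12                     # quoted controller throughput, gigabits per second
link_mb_s = link_gb_s * 1000 / 8   # ~1500 MB/s

for streams in (1, 2):             # 1 = no duplication, 2 = x2 real-time duplication
    print(f"{streams} simultaneous write stream(s): ~{link_mb_s / streams:.0f} MB/s each")
```

That prints ~1500 MB/s and ~750 MB/s, which is in the same ballpark as the ~1.2 GB/s you saw before duplication and the ~675 MB/s you see with x2 duplication, if (and only if) both duplicate writes are funnelling through the same shared link.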

Having refreshed my memory on PCIe's fiddly details so I can hopefully get this right, there are also limits on the total lanes direct to the CPU and via the mainboard; e.g. the first slot(s) may get all 16 lanes direct to the CPU(s), while the other slots may have to share a "bus" of 4 lanes through the chipset to the CPU. So even though you might have a whole bunch of individually x4, x8 or x16 slots, everything after the first slot(s) is going to be sharing that bus to get anywhere else (note: the actual number of direct and bus lanes varies by platform).

So you'd have to compare read speeds and copy speeds from each slot and between slots, because copying from slotA\drive1 to slotA\drive2 might give a different result than slotA\drive1 to slotB\drive1, or slotB\drive1 to slotC\drive1... and then do it all over again with simultaneous transfers, to see where exactly the physical bottlenecks are between everything.
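If it helps, here's a minimal sketch of that matrix in Python. The paths are placeholders for folders on the drives/slots you want to compare, and the test file should be several GB so that caching doesn't dominate the numbers; repeat each pairing a few times and ignore the first pass.

```python
# Minimal single-stream copy matrix between drives. All paths are placeholders;
# point each entry at a folder on a drive in the slot you want to test.
import os
import shutil
import time
from itertools import permutations

TEST_FILE = r"D:\bench\testfile.bin"                # a large existing file (several GB)
DRIVES = [r"E:\bench", r"F:\bench", r"G:\bench"]    # one folder per drive under test

size_mb = os.path.getsize(TEST_FILE) / (1024 * 1024)

for src, dst in permutations(DRIVES, 2):
    src_copy = os.path.join(src, "testfile.bin")
    if not os.path.exists(src_copy):
        shutil.copy(TEST_FILE, src_copy)            # stage the test file on the source drive
    target = os.path.join(dst, "copy.bin")
    start = time.perf_counter()
    shutil.copy(src_copy, target)                   # the timed copy
    elapsed = time.perf_counter() - start
    os.remove(target)
    print(f"{src} -> {dst}: {size_mb / elapsed:.0f} MB/s")
```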

As far as DrivePool goes, if C was your boot drive and D was your pool drive (with x2 real-time duplication), and that pool consisted of E and F, then you could see DrivePool's duplication overhead by taking a big test file and copying it first from C to D, then from C to E, then simultaneously from C to E and from C to F. If the drives that make up your pool are spread across multiple slots, then you might(?) also see a speed difference between duplicating within the drives on one slot and duplicating across drives on separate slots. If you do, then consider whether it's worth it to you to use nested pools to take advantage of that.
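Here's a hedged sketch of that simultaneous step, again with placeholder paths; it copies one big file to two destinations at once, which is roughly what x2 real-time duplication does on write.

```python
# Two parallel copies of one big file, approximating an x2 duplicated write.
# SRC and DESTS are placeholders for your own C/E/F layout.
import os
import shutil
import time
from concurrent.futures import ThreadPoolExecutor

SRC = r"C:\bench\testfile.bin"
DESTS = [r"E:\bench\copy.bin", r"F:\bench\copy.bin"]

size_mb = os.path.getsize(SRC) / (1024 * 1024)

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(DESTS)) as pool:
    list(pool.map(lambda dst: shutil.copy(SRC, dst), DESTS))
elapsed = time.perf_counter() - start

# Compare against a single C -> E copy: if the parallel run is much slower per
# copy, the second write stream is contending for bandwidth somewhere.
print(f"{len(DESTS)} parallel copies of {size_mb:.0f} MB in {elapsed:.1f} s "
      f"(~{size_mb / elapsed:.0f} MB/s per copy)")
```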

P.S. Applications can have issues with read-striping, particularly some file hashing/verification tools, so personally I'd either leave that turned off or test extensively to be sure.

