Covecube Inc.
Alex

SSD Optimizer Balancing Plugin

Recommended Posts

2 hours ago, Christopher (Drashna) said:

Okay, sounds (mostly) good then! 

The only thing is that I'd use a non-system disk for the write cache, since it may/will get hammered, and that may reduce its lifespan. 

OK, good point. Maybe that's a good reason to spend $60 on a 120GB SSD.

I've read through this whole thread and I've still come up a little confused with what I'm seeing in my (now extensive) testing of this plugin.

MY SETUP:  Local Storage Over USB3 in 4-BAY Enclosure.  (2) 10TB Spinners + (1) 2TB SSD

MY GOALS: Use the SSD as a write cache for the pool. Write data to the pool super fast; when the pool becomes idle or a threshold/trigger is reached, data on the SSD would be moved to both HDDs (in duplicate).

MY OBSERVATIONS:

"Real-time Duplication" OFF - Data does not touch the SSD.

"Real-time Duplication" ON - Data gets written to the SSD AND one of the HDDs to maintain RTD. Consequently, performance drops to roughly what I get when I avoid the SSD entirely and write directly to the HDDs in parallel, because at that point I'm saturating the USB3 Gen1 bus (and DP will still eventually have to empty the SSD out to the other HDD to balance the pool).

This seems to be the inverse logic of a "Landing Zone." If I am explicitly choosing NOT to duplicate in real-time, why would that cause the SSD to be ignored entirely, when that is precisely the use case where writing exclusively to the SSD makes the most sense, and the balancer rules are telling it to do exactly that (new file placement limits are at 0.0% on both HDDs)? Is there a way to force the behavior I'm looking for without having to micromanage my pool (looking for a solution, not a hack/workaround)?
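For what it's worth, the duplication constraint can be sketched in a few lines (my own illustrative Python, not DrivePool code; the disk names and function are made up): with real-time duplication, each copy of a file must land on a different physical disk, so a pool with only one SSD has to pair it with an HDD for the second copy.

```python
# Illustrative sketch of real-time duplication target selection.
# Each duplicated copy must go to a distinct physical disk, and fast
# (SSD) disks are preferred first.

def pick_write_targets(disks, duplication_count):
    """disks: list of (name, is_ssd) tuples. Returns chosen disk names."""
    ordered = sorted(disks, key=lambda d: not d[1])  # SSDs sort first
    if len(ordered) < duplication_count:
        raise ValueError("not enough physical disks for duplication")
    return [name for name, _ in ordered[:duplication_count]]

# One SSD, duplication x2: the second copy is forced onto an HDD.
print(pick_write_targets([("SSD", True), ("HDD1", False), ("HDD2", False)], 2))
# -> ['SSD', 'HDD1']
# Two SSDs: both copies can land on fast disks.
print(pick_write_targets([("SSD1", True), ("SSD2", True), ("HDD1", False)], 2))
# -> ['SSD1', 'SSD2']
```

Under that constraint, an SSD+HDD write with RTD on is the best a single-SSD pool can do.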

I've tried A LOT of different settings here, rebuilt the pool, reset DP, etc. I'm hoping there's a checkbox somewhere that I'm overlooking. I very much love DrivePool, BTW!

Christopher, from my re-read of santacruzskim's post, it sounds like you hit the nail on the head. It might be beneficial to have a more blatant notice to users of the SSD cache option that duplication requires 2 SSD drives.

This may work, but as with a lot of things the only way to know will be to try it out, as I've not tried this myself.

 

Partition the SSD into 2 separate volumes, assign them drive letters (or mount them to directories), and then add those two SSD volumes to the pool for use by the SSD Optimizer.

 

I do something similar and it works perfectly.  I have a 480GB SSD split into 2 volumes and they are part of 2 different Pools, and are used by the SSD optimizer.

(screenshot attached: Capture.PNG)

While you may see a reduction in throughput, due to contention on the SATA bus and/or in the SSD drive controller, I'm quite confident it will be significantly faster than 1 SSD and 1 HDD.

The only issue is that until the data is moved off the SSD, it is essentially unprotected from an SSD hardware failure, but I use my 2 volumes for media files which I'm not too precious about. Your appetite for risk may be less than mine.

 

J

Thanks for the responses.  To be clear, I am trying to intentionally NOT have duplication on write to the pool for the benefit of performance, and I'm aware of the inherent risk involved - I'm a big boy, I can handle it :)

The only drive I have set to "SSD" is the SSD.  If I have real-time duplication off, it doesn't touch the SSD. If I have it on, it writes to 1 SSD and 1 HDD.  What I want is for it to write solely to the SSD, then upon hitting a threshold or becoming idle, the pool balances, sending the data from the SSD to both spinners, in the background, without my intervention.

FYI, splitting the SSD into 2 partitions creates some interesting results - each partition shows up independently as a drive I can add to the pool. If I add both partitions to the pool, the SSD Optimizer is not fooled - they're listed as one option in the plugin settings, reading …"E:/SSD_01, F:/SSD_02." However, the result is different - with RTD off it writes to the SSD almost exclusively (!), alternating between the 2 SSD partitions as files are copied onto the pool. But it will still occasionally choose an HDD as the target, making me think those decisions are being made by other balancers and not the SSD Optimizer (the SSD is always the least-full disk, for example, so maybe that's why DP is sending files there first?). With RTD on, it copies to the SSD and 1 HDD in parallel as before, with the performance hit, indicating DP itself is likely not fooled by splitting a single drive into partitions (as it shouldn't be!).

So, good and bad - it appears to be working as I originally hoped via this split-partition workaround, but not reliably, and only through this weird hack. Maybe I'm misunderstanding the purpose of this plugin to begin with? Personally, I feel my use case is the most logical use of an "SSD cache" in a pooled storage environment, but maybe my whole premise is off, hence the confusion. What continues to perplex me is that the "new file placement limit" (shown with red triangles on the pooled disks' horizontal bars) indicates no files should touch the HDDs until the SSD is 75% full, yet DP still chooses the HDDs exclusively when RTD is off, avoiding the empty, speedy SSD altogether.
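The threshold behavior described above can be sketched in a couple of lines (my own illustration, not DrivePool internals; the 75% figure is from the UI, everything else is made up):

```python
# Illustrative sketch of a "new file placement limit": new files go to
# the SSD until it crosses the configured fill percentage, then spill
# over to the HDDs.

def choose_target(ssd_used_gb, ssd_capacity_gb, fill_limit=0.75):
    """Return 'SSD' while the SSD is under the fill limit, else 'HDD'."""
    return "SSD" if ssd_used_gb / ssd_capacity_gb < fill_limit else "HDD"

print(choose_target(500, 2000))   # 25% full -> SSD
print(choose_target(1600, 2000))  # 80% full -> HDD
```

By that logic, an empty 2TB SSD should be soaking up every new file, which is what makes the observed behavior so puzzling.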

Thanks for any help here. I'm closer to my goal, but just as confused!

santacruzskim, one thing struck me: how are you determining which drive the files are going to? I was thinking about it and I'm not sure how I would know.

@ikon - I'm basically just monitoring disk activity in StableBit Scanner (and the disk "performance" section of DrivePool), which shows pretty clearly what's going on - the active drive(s) are pinned while the non-active ones sit at or around zero, not to mention the fairly large difference in transfer speed shown in File Explorer and TeraCopy depending on whether it's writing solely to the SSD or not. I'm also being methodical about knowing which files I'm transferring and checking afterwards which drive(s) they ended up on, in the hidden poolpart folders, just to make sure there's no funny business going on.
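If anyone wants to script that last check instead of eyeballing the hidden folders, a small helper along these lines works (my own sketch, not a StableBit tool; `drive_roots` would be the pooled drives' mount points):

```python
# Search each pooled drive's hidden PoolPart.* folder for a file, to
# confirm which physical drive(s) a pooled file actually landed on.

import glob
import os

def find_file_in_poolparts(drive_roots, relative_path):
    """Return the drive roots whose PoolPart folder contains the file."""
    hits = []
    for root in drive_roots:
        for poolpart in glob.glob(os.path.join(root, "PoolPart.*")):
            if os.path.exists(os.path.join(poolpart, relative_path)):
                hits.append(root)
    return hits
```

Run it right after a copy and again after balancing to watch a file migrate from the SSD to the spinners.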

Also, unlike my archive/offline pool, currently sitting at 22 drives of all different speeds and manufacturers, this "online" pool I'm testing and trying to configure is just 3 drives: 2 are identical in performance and the 3rd is the SSD, which performs totally differently. You really can just change settings in DP, start a transfer, and in about 20 seconds, after things level off, know with pretty good certainty what's going on.

Hope that helps

 

 

Thanks santacruzskim. I've never really tested the SSD's performance, but I do recall seeing a boost in performance after installing it. I should mention that I don't use any duplication at all.

On 11/21/2018 at 2:21 PM, xtremecool said:

Partition the SSD into 2 separate volumes, assign them a drive letter, or mount them to a directory, and then add those two SSD volumes to the Pool for use by the SSD optimizer.

Nope, this will not work. StableBit DrivePool is aware of which volumes belong on which physical disk, and will actively avoid using the same disk for duplication.  It would be the same as using a single partition on the disk. 
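In other words (a hedged sketch of the idea, not actual DrivePool code; the volume-to-disk mapping below is illustrative): once volumes are grouped by the physical disk backing them, two partitions of one SSD count as a single duplication target.

```python
# Two volumes on the same physical disk count as one disk for
# duplication purposes.

def distinct_disks(volumes, volume_to_disk):
    """Count how many physical disks back the given volumes."""
    return len({volume_to_disk[v] for v in volumes})

mapping = {"E:": "ssd0", "F:": "ssd0", "G:": "hdd0"}  # illustrative
print(distinct_disks(["E:", "F:"], mapping))  # both on ssd0 -> 1
print(distinct_disks(["E:", "G:"], mapping))  # separate disks -> 2
```

With only 1 distinct disk available, duplicated copies cannot be kept on separate hardware, so the split-partition trick buys nothing for duplication safety.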

 

Is the source for this plugin available? I'd like to try my hand at creating a read-write cache hybrid plugin that allows caching data that is frequently used. Or is that in development already?
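To illustrate the kind of policy such a plugin would need (a generic sketch only; this is not the balancing framework's API, just the classic LRU idea applied to reads):

```python
# A minimal, generic LRU read cache in front of slow storage:
# frequently re-read items stay cached; cold items get evicted.

from collections import OrderedDict

class LRUReadCache:
    def __init__(self, capacity, backing_read):
        self.capacity = capacity          # max cached items
        self.backing_read = backing_read  # slow-path read function
        self.cache = OrderedDict()

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)   # mark as recently used
            return self.cache[key]
        value = self.backing_read(key)    # slow path (e.g. HDD)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value
```

The hard part in a real plugin would be doing this at the file-placement layer rather than in memory, which is exactly the control the reply below this post says the framework lacks.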

 

I'd really like a PrimoCache type of solution but I can't afford to purchase PrimoCache for my home server and they won't sell me a Personal license for the Server version, sadly.

For that specific plugin, no.

But we do have an example/sample source for creating balancers.

http://wiki.covecube.com/StableBit_DrivePool_-_Develop_Balancing_Plugins

 

But no, this isn't in development either, as the balancing framework doesn't really support this sort of control or filtering, IIRC.

I need help understanding how to test that it is fully functioning correctly.

I just got up and running with a 10Gb network, and to cut the testing explanation short:

I have a RAM drive on machine A (a non-StableBit machine), so a 9GB file is sitting in the RAM drive (memory), and I copy it over the 10Gb network to the SSD Optimizer drive, where I have a Crucial 500GB SSD set. At first, yeah, it's super fast - it starts out at 780MB/s - but about 9 seconds in it slows down to 120MB/s, then 40MB/s, and just keeps slowing down.
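That pattern - a fast burst, then a sharp drop - is consistent with the SSD's own internal write cache filling up. Some rough arithmetic with the numbers above shows how much of the 9GB the fast phase actually covers (the cache interpretation is my guess, not a measurement of the drive):

```python
# Back-of-the-envelope check using the observed speeds from the post.

file_gb = 9
burst_mb_s = 780      # observed initial speed
burst_seconds = 9     # observed duration before the slowdown
sustained_mb_s = 120  # observed speed after the drop

burst_gb = burst_mb_s * burst_seconds / 1000          # data moved fast
remaining_gb = file_gb - burst_gb
sustained_seconds = remaining_gb * 1000 / sustained_mb_s

print(f"fast phase moved ~{burst_gb:.1f} GB")
print(f"remaining ~{remaining_gb:.1f} GB takes ~{sustained_seconds:.0f} s at {sustained_mb_s} MB/s")
```

So roughly 7GB goes at near-line-rate before the drive itself becomes the bottleneck; the SSD Optimizer cannot make the SSD write faster than its sustained rate.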

 

I had a remote connection with the StableBit window up and I saw the SSD drive get a chunk filled on it, but I expected the full 9GB to copy over at lightning speed.

 

FYI, I'm not griping - just trying to understand. I'm getting an HP ProCurve switch in this week (just got it off eBay), and I will have 5 machines connected via 10Gb. I'm just learning and tweaking and making sure this is all set up correctly. I have a habit of missing something! lol

(screenshots attached: pre.png, post.png)
