
SSD Optimizer problem


plebann

Question

Hi.

I'm using DrivePool with the SSD Optimizer plugin (option "Fill SSD drives up to 90%").

I have a small dedicated 70GB partition on my SSD; the SSD Optimizer plugin is configured and has worked as expected until now.

 

Free space on the SSD partition: 22GB.

When I try to copy a single 43GB file to the DrivePool pool (which has 2TB free), I get the error "Not enough free space".

I assume the SSD Optimizer plugin is trying to place the new file on my SSD, but the file is larger than the free space on that disk. Since there is still more than 10% free space on the disk, the plugin keeps forcing new files onto the SSD partition.

 

I suppose I'm right, because when I disabled the SSD Optimizer plugin I was able to copy that file without a problem.

 

Is this a real issue, or do I need to improve my SSD Optimizer settings?

Recommended Posts

This is part of the problem with the way that the SSD optimizer balancer works. 

 

Specifically, it creates "real time placement limiters" to restrict which disks new files can be placed on.

 

 

 

I'm guessing that the SSD is below the threshold set for it (75% by default, so roughly 45-50GB used). Increasing the limit on the SSD may help; lowering it may as well, though that would force the pool to place files on the other drives rather than on the SSD.
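The interaction described above can be sketched as a toy check (hypothetical names and logic, not DrivePool's actual code): while the SSD sits under its fill limit, the limiter keeps steering new files to it, and the copy fails whenever the incoming file is bigger than the SSD's free space, even though the pool as a whole has plenty of room.

```python
# Toy model of the real-time placement limiter described above.
# All names and numbers are illustrative, not DrivePool internals.

def placement_error(ssd_free_gb, ssd_used_pct, fill_limit_pct, file_gb):
    """Return an error string if the copy would fail, else None."""
    if ssd_used_pct < fill_limit_pct:
        # SSD is still under its fill limit, so the limiter forces
        # new files onto it -- even if the file doesn't fit.
        if file_gb > ssd_free_gb:
            return "Not enough free space"
        return None
    return None  # limiter releases the SSD; archive drives are used

# The original poster's situation: 70GB SSD, 22GB free, 90% fill limit,
# 43GB file -- used space is ~69%, so the SSD is still the forced target.
print(placement_error(ssd_free_gb=22, ssd_used_pct=100 * 48 / 70,
                      fill_limit_pct=90, file_gb=43))
# → Not enough free space
```

Disabling the plugin removes the limiter, which is why the same copy then succeeds on the archive drives.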

 

 

Additionally, there are some configuration changes that may help make the software more aggressively move data off of the drive.

http://stablebit.com/Support/DrivePool/2.X/Manual?Section=Balancing%20Settings

On the main Balancing Settings page, set it to "Balance immediately", and either uncheck the "Not more often than every X hours" option or set it to a low number, like 1-2 hours.

 

For the balancing ratio slider, set this to "100%", check the "or if at least this much data needs to be moved" option, and set it to a very low number (like 5GB).

 

This should cause the balancing engine to rather aggressively move data out of the SSD drive and onto the archive drives, reducing the likelihood that this will happen.

 

 

Also, it may not be a bad idea to use a larger SSD, as the drive's free space is what gets reported when adding new files.



Yeah, this makes the feature a bit less useful to me, since it can't somehow know the size of the file before it writes. I sometimes have files larger than the spare 32GB SSD I had lying around, but I can see why you wouldn't always be able to tell the size before you start.


Hi,

 

I recently converted from WHS2011 to W10; after a few hiccups :huh: I'm now running without too many issues.

 

I had converted a 512GB SSD for use in DrivePool, but although it's seen, it was not being used.

 

Then I discovered I should be using the SSD Optimizer plugin.

 

I downloaded it, but can't run it. I get an error message saying I must use DrivePool 2.x to use this version of the SSD Optimizer. Based on messages here, I'm actually using the 2.2.0.852 beta; shouldn't that let me use the SSD Optimizer?

 

If I need to go back to 2.1.1.561, how do I do it?

 

Also, I've gained some files that I don't recognize; I can't open them or see their size. I'm guessing there is one file for each of my spinners.

 

Is that correct? The forum will not let me post a snippet of the folder's name, but it's a long alphanumeric sequence.

 

I duplicate most but not all of my server folders.

 

Please let me know how I can use my SSD.

 

Thanks guys.


Yeah, sorry about that. The installer that we use (WiX, IIRC) has some odd issues, like this.  

 

And yeah, the new files should copy to the SSD, and then later be balanced off of the SSD, based on your balancing settings. 

 

Also, if you're using duplication, then remember that you need to use 2+ drives for the SSD, or it will fall back to using an archive drive, for duplicated data. 

And yes, this has to be two PHYSICAL drives, not just partitioned drives, as the software is aware of the volume/disk relationships. 


I did not quite understand this answer, specifically the part about duplication.

 

As far as I understood, I had not turned on duplication for the 512GB SSD, only for the folders on the pool.

 

However, since the files did not later transfer to the pool, it would seem that I need 2x SSDs even just to store the files and then have them automatically move to the spinners in the pool later.

 

That's a shame, since I have one spare 512GB SSD but only 2x 256GB SSDs.

 

So to recap: it's not possible to set a single SSD to receive the files and later transfer them to the main pool if duplication is enabled there?


The reason for this is that we use "real time duplication". That means duplicated files are written to all sets of disks at the same time, in parallel, so they're protected as soon as they're written to the pool.

 

That's why this balancer requires two (or more) drives for the "SSD"s.  

 

 

That said, you can disable real time duplication, and it will then only need the single disk. But it may be a full 24 hours before the data is actually duplicated (protected). So if a disk fails in that time, you may lose data because it's not duplicated yet. 
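A rough sketch of what "real time duplication" means in practice (illustrative code, not StableBit's implementation): every write lands on each disk in the duplication set in parallel, which is why the SSD balancer needs as many cache drives as your duplication level.

```python
# Sketch of real-time duplication: the same bytes are written to every
# target disk in parallel, so the file is protected the moment the write
# completes. Paths and pool layout here are hypothetical.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
import tempfile

def duplicated_write(filename, data, target_dirs):
    """Write identical copies of `data` to every target directory in parallel."""
    def write_one(d):
        path = Path(d) / filename
        path.write_bytes(data)
        return path
    with ThreadPoolExecutor(max_workers=len(target_dirs)) as pool:
        return list(pool.map(write_one, target_dirs))

# With 2x duplication, two "SSD" targets are needed up front:
dirs = [tempfile.mkdtemp() for _ in range(2)]
copies = duplicated_write("example.bin", b"payload", dirs)
print(len(copies))  # → 2
```

With real-time duplication disabled, the second copy is made later by a background pass instead, which is the 24-hour exposure window mentioned above.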


That's why this balancer requires two (or more) drives for the "SSD"s. 

 

Really appreciate the quick reply. Pardon me if this information is available in any help file for SSD optimizer.

 

OK, what's the rule about the size and number of the SSDs?

 

- If the SSDs are not equal in size, does the optimizer just treat them both as the same size as the smallest?
- How do 3 SSDs work vs. 4?
- I'm running out of SATA ports; is adding them via an add-in card OK for the Optimizer?

That said, you can disable real time duplication, and it will then only need the single disk. But it may be a full 24 hours before the data is actually duplicated (protected). So if a disk fails in that time, you may lose data because it's not duplicated yet. 

 

So I could just turn off real-time duplication. I thought I did that, but 48 hours later the files were still on the SSD and had duplicated there, just like your warning said. I'll try again before deciding on adding another SSD.


You're very welcome! 

 

No rule on size, but the size is heavily influenced by your needs. If you're dumping hundreds of GB per day, then you will definitely want larger drives. But if it's more like a few GB per day, then a small drive is going to be fine.

Also, the size of the drive has to be large enough to fit the files. So if you plan on dumping 100GB files, then you need a drive that is larger than that, otherwise, you will get "out of space" errors when copying files.

 

 

As for number, the minimum is the highest level of duplication that you're using, generally speaking. Since you want all of the copies written to the SSDs first, that's a requirement. If you turn off real time duplication, then you really only need one.

(also, see below)

 

If the SSDs are not equal in size, that's fine. DrivePool's default placement strategy is to place files on the drive with the most free space. This also applies to the SSD Optimizer and the drives marked as "SSDs". So it will round-robin between the drives, spreading out the data where possible.
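The most-free-space placement described here can be sketched as a toy model (made-up drive names and sizes, not DrivePool internals). With equal-size cache drives, picking the drive with the most free space naturally produces the round-robin behavior mentioned above:

```python
# Toy model of "place on the drive with the most free space".
# Drive names and sizes are illustrative only.

def pick_target(free_space):
    """free_space: dict of drive -> free GB. Return the drive with most free."""
    return max(free_space, key=free_space.get)

free = {"ssd1": 120, "ssd2": 120}   # two equal cache drives
order = []
for _ in range(4):                  # four 10GB files
    drive = pick_target(free)
    free[drive] -= 10
    order.append(drive)
print(order)  # → ['ssd1', 'ssd2', 'ssd1', 'ssd2']
```

With unequal drives, the larger one simply absorbs files first until the free space evens out, after which the alternation resumes.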

 

 

As for a controller card, absolutely!  And I have a number of recommendations depending on your budget (more is better here, to be honest). 

 

 

As for the files still being on the SSD, that depends on the balancing settings you've configured. Also, make sure that the drive *is* marked as an SSD in the balancer, and that nothing has the file open (as this will prevent balancing).


So in another thread you will see I ran into problems adding two older 256GB SSDs. :rolleyes:

 

Anyway, after a complete OS reinstall, and once my folder questions are resolved, I will be following your information above.

 

I will probably add a 2nd 500GB SSD for simplicity and to avoid the driver issues that started my problems, but it's good to know you thought ahead on these points.

 

Before I read this reply, I have to admit I "cheaped out" on a SYBA SI-PEX40094 PCI-E SATA add-in card, but fortunately it will only have to serve an occasionally used optical drive unless I add another "Red" in a year or two.

 

:unsure: Maybe I will try an SSD on it for the sake of science... and let you know.



So far the above card no workee. I'm not sure I want to spend big money on SATA ports, but I'd be happy to learn of cards you found that worked.

Maybe this should move to a new thread?

On 10/9/2017 at 5:37 PM, Christopher (Drashna) said:

No rule on size.  But the size is heavily influenced by your needs.  If you're dumping 100's of GB per day, then you will definitely want larger drives. But if it's more like GBs per day, then a small size is going to be fine. 

Also, the size of the drive has to be large enough to fit the files. So if you plan on dumping 100GB files, then you need a drive that is larger than that, otherwise, you will get "out of space" errors when copying files

I just ran into this when trying to back up a few HDD images and running a few VMs. I bought two 120GB SSDs for caching, but it looks like I'll need 200GB-500GB to avoid any issues.

I set up the cache because the VMs' HDD access was sluggish.

I need 2 SSDs for duplication.

Question: I don't really want to waste those SSDs or the money. Is it possible to pool the two 120GB SSDs, then buy two more 120GB SSDs and pool those, so I can have a 240GB 2x-duplication SSD cache? So, (120+120) + (120+120)?

2 hours ago, MitchellStringer said:

Question: I don't really want to waste those SSDs or the money. Is it possible to pool the two 120GB SSDs, then buy two more 120GB SSDs and pool those, so I can have a 240GB 2x-duplication SSD cache? So, (120+120) + (120+120)?

Just tested this myself using two small partitions off a spare SSD I had in my server. It turns out you actually can pool drives into a child pool, add that pool to your primary one, and then set up the SSD cache target to be the child pool.

However, Christopher's statement still holds true: you'll be limited by the size of a single volume for your largest single file copied to the main pool. I.e., if it's a file over 120GB, then it wouldn't fit on any of your 120GB cache drives. That's just how DrivePool works: a file must fit on a single volume/drive.

As an alternative (and perhaps better) solution, you could use software RAID 0 to create a stripe from the two existing 120GB SSDs, and another stripe from the two new 120GB SSDs. You'd then have two valid 240GB SSD cache targets, which would support 2x duplication. Performance might even go up with that implementation.

42 minutes ago, Jaga said:

Just tested this myself using two small partitions off a spare SSD I had in my server.  Turns out you actually can pool drives into a child pool, add that pool to your primary one, and then setup the SSD cache target to be the child pool.

However - Christopher's statement still holds true:  you'll be limited by the size of a single volume for your largest single file copy to the main pool.  i.e. if it's a file over 120GB, then it wouldn't fit on any of your 120GB cache drives.  That's just how Drivepool works - a file must fit on a single volume/drive.

As an alternate (and perhaps better solution), you could use software RAID 0 to create a stripe using 2 120GB SSDs, and then another stripe for the two new 120GB SSDs, and you'd have two valid 240GB SSD cache targets for your cache, which would support 2x duplication.  The performance might even go up due to that implementation.

 

Thank you for testing that for me! I might consider RAID 0. But do you know what would happen in a configuration with different-sized drives?

 

I.e., 3x SSDs: 120GB + 120GB + 500GB. Would the available space still show as 120GB? Would 2x duplication be handled OK?

 

I'm trying to keep as many drive bays free as possible.


If you mean a child pool of the two 120s, then another higher-level pool of that child and the 500, and then adding that pool to the main pool as its SSD cache: the limit on the largest single file copied to the main pool would be that of the smallest volume/drive in the child caches (i.e. 120GB), since you have 2x duplication enabled. The child pool of 120+120 would still need to hold a copy of whatever file was being written to the SSD cache (and duplicated there), and it has a limit of 120GB per file.

If your largest file is over 120GB, getting another 500GB SSD to add as the second cache child won't solve the underlying problem. Software RAID 0 on the 120s, however, will fix it without any issue.
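The capacity arithmetic above can be written out explicitly (a hedged sketch of the hypothetical pool layouts under discussion, not DrivePool code): with duplication, every copy must land somewhere, so the largest single file is capped by the smallest per-copy limit.

```python
# Illustrative arithmetic for the nested-pool layouts discussed above
# (a 120+120 child pool plus a 500GB drive, 2x duplication).

def max_single_file(duplication_targets):
    """Each target is a list of volume sizes (GB). A file must fit on one
    volume within each target, so the cap is the smallest of the
    per-target maximums."""
    return min(max(volumes) for volumes in duplication_targets)

# Child pool of two 120GB SSDs, plus a single 500GB SSD:
print(max_single_file([[120, 120], [500]]))   # → 120
# A RAID 0 stripe of the two 120s presents one 240GB volume instead:
print(max_single_file([[240], [500]]))        # → 240
```

This is why the RAID 0 stripe raises the ceiling while a larger second SSD on its own does not.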

19 minutes ago, Jaga said:

If you mean a child pool of two 120's, then another higher level pool of the child and the 500, then add that pool to the main pool as it's SSD Cache... the limitation imposed on the largest single file copied to the main Pool would be that of the smallest volume/drive in the child caches (i.e. 120GB) since you have 2x duplication enabled.  The child pool of 120+120 would still need to hold a copy of whatever file was being written to the SSD cache (and duplicated there), and it has a limit of..  120GB per file.

If your largest file size is over 120GB..  getting another 500GB SSD to add as the 2nd cache child won't solve the underlying problem.  Software RAID 0 on the 120's will fix it without any issue however.

That's what I expected, thank you. It's not so much that a single file is over 120GB; it's that I am backing up three 60GB files at once, so after the first two files it just gives up.

 

I could do RAID 0: 240GB (120+120) + 240GB (single drive). Hopefully 240GB should be enough.

Thanks again for your time


I have a question about the SSD Optimizer. I have it set up and working as expected: it is moving files off the SSD immediately, as I configured. However, my question is: can the plugin be set to respect the balancing rules set by the plugins below it? In other words, I want DrivePool to balance data over all of the drives equally. Is this possible?

It doesn't seem to be, as I even set "use no more than 50%" on the archive drives, and it still seems to be filling the first drive in the list.


Can anyone please answer this? I am attaching screenshots to help illustrate. I want to use the SSD cache because I have a 10Gb link from my server to my primary workstation for moving large amounts of data quickly (a 1Gb link saturates at ~113MB/s). But I also want to balance out the drives.

 

