Using DrivePool for 3 or more storage tiers?


darana

Question

Howdy - just found DrivePool today, what a great program! I've been poking at a config here for a bit and wanted to see if I was missing anything as far as setting up a tiered storage solution goes.

 

Use case is that I am a photographer/designer, so I have lots of data that I access all the time, "project"-focused chunks of data that I access intensely for a while and want to be fast, and then huge chunks of data that I very rarely access (7.8TB of raw images so far... and I'm starting to shoot video more, so that's going to balloon!).

 

Currently my workstation is running an external 12TB SAS array (which is failing, hence the move to DrivePool) with an internal 1TB Samsung 850 EVO that I use for scratch and commonly used files like my git/code repos and active project files, e.g. when working on a recent shoot or a video project. I've also got an external ReadyNAS that I use for backup; it supports being mounted as an iSCSI target, and I've been thinking about adding it in as an archive tier for those files which I really very rarely access. Lastly, I've got a couple of older SSDs for the SSD Optimizer.

 

Manually moving files around is becoming a pain, though the speed benefits are worthwhile for the active projects at least. My ideal scenario for working data is as below. 

 

Front-end SSD Cache

Use the older Samsung SSDs with the SSD Optimizer as a purely front-end cache for reading/writing active data that lives on the large data store. I set this up with the SSD Optimizer and initial tests seemed to work as expected; I've seen the other posts about how to optimize this and will do so.

 

Primary Data Array

For the rest of the primary data tier there are 4x 4TB drives for now; I've put them into the pool and enabled 2x duplication. Again, the basic setup seems to be working as expected.

 

High Speed Working Data

I'd like to add the 850 EVO to the pool, but I don't want it to be written to EVER unless explicitly told to do so. To implement this, I:

  • configured the default Drive Usage Limiter to never use the EVO for duplicated or unduplicated data;
  • on the main balancing page, DISABLED the setting for file placement rules to respect the file placement limits set by the balancing plug-ins.

To move a "Project" folder over I:

  • Set a "Folder Duplication" setting for File Protection of 1x to the project folder.
  • In File Placement tab, went to the project folder, and unchecked all of the disks except for the 850 EVO
  • Changed the rule to "Never Save to other Disks".
  • Hit SAVE

At this point the data moved to the EVO, the pool shows as healthy, and my folder/file structure is intact. For the duration of working on the project it can now live on the fast storage; when I'm done actively working on it, I'd reverse the process.
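
As a sanity check that the rule really did what I think, I can peek at DrivePool's hidden PoolPart.* folders (each pooled disk keeps its share of the pool's tree under one of those). A minimal Python sketch, with hypothetical drive letters and a made-up project path:

```python
# Minimal sketch: report which physical disks hold a given pooled folder.
# Assumes DrivePool's usual layout, where every pooled drive has a hidden
# "PoolPart.<guid>" folder at its root mirroring the pool's directory tree.
# Drive letters and the project path are hypothetical.
from pathlib import Path

POOLED_DRIVES = ["D:/", "E:/", "F:/", "G:/", "S:/"]  # S: = the 850 EVO

def find_locations(pool_relative_path: str) -> list:
    """Return the physical drives whose PoolPart folder contains the path."""
    hits = []
    for drive in POOLED_DRIVES:
        for pool_part in Path(drive).glob("PoolPart.*"):
            if (pool_part / pool_relative_path).exists():
                hits.append(drive)
    return hits

# After saving the placement rule, this should list only the EVO:
print(find_locations("Projects/2017-05-ClientShoot"))
```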

 

It just occurred to me that the SSD Optimizer is probably going to try to sit in front of the EVO for this, though. Not sure if there is any way around that?

 

Archive Tier

To implement an "Archive" tier I'd add an iSCSI LUN from my ReadyNas to the DrivePool then essentially do the same thing as above, but instead selecting the older folders. For example, I'd just move all of my photo folders from pre-2015. This way I can still access them, admittedly at a slower speed, without having to do a full archiving, create separate lightroom catalogs or update all of the paths. 

 

So, all that said, did I just create a monstrosity or would this work as expected? With the obvious caveat that I'm taking responsibility for setting up the rules properly by hand. This is no different from what I'm already doing when I manually move stuff to the project drive, and far superior because it doesn't screw up file paths. I'm just worried about unforeseen outcomes from setting up something that seems to be a bit outside of the current design expectations! :)

 

 


11 answers to this question


I'm sorry for the long delay here (check the off-topic section for why).

 

 

As for doing this natively: no, this really wouldn't be supported, at least not well. It may be possible to change the SSD Optimizer to do this or, more likely, to create a new version of it that does support this.

 

 

That said, you can probably accomplish this using the SSD Optimizer balancer and some file placement rules.

The SSD Optimizer doesn't "auto detect" drives; only the drives you specify are marked as SSDs. So you could choose not to set the 850 EVO as an SSD, but as an archive drive instead.

 

From there, create a specific folder (or folders), and set up file placement rules to limit those folders to that drive.

In the main balancing settings, uncheck the "File placement rules respect real-time file placement limits set by the balancing plug-ins", so that the files in these folders are written directly to the SSD drive in question. 

 

 

As for the "archive", that is where it would get really complex here.  And this is something we don't really support, sort of using file placement rules to put certain files/folders on specific disks.  

 

Otherwise, it may be simpler to use syncing software (or robocopy) to copy/move these files to the NAS.
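
For example, a rough sketch of the robocopy route; the paths are made up, and I'd dry-run it with /L first, since /MOVE deletes the source after copying:

```python
# Move files that haven't changed in ~2 years from the pool to the NAS.
# Paths are hypothetical; add "/L" to the flags to preview without moving.
import subprocess

SOURCE = r"P:\Photos"
DEST = r"\\readynas\archive\Photos"

subprocess.run([
    "robocopy", SOURCE, DEST,
    "/E",           # include subdirectories, even empty ones
    "/MINAGE:730",  # only files older than 730 days
    "/MOVE",        # move (delete from source after copying)
], check=False)     # robocopy exit codes below 8 mean success, so don't raise
```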


Thanks Christopher, I appreciate the reply. This jibes with what I was seeing as I've been using DP the last couple of weeks. I had turned off all of this config stuff and am just using the basic duplication and SSD Optimizer in the interim. It's still a nicer solution than my old external array, though I still covet that next tier (without having to build and manage too much additional hardware :D)

 

I'm unfortunately experiencing some other issues that I can't yet pinpoint, but I think they may be related to DrivePool. If/when I can pinpoint them more exactly I'll make another post with more details.


Well, I'll feature-request this anyway, as a 3rd tier could be useful for some people. The issue is how we determine what goes into the 1st and 3rd tiers.

 

https://stablebit.com/Admin/IssueAnalysis/27444

 

 

And please do let me know if you're running into any other odd issues. Post in the forums, or at https://stablebit.com/Contact (we check this more often).


I know at work we have multiple tiers of data spread over Pure Storage (all SSD arrays) and various other storage solutions consisting of spinning disk. These tiers are based on file accessed date or modified date (I forget which), so older data gets pushed to slower disks and remote data centers with free space, depending on how old the data is. I forget the details because I'm no longer on the team that worked with the storage team, but I just wanted to throw that out there as a way to determine what goes into different tiers of storage.

That's all in an enterprise environment with multiple data centers across the country and over 100k employees, but the same concept could be used in a home environment: new files under a given age would stay on tier 1, which could be SSDs, then move to local spinning disks within an older age range, and then move to cloud drives or remote shares after another given age.
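
As a rough sketch of that flow (nothing DrivePool-specific, just bucketing files by age; the thresholds and tier names are made-up examples):

```python
# Classify each file into a tier by age. Thresholds and names are hypothetical.
import time
from pathlib import Path

TIERS = [            # (max age in days, tier), checked in order
    (30,   "tier1-ssd"),
    (365,  "tier2-spinner"),
    (None, "tier3-cloud"),   # unbounded: everything older
]

def tier_for(path: Path) -> str:
    # Windows often disables last-access updates, so mtime is the safer field.
    age_days = (time.time() - path.stat().st_mtime) / 86400
    for max_age, name in TIERS:
        if max_age is None or age_days <= max_age:
            return name
    return TIERS[-1][1]  # fallback (unreachable: last tier is unbounded)

for f in Path("P:/").rglob("*"):
    if f.is_file():
        print(tier_for(f), f)
```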


I would like to second the desire to have the ability to move files to certain storage based on various criteria. This would give the ability to add an external NAS, but allow me to automatically place files there that haven't been changed in X days/weeks/months/years. Further, having the ability to mark different groups of drives as different tiers based on configurable criteria would be great.

 

Possible criteria:

  • file age
  • last file access/change
  • folder/file location
  • file type

 

The biggest issue that the OP and I face is that the data we need speed/performance from may have already been moved to tier 3 storage for non-use, and then we want it on fast storage, most probably immediately. In my case, the files/data I'm working with at this point are 50-200GB in size. The time to move, re-balance, etc. is taxing and can make the tiering process unusable. For me, with the proper logic this could be offset: VMDK/VHD always on tier 1, everything else to tier 2 or tier 3 based on file age/access and/or file type.
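
Something like the sketch below, where type rules win before age is even consulted (the extensions, thresholds, and tier names are purely my examples):

```python
# File-type rules take priority (VMDK/VHD pinned to tier 1); age decides the rest.
import time
from pathlib import Path

PINNED_TO_TIER1 = {".vmdk", ".vhd", ".vhdx"}   # always on fast storage

def place(path: Path) -> str:
    if path.suffix.lower() in PINNED_TO_TIER1:
        return "tier1"
    age_days = (time.time() - path.stat().st_mtime) / 86400
    return "tier2" if age_days <= 90 else "tier3"

print(place(Path(r"D:\VMs\dev.vmdk")))   # -> tier1, regardless of age
```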

 

Sounds like a great idea!


I would also like to add my "+1" to this.

I am currently evaluating my options here in the office. We are a small(ish) VFX/Post/VR house with currently 2x 96TB "Live" mirrored, 1x 56TB "Archive", and 1x 24TB "General stuff", all on Synology RS devices of various ages.

I would love to build a new server "front end" with some fast NVMe cache locally, along with a chunk of capacity drives (let's say 64TB), that then connects to the Synology units over iSCSI and manages the data in an automated fashion. The dream scenario would be "data ageing", where the oldest-accessed files get pushed off to the Synology units and eventually to CloudDrive (along with a DR copy of the hot data).

From the users' perspective, the files are all still there, aggregated by DrivePool into one location; they are just served from the "fastest required location".


So in my case, I guess, that could mean that I have a pool that consists of 1 SSD, 1 regular spinner, and 1 archive HDD, and files would reside on the SSD until they get older, move to the regular spinner, get older still, and move to the archive HDD? Balancing rules would accomplish this and meanwhile keep duplication, to the extent possible, within one tier?

 

I think I am a +1 for this (and that is saying something as I am opposed to many/most suggestions made here :D ).

 

I just wonder whether, e.g., SQL Server DBs would be able to be relocated, as SQL Server tends to lock its files pretty darn well.
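
Out of curiosity, a quick (and crude) way to probe that before a move; the path is hypothetical, and a successful open doesn't guarantee anything, but a live, locked .mdf will typically refuse it:

```python
# Crude probe: can we open the file for read/write at all? SQL Server keeps
# its data files open with exclusive sharing, so this usually fails on them.
def is_probably_movable(path: str) -> bool:
    try:
        with open(path, "r+b"):
            return True
    except OSError:      # PermissionError on a locked file, etc.
        return False

print(is_probably_movable(r"D:\SQL\mydb.mdf"))  # hypothetical path
```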


Unless your SSDs are big, don't forget to Storage Space them into one big chunk. Otherwise you will fall for the "240GB video file will not be written to a single 240GB SSD, even if you have a second 240GB SSD standing by". :)

Nah, no reason to do that anymore.  Just use DrivePool.  :P  

Create a big pool of SSDs. :)

 

 

 

 

"Possible criteria: file age, last file access/change, folder/file location, file type"

This does sound like a great idea.

 

The problem is that the current balancing system doesn't even use these.  

 

To implement this would require a massive rework of the balancing system, and it would essentially require either heavy disk access or a database to hold this information.
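
To illustrate the scale of that, the metadata store it would need might look something like this; a hypothetical sketch, not anything we've built:

```python
# Hypothetical sketch of a balancing metadata index: one SQLite table of
# per-file stats, so balancing passes can query ages without rescanning disks.
import sqlite3
from pathlib import Path

db = sqlite3.connect("pool_metadata.db")
db.execute("""CREATE TABLE IF NOT EXISTS files (
    path TEXT PRIMARY KEY, size INTEGER, mtime REAL, atime REAL)""")

def index_tree(root: str) -> None:
    for f in Path(root).rglob("*"):
        if f.is_file():
            st = f.stat()
            db.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?)",
                       (str(f), st.st_size, st.st_mtime, st.st_atime))
    db.commit()

index_tree("P:/")
# Example query: files untouched for a year are cold-tier candidates.
cold = db.execute(
    "SELECT path FROM files WHERE mtime < strftime('%s','now') - 31536000")
print(cold.fetchmany(10))
```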

 

 

THIS IS NOT A PROMISE OR EVEN A PLAN, it's just "thinking out loud":

This would require something like what we have brainstormed for StableBit FileVault: use the info from that to expand the balancing system immensely and add massive flexibility to it.

 

 

But again, this would be a huge change, so it may not happen, or may be a long way off.

 

 

Also, much of this could be handled by using file placement rules and managing this stuff in the pool directory structure itself. It's a lot more manual, but it would work.
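
For example, you could keep sibling folders on the pool with different placement rules and "re-tier" a project just by moving it between them; a sketch, with made-up pool paths:

```python
# Manual tiering via directory structure: "P:/Fast" and "P:/Archive" are
# hypothetical pool folders, each pinned to different disks by placement rules.
import shutil

def retier(project: str, to_fast: bool) -> None:
    src, dst = ("P:/Archive", "P:/Fast") if to_fast else ("P:/Fast", "P:/Archive")
    shutil.move(f"{src}/{project}", f"{dst}/{project}")

retier("2014-Wedding", to_fast=False)   # demote an old project to slow disks
```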


"But again, this would be a huge change, so it may not happen, or may be a long way off."

 

So, uhm, by next week? :P
