Everything posted by darana

  1. Thanks Christopher, I appreciate the reply. This jibes with what I've been seeing as I've used DrivePool over the last couple of weeks. I've turned off all of this configuration and am just using basic duplication and the SSD Optimizer in the interim. It's still a nicer solution than my old external array, though I still covet that next tier (without having to build and manage too much additional hardware). I'm unfortunately experiencing some other issues that I can't yet pinpoint, but I think they may be related to DrivePool. If/when I can pin them down more exactly, I'll make another post with more details.
  2. Howdy - just found DrivePool today, what a great program! I've been poking at a config for a bit and wanted to see if I was missing anything as far as setting up a tiered storage solution.

Use case: I'm a photographer/designer, so I have lots of data that I access all the time, "project"-focused chunks of data that I access intensely for a while and want to be fast, and huge chunks of data that I very rarely access (7.8TB of raw images so far... and I'm starting to shoot more video, so that's going to balloon!). Currently my workstation runs an external 12TB SAS array (which is failing, hence the move to DrivePool) with an internal 1TB Samsung 850 EVO that I use for scratch and for commonly used files like my git/code repos and active project files, e.g. a recent shoot or a video project. I've also got an external ReadyNAS that I use for backup but which supports being mounted as an iSCSI target; I've been thinking about adding it as an archive tier for the files I really very rarely access. Lastly, I've got a couple of older SSDs for the SSD Optimizer. Manually moving files around is becoming a pain, though the speed benefits are worthwhile for the active projects at least. My ideal scenario for working data is below.

Front-end SSD Cache
Use the older Samsung SSDs with the SSD Optimizer as a purely front-end cache, reading/writing active data that lives on the large data store. I set this up with the SSD Optimizer and initial tests worked as expected - I've seen the other posts about how to optimize this and will do so.

Primary Data Array
For the rest of the primary data tier there are 4x 4TB drives for now. I've put them into the pool and enabled 2x duplication. Again, the basic setup seems to be working as expected.

High Speed Working Data
I'd like to add the 850 EVO to the pool, but I don't want it to be written to EVER unless explicitly told to do so.
To implement this, I:
- Configured the default Drive Usage Limiter to never use the EVO for duplicated or unduplicated data.
- On the main balancing page, DISABLED the setting for file placement rules to respect the file placement limits set by the balancing plug-ins.

To move a "Project" folder over, I:
- Set a Folder Duplication setting of 1x file protection on the project folder.
- In the File Placement tab, went to the project folder and unchecked all of the disks except the 850 EVO.
- Changed the rule to "Never save to other disks."
- Hit SAVE.

At this point the data moved to the EVO, the pool shows as healthy, and my folder/file structure is intact. For the duration of working on the project it can now live on the fast storage; when I'm done actively working on it, I'd reverse the process. It just occurred to me that the SSD Optimizer is probably going to try to sit in front of the EVO for this, though. Not sure if there is any way around that?

Archive Tier
To implement an "Archive" tier, I'd add an iSCSI LUN from my ReadyNAS to the pool and then do essentially the same thing as above, but instead selecting the older folders. For example, I'd just move all of my photo folders from pre-2015. This way I can still access them, admittedly at a slower speed, without having to do a full archive, create separate Lightroom catalogs, or update all of the paths.

So, all that said: did I just create a monstrosity, or would this work as expected? The obvious caveat is that I'm taking responsibility for setting up the rules properly by hand. That's no different from what I'm already doing when manually moving stuff to the project drive, and it's far superior because it doesn't screw up file paths. I'm just worried about unforeseen outcomes from setting up something that seems to be a bit outside the current design expectations!
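The "move everything pre-2015 to the archive tier" step above can be sketched as a small script. This is a minimal illustration only, not a DrivePool feature: the paths (`P:\Photos` for the pool, `Z:\Archive\Photos` for the iSCSI-backed archive) and the year-prefixed folder naming are assumptions; with file placement rules you would instead let DrivePool do the moving after editing the rules.

```python
import shutil
from pathlib import Path

# Hypothetical locations: the pooled photo root and the archive tier
# (an iSCSI LUN mounted as a drive letter). Adjust for your layout.
PHOTOS = Path(r"P:\Photos")
ARCHIVE = Path(r"Z:\Archive\Photos")
CUTOFF_YEAR = 2015


def archive_old_folders(photos=PHOTOS, archive=ARCHIVE,
                        cutoff=CUTOFF_YEAR, dry_run=True):
    """Move year-prefixed folders (e.g. '2013' or '2014-Weddings')
    older than the cutoff to the archive tier, keeping folder names
    intact so Lightroom paths only need a single root remap."""
    moved = []
    for folder in sorted(photos.iterdir()):
        if not folder.is_dir():
            continue
        year = folder.name[:4]
        if year.isdigit() and int(year) < cutoff:
            dest = archive / folder.name
            if not dry_run:
                shutil.move(str(folder), str(dest))
            moved.append((folder, dest))
    return moved
```

Running it with `dry_run=True` first just reports what would move, which is a cheap way to confirm the year-prefix assumption holds before anything is relocated.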
  3. +1 to this request, though for a slightly different reason - to be able to use storage tiers without disrupting folder structures.