Specified duplication drives


bissquitt

Question

I bought too many 8TB easystores when they were on sale, and I've decided to use the ones that won't fit in my case as backup drives: just USB-attached, sitting on top of the case. What I want is to make sure that those 5 drives hold at least one copy of every file. If there's a fire or other emergency, I want to be able to grab the 5 externals sitting on top and be confident I have a full copy of everything (assuming working hardware). I know at one point there was a feature that was similar to this, but it didn't quite do this. Additionally, I would ideally like all file access to NOT come from these USB drives, both for speed and to limit wear on them.

 

My current setup (by accident, due to a migration) is my main pool plus a pool of 5 8TB drives that holds a complete copy of the data. I know it would be easy to just set a scheduled task to back up to that pool, but I didn't know if there was a more elegant built-in solution.
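(For example, roughly something like this, run via Task Scheduler; the paths and script name are just placeholders for my setup:)

```python
# Minimal one-way mirror sketch: copy anything new or changed from the
# internal pool to the external backup pool. Placeholder drive letters.
import os
import shutil
from pathlib import Path

SOURCE = Path(r"D:\Pool")       # assumption: drive letter of the internal pool
DEST = Path(r"E:\BackupPool")   # assumption: drive letter of the external (USB) pool

def mirror(src: Path, dst: Path) -> None:
    """One-way copy: anything new or changed in src is copied to dst."""
    for root, _dirs, files in os.walk(src):
        rel = Path(root).relative_to(src)
        target_dir = dst / rel
        target_dir.mkdir(parents=True, exist_ok=True)
        for name in files:
            s = Path(root) / name
            d = target_dir / name
            # Copy only if the file is missing or its size/mtime differ.
            if (not d.exists()
                    or s.stat().st_size != d.stat().st_size
                    or s.stat().st_mtime > d.stat().st_mtime):
                shutil.copy2(s, d)

if __name__ == "__main__":
    mirror(SOURCE, DEST)
```

Something like schtasks /Create /SC DAILY /ST 03:00 /TN PoolBackup /TR "python C:\scripts\mirror_pools.py" could then run it nightly (again, names and paths are placeholders).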


10 answers to this question


So you have 5 drives, and want to have a copy of all the files on all of the drives? 

 

If so, then just enable duplication. There is a "Protect my files from more than one disk failure" option; check it and set the level to 5. That enables x5 duplication, which does exactly this.


I have 29 drives: 24 in a case and 5 sitting on top via USB. The 5 on top have enough capacity to hold all the data on the 24. I want to make sure the 5 connected via USB have a full copy of the data, so in an emergency I can grab those 5 and run rather than having to hunt for which drives in the server hold which data.


Aah, okay. 

 

And you have the internal drives in a pool as well, correct?

 

If so, the simplest option here would be to make sure you're on the public beta of StableBit DrivePool (or newer).

 

Add the 5 drives to a pool. Then add both this pool and the existing pool to a new pool (so you have a top-level pool containing the two other pools).

 

Then seed this top-level pool: on the internal pool, move the contents into the newly created PoolPart folder. Remeasure the pools (all of them), then enable duplication on the top-level pool and verify the duplication settings on the underlying pools.
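If it helps, a rough sketch of that seeding step (the drive letter is an assumption, and this is illustrative rather than official tooling; stop the StableBit DrivePool service first and remeasure afterwards):

```python
# Seeding sketch: after the internal pool (here D:) has been added to the new
# top-level pool, DrivePool creates a hidden PoolPart.<GUID> folder on it.
# Moving the existing content into that folder "seeds" the top-level pool
# instead of re-copying everything.
import shutil
from pathlib import Path

POOL_ROOT = Path("D:/")  # assumption: drive letter of the underlying (internal) pool

# Find the hidden PoolPart.* folder that DrivePool created.
poolparts = sorted(POOL_ROOT.glob("PoolPart.*"))
if len(poolparts) != 1:
    raise SystemExit(f"Expected exactly one PoolPart folder, found {len(poolparts)}")
poolpart = poolparts[0]

# Move everything else at the root of the pool drive into the PoolPart folder,
# skipping system folders. Remeasure the pools in the DrivePool UI afterwards.
for entry in POOL_ROOT.iterdir():
    if entry == poolpart or entry.name.startswith(("$", "System Volume Information")):
        continue
    print(f"moving {entry} -> {poolpart / entry.name}")
    shutil.move(str(entry), str(poolpart / entry.name))
```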

 

 

This means that new data is written to both the internal and external pools at the same time: one copy will be internal and one will be external. However, if you remove the external disks, the pool will show a missing disk, but in this case that wouldn't really matter.

 

 

 

The other option is to just pool the external disks and use something like FreeFileSync to sync the pools.
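Or, if you'd rather script it than use FreeFileSync, a rough equivalent (assumed paths) that wraps robocopy:

```python
# Mirror the internal pool to the external pool with robocopy. Note that /MIR
# also deletes files from the destination that no longer exist in the source.
import subprocess

SOURCE = r"D:\Pool"        # assumption: internal pool
DEST = r"E:\BackupPool"    # assumption: external (USB) pool

result = subprocess.run(
    [
        "robocopy", SOURCE, DEST,
        "/MIR",          # mirror the directory tree
        "/FFT",          # tolerate 2-second timestamp granularity
        "/R:2", "/W:5",  # limit retries/waits on locked files
        "/NP",           # no per-file progress in the log
        "/LOG:C:\\pool_backup.log",
    ],
    check=False,
)

# robocopy exit codes below 8 indicate success (0 = nothing copied,
# 1 = files copied, etc.); 8 and above indicate failures.
if result.returncode >= 8:
    raise SystemExit(f"robocopy reported errors (exit code {result.returncode})")
```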


Did not know you could do a pool inside a pool. Thanks.

 

Is there a way to set read priority? As in, if a file is accessed, read it from the internal pool rather than hammering the externals over USB? Ideally the only access to the external pool would be writing/editing data as it happens, plus the occasional rebalance or verify. I will be running Plex, so if Plex decides it's going to read a file from the USB drive, it's going to be REALLY slow.


Yeah, we added it "recently". It's only supported in the beta build right now, but with CloudDrive it really became necessary. (Speaking of which, the pool is "CloudDrive aware" and will avoid using CloudDrive disks if there is a local option.)

 

As for drive priority, the "Read Striping" feature should handle this for the most part. If one or more drives are slower or have higher latency, it should use the faster drives.

 

At least for reading. Writing is done in parallel to all destination disks, but balancers such as the "SSD Optimizer" should help with that.


Wouldn't read striping do the exact opposite of what I want? It reads from both duplicated drives simultaneously. I guess it would stop things from being slow if it ONLY read from the external, but one of the main points is to not put wear on the external backups unless needed.


The feature is called read striping, but it doesn't always stripe. It will read from one disk or the other, based on each disk's activity/latency.

 

http://stablebit.com/Support/DrivePool/2.X/Manual?Section=Performance%20Options

 

StableBit DrivePool balances the reads across different speed hard drives, depending on the current I/O load.

 
For example, if you have 2 hard drives, one connected via USB 2.0 and another via eSATA, StableBit DrivePool will issue more read calls to the eSATA drive because it has more bandwidth.

 

And you can substitute internal SATA for eSATA here as well. 

 

 

If you're using USB 3.0, it may not switch to the internal drives as actively, since USB 3.0 has a lot more bandwidth.

 

 

 

Either way, I've added a feature request to prefer non-USB drives. We already do this for CloudDrive disks, so hopefully we can add a toggleable option for removable storage like this.

https://stablebit.com/Admin/IssueAnalysis/27655
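If you want to double-check which of your disks Windows actually reports as USB-attached, here is a quick illustrative check via WMI (wmic is deprecated but still present on current builds; PowerShell's Get-Disk shows the same information as BusType):

```python
# List each physical disk's interface type (USB, SCSI, IDE, ...) from
# Win32_DiskDrive via the wmic command.
import subprocess

output = subprocess.run(
    ["wmic", "diskdrive", "get", "Index,InterfaceType,Model"],
    capture_output=True, text=True, check=True,
).stdout

for line in output.splitlines():
    line = line.strip()
    if line and not line.startswith("Index"):
        print(line)  # e.g. "3  USB  <external drive model>"
```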


Well... Added.

 

If you check the above link, it shows the order of preference now. 

 

So, internal drives should be read from first, as opposed to USB/external drives.

 

 

You can download this build here:

http://dl.covecube.com/DrivePoolWindows/beta/download/StableBit.DrivePool_2.2.0.866_x64_BETA.exe


Well, that's awesome. If you could also specify "priority duplication drives", that might solve the issue of having to use a pool within a pool. The most logical place would be under Balancers > Drive Usage Limiter. Currently the options are "Duplicated" and "Unduplicated"; you could add something like "Backup" as another option. When duplicating a file, it would prefer drives labeled "Backup" and guarantee that exactly one copy of the file lands on a drive labeled Backup (so you don't end up in a situation where all copies of a file are ONLY on the backup drives). Have it throw an alert if there's insufficient backup drive space and then fail over to a normal drive for duplication. You could also give drives labeled "Backup" a lower priority, like you did for USB drives, so that they aren't used as much. That way, if I wanted to use the top row of drive bays as backup drives, I could label them Backup and, despite being normal SATA, they would get a lower priority.

 

The other appropriate place would be either a checkbox in the pool/folder duplication settings, or another text box where you could specify how many copies of a file should exist on a "backup" drive. Or, to make it simpler with the least UI/backend change: when you add a drive, offer the option to tag it as a backup, and possibly a "Convert to backup" option next to "Remove" that removes it gracefully and then re-adds it as a backup drive. Then have duplication work like this: keep one copy of every file on a drive tagged Backup if possible; once every file has a duplicate on a backup, start putting additional copies on the backups; keep at least X% of the cumulative backup space free for new files; and during rebalancing, remove extra duplicates from the backups until that percentage is reached (see the sketch at the end of this post).

 

Yet a third option would be the ability to link a pool as a mirror of another pool, so that as real-time duplication happens, one copy is written to each pool rather than to two drives in the same pool. Then you could set duplication within the individual pools to decide whether secondary duplicates live on the main or backup pool. Essentially this would be the pool-of-pools you described above as far as the backend is concerned; it would just be automated in the UI rather than set up manually.

 

 

</end rant> Sorry, I have a background in coding and proceeded to word-vomit solutions. Thank you for putting in the feature request and taking care of it.
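Since I'm already word-vomiting, here's a rough sketch of the "exactly one copy on a backup-tagged drive" rule I described above. Purely illustrative; not anything DrivePool actually exposes:

```python
# Illustrative placement policy: for a file that needs N copies, put exactly
# one copy on a "backup"-tagged drive when space allows, and the rest on
# normal drives so no file ends up backup-only.
from dataclasses import dataclass

@dataclass
class Drive:
    name: str
    free_bytes: int
    is_backup: bool

def place_copies(file_size: int, copies: int, drives: list[Drive]) -> list[Drive]:
    backups = sorted((d for d in drives if d.is_backup), key=lambda d: -d.free_bytes)
    normals = sorted((d for d in drives if not d.is_backup), key=lambda d: -d.free_bytes)

    chosen = []
    # Exactly one copy goes to a backup-tagged drive, if one has room;
    # otherwise warn and fall back to normal drives only.
    backup_target = next((d for d in backups if d.free_bytes >= file_size), None)
    if backup_target is not None:
        chosen.append(backup_target)
    else:
        print("warning: no backup drive has enough free space")

    # Remaining copies go to normal drives.
    for d in normals:
        if len(chosen) >= copies:
            break
        if d.free_bytes >= file_size:
            chosen.append(d)
    return chosen

# Example: x2 duplication across two internal drives and one USB backup drive.
drives = [
    Drive("internal1", 6_000_000_000_000, is_backup=False),
    Drive("internal2", 2_000_000_000_000, is_backup=False),
    Drive("usb1",      8_000_000_000_000, is_backup=True),
]
print([d.name for d in place_copies(50_000_000_000, 2, drives)])
# -> ['usb1', 'internal1']
```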


Well, in part, the Ordered File Placement balancer kind of does this, as do the File Placement rules.

 

Otherwise, hierarchical pooling would do this. Add the existing pool and a single drive to a new pool, then set duplication as needed (on both pools). Duplication on the top-level pool ensures that one copy is on the individual disk and one is on the pool.

 

The same goes for multiple pools, rather than individual disks.

 

 

 

And we do have that for duplication: there is a checkbox on the option to specify more than x2 duplication (more than 2 copies of a file).

 

 

 

 

And the third ... well, hierarchical pooling accomplishes this. 

 

And hierarchical pooling allows you to specify duplication for both the top-level pool and the underlying pools (for example, x2 on the top-level pool combined with x2 on each underlying pool yields four copies of each file).
