
Drivepool + Clouddrive integration - limiting cloud backups for files w/ 3x duplication


borez

Question

Hi all,

 

Need some tips on cloud backups: I'm running a drivepool with 2x duplication, and would like to create a secondary cloud backup for critical files. These files are in a specific folder, where I've turned on 3x duplication (i.e. 2x local, 1x cloud). Specifically, the cloud drive should only store files with 3x duplication.

 

I tried fiddling around with the settings, but Drivepool keeps wanting to move files with 2x duplication over to the cloud. Or rather, that's what I think, because it's proposing to move 1TB of data from my HDD pool to the cloud (the files I actually want there total well under that).

 

My approach to doing this:

 

1) Limit the cloud drive to only duplicated files (Balancing section)

2) In the Placement Rules section, set the root folders to not use the cloud drive, but create a separate rule that sets the specific sub-folder to use the clouddrive. The subfolder rules are ordered above the root folder rules.

3) Turn on 3x duplication for the specific folders
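
(For what it's worth, step 3 can also be scripted with the dpcmd utility that ships with Drivepool, rather than set through the UI. The commands below are only a sketch: the pool letter and folder name are made-up examples, and the exact syntax is best confirmed by running dpcmd with no arguments, which prints its full command list.)

# Assumed example only: pool mounted at P:, critical files under P:\Critical
dpcmd set-duplication "P:\Critical" 3
# Verify the new setting (confirm the exact command name against dpcmd's usage output)
dpcmd get-duplication "P:\Critical"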

 

Thanks again, and happy holidays!

 

EDIT: I went ahead with the duplication, and it worked after all. What I realised was that the re-balancing wasn't working correctly. I'm getting the same issue as described in the thread below.

http://community.covecube.com/index.php?/topic/2061-drivepool-not-balancing-correctly/

 

[Screenshot attachment: post-2257-0-38915000-1482735710_thumb.png]

 

Clouddrive: 1.0.0.0.784 BETA

Drivepool: 2.2.0.737 BETA

 

Furthermore, when I click on the "increase priority" arrow in Drivepool, the duplication process speeds up but Clouddrive's uploads dramatically slow down. Any idea why?


7 answers to this question



http://community.covecube.com/index.php?/topic/1226-how-can-i-use-stablebit-drivepool-with-stablebit-clouddrive/

 

 

Unfortunately, there isn't a good way to do this right now.

You can set the "Drive Usage Limiter" to only have duplicated data on the CloudDrive disk.  Since duplicated data needs 2 valid disks, it WILL use this drive first, and then it will find a second drive for this as well.

 

For x3 duplication, that will store one copy on the CloudDrive disk, and 2 on the local disks. 

 

However, this degrades the pool condition, because the balancer settings have been "violated" (duplicated data on drives not marked for duplication). 

 

However, we do plan on adding "duplication grouping" to handle this seamlessly. But there is no ETA on that.


Well, without more info, I can't really tell... But likely one or more settings are violating the others, so it's degrading the pool condition.

 

 

As for the priority change and upload decrease, this is because more writing is occurring to the drive, and it adds some processing to optimize the upload, I believe.

 

Once it's settled down, it should "normalize".


Well, without more info, I can't really tell... But likely one or more settings are violating the others, so it's degrading the pool condition.

 

 

As for the priority change and upload decrease, this is because more writing is occurring to the drive, and it adds some processing to optimize the upload, I believe.

 

Once it's settled down, it should "normalize".

 

Thanks for this. So are there any best practices on how to integrate Clouddrive with Drivepool? Specifically on the following:

 

1) CD for only duplicated data (Drive Usage Limiters)

2) Specific folders to have a duplicated copy on CD - set folder rules for 3x duplication (2x offline, 1x online?)

3) All other folders to have 2x duplication offline

 

Thanks again!


http://community.covecube.com/index.php?/topic/1226-how-can-i-use-stablebit-drivepool-with-stablebit-clouddrive/

 

 

Unfortunately, there isn't a good way to do this right now.

 

You can set the "Drive Usage Limiter" to only have duplicated data on the CloudDrive disk.  Since duplicated data needs 2 valid disks, it WILL use this drive first, and then it will find a second drive for this as well.

 

For x3 duplication, that will store one copy on the CloudDrive disk, and 2 on the local disks. 

 

However, this degrades the pool condition, because the balancer settings have been "violated" (duplicated data on drives not marked for duplication). 

 

However, we do plan on adding "duplication grouping" to handle this seamlessly. But there is no ETA on that.

 

Thanks, very interesting. I tried CD separately by manually copying my files, and it worked perfectly.

 

Apart from duplication grouping, I think what needs work is the integration with Drivepool, particularly read/write flow control:

 

1) In my previous test (where Drivepool was managing the duplication), the cache drive (a 256GB SSD) filled up and became terribly slow. As a result, the whole dup process (~600GB) took more than a full day. In today's test (GoodSync + CD), I had full manual control of the copying process: I flooded the cache with 60-70GB of uploads, paused the copying (to allow CD to flush those uploads into the cloud) and repeated. That whole process took me no more than 6 hours. Perhaps DP could have better control of the copying process. (A rough PowerShell sketch of this batch-and-pause approach follows after point 2 below.)

 

2) In my previous experience, DP would always have difficulty pulling the CD out of the system, and I always had to force a "missing disk" removal (e.g. by killing the CD service, triggering a missing disk in DP, and then removing it). However, at the next reboot, DP would remember the cloud drive and just put it back. Strange.
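
For anyone wanting to reproduce the batch-and-pause copy from point 1 without babysitting it, here is a rough PowerShell sketch. Everything in it is an assumption rather than anything CloudDrive exposes officially: the source, destination and cache paths are placeholders, and the size of the local cache folder is used only as a crude proxy for how much data is still waiting to upload.

# All paths below are hypothetical placeholders - adjust to the actual setup.
$source      = 'D:\Critical'         # local folder to back up
$destination = 'X:\Critical'         # folder on the CloudDrive volume
$cacheDir    = 'C:\CloudDriveCache'  # assumed location of the CloudDrive local cache
$batchBytes  = 60GB                  # roughly how much to queue before pausing
$drainBytes  = 5GB                   # resume once the cache has shrunk below this

$pending = 0
Get-ChildItem -Path $source -Recurse -File | ForEach-Object {
    # Recreate the relative path under the destination and copy the file.
    $target = Join-Path $destination $_.FullName.Substring($source.Length).TrimStart('\')
    New-Item -ItemType Directory -Path (Split-Path $target) -Force | Out-Null
    Copy-Item -Path $_.FullName -Destination $target -Force
    $pending += $_.Length

    if ($pending -ge $batchBytes) {
        # Pause and let CloudDrive flush its uploads; cache folder size is only a rough proxy.
        do {
            Start-Sleep -Seconds 60
            $cacheSize = (Get-ChildItem -Path $cacheDir -Recurse -File |
                          Measure-Object -Property Length -Sum).Sum
        } while ($cacheSize -gt $drainBytes)
        $pending = 0
    }
}

The 60GB / 5GB thresholds simply mirror the 60-70GB flood-and-flush rhythm described in point 1 and would need tuning to the actual cache drive.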

 

Looking forward to future builds though!


Copying the files or using a sync utility absolutely works fantastically! 

And for now, it's probably the best method. 

 

 

As for the first issue, there are a number of optimizations that we could (and probably should) be doing.   And it's something we definitely plan on improving, but the focus has been on StableBit CloudDrive right now, and getting that "out the door". 

 

That said, StableBit DrivePool does use a background IO priority for balancing, so it should be "as fast" as a straight copy.  But if the data is being written straight to the CloudDrive, that can cause issues.

 

And yes, if the underlying disk for the cache gets completely full (well mostly), StableBit CloudDrive will actively throttle the connection.  This is done for a few reasons, but mostly for stability. 

We talk about this issue specifically here:

http://community.covecube.com/index.php?/topic/1610-how-the-stablebit-clouddrive-cache-works/

 

As for the removal issue... Well, the best way to put it is "you're doing it wrong".  I don't mean to be overly blunt here, but from the sounds of it, what you're trying to do isn't really supported, not properly. 

In StableBit DrivePool, the removal process wants to move all of the data off of the drive.  For the CloudDrive disk, that means that it gets redownloaded and then moved to a local disk.  This can take hours or days, depending on the exact configuration (hardware, software, network, ISP, etc). 

 

There is a "force detach" option when detaching the CloudDrive disk from the system. That closes "open files" and will detach the drive regardless.  This is most likely what you would want to do, but ...even still, probably not.

This will cause the disk to show up as "missing" when it does disappear, and will reduplicate data that was on the CloudDrive disk. 

 

See what I mean about "not supported"? 

 

 

That said, StableBit DrivePool is designed to detect pooled drives and re-add them automatically. The exception is drives that have been actually removed from the system.  In this case, we "mark" the drive as removed, so it doesn't get re-added. 

 

If you want, the latest versions of StableBit DrivePool (2.2.0.700+) support the option to "drop" a drive from participation without going through the normal removal process.  

The caveats here are that it shows up as "missing" in the UI, the contents are NOT moved off of the drive, and it will still try to reduplicate data based on the settings. 

 

If you're interested in this, run "dpcmd" on the system, and it will show you all of the options.

The "ignore-poolpart" option is what you want. 

(Though, in regard to the reduplication, you could use the same command to change the duplication level for the folders in question, in case you wanted to "script" this.) 
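
To make the scripting idea a bit more concrete, here is a minimal sketch of what this could look like from an elevated prompt. The folder path and duplication level are placeholders, and the exact arguments that ignore-poolpart expects are best taken from the usage text that dpcmd prints when run on its own.

# Print the full list of dpcmd commands, including ignore-poolpart and its expected arguments
dpcmd
# Hypothetical example: change the duplication level for the affected folder as part of the same script
dpcmd set-duplication "P:\Critical" 2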


Thanks again for your comments, as always. Specifically on the part below: 

 

 

As for the removal issue... Well, the best way to put it is "you're doing it wrong".  I don't mean to be overly blunt here, but from the sounds of it, what you're trying to do isn't really supported, not properly. 

In StableBit DrivePool, the removal process wants to move all of the data off of the drive.  For the CloudDrive disk, that means that it gets redownloaded and then moved to a local disk.  This can take hours or days, depending on the exact configuration (hardware, software, network, ISP, etc). 

 

There is a "force detach" option when detaching the CloudDrive disk from the system. That closes "open files" and will detach the drive regardless.  This is most likely what you would want to do, but ...even still, probably not.

This will cause the disk to show up as "missing" when it does disappear, and will reduplicate data that was on the CloudDrive disk. 

 

See what I mean about "not supported"? 

 

 

I get what you mean on this, and understand the logic. However, the issue was that it was taking forever to remove the CD disk, even when I initiated the "force detach" option. No idea what the bottleneck was (slow download speeds?). I was comfortable with this as the cloud drive held duplicated data, and it would have been faster to re-duplicate the files from the offline storage pool.

 

There might be an option for this, but can Drivepool focus on evacuating non-duplicated files, rather than the whole drive? This can be a life saver, particularly when you're trying to salvage data from a dying drive (with limited read cycles left).


Well, the detach option has to do a bunch of cleanup, including finishing the remaining "to upload" tasks.  So it can take a while.

 

 

As for StableBit DrivePool, there is a "Duplicate Data Later" option when removing the drive.  This EXPLICITLY skips duplicated data and only moves off unduplicated data from the drive.  It then runs a duplication pass after the disk is removed. 

Additionally, there is a "force damaged disk removal" option that skips problem files, and leaves them on the problem disk.
