
About Hierarchical Pooling


JohnKimble

Question

From here: http://blog.covecube.com/2017/09/stablebit-drivepool-2-2-0-847-beta/

 

I am a bit confused. This example talks about existing data that is already stored locally. But let's say I want to add a CloudDrive pool as a backup within a new hierarchical pool: do I need to move everything from the current pool into the hierarchical pool? (Since it is a new pool, I assume so.)

 

But is there a way to leave my existing data (40TB, in my case) where it is and add a CloudDrive as a backup without moving the data at all?


6 answers to this question



There is a feature in DrivePool that I have found very useful for your particular question: Balancing Options -> File Placement.

When you add a new drive, you must tell DrivePool whether or not to use it. You can have a drive that holds only the folders you want on that drive: set up a rule and it will happen. Exclude the cloud drive from the rest of the pool's placement rules and the two stay separate; only the folders assigned to the cloud drive go on the cloud drive.

I found this works great.

 

Also, a bug I found has a great workaround.

If for any reason you need to remove a drive using the Remove button on the main screen, read and write access to your entire pool is suspended until the drive is completely emptied. I found this annoying on a very large pool; mine is 25TB with a lot of searches going on all the time. If a drive is acting flaky, add a new drive, check the new drive's box, and then uncheck the drive you want to empty. The balancing portion of DrivePool will drain the drive while keeping the pool fully accessible.

 

And for the record, I don't fully know why this works, but it does, and I am not going to complain. DrivePool is great software; I have licenses for the whole suite and love it. I have tried Windows' built-in drive-pooling feature, but I still prefer DrivePool.

 

I have gone as far as removing a drive and cloning the bad drive to a new one. DrivePool recognized the new drive as part of itself and self-healed with little or no intervention from me. Let's see Windows do that.

 

Also, I have read in this forum that nobody wants to spend money on an obsolete program. If you want to give it up, then I wish you luck; you could ask for better help when needed. I have been using version 2.1.1.561 until the crew moves it up. If a bug is known and you can contain it, it is not a bug, it is a limited feature. Maybe that is why Windows is so F***ed up: too much tweaking for features not everyone uses.

 

The only addition to this software collection I personally would love is a plugin that exposes file-explorer features for the pool on a per-drive basis when troubleshooting.

When you have 10 drives in the system and none of them has a drive letter (because that would clutter your regular file explorer), and you hit a problem drive, such as one with USB instability, you have to reboot, assign letters, hunt down the problem, and then revert. It happened to me with my Drobo, but that's another story.



To be honest, hierarchical pooling is probably one of our most complicated features.  Second only to the file placement rules. 

 

 

So don't feel bad if it has you confused.

 

 

 

That said, hierarchical pooling allows you to add a pool... to another pool, as if it were a normal, physical disk.  In all regards, it's treated normally.

 

Well, that's a lie.  If a pool contains CloudDrive disks, there is some special handling for those drives and for the entire pool (we avoid reading from CloudDrive-based pools to minimize internet usage). 

 

 

 

As for using a CloudDrive disk to back up the pool... it's simple, really:

1. Add your existing pool to a new pool.
2. Add a CloudDrive disk to that new pool.
3. Enable duplication on the top-level pool.
4. Seed your new pool, just like you would with a normal drive: http://wiki.covecube.com/StableBit_DrivePool_Q4142489
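As a rough mental model of the steps above, here is a hypothetical Python sketch (not DrivePool's actual code; all class and field names are invented): a duplicated top-level pool writes to every member, and reads prefer the local member over the cloud-backed one.

```python
# Toy model of hierarchical pooling with duplication -- NOT DrivePool's code.
# A top-level pool contains the existing local pool plus a CloudDrive disk;
# with duplication on, every write lands on both members, and reads prefer
# the non-cloud member (mirroring how DrivePool avoids reading from CloudDrive).

class Drive:
    def __init__(self, name, is_cloud=False):
        self.name = name
        self.is_cloud = is_cloud
        self.files = {}  # path -> contents

class Pool:
    def __init__(self, members, duplication=False):
        self.members = members          # Drives or nested Pools
        self.duplication = duplication

    @property
    def is_cloud(self):
        # A pool counts as "cloud-backed" if any member is.
        return any(m.is_cloud for m in self.members)

    def write(self, path, data):
        # With duplication, write to every member; otherwise just the first.
        targets = self.members if self.duplication else self.members[:1]
        for m in targets:
            if isinstance(m, Pool):
                m.write(path, data)
            else:
                m.files[path] = data

    def read(self, path):
        # Prefer non-cloud members to minimize internet usage.
        for m in sorted(self.members, key=lambda m: m.is_cloud):
            try:
                return m.read(path) if isinstance(m, Pool) else m.files[path]
            except KeyError:
                continue
        raise KeyError(path)

local = Pool([Drive("disk1"), Drive("disk2")])   # your existing pool
cloud = Drive("gdrive", is_cloud=True)           # the CloudDrive disk
top = Pool([local, cloud], duplication=True)     # the new top-level pool

top.write("movies/a.mkv", b"...")
print(top.read("movies/a.mkv"))  # served from the local pool, not the cloud
```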

 

 

This will cause the data to show up in the new, top-level pool.  And once you've remeasured, it will start duplicating the data from the local disk pool to the CloudDrive disk.  

 

This will keep a copy locally and one in the cloud.   And because the software avoids reading from the CloudDrive disk, normal reads are served from the local pool.

 

 

That said, this is redundancy, and not a backup.  If you delete files, they're gone. Period. 

 

Alternatively, you could just not add the CloudDrive disk to any pool, and use something like FreeFileSync, SyncToy, GoodSync or the like to copy the contents of your pool to the CloudDrive disk.
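What those sync tools do can be sketched as a minimal one-way mirror (an illustration only, not a substitute for FreeFileSync or GoodSync; the paths in the usage comment are placeholders):

```python
# Minimal one-way mirror in the spirit of the sync tools mentioned above.
# It copies new or changed files from the pool to the backup target and
# never reads the target back -- matching the "write to cloud, read locally"
# idea from this thread.
import os
import shutil

def mirror(src_root, dst_root):
    """Copy files from src_root to dst_root if missing or newer at the source."""
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target_dir = os.path.join(dst_root, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(target_dir, name)
            if (not os.path.exists(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst)):
                shutil.copy2(src, dst)  # copy2 preserves timestamps

# Example usage (drive letters are placeholders):
# mirror(r"P:\\", r"C:\\CloudDriveMount")
```

Note that, like the duplication approach, this is redundancy rather than versioned backup: a deletion at the source simply stops being mirrored.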

 

 


 

 

 

The "Drive Usage Limiter" balancer would work better for the folder-placement setup described above.

 

That said, this shouldn't happen any longer in the new public beta build.  We've completely reworked the drive removal, so the pool should stay writable while removing a drive. 

Changelog for build .757:
* When a drive is being removed from the pool, and there are in use unduplicated files preventing the drive removal, a dialog will show
  exactly which files are in use and by which process.
* [D] Fixed crash when cloning an exceptionally long path due to running out of stack space.
* [D] Rewrote how drive removal works:
    - The pool no longer needs to go into a read-only mode when a drive is being removed from the pool.
    - Open duplicated files can continue to function normally, even if one of their file parts is on the drive that's being removed.
    - Open unduplicated files that reside on the drive that's being removed will need to be closed in order for drive removal to complete 
      successfully. An option to automatically force close existing unduplicated files is now available on the drive removal dialog.
      When not enabled, the service will move everything that's not in use before aborting drive removal.
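The reworked removal flow in that changelog can be modeled roughly like this (a simplified Python sketch, not DrivePool's implementation; the data structures are invented): files that are not in use are migrated off the leaving drive while the pool stays writable, and in-use files are reported so they can be closed.

```python
# Toy model of read-write drive removal, per the .757 changelog above.
# Files not in use are evacuated to the remaining drives; in-use files
# are reported as blockers, and removal aborts until they are closed.

def remove_drive(pool, leaving, in_use):
    """Migrate files off `leaving` onto the remaining drives.

    pool    -- dict: drive name -> {path: data}
    leaving -- name of the drive being removed
    in_use  -- set of paths currently open on that drive
    """
    remaining = [d for d in pool if d != leaving]
    blocked = []
    for path, data in list(pool[leaving].items()):
        if path in in_use:
            blocked.append(path)            # report what blocks the removal
            continue
        pool[remaining[0]][path] = data     # pool stays writable throughout
        del pool[leaving][path]
    if not blocked:
        del pool[leaving]                   # removal complete
    return blocked                          # non-empty -> removal aborted

pool = {"disk1": {"a": 1}, "disk2": {}, "flaky": {"b": 2, "c": 3}}
print(remove_drive(pool, "flaky", in_use={"c"}))  # -> ['c'] still blocks removal
```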


 

 

Well, DrivePool definitely isn't obsolete.  We're just a small company, and we didn't really have a plan for "what if CloudDrive takes too long", which is what happened.

 

That said, we have plans in place to prevent this from happening again.   It's bad that it happened in the first place, hence the plans to make sure it never does.

 

 

As for Windows, that's a much more complicated topic.  But basically, just because you don't use a feature doesn't mean it's not a well-used feature.  And the opposite is true as well (such as WMC :'( )

 

 

 


 

I'm not sure what you mean here about per-drive explorer access. 

Though, the "dpcmd" utility has a number of tools to enumerate drives.

Between that, and "mountvol", identifying drives and file locations should be relatively easy.

 

(namely, dpcmd list-poolparts and dpcmd check-pool-fileparts)

 

 

However, a more graphical, friendly utility would be nice.  And this may be something that ends up in StableBit FileVault. 

On 9/15/2017 at 1:38 PM, Christopher (Drashna) said: [quoting the full answer above]

What would be the fastest/best option of backing up a physical DrivePool?

As I understand it, going the hierarchical pooling route is automatic and serves as a mirror, which helps repopulate data in the event of a drive failure.

But is an ACTUAL backup a better solution, and is it faster?

If so, what software do you recommend for it?


OK, I get the hierarchical pooling setup, but I don't quite understand the seeding process. First of all, the actual data will stay on the same physical drives, so would I just need to cut and paste inside the new hidden folder for the hybrid pool? I assume you don't copy/paste within the logical DrivePool volumes, since they won't work if the services are stopped?


It depends on the exact file structure, but yes.  

And you can do it on either the drives or the pools, as it translates to the same thing, but it may be safer/easier to work with the pool. 

 

And as for the service: no, stopping it is so that it won't move files around while you're doing this and cause issues. 
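Assuming the on-disk layout the wiki link describes, where each pool's data lives in a hidden PoolPart folder on the drive, the seeding move can be sketched like this (a hypothetical illustration; the folder names in the usage comment are placeholders, and the DrivePool service should be stopped before attempting anything like it):

```python
# Rough sketch of the seeding step: with the DrivePool service stopped,
# move everything on the old pool into the new hidden PoolPart folder
# that the top-level pool created, so no data ever crosses drives.
# Folder names are placeholders, not real DrivePool GUIDs.
import os
import shutil

def seed(pool_root, new_poolpart):
    """Move everything under pool_root into the new pool's hidden folder."""
    os.makedirs(new_poolpart, exist_ok=True)
    for entry in os.listdir(pool_root):
        src = os.path.join(pool_root, entry)
        if os.path.abspath(src) == os.path.abspath(new_poolpart):
            continue  # don't move the target folder into itself
        shutil.move(src, os.path.join(new_poolpart, entry))

# Example usage (path and GUID are made up):
# seed(r"P:\\", r"P:\\PoolPart.aaaa-1111")
```

After moving the data, restart the service and remeasure the pool so DrivePool picks up the seeded files.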

