Deployment / Duplication sanity check


madbuda

Question

What I am trying to accomplish is to have a main file server with a base set of folders that have duplication enabled to multiple destinations.

File Server (drivepool and cloud drive)

  • Data 1 (2 local copies)
  • Data 2
  • Data 3

On Site backups (drivepool)

  • Data 1
  • Data 2

Google Drive

  • Data 1
  • Data 3

My plan is to use DrivePool on both local servers and Cloud Drive on the primary to handle both onsite and offsite replication.

My assumption is that I can do this using folder placement and duplication rules...

Any caveats to this approach? I have debated duplicating all of the data locally and using the secondary server for the cloud duplication.

**edit**

Lastly, can you import an existing cloud drive on another install?


11 answers to this question


There are many ways to accomplish this, using either one or two copies of DrivePool.  Two DrivePool installations make things a bit smoother overall.  Here are two ways I'd envision setting it up:

 

First solution (2 copies of DrivePool, 1 copy of Cloud Drive):

  • Server A: Pool 1 (local resources)
  • Server B: Pool 2 (local resources) - Pool drive shared to Server A
  • Either Server: Pool 3 (Gdrive resource via Cloud Drive)
  • Whichever server manages Gdrive via Cloud Drive, create your Top-Tier Pool there (using Hierarchical pooling) consisting of Pools 1/2/3 and share back to network.
  • The machine using Cloud Drive will control all duplication via DP on the top-tier pool.  Both machines would share duplication workload.
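As a concrete sketch of the sharing step in this first solution (the server names, share name, and drive letters here are hypothetical placeholders, not details from this setup):

```shell
:: Windows commands; all names and drive letters below are made-up examples.
:: On Server B -- expose Pool 2's drive (assumed mounted as P:) to the network:
net share Pool2=P:\ /GRANT:Administrators,FULL

:: On Server A -- map the share so it can be presented to the top-tier pool
:: (how it is actually added to a pool depends on the mapping method you choose):
net use Q: \\ServerB\Pool2 /PERSISTENT:YES
```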

 

Second Solution (1 copy of DrivePool, 1 copy of Cloud Drive):

  • Server A: Pool 1 (local resources)
  • Server B: share storage space to Server A via network
  • Server A: Pool 2 (network storage space) - Pool is built either using drive mappings from Server B, or via Cloud Drive's File Sharing feature (where Cloud Drive creates a Drive on the network resource and mounts a letter for it locally)
  • Server A: Pool 3 (Gdrive resource via Cloud Drive)
  • Server A in this case would contain all sub-pools, and the top-tier Hierarchical pool for sharing back to the network.  It would control all duplication, and have virtually all of the workload.
  • Server B in this case is just a resource for space.

 

The cleanest and best balanced implementation would be the first, though it requires 2 copies of DrivePool.

 

There are three ways to handle drive mappings for Pools across the network:

  • iSCSI drive mappings are faster than other methods and have no overhead, but not very flexible.
  • Mounted network folder shares are easy to setup, slower than iSCSI, but faster than Cloud Drive's File Sharing.
  • Cloud Drive's File Sharing, which allows you to control space used on the target resource.  Slower than other methods, highly flexible.
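For the iSCSI option, the consuming server's side can be scripted in PowerShell (the portal address and target IQN below are invented examples; the matching target must already be configured on the serving machine):

```shell
# PowerShell on the server that will consume the remote disk.
# The portal address and IQN are hypothetical placeholders.
New-IscsiTargetPortal -TargetPortalAddress "192.168.1.20"
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.example:serverb-data1" -IsPersistent $true

# The connected target appears as a local block device,
# which DrivePool can then add to a pool like any physical disk.
Get-Disk | Where-Object BusType -Eq "iSCSI"
```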

 

@Christopher (Drashna) - do we have a way for multiple networked installations of DrivePool to see each other's Pools and include them as children in higher tier Pools, *without* first mapping drive letters for them?  Seamless interoperability across the network would be a nice feature for server clusters, and help cut down on drives/letters.

4 minutes ago, madbuda said:

More on this: say my primary server is down for extended maintenance or a failure... would I need to install Cloud Drive on the second server and use the local folder in order to read this data?

From what I understand, this data is not readable without CD, as in I cannot just browse to a folder and see a file.

Yep, the Cloud Drive data is stored in hidden "CloudPart" folders (or, in the case of File Sharing, a hidden "StableBit CloudDrive Share Data" folder) inside the storage volume / provider you used.  You could either copy across the configuration folder (the service folder mentioned a few posts up) from the first computer and then edit CD's config to point to the location, or just install Cloud Drive and re-configure it so it knows where to find the data.  Keep in mind that if the first server went down ungracefully, you'll have to contact CoveCube to release that license for activation on the other computer.

And no - it's stored in chunks that are .dat files in there, so unreadable without Cloud Drive.

9 hours ago, madbuda said:

Lastly, can you import an existing cloud drive on another install?

It should be possible by copying the settings JSON file to the same location on the other machine.  You may also need to copy the "store" folder across along with its contents.  I'd just copy the entire "service" folder across for good measure.  You can find info on the settings JSON here.  If you're using file-share cloud drives, you'll want the resource shares to be mapped the same way on the second machine.
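A hedged sketch of that copy, assuming the default ProgramData location — the exact path and service name are assumptions and may differ by version, so verify them on your install first:

```shell
:: All paths and the service name below are assumptions, not confirmed specifics.
:: Stop the service first so the settings files aren't in use:
net stop "StableBit CloudDrive Service"

:: Copy the whole service folder (settings JSON plus the "store" folder) to the
:: same location on the other machine:
robocopy "C:\ProgramData\StableBit CloudDrive\Service" ^
         "\\OtherServer\C$\ProgramData\StableBit CloudDrive\Service" /E /COPY:DAT
```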

Not sure if you can run dual-access (two Cloud Drive installations accessing the same cloud resources) as they'd probably fight if connected at the same time, though you can use multiple cloud drives per provider/service.

1 minute ago, Jaga said:

Not sure if you can run dual-access (two Cloud Drive installations accessing the same cloud resources)

I was thinking more of having the secondary server act as a cloud drive for the primary, and then the secondary using Google Drive... Not 100% sure about this now that I think of it.

  • A mounts B
  • A duplicates to B
  • B mounts Gdrive
  • B duplicates to Gdrive

1 minute ago, Jaga said:

First solution (2 copies of DrivePool, 1 copy of Cloud Drive):

This is precisely what I was thinking (I just didn't spell it out as well as you did).

I'm 50/50 on the iSCSI; ideally the local speed would be a nice bonus.


Absolutely, quite easy in fact.  When you add the top-tier pool in the hierarchy (which consists of all the smaller pools), have an SSD partition (or an entire drive) connected to the same machine, and add it to that top-tier pool as the SSD Cache drive via the SSD Optimizer plugin.  https://stablebit.com/DrivePool/Plugins

You'd effectively create a high-speed file-write buffer for the entire set of pools.  Though remember that anything residing on the SSD Cache drive isn't duplicated *until* it hits the sub-pools in the hierarchy.  I'm not completely sure how aggressive the plugin is at moving files from the SSD Cache to the rest of the pool - that's a question for @Christopher (Drashna) to answer I think.  So it's a point of failure, but a minor one.

19 hours ago, Jaga said:

Lastly, can you import an existing cloud drive on another install?

More on this: say my primary server is down for extended maintenance or a failure... would I need to install Cloud Drive on the second server and use the local folder in order to read this data?

From what I understand, this data is not readable without CD, as in I cannot just browse to a folder and see a file.

20 hours ago, Jaga said:

do we have a way for multiple networked installations of DrivePool to see each other's Pools and include them as children in higher tier Pools, *without* first mapping drive letters for them?  Seamless interoperability across the network would be a nice feature for server clusters, and help cut down on drives/letters

Pretty sure "No" is the answer here. 

StableBit DrivePool can be used to control the Pools on other systems, but it can ONLY pool disks that are local to that system.  It does not support adding disks (or pools) from a remote system.

4 hours ago, Jaga said:

You'd effectively create a high-speed file-write buffer for the entire set of pools.  Though remember that anything residing on the SSD Cache drive isn't duplicated *until* it hits the sub-pools in the hierarchy.  I'm not completely sure how aggressive the plugin is at moving files from the SSD Cache to the rest of the pool - that's a question for @Christopher (Drashna) to answer I think.  So it's a point of failure, but a minor one.

The SSD Optimizer Balancer Plugin is as aggressive as the balancing settings. 
https://stablebit.com/Support/DrivePool/2.X/Manual?Section=Balancing%20Settings

 

If you want it super aggressive, set it to "balance immediately", and disable the "Not more often" option. 

Otherwise setting this to once an hour is pretty aggressive, as well. 

2 hours ago, madbuda said:

More on this: say my primary server is down for extended maintenance or a failure... would I need to install Cloud Drive on the second server and use the local folder in order to read this data?

From what I understand, this data is not readable without CD, as in I cannot just browse to a folder and see a file.

Correct, you'd need StableBit CloudDrive. 
But if the drive you used for the cache is intact, and you still have it, you can pop this into a new system, install StableBit CloudDrive, and it should pick up the cache. You'd just need to reauthenticate the drive, and then it should pick up where it left off. 

