
Hierarchical file duplication?


Pichu0102

Question

If I were to create a pool that consisted of multiple local drives, and one cloud pool with cloud drives, if a drive is lost from the cloud pool, would the cloud pool be able to regenerate from the pool above?

An example:

Pool A (3x duplication)
	Local Drive A
	Local Drive B
	Local Drive C
	Local Drive D
	Local Drive E
	Cloud Pool A (no duplication)
		FTP Drive A
		B2 Drive A
		Google Drive Drive A

 

If one of the drives under Cloud Pool A goes kaput:

 

Pool A (3x duplication)
	Local Drive A
	Local Drive B
	Local Drive C
	Local Drive D
	Local Drive E
	Cloud Pool A (no duplication)
		FTP Drive A (Lost)
		B2 Drive A
		Google Drive Drive A

 

Will Cloud Pool A, which has no duplication on the files in it, be able to pull files from Local Drive A, B, C, D, or E to regenerate itself and also restore Pool A's duplication correctly? Or will Cloud Pool A need to be destroyed, remade, and re-added for it to work properly?


5 answers to this question



What you might want to do instead is make a Local Pool 1 that holds local drives A-E, rename your cloud pool to Cloud Pool 1, and then make a Master Pool that holds Local Pool 1 and Cloud Pool 1. It's easier if different levels have different nomenclature (numbers vs. letters at each level).

 

Master Pool (2x duplication)
	Local Pool 1 (any duplication you want)
		Local Drive A
		Local Drive B
		Local Drive C
		Local Drive D
		Local Drive E
	Cloud Pool 1 (no duplication)
		FTP Drive A
		B2 Drive A
		Google Drive Drive A

 

Note that with this architecture, your cloud drive space needs to be at least equal to the size of your Local Pool 1, so that 2x duplication on the Master Pool can happen correctly.

If FTP Drive A goes kaput, Cloud Pool 1 can pull any files it needs from Local Pool 1, since they are all duplicated there. Local Pool 1 doesn't need duplication, since its files are all over on Cloud Pool 1 as well. You can (if you want) give it duplication for redundancy in case one of the cloud sources isn't available - your choice.
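If it helps to see the space requirement as numbers, here's a rough Python sketch of that capacity check - the drive sizes are made-up examples and the logic is just illustrative arithmetic, not anything DrivePool actually runs:

# Rough sketch of the capacity check for the Master Pool layout above.
# Drive sizes are made-up example numbers (TB); illustrative only, not DrivePool code.

local_pool_1 = {"Local Drive A": 4, "Local Drive B": 4, "Local Drive C": 4,
                "Local Drive D": 2, "Local Drive E": 2}
cloud_pool_1 = {"FTP Drive A": 5, "B2 Drive A": 6, "Google Drive Drive A": 5}

local_capacity = sum(local_pool_1.values())   # 16 TB of local space
cloud_capacity = sum(cloud_pool_1.values())   # 16 TB of cloud space

# With 2x duplication on the Master Pool and only two members, one copy of every
# file lands on each member, so the cloud member needs at least as much space as
# the local member holds.
if cloud_capacity >= local_capacity:
    print(f"OK: {cloud_capacity} TB of cloud space can mirror {local_capacity} TB of local space")
else:
    print(f"Short by {local_capacity - cloud_capacity} TB of cloud space")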

 

As an alternate architecture, you can leverage your separate cloud spaces to each mirror a small group of local files:

Master Pool (no duplication)
	Pool 1 (2x duplication)
		Local Pool A (no duplication)
			Local Drive a
			Local Drive b
		Cloud Pool A (no duplication)
			FTP Drive
	Pool 2 (2x duplication)
		Local Pool B (no duplication)
			Local Drive c
			Local Drive d
		Cloud Pool B (no duplication)
			B2 Drive
	Pool 3 (2x duplication)
		Local Pool C (no duplication)
			Local Drive e
		Cloud Pool C (no duplication)
			Google Drive

 

What this does is allow each separate cloud drive space to back up a pair of drives, or a single drive. It might be more advantageous if your cloud space varies a lot and you want to give limited cloud space to a single drive (like in Pool 3).
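And a matching sketch for this per-group layout, again with made-up sizes, just to show that each cloud space only has to cover its own small group rather than the whole local pool:

# Sketch of the alternate per-group layout; sizes are made-up examples (TB).
groups = {
    "Pool 1": ({"Local Drive a": 4, "Local Drive b": 4}, {"FTP Drive": 8}),
    "Pool 2": ({"Local Drive c": 4, "Local Drive d": 4}, {"B2 Drive": 8}),
    "Pool 3": ({"Local Drive e": 2},                     {"Google Drive": 2}),
}

for name, (local_drives, cloud_drives) in groups.items():
    local = sum(local_drives.values())
    cloud = sum(cloud_drives.values())
    status = "can mirror" if cloud >= local else "cannot fully mirror"
    print(f"{name}: {cloud} TB of cloud space {status} {local} TB of local drives")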

 


18 hours ago, Jaga said:

Only if all of the files from ALL of the local drives were duplicated onto Cloud Pool A, which with only 3x duplication wouldn't happen. When using pool duplication, DrivePool likes to spread files around, often to more drives than the duplication level set (i.e. every drive in the pool).


I ended up testing my idea before seeing your post, sorry. I simulated a failure by moving test files out of FTP Drive A's poolpart folder to the root. DrivePool only reacted once I chose to re-check duplication, and it only noticed the problem when it checked Pool A, not when it checked Cloud Pool A. It then re-duplicated the files back to FTP Drive A.

I was thinking about it in the wrong direction. Cloud Pool A isn't concerned about duplication or missing duplicates, as it is not duplicated, but Pool A was concerned and safely regenerated into Cloud Pool A, which put the files on FTP Drive A again. So Cloud Pool A won't ask the parent pool to regenerate missing files, as that's not its problem, but Pool A will notice and duplicate back into Cloud Pool A. My bad.
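Roughly, the behaviour I saw works like this (just my mental model in sketch form - the file names and placements are made up, and it's not DrivePool's actual code):

pool_a_members = {
    "Local Drive A": {"file1", "file2"},
    "Local Drive B": {"file1", "file2"},
    "Local Drive C": {"file1"},
    "Cloud Pool A":  set(),              # its copy of "file2" was lost with FTP Drive A
}
required_copies = 3  # Pool A's duplication level

# Only Pool A (the pool that owns the duplication setting) counts copies across
# its members; Cloud Pool A never checks, because its own duplication is off.
for file in ("file1", "file2"):
    holders = [m for m, files in pool_a_members.items() if file in files]
    if len(holders) < required_copies:
        # Pool A re-duplicates the missing copy; here I just put it back on
        # Cloud Pool A, which is what happened in my test (it landed on FTP Drive A).
        pool_a_members["Cloud Pool A"].add(file)
        print(f"{file}: Pool A re-duplicated it onto Cloud Pool A")
    else:
        print(f"{file}: duplication level already satisfied")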



Sorry, I think I misunderstood your goals in the first post.

As long as your cloud pool is just a member drive of the larger pool, and you have at least 2x pool duplication on the main pool, then you'd be covered against any failure of the cloud pool, yes. But if you lost 2 or more of the local drives and those drives were the sole holders of a duplicated file, the cloud drive wouldn't save you; the data would still be lost.
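A quick made-up illustration of that failure case (the placements below are just examples - the balancer decides where copies really go):

# Both copies of file2 happen to sit on two local drives; losing exactly those
# two drives loses the file, even though the cloud pool is healthy.
placements = {
    "file1": {"Local Drive A", "Cloud Pool A"},
    "file2": {"Local Drive B", "Local Drive C"},
}
failed = {"Local Drive B", "Local Drive C"}

for file, holders in sorted(placements.items()):
    survivors = holders - failed
    print(f"{file}: {'still on ' + ', '.join(sorted(survivors)) if survivors else 'LOST - no surviving copy'}")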

It looks like you're relying on local drives for redundancy, and the cloud drives for expansion.  Normally it's the other way around.



Mm, it's more that the cloud drives are a stopgap until I get a new local drive to replace an older one here, but yeah. My pool has been running with 3x duplication over 3 internal and 7 external drives, so adding cloud drives is just a way to get some spare space to work with; I set the cloud pool to not allow duplicated or unduplicated files unless the other drives are all 90% full. I also have CrashPlan back up everything both locally (to the 3x duplicated pool itself, forbidden from using the cloud drives to store backup files - bad practice, but the local backup is more for files I need to roll back) and to CrashPlan Central, so I try to keep a few backup setups running. But yeah, the cloud drives are mostly expansion to give the local drives more space to work with.
