
Question

Hi,

 

I am running DP 2.x with WHS 2011. I have 2 Pools of 2x2TB HDDs; duplication is set to x2 for everything. I back up everything using WHS 2011 Server Backup.

 

One Pool (392 GB net) contains all the shares except for the Client Backups. The other (1.16 TB net) contains only the Client Backups. From each Pool, I back up one underlying HDD.

 

Given that the Client Backup database changes, the Server Backup .vhd fills up tracking changes and reaches its limit of 2TB. At that point, all Server Backups are deleted and WHS Server Backup starts over with a full backup. This is fine in principle, but it does mean that the history I actually keep is not as long as I would like. Also, should an HDD fail, there is no spare HDD to which the data can migrate.

 

So I would like to create one Pool of 4 x 2TB, x2 duplication. That way, each HDD would contain about 750GB, so the history I keep is longer. The problem, though, is that files may be placed on the two HDDs that I do not back up.

 

So I am wondering whether it is possible to tell DP to, in a way, group the HDDs in a 2x2 fashion, e.g., keep one duplicate on either E: or F: and the other on either G: or H:? Or, put otherwise, keep an original on E: or F: and the copy on G: or H: (I realise the concept of original/copy is not how DP works, but it serves as an example/explanation of what I want it to do), to the extent possible. It would not be possible if, for instance:

- E: or F: failed; I would still have duplicates after migration, but some would be on G: and H:

- G: or H: failed, I would still have duplicates but some would be on both E: and F:

 

I do realise that once either E: or F: fails, my Server Backup will be incomplete. However, that is true for my current setup as well. The intention would be to replace E: or F: and then ensure that the duplication/placement scheme is correct again (and I would hope this works automatically if the new HDD has the appropriate drive letter and gets added to the Pool).

 

I have looked at the File Placement tab but I don't see how I should set up rules to accomplish what I want.

 

Kind rgds,

Umf

10 answers to this question
No, unfortunately, there really isn't a way to do that with the current architecture.

 

You could specify a single, specific disk, and if duplication is enabled, it will put the duplicates on any other disk. (as long as you don't have the "never allow placement on other disks" option enabled).

Aside from that, no.

 

Sorry.


Feared it would be thus. Any chance, in a not-too-distant future, that something like this could be considered for implementation? I have not really thought about it much, but I guess what I could be looking for is that, if duplication is set to a certain number, I could then assign HDDs to, say, columns/series/groups (2 groups for x2 duplication, 3 for x3, etc.).


Can I create a Pool consisting of (two) Pools? As an example:

Pool P: E:\ + F:\, not duplicated

Pool Q: G:\ + H:\, not duplicated

Pool R: P:\ + Q:\, duplicated, so that a single file would have one copy on either E: or F: and another copy on either G: or H:.

 

I realise my wishes are rather, uhm, exotic, just trying to get the most out of it.


Not currently, no. 

Sorry.

 

I say currently, because... well, you'll have to wait and see (it may not be the perfect solution, either).

 

 

However, if you have the two pools, you could use something like Allway Sync, FreeFileSync, SyncBack, etc.

You could run these as a scheduled job so they sync periodically.
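
As a rough sketch of that scheduled-job idea (drive letters, folder names, and the task name are all hypothetical), the built-in robocopy and schtasks tools could do the periodic sync without any third-party software:

```
:: Mirror Pool P's shares into a folder on Pool Q (hypothetical paths).
:: /MIR mirrors the whole tree, /FFT tolerates 2-second timestamp
:: granularity, /R:2 /W:5 limits retries so a locked file doesn't
:: stall the whole job.
robocopy P:\Shares Q:\PoolMirror /MIR /FFT /R:2 /W:5 /LOG:C:\Logs\PoolMirror.log

:: Register it as a nightly scheduled task running at 02:00.
schtasks /Create /TN "PoolMirror" /SC DAILY /ST 02:00 ^
    /TR "robocopy P:\Shares Q:\PoolMirror /MIR /FFT /R:2 /W:5"
```

The third-party sync tools mentioned above offer the same scheduling plus nicer conflict handling; the robocopy version is just the zero-install option.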


Are you suggesting robocopy as an alternative to Server Backup?

 

I had thought about it a bit. I could even shrink the partition of the Server Backup HDD and do just an OS backup to one partition and a robocopy to the other. It might work. But I do not see how I would keep any history. I guess files that were deleted from the Server would remain, which is good, but there is no versioning. Also, if a Backup HDD is lost, there are .vhds that someone would have to know how to read to get to the data. Perhaps not that hard, but it is a hurdle. With robocopy, the files would simply all be there for anyone. I am not thrilled about the idea but may consider it. Should there be other (dis)advantages, I'd like to hear them.

 

Thx.


"In conjunction with". Back up the system disk, and use Robocopy (or whatever else) to handle the pooled data.
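
A minimal sketch of that split (drive letters and paths are hypothetical): let the built-in Windows Server Backup CLI handle the OS disk, and robocopy the pooled shares separately:

```
:: OS and system-state backup via the Windows Server Backup CLI.
:: -allCritical includes everything needed for a bare-metal restore.
wbadmin start backup -backupTarget:E: -allCritical -quiet

:: Pooled data via robocopy. Note: /MIR keeps no version history --
:: a file deleted or overwritten in the pool is deleted or
:: overwritten in the mirror on the next run.
robocopy D:\ServerFolders E:\PoolMirror /MIR /R:2 /W:5
```

The upside is that the mirrored files are plain files, readable from any machine; the downside is exactly the lack of versioning raised above.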

 

 

However, talking with Alex, he does want to implement the "grouping" feature, so it will appear sooner or later. But it may take a while, because it will DEFINITELY be complex (at least in the code).


So is CloudDrive close to being done? I'm not interested in that at all actually but I would really like Alex to find the time to implement FileSafe and DupliGroup (couldn't think of a better name yet ;) ).

 

Edit: I have no idea how this thread ended up in Scanner, sorry.

