
Recommended server backup method?


Jeff

Question

Hi everyone,

My home server runs Windows 10 Pro, with DrivePool used to manage five drives.  I historically used CrashPlan to back up all the pooled data from virtual drive P: to the cloud, and also to a local NAS.  Since CrashPlan has exited the consumer market, I'm redoing my backup approach.

I'm now using Backblaze to back up the pool to the cloud, and had planned to use Windows 10's File History to back up to the local NAS.  However, File History won't allow me to select anything from the pooled drive.  How do you recommend I perform local, automated backups of the pool?

Edit: It has been suggested that I back up each individual drive's PoolPart directory, but that's not feasible: with duplication, the data exceeds the NAS's storage capacity!

Thanks!


Recommended Posts


Drashna,

I also tried using the older Windows 7-style backup, but it kept failing within a few minutes with the error "A shadow copy could not be created".  The VSS event log shows "The bootfile is too small to support persistent snapshots", and the Windows Backup event log shows "Backup did not complete successfully because a shadow copy could not be created. Free up disk space on the drive that you are backing up by deleting unnecessary files and then try again."

I had previously disabled shadow copies for the pooled drives when I set up the server (I don't really understand them or how much drive space they take up), but I re-enabled them at 20% for each drive, and the backup still failed with the same error.
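For reference, the built-in vssadmin tool can show and resize the shadow storage from an elevated prompt (D: below is just a placeholder for one of the pooled drives):

    :: Show current shadow copy storage allocations.
    vssadmin list shadowstorage

    :: Cap shadow storage for a volume at 20%; D: is a placeholder.
    vssadmin resize shadowstorage /For=D: /On=D: /MaxSize=20%

    :: Check that the VSS writers are in a stable state.
    vssadmin list writers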

Any idea how to fix this and if the older Windows Backup would work with DrivePool?

Thanks!


Windows Backup (the Win7 style) also uses VSS. In fact, a majority of backup solutions use VSS.  

There is nothing to "fix" here.  We would need to add VSS support, which is incredibly hard, as there is absolutely no documentation on how to do so. We'd be completely reverse engineering the feature.

The alternative is sync software, as such tools rarely use VSS.
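For instance, a scheduled RoboCopy mirror of the pool to the NAS avoids VSS entirely. A minimal sketch (the share name, log path, and switches are placeholders, not a specific recommendation):

    :: Mirror the pool (P:) to a NAS share without using VSS.
    :: \\NAS\PoolBackup and the log path are placeholders.
    :: /MIR deletes destination files that no longer exist in the source,
    :: so point it at a dedicated backup folder.
    :: /FFT assumes FAT file times, which helps with NAS timestamp granularity.
    robocopy P:\ \\NAS\PoolBackup /MIR /FFT /R:2 /W:5 /LOG:C:\Logs\pool-backup.log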


Hi Jeff, how large are the Pool and the backup destination, and do you use duplication?

Edit: Ah, the original post says it does. In that case, you might rearrange your Pool hierarchically. E.g.

Unduplicated Pool A: Disk 1 and Disk 2
Unduplicated Pool B: Disk 3, Disk 4 and Disk 5
Duplicated Pool C: Pool A and Pool B.

Then you would back up either the HDDs of Pool A or those of Pool B, whichever takes your fancy. This is what I do with my WHS2011 server (although it is rather small storage-wise).

On 11/18/2017 at 9:12 AM, Umfriend said:

Hi Jeff, how large are the Pool and the backup destination, and do you use duplication?

Edit: Ah, the original post says it does. In that case, you might rearrange your Pool hierarchically. E.g.

Unduplicated Pool A: Disk 1 and Disk 2
Unduplicated Pool B: Disk 3, Disk 4 and Disk 5
Duplicated Pool C: Pool A and Pool B.

Then you would back up either the HDDs of Pool A or those of Pool B, whichever takes your fancy. This is what I do with my WHS2011 server (although it is rather small storage-wise).

@Umfriend: I'm with @ikon in that I'm intrigued by your suggestion. Would you mind explaining it a bit further?


Sure.

So DP supports pool hierarchies, i.e., a Pool can act as if it were an HDD that is part of another Pool. This was done especially for me. Just kidding. It was done to make DP and CloudDrive (CD) work together well (but it helps me too). In the CD case, suppose you have two HDDs that are pooled and you use x2 duplication. You also add a CD to that Pool. What you *want* is one duplicate on either HDD and the other duplicate on the CD. But there is no guarantee it will be that way: both duplicates could end up on the HDDs. Lose the system and you lose everything, as there is no duplicate on the CD.

To solve this, add both HDDs to Pool A. This Pool is not duplicated. You also have the CD (or another Pool of a number of HDDs) and create unduplicated Pool B with that. If you then create a duplicated Pool C by adding Pool A and Pool B, then DP, through Pool C, will ensure that one duplicate ends up at (the HDDs in) Pool A and the other duplicate ends up at Pool B. This is because DP will, for the purposes of Pool C, view Pool A and Pool B as single HDDs, and DP ensures that duplicates are not stored on the same "HDD".

Next, for backup purposes, you would back up the underlying HDDs of Pool A; you would be backing up only one duplicate and still be certain you have all files.

Edit: In my case, this allows me to back up a single 4TB HDD (partitioned into two 2TB partitions) in WHS2011 (which only supports backups of volumes/partitions up to 2TB) and still have it duplicated on another 4TB HDD. So, I have:

Pool A: 1 x 4TB HDD, partitioned into 2 x 2TB volumes, both added, not duplicated
Pool B: 1 x 4TB HDD, partitioned into 2 x 2TB volumes, both added, not duplicated
Pool C:  Pool A + Pool B, duplicated.

So, every file in Pool C is written to Pool A and Pool B; it is therefore on both of the 4TB HDDs in the respective Pools A and B. Next, I back up both partitions of either HDD, and I have only one backup with the guarantee that one copy of each file is included in it.


Wow, that is really clever, and a great way to work around your 2TB backup limitation.

Not all my data is duplicated, and not everything is in the pool.  I'll have to give your approach some thought to see if it will (easily) work for me.  Do you know if there's any performance hit from adding an additional level of drive abstraction like that?

Thanks for sharing!


I am sure there is, but is it noticeable? Absolutely not.

If not all data is duplicated, then you could define two different-sized child pools. There is a file placement balancer, IIRC, with which you would check both child pools for duplicated data but only the larger child pool for unduplicated data. Back up the HDDs of the larger child pool.


Agreed, very clever. My father really believed in the KISS principle, so he set up his backup system to make physical copies to other sets of drives. So, he had a total of 5 drive sets:

Main Storage
NearLine Storage
MyDocumentsReserve Storage
OffSite A Storage
OffSite B Storage

As you might suspect, Main Storage is where all files go when they're created. It's also where the vast majority of files are read from when other computers access the server. These drives are mounted in the server chassis and connected by SATA.

NearLine Storage is a USB3-connected Lian-Li 5-drive enclosure, currently with 3 drives.

MyDocumentsReserve is an extra copy of only the My Documents folder on Main Storage. This is the most critical data of all, so it gets an extra measure of security.

OffSite A and B Storage are 2 sets of drives that are swapped in and out of a second USB3-connected Lian-Li 5-drive enclosure. One of the sets is kept off site. My father swapped them in and out of the enclosure every day. I'm not that dedicated; I swap them more like once a week. Both sets are in the house overnight on the day of a swap-over but, other than that, one set is at another site.

Every night, at 2AM, a set of CMD files is run by a single Scheduled Task. These CMD files use RoboCopy to copy the My Documents folder on Main Storage to MyDocumentsReserve. Then, having backed up the most critical files, all the files on Main Storage (including the My Documents folder again) are copied first to NearLine, then OffSite A or B (whichever set is currently in the enclosure).

Because of the way RoboCopy works, only new and changed files are physically copied, so the process is pretty quick.

This procedure ensures there are 4 copies of everything; 5 copies of My Documents.
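In case it helps anyone, here's a minimal sketch of the kind of CMD file described above (drive letters follow the description; the R: letter for MyDocumentsReserve and all paths and switches are illustrative placeholders, not the actual script):

    @echo off
    :: Nightly backup sketch -- an illustration, not the real script.
    :: W: = Main Storage, X: = NearLine, Y: = current OffSite set,
    :: R: = MyDocumentsReserve (placeholder letter).

    :: 1. Most critical data first: My Documents to the reserve copy.
    robocopy "W:\My Documents" "R:\MyDocumentsReserve" /MIR /R:2 /W:5 /LOG:C:\Logs\mydocs.log

    :: 2. Everything on Main Storage to NearLine Storage.
    robocopy W:\ X:\ /MIR /R:2 /W:5 /LOG:C:\Logs\nearline.log

    :: 3. Everything to whichever OffSite set is in the enclosure.
    robocopy W:\ Y:\ /MIR /R:2 /W:5 /LOG:C:\Logs\offsite.log

    :: Triggered by a single Scheduled Task at 2AM, e.g.:
    :: schtasks /Create /TN "Nightly Backup" /TR "C:\Scripts\nightly.cmd" /SC DAILY /ST 02:00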

My father used to tell me that many people told him his system was overkill, but he would then point out that he had never lost a file since implementing it. Soooo, why would I mess with it.... it works. :rolleyes:


@Umfriend, your system gave me a thought. I'm wondering if you, or Christopher, or anyone else can tell me if this is viable.

Following on from my previous post, let's say my Main Storage is drive W:, and my NearLine Storage is drive X:

Now, if I create another pool, say drive V:, and add drives W: and X: to the pool, can I then turn on duplication only for the drive W: pool and have it automatically duplicated to drive X:?

If that would work, do I have to:
  a> empty drive X: so there's room for the duplicated files & folders from drive W:? I ask this because drive X: is already basically a duplicate of drive W: and there isn't enough room for a complete 2nd set of files.

  b> move the files & folders on drive X: out of the PoolPart folder? I ask because I'm wondering if nesting pools like this creates 2 sets of PoolPart folders on the drives: one for the nested pools (W: and X:) and one for the parent pool, V:.

I could go ahead and try it to find out, but then it might take a lot of work to straighten things out if it doesn't work the way I think.

Anyway, I'm wondering about this because it might be a way to get an almost immediate backup of files placed on Main Storage, without having to wait for the nightly backup.


Yes, if I understand correctly, that would work. However:
1. You would turn on duplication on drive V, as that is the master Pool consisting of W and X. One duplicate would end up on W and the other on X.
2. I believe USB is not optimally suited for permanent continuous storage; connections may break sometimes. Christopher would know better, but IMO YMMV (and you don't want it to vary for something this critical).

Yes, I guess you would have to clear out X first. Assuming W and X already exist as Pools (but check with Christopher, as I have not done this before):
1. Create Pool V
2. Add W and X
3. Turn duplication X2 on
4. Now it is important to realise that the content of W is not in V!!!!! Crucial to understand this.
5. To get it to V, you *could* move the contents of all poolpart.*** folders on the HDDs underlying W to the poolpart.*** folders on the same HDDs that underlie the V Pool.
 

It is a bit hard to explain why/how, but the thing to know is that if an HDD is part of a Pool that is in turn part of another Pool, that HDD will have a nested PoolPart.*** folder. The first PoolPart.*** is the folder for W, the second for Pool V (I really think and am pretty sure). It is in fact possible to store files on W through V and on W separately as well (just like you can store data on an HDD that is part of a Pool outside of the Pool that the HDD is part of).

6. Then use re-measure to force rebalancing, and it will duplicate all data on W (that is part of Pool V!) to Pool X.
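To make step 5 concrete, a hedged sketch for a single HDD underlying W (the PoolPart names are hypothetical placeholders, real folders carry long GUID suffixes, and the service name is my assumption -- verify with Covecube before trying):

    :: Hedged sketch of step 5 for one HDD (D:) underlying Pool W.
    :: PoolPart.1111 (Pool W) and its nested PoolPart.2222 (Pool V) are
    :: hypothetical placeholder names.
    :: Stopping the service first is assumed to be the safe approach,
    :: and "DrivePoolService" is my assumption for the service name.
    net stop DrivePoolService

    :: Move Pool W's existing content into the nested Pool V folder,
    :: excluding the destination folder itself to avoid recursion.
    robocopy "D:\PoolPart.1111" "D:\PoolPart.1111\PoolPart.2222" /E /MOVE /XD "D:\PoolPart.1111\PoolPart.2222"

    net start DrivePoolService
    :: Then re-measure in the DrivePool UI (step 6) to trigger duplication.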


Thanks @Umfriend I think I need to explain a little better. Here goes:

1. Right now, RoboCopy is used every night to copy all files and folders from W: to X:. I'm looking to replicate that behaviour, but using DP. I don't really want to replicate anything on X: to anywhere; I simply want to get everything on W: replicated to X:. What I hope to gain is almost-real-time duplication of new files written to W:. Hopefully duplication will cause them to be copied to X: very quickly, or at least a lot faster than waiting for a nightly backup.
2. The two enclosures have been connected via USB for a number of years without ever showing any evidence of a disconnect :)

I hope that's more clear.

UPDATE: So, I tried an experiment, and it looks like I can't do what I was hoping. I took 2 drives, added each of them to its own pool (A and B), then created a pool of both of the pools (C). If I try to set up duplication on A or B, it says it can't without adding another drive, so they must not be aware of each other. If I set up duplication on C, it does work: the files written to C get written to A and B. So I guess that's what I would have to do: add my W: and X: pools to a 3rd pool, then set up all the server shares using the 3rd pool. That might be a bit of work. I'll have to think on it, unless anyone's got a better idea.... anyone?? :)

Oh, and wasn't it a good idea (albeit a lucky fluke) that I only had 1 drive in each test pool. It made it clear that duplication only seems to work across drives in the same pool.


LOL, I thought, "That is what I said!" but I see that in my point 3, "V" was omitted. It should have read "3. Turn duplication X2 on V".

Yes, new files will be written to both instantaneously.

On USB, no, you wouldn't notice, because you only use it for shorter periods when running a RoboCopy. Christopher knows more about this; I believe he stated that the USB spec allows for periodic temporary disconnects, and those are an issue when writing and scanning and whatnot.


LOL, it's always the little things, like a simple "V" :)

As for the USB, if the spec allows for periodic disconnects, I haven't noticed them, and not just because the periods are shorter. When I first started to set up this new Windows 10 server, I was also migrating from PoolHD to DrivePool (PoolHD is no longer supported). Unfortunately, this required me to recopy everything to each pool, which meant copying the 9TB of data on W: to X:, then to Y:, and then to Z:. Each one took a couple of days, at least (I didn't time it exactly (something my father would have done)). In any case, all the copying worked flawlessly. Maybe there are some very specific circumstances required before a USB-connected drive will disconnect? I have no idea myself. Scanner has never seemed to complain either. I wonder if there's some log file or something that can be checked to see if there are any disconnects.
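One hedged way to check: disk "surprise removed" events should land in the System event log (event ID 157 is my assumption for the relevant event on Windows 10), which can be queried with the built-in wevtutil:

    :: Show the 10 most recent disk surprise-removal events, newest first.
    :: Event ID 157 from the "disk" provider is assumed to be the right one.
    wevtutil qe System /q:"*[System[Provider[@Name='disk'] and (EventID=157)]]" /f:text /c:10 /rd:true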

By the way, I was forced to use USB for the external enclosures (which also support eSATA) because the motherboard's BIOS objected to having more than 12 SATA drives connected... it wouldn't list them all. Most importantly, it always left out my boot SSD :angry: Once I reduced the number of SATA drives by switching the enclosures to USB, the problem disappeared. Go figure.

On 11/20/2017 at 3:11 PM, Jeff said:

Do you know if there's any performance hit from adding an additional level of drive abstraction like that?

There should be little to none.  This is because the I/O is redirected to the correct disk.

Better yet, on the internal betas, the Pool prefers reads from local disks over CloudDrive disks ... or pools with CloudDrive disks in them. 

On 11/22/2017 at 10:19 PM, Umfriend said:

On USB, no, you wouldn't notice, because you only use it for shorter periods when running a RoboCopy. Christopher knows more about this; I believe he stated that the USB spec allows for periodic temporary disconnects, and those are an issue when writing and scanning and whatnot.

Yeah. That's what I've said.  

For backups, this is fine.  But for constant usage, I'd recommend against it.

On 11/23/2017 at 6:31 AM, ikon said:

By the way, I was forced to use USB for the external enclosures (which also support eSATA) because the motherboard's BIOS objected to having more than 12 SATA drives connected... it wouldn't list them all. Most importantly, it always left out my boot SSD :angry: Once I reduced the number of SATA drives by switching the enclosures to USB, the problem disappeared. Go figure.

Most likely because it's relying on a Port Multiplier, and that has limits.  A SAS card with external ports and a SAS-to-eSATA breakout cable would work better, but be much more pricey.

That, or it's an I/O issue. The above should still work. 

15 hours ago, Christopher (Drashna) said:

Most likely because it's relying on a Port Multiplier, and that has limits.  A SAS card with external ports and a SAS-to-eSATA breakout cable would work better, but be much more pricey.

That, or it's an I/O issue. The above should still work. 

The part about 'port multiplier' sounds familiar. So, you're thinking that using an add-in card with 4 external SAS ports (not SATA??), combined with a cable that has 4 SAS connectors leading to a single eSATA connector, would work? I'll have to research those a bit; I've never heard of them.


Yeah, PMs have a "low" limit, so that's most likely the cause.

9 hours ago, ikon said:

So, you're thinking that using an add-in card with 4 external SAS ports (not SATA??), combined with a cable that has 4 SAS connectors leading to a single eSATA connector, would work? I'll have to research those a bit; I've never heard of them.

Backwards. :)

SAS cables (and ports) are generally 4 lanes, so that's 4x eSATA connections from a single SAS port.
One port is usually denoted as "4e", while two is "8e", so that's how many drives you could hook up.  And most SAS cards support 64+ devices.... so ... :)
