Backing up with 2012R2 Essentials


Royce Daniel

Question

Sorry to resurrect an old discussion - I found this in a forum post from two years ago where Christopher was answering a question about why Server 2012 R2 Essentials fails during a backup:

 

http://stablebit.com/Support/DrivePool/1.X/Manual?Section=Using%20Server%20Backup%20to%20Backup%20the%20Pool

 

So this fix worked for me, but I have a HUGE problem with backing up duplicated files. Why? Because I have over 60TB of raw storage. Using Amazon Cloud Drive @ 1.5 MB/s, it would take me roughly 2 years to do a complete backup - incrementals will be faster, sure, but damn! That first one is a doozy. I considered only backing up the most critical data, but even then that's at least a year.

I've done some comparisons with other providers; sure, there are other cloud storage solutions that don't throttle that severely, but you're either limited to a few hundred gigs, or beyond a couple of terabytes it gets enterprise-level expensive. The only other reliable service I could find was CrashPlan, but there is a huge Reddit thread where people say they are only getting about 1.5 MB/s too, even though the service packages are supposedly "unlimited". We haven't even officially arrived at the 4K video era yet, so with folks buying movies in 4K and archiving them on digital media servers... do you see my point yet? If we're stuck doing huge backups at 1.5 MB/s and it takes 1-2 years to do a complete backup, oh god.

I've gone through all my files and the best I can do is 13.5 TB of un-duplicated data that I NEED to back up. Is it possible to engineer DrivePool to put files tagged as "duplicated" in separate folders outside the PoolPart* folder so we can at least manually select only un-duplicated data?

 

Please don't make me go back to Storage Spaces. :)

 

 


6 answers to this question



Well, the above link applies to anything that uses VSS as the backend for the backup.  Some products don't have this issue. 

 

 

As for Amazon Cloud Drive, do you mean the StableBit CloudDrive provider? If so, that's because we're stuck in "development" status, because Amazon ... well freaked out (we had several users with very high usage) and demoted us.  And getting proper parameters to use was like pulling teeth.... 

 

Hopefully, we can get this sorted out and re-approved, but we don't have an ETA on this. 

 

If you mean the official Amazon Cloud Drive client, then this should work from the pool directly, I believe. But I'm not sure about the upload rate for the client. 

 

 

 

 

The other issue is your upstream bandwidth.  It doesn't matter how good the service is, if you can't saturate it. And unfortunately, a lot of consumer internet packages (in the US) are .... well, shitty when it comes to upload.  10mbps, if you're lucky. 



Well, the above link applies to anything that uses VSS as the backend for the backup.  Some products don't have this issue.

I'm using the Windows Server 2012 R2 Essentials server backup option. It does rely on VSS. Using the info in your link I was able to start the backup without error, but, like I said, I have to select all the PoolPart folders manually rather than simply selecting the folders in the DrivePool that I want, which means I'm backing up all the data including the duplicated files. Raw data plus all the duplicates is roughly 30TB, versus only about 12TB for the raw data without duplicates. Some of the folders I have set to x3 but most are x2. It would be nice if DrivePool were VSS-friendly. :)
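For illustration, the gap between those two numbers is just each folder's size multiplied by its per-folder duplication level; a tiny sketch with made-up folder sizes (not the actual shares) shows why a PoolPart-level backup balloons:

```python
# Hypothetical folder sizes (TB) and DrivePool duplication levels - illustrative only.
folders = {
    "Media":     (8.0, 2),   # x2 duplication
    "Photos":    (2.5, 3),   # x3 duplication
    "Documents": (1.5, 3),
}

unique_tb  = sum(size for size, _dup in folders.values())        # what actually needs backing up
on_disk_tb = sum(size * dup for size, dup in folders.values())   # what backing up the PoolPart folders copies

print(f"Unique data:           {unique_tb:.1f} TB")   # 12.0 TB
print(f"PoolPart-level backup: {on_disk_tb:.1f} TB")  # 28.0 TB
```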

 

As for Amazon Cloud Drive, do you mean the StableBit CloudDrive provider? If so, that's because we're stuck in "development" status, because Amazon ... well freaked out (we had several users with very high usage) and demoted us.  And getting proper parameters to use was like pulling teeth.... 

 

Hopefully, we can get this sorted out and re-approved, but we don't have an ETA on this. 

 

If you mean the official Amazon Cloud Drive client, then this should work from the pool directly, I believe. But I'm not sure about the upload rate for the client.

 

I meant Amazon Cloud Drive's upload caps specifically. The CloudDrive app is really awesome compared to all the others I've tried. All your stuff is really awesome. It's why I bought the whole package. :) I LOVE your approach to DrivePool in how you can control the amount of redundancy on a per-folder basis.

 

The other issue is your upstream bandwidth.  It doesn't matter how good the service is, if you can't saturate it. And unfortunately, a lot of consumer internet packages (in the US) are .... well, shitty when it comes to upload.  10mbps, if you're lucky.

Well, turns out that I WAS saturating my upload pipe. LOL! I hadn't looked at my Internet account in a couple of years - it's never been an issue until now. I just remember getting the fastest I could get my hands on: 150 / 5 Mbps. I did some checking, and that is their "advertised" speed, not what the line is provisioned for. It's actually provisioned for 50 (up to 200 burst) Mbps / 12 Mbps. This makes total sense, since that is what I'm seeing as an average upload speed... 12.6 Mbps on average. So I upgraded my account; after I swap the modem out tomorrow I'll have a provisioned speed of 300 / 30 Mbps. But still, 30TB will take roughly 97 days to complete @ 30 Mbps. If I can solve the issue of only backing up RAW data and upload only 12TB, that would only take roughly 39 days @ 30 Mbps. But I did some digging, and apparently, in the state it's in now, the CloudDrive app is capped at 20 Mbps. :(
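As a quick sanity check on that math, here's a minimal sketch of the arithmetic (the sizes and rates are just the figures quoted in this thread, treating TB as TiB and ignoring protocol overhead):

```python
def backup_days(size_tb: float, rate_mbps: float) -> float:
    """Rough full-backup duration in days for a given data size and sustained upload rate."""
    size_megabits = size_tb * 1024 * 1024 * 8   # TiB -> MiB -> megabits
    seconds = size_megabits / rate_mbps         # megabits / (megabits per second)
    return seconds / 86400                      # seconds -> days

# Figures mentioned in this thread (assumptions, not measurements):
print(round(backup_days(60, 12)))   # 60TB at 1.5 MB/s (~12 Mbps) -> ~485 days
print(round(backup_days(30, 30)))   # 30TB at 30 Mbps             -> ~97 days
print(round(backup_days(12, 30)))   # 12TB at 30 Mbps             -> ~39 days
```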



I'm using the Windows Server 2012 R2 Essentials server backup option. It does rely on VSS. Using the info in your link I was able to start the backup without error, but, like I said, I have to select all the PoolPart folders manually rather than simply selecting the folders in the DrivePool that I want, which means I'm backing up all the data including the duplicated files. Raw data plus all the duplicates is roughly 30TB, versus only about 12TB for the raw data without duplicates. Some of the folders I have set to x3 but most are x2. It would be nice if DrivePool were VSS-friendly. :)

 

 

Okay, I wasn't sure. And yeah, it's not the most efficient option.   :(

 

As for VSS support, we really, really want to. But doing so is incredibly difficult. Impersonating NTFS is easy, as the source code for it is readily available and the file system is well documented. But VSS? We'd have to completely reverse engineer it, without any assistance or documentation.

 

It's something we'd love to do, but doing so will be very time consuming. 

 

The other option here is to use a backup solution that accesses the files directly.

Something like GoodSync, FreeFileSync, or the like, to copy the files to a different drive (or another pool, or a CloudDrive disk) for storage.
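As a rough illustration of that file-level approach, here's a bare-bones Python sketch (the paths are made up, and it has none of the robustness - deletions, retries, open-file handling - that GoodSync or FreeFileSync provide):

```python
import os
import shutil

SOURCE = r"D:\Shares"       # hypothetical: folders on the pool you want backed up
DEST   = r"E:\PoolBackup"   # hypothetical: another drive, another pool, or a CloudDrive disk

def mirror(src: str, dst: str) -> None:
    """One-way copy of files that are new or modified since the last run."""
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)   # copy2 preserves timestamps

if __name__ == "__main__":
    mirror(SOURCE, DEST)
```

Because this reads files through the pool drive rather than snapshotting the underlying disks, it sidesteps VSS entirely and only copies each file once, regardless of its duplication level.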

 

 

I meant Amazon Cloud Drive's upload caps specifically. The CloudDrive app is really awesome compared to all the others I've tried. All your stuff is really awesome. It's why I bought the whole package. :) I LOVE your approach to DrivePool in how you can control the amount of redundancy on a per-folder basis.

 

The Amazon Cloud Drive bandwidth caps are actually pretty high, from what I've seen. They've gotten better about it. But that doesn't really help us. 

 

But we're still stuck on Development status, and not production status. So our account (app) is throttled massively.  And to help prevent bottlenecking everyone, we've reduced the number of threads, and the speed that the software connects at.  

 

This means that you may not saturate your connection. Depending. 

 

 

And glad that you are loving StableBit DrivePool. ;)

 

 

 

Well, turns out that I WAS saturating my upload pipe. LOL! I hadn't looked at my Internet account in a couple of years - it's never been an issue until now. I just remember getting the fastest I could get my hands on: 150 / 5 Mbps. I did some checking, and that is their "advertised" speed, not what the line is provisioned for. It's actually provisioned for 50 (up to 200 burst) Mbps / 12 Mbps. This makes total sense, since that is what I'm seeing as an average upload speed... 12.6 Mbps on average. So I upgraded my account; after I swap the modem out tomorrow I'll have a provisioned speed of 300 / 30 Mbps. But still, 30TB will take roughly 97 days to complete @ 30 Mbps. If I can solve the issue of only backing up RAW data and upload only 12TB, that would only take roughly 39 days @ 30 Mbps. But I did some digging, and apparently, in the state it's in now, the CloudDrive app is capped at 20 Mbps. :(

 

*cough* StableBit CloudDrive *cough* and sync pool to a CloudDrive disk *cough*. 

 

:)

 

 

Aside from that, yeah, I'm essentially in the same boat. But I have closer to 50TB of unique data.....



After installing the 711 beta build of DrivePool I tried this again for kicks. Same error. However, I did some digging and the same issue pops up under FlexRAID with a running VM. The person reporting the issue solved it by shutting down the VM... thus releasing the drive.

 

That got me thinking. The error I get when the backup fails basically translates into not being able to create a snapshot. This requires full write access, or the ability to make modifications to the drive's properties. I remembered that when I first created the DrivePool drive I attempted to unmount it and reformat it from NTFS to ReFS; I learned later that it's not a real NTFS drive - access to it is limited and you can't reformat it. Perhaps the API that's trying to create a snapshot or VSS copy doesn't have sufficient rights or access to write a bootfile (as the error message in the event log suggests)? Or some other underlying mechanism is preventing the backup from creating a snapshot? I'll be honest, I know very little about how VSS works, but from what I was able to read about it my assumption seems logical. If you recall, back in the days of FAT and FAT32 you couldn't make any changes to the file allocation table while the drive was "in use". Even applications like Partition Magic freaked out if something was using the drive while they attempted to make changes to the drive's partition table. Perhaps VSS is something along those lines... *shrug*
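One low-effort way to poke at that theory (just a suggestion, not something from the original post) is to dump what VSS itself reports about its writers and providers using the built-in vssadmin tool, for example via a small wrapper:

```python
import subprocess

# "vssadmin list writers" / "vssadmin list providers" are standard Windows commands;
# they won't fix anything, but they show whether the VSS infrastructure itself is
# healthy before blaming the pool drive. Run from an elevated prompt.
for args in (["vssadmin", "list", "writers"], ["vssadmin", "list", "providers"]):
    print(">", " ".join(args))
    result = subprocess.run(args, capture_output=True, text=True)
    print(result.stdout or result.stderr)
```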

 

Just a thought. :)



VSS: OK OK I get it. :) Sucks...

 

So what are you using to back up 50TB?!?! :blink:

 

VSS is very complex, and ... "poorly documented" is the nice way to say it. :)

But it is very powerful.   That said, VSS does work on StableBit CloudDrive just fine! 

 

 

And right now? Hopes and dreams..... I don't have a backup plan, and I know that it's really bad.

 

 

 

After installing the 711 beta build of DrivePool I tried this again for kicks. Same error. However, I did some digging and the same issue pops up under FlexRAID with a running VM. The person reporting the issue solved it by shutting down the VM... thus releasing the drive.

 

That got me thinking. The error I get when the backup fails basically translates into not being able to create a snapshot. This requires full write access, or the ability to make modifications to the drive's properties. I remembered that when I first created the DrivePool drive I attempted to unmount it and reformat it from NTFS to ReFS; I learned later that it's not a real NTFS drive - access to it is limited and you can't reformat it. Perhaps the API that's trying to create a snapshot or VSS copy doesn't have sufficient rights or access to write a bootfile (as the error message in the event log suggests)? Or some other underlying mechanism is preventing the backup from creating a snapshot? I'll be honest, I know very little about how VSS works, but from what I was able to read about it my assumption seems logical. If you recall, back in the days of FAT and FAT32 you couldn't make any changes to the file allocation table while the drive was "in use". Even applications like Partition Magic freaked out if something was using the drive while they attempted to make changes to the drive's partition table. Perhaps VSS is something along those lines... *shrug*

 

Just a thought. :)

 

The Pool drive is an emulated disk, so we can change the file system being used, if/when we want. The issue is feature support. We were reporting "CoveFS" as the file system, but most apps treated the unrecognized name as FAT32, causing issues (such as not copying files larger than 4GB). So, we report as NTFS.

 

Another issue: reparse points (hard links, junctions, symbolic links, etc.) were a real challenge to implement, because they're a deep file system feature and not very well documented.

 

VSS is the same way.  The API for it is well documented, but how it's actually implemented on the file system is completely undocumented.  There are a few companies that have done it, but they don't document it either.  So we don't know what disk commands it uses, how to respond, etc.  When I say "reverse engineer", I mean it literally.  Hooking up debuggers to a system, watching how it works, imitating it in our driver, and testing it out, extensively.  It's something that we'd like to do, but it will be a massive investment to do so. 

 

 

Alex is no stranger to reverse engineering things (that's why we have reparse point support), but VSS is a BEAST. 

