Question
modplan
Sorry if this has been covered; a quick search didn't find what I was looking for.
I have a CloudDrive that I send Windows Server backups to nightly. The full backup is about 75 GB, but the nightly incremental change is only 6-7 GB, easily uploadable in my backup window.
I have set the cache on this drive to 100 GB to ensure the majority of the "full backup" data is stored in the cache, so that when Windows Server Backup compares blocks to determine the incremental changes, CloudDrive does not have to (slowly) download ~75 GB of data for comparison every single night.
This works very well.
The problem comes when there is a power outage, crash, BSOD, etc. Even though the CloudDrive is fully current and "To Upload" is 0, when I bring the server back up, after we go through drive recovery (which takes 5-8 hours for this 100 GB cache), CloudDrive then moves ALL 100 GB of the cache into "To Upload" and starts re-uploading all of that data.
Why? I can't think of how this is necessary. In case a little data was written to the cache at the last minute before the unexpected reboot? If so, surely there is a better way of handling this than a 100 GB re-upload, some sort of tag or database of new, not-yet-uploaded blocks (something like the sketch below)? If a drive has a massive cache, a re-upload could take days or weeks!
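To be concrete about what I mean by a tag/database: something along these lines. This is just a hypothetical Python sketch of dirty-block tracking, not a claim about how CloudDrive actually works internally; the journal path and all the function names are made up for illustration.

```python
import json
import os

JOURNAL = r"C:\CloudDriveCache\dirty_blocks.json"  # hypothetical journal location

def _persist(journal: set) -> None:
    # Write to a temp file, fsync, then atomically rename, so the journal
    # is never left half-written by a crash or power loss.
    tmp = JOURNAL + ".tmp"
    with open(tmp, "w") as f:
        json.dump(sorted(journal), f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, JOURNAL)

def mark_dirty(journal: set, block_id: int) -> None:
    # Called when a block is written to the local cache but not yet uploaded.
    journal.add(block_id)
    _persist(journal)

def mark_clean(journal: set, block_id: int) -> None:
    # Called once the provider confirms the block was uploaded.
    journal.discard(block_id)
    _persist(journal)

def blocks_to_reupload() -> set:
    # After an unclean shutdown, only these blocks would need re-uploading;
    # if "To Upload" was 0 at crash time, this set is empty or tiny.
    if not os.path.exists(JOURNAL):
        return set()
    with open(JOURNAL) as f:
        return set(json.load(f))
```

With something like that persisted alongside the cache, recovery after a crash would only re-upload the handful of blocks written in the final moments, not the whole 100 GB.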
Thanks for any insight. I've gone through this process a couple of times using CloudDrive and it has been painful every time. I'd be happy even if it downloaded every single block that is cached, compared it to the local cache block, and then only uploaded the ones that have changed.
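Rough sketch of that brute-force idea (the four callbacks are hypothetical stand-ins for the cache and provider I/O, not real CloudDrive APIs):

```python
def recover_after_crash(cached_block_ids, read_local_block,
                        download_remote_block, upload_block):
    # Fetch each cached block's remote copy, compare, and re-upload only
    # the blocks that actually differ. Downloading the whole cache is still
    # slow, but download bandwidth is usually much faster than upload, so
    # even this would beat a full 100 GB re-upload. A real implementation
    # might compare checksums instead of full block contents.
    for block_id in cached_block_ids:
        local = read_local_block(block_id)
        remote = download_remote_block(block_id)
        if local != remote:
            upload_block(block_id, local)  # only changed blocks go back up
```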