Hi, I've a few questions which are probably simple to answer. I've had a search but not found anything concrete.
Firstly, any news on when it will leave beta (how long's a piece of string, right)? I'm not particularly keen on using beta software for the type of data I want to store remotely.
Secondly, any issues using it within a Hyper-V guest running on Server 2012 R2 Datacenter? (I'm probably going to be running WHS 2011 as a virtual machine, assuming I can get the P2V conversion working.) Unlikely, I'd have thought, but I'm in the process of moving everything to virtual "hardware", so it's worth checking.
Thirdly, when adding a cloud storage account, does setting the drive size limit (e.g. I've just selected 10 GB for my testing) prevent uploads once it's reached? You can of course resize the drive later, but I'm wondering what the point of this figure is if it isn't a hard limit. And if it is a limit, does hitting it raise warnings or alerts, and if so, where? The WHS Dashboard? Event Viewer?
Any plans to integrate into the Dashboard in WHS?
I presume closing the GUI doesn't prevent CloudDrive from operating as normal (I'm assuming the service does all the work)?
Lastly, for informational purposes, it's not immediately clear how I retrieve files. I'm using Azure blob storage, and everything gets uploaded in 10 MB chunks, as that's what I happened to select when configuring the drive. Let's say disaster occurs and my local machine burns down, blows up, or whatever. How do I retrieve these blobs? OK, I can log into Azure and download the separate data chunks, but that would take ages, and I'd still only have the data in blob.data format. Is it just a case of rebuilding the server, reinstalling CloudDrive, and reconnecting to my cloud hosting (a bit like recreating an existing DrivePool, I suppose)?
After installing this on a test box, it looks good.
Where does the cache drive store the cache? I can't find it on the disk I chose.
Also, as a suggestion: would it be possible to have a feature that doesn't replicate deletes? I'm thinking the best solution at present might be to keep the CloudDrive outside my DrivePool and mirror it from a share within the pool using something like an rsync job that runs regularly and only copies over new files. That way I can negate the risk of accidental deletes from the DrivePool, as the rsync job would never delete from the CloudDrive. While I can't see why this wouldn't work, it's a bit of a faff!
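To illustrate, the one-way mirror described above can be sketched with rsync: simply omitting the --delete flag means files removed from the source share are never removed from the destination, so an accidental delete in the pool doesn't propagate. (The paths below are hypothetical stand-ins for the DrivePool share and the mounted CloudDrive, and this sketch assumes rsync is available, e.g. via Cygwin or a Linux-based sync box.)

```shell
# Hypothetical paths: /tmp/pool_share stands in for the DrivePool share,
# /tmp/clouddrive for the mounted CloudDrive.
mkdir -p /tmp/pool_share /tmp/clouddrive
echo "important data" > /tmp/pool_share/file1.txt

# Regular mirror job: -a preserves attributes; no --delete flag,
# so removals on the source are NOT replicated to the destination.
rsync -a /tmp/pool_share/ /tmp/clouddrive/

# Simulate an accidental delete in the pool...
rm /tmp/pool_share/file1.txt

# ...then run the scheduled job again: file1.txt survives on the
# CloudDrive side because rsync was not told to delete extraneous files.
rsync -a /tmp/pool_share/ /tmp/clouddrive/
ls /tmp/clouddrive/
```

On a Windows-only setup, `robocopy <source> <dest> /E` (without `/MIR` or `/PURGE`) behaves the same way: new and changed files are copied, but deletions are not mirrored.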
Question
freefaller