
CloudDrive & DrivePool, Beta


freefaller

Question

Hi, I've a few questions which are probably simple to answer. I've had a search but not found anything concrete.

 

Firstly, any news on when it will leave Beta (how long's a piece of string, right)? I'm not particularly keen on using beta software for the type of data I want to store remotely.

 

Secondly, any issues using it in a Hyper-V client sitting on Server 2012 R2 Datacenter (I'm probably going to be running WHS 2011 as a virtual machine, assuming I can get the P2V working)? Unlikely, I'd have thought, but I'm in the process of moving everything to virtual "hardware", so it's worth checking.

 

Thirdly, when adding a cloud storage account, does setting the limit of the drive size (e.g. I've just selected 10 GB for my testing) prevent uploads beyond that point? You can of course then resize the drive, but I'm just wondering what the point of this figure is, if not a hard limit. And if it is a limit, does this raise warnings or alerts, and if so, where? The WHS Dashboard? EventVwr?

 

Any plans to integrate into the Dashboard in WHS?

 

I presume closing the GUI doesn't prevent the CloudDrive from operating as normal (I'm assuming the service does all the work)?

 

Lastly, for informational purposes, it's not immediately clear how I retrieve files. I'm using Azure blob storage, and everything gets uploaded in 10 MB chunks, as that's what I happened to select when configuring the drive. Let's say disaster occurs and my local machine burns down, or blows up, or whatever. I can retrieve these blobs... how? OK, I can log into Azure and download the separate data chunks, but that would take ages, and then I've still only got the data in blob.data format. Is it just a case of rebuilding the server, reinstalling CloudDrive, and reconnecting to my cloud hosting (a bit like recreating an existing DrivePool, I suppose)?

 

After installing this on a test box, it looks good.

 

Where does the cache drive store the cache? Can't find it on the disk I chose.

 

Also, as a suggestion: is it possible to have a feature that doesn't replicate deletes? I'm thinking that the best solution at present might be to have the CloudDrive sitting outside my DrivePool and mirrored from a share within the pool using something like an rsync job that runs regularly, only copying over new files. That way I can negate the risk of accidental deletes from the DrivePool, as the rsync job would not delete from the CloudDrive. Whilst I can't see why this wouldn't work, it's a bit of a faff!


8 answers to this question


"SoonTM".  Sorry, we don't have a release date, and right now we're looking into a particularly nasty (though fairly rare) bug.  Once that is resolved, hopefully, we should be out of beta soon afterwards.  But until then ....

 

 

 

Secondly, any issues using it in a Hyper-V client sitting on Server 2012 R2 Datacenter (I'm probably going to be running WHS 2011 as a virtual machine, assuming I can get the P2V working)? Unlikely, I'd have thought, but I'm in the process of moving everything to virtual "hardware", so it's worth checking.

 

None whatsoever. Alex uses VMware to develop and test, and I use Hyper-V, so it's fairly well tested on both platforms, and there shouldn't be any issues.

 

 

 

Thirdly, when adding a cloud storage account, does setting the limit of the drive size (e.g. I've just selected 10 GB for my testing) prevent uploads beyond that point? You can of course then resize the drive, but I'm just wondering what the point of this figure is, if not a hard limit. And if it is a limit, does this raise warnings or alerts, and if so, where? The WHS Dashboard? EventVwr?

 

Depends on what you mean. For the size of the drive, yes, this is a hard limit: it specifically sets the drive size, including how it's reported to Windows.

As for the cache, no, that isn't a hard limit, and it can (and will) grow past it.

 

As for the drive itself, it is handled just like a physical drive, so it will show you how much free space is left on the disk. Also, when creating the disk, we do make sure that there is enough space.

 

 

Any plans to integrate into the Dashboard in WHS?

 

I presume closing the GUI doesn't prevent the CloudDrive from operating as normal (I'm assuming the service does all the work)?

 

Yes, the service does all of the work. It runs in the background and handles everything; the UI just connects to it and authenticates the providers.

 

And if the Dashboard is present (WHS 2011 or Windows Server Essentials), it does add a Dashboard tab for the program, but there isn't any more integration than that, at least for now.

 

 

 

Lastly, for informational purposes, it's not immediately clear how I retrieve files. I'm using Azure blob storage, and everything gets uploaded in 10 MB chunks, as that's what I happened to select when configuring the drive. Let's say disaster occurs and my local machine burns down, or blows up, or whatever. I can retrieve these blobs... how? OK, I can log into Azure and download the separate data chunks, but that would take ages, and then I've still only got the data in blob.data format. Is it just a case of rebuilding the server, reinstalling CloudDrive, and reconnecting to my cloud hosting (a bit like recreating an existing DrivePool, I suppose)?

 

As for recovering the data: what's stored with the provider is the raw drive data (or encrypted data, if you enabled encryption).

The way to recover it would be to install StableBit CloudDrive on a new system and authenticate with Azure; you should then see the drive listed as available to mount.

 

You could then use it as it was before, without having to do anything else.  If the drive was encrypted, you'll need the passkey or passphrase for the drive. 

 

 

Where does the cache drive store the cache? Can't find it on the disk I chose.

 

By default, it puts the cache on the drive with the most free space, in a hidden "CloudPart" (or CloudDrive) folder. You can detach and reattach the disk to change the cache location.
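If you want to double-check, the folder is hidden, so a plain directory listing won't show it. Something along these lines from a command prompt should reveal it (D: is just an example here; substitute whichever disk was picked, and note the exact folder name can vary):

rem The cache folder is hidden, so a normal "dir" won't list it.
rem Show hidden items in the root of the cache disk (D: is an assumption - use the disk you chose):
dir D:\ /a:h /b
rem Then check the contents and size of whichever CloudPart/CloudDrive folder appears:
dir "D:\CloudPart*" /a /s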

 

Also, as a suggestion: is it possible to have a feature that doesn't replicate deletes? I'm thinking that the best solution at present might be to have the CloudDrive sitting outside my DrivePool and mirrored from a share within the pool using something like an rsync job that runs regularly, only copying over new files. That way I can negate the risk of accidental deletes from the DrivePool, as the rsync job would not delete from the CloudDrive. Whilst I can't see why this wouldn't work, it's a bit of a faff!

 

If you don't want deletes "replicated", then yes, you definitely want the CloudDrive outside of the pool. Using a utility such as rsync, Robocopy, FreeFileSync, Allway Sync or the like would be the best solution at that point.

 

The point of the pooling software is to show the entire set of disks as a single unit, so not deleting on some of the parts would... well, leave the contents in the pool, so it doesn't really work here. However, the files will be sent to the Recycle Bin, if appropriate.


When I add a CloudDrive to a DrivePool pool and use replication for all files: say, for example, I have a hard disk 'A' which is full of data and a CloudDrive 'B' of equal size, with no other hard disks.

Does this mean I need a second hard disk of at least the size of A, so that CloudDrive can use that drive as its cache? Or does DrivePool only replicate as much data as the cache can hold?


When I add a CloudDrive to a DrivePool pool and use replication for all files: say, for example, I have a hard disk 'A' which is full of data and a CloudDrive 'B' of equal size, with no other hard disks.

Does this mean I need a second hard disk of at least the size of A, so that CloudDrive can use that drive as its cache? Or does DrivePool only replicate as much data as the cache can hold?

 

 

I'm assuming that you are talking about StableBit CloudDrive and the cache for that drive. 

 

If so, "yes and no" is the answer. You do need a cache drive, but it can be just about any size, though it may be better to have a large drive so you can keep more data locally.

 

You can set the cache to any size (and for clarification, this is a "target" size and not a hard limit); the software will try to use up to that amount. It will pin NTFS data, and the parts containing the folder structure of the disk, to help optimize access, and these will be cached. You can use as little or as much as you would like, but the smaller the cache is, the more the software will have to download over time.

 

And since the cache can be any size, you could use a small disk for this, if you want (such as an SSD). It doesn't have to match the pooled drive size (but it isn't a horrible idea, actually). 


I thought I'd stick up the robocopy job I've been using for testing - successfully so far - on the chance it might help someone else achieve what I've been trying to achieve :).

@echo off
echo.
echo Begin Monitoring
echo.
robocopy P:\ E:\ /E /Z /XA:H /W:10 /MT:2 /MON:1

This copies from drive P:\ to drive E:\, and the switches do the following:

/E - copies subdirectories, including empty ones.
/Z - copies files in restartable mode.
/XA:H - excludes hidden files.
/W:10 - waits 10 seconds between retries.
/MT:2 - uses two threads for copying.
/MON:1 - monitors the source (P:\) and re-runs the copy when changes are detected.

Because I'm not using the /MIR (mirror) switch, when files are deleted from my DrivePool drive (P:\), the robocopy job picks this up but does nothing. This is exactly what I want, as I can look back through the job output and see whether files have been deleted; if a delete was accidental, I can then copy the files back down from my cloud storage.

 

Here's a picture for verification of that - I've run a test by deleting a folder (all duplicated test data, of course!) to simulate an accidental delete. The job runs every minute, detects the change, and reports that the cloud drive now has all these extra files:

[Screenshot: robocopy output listing the deleted files as extra files present on the cloud drive]

 

So I can work out from here what I need to restore.
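For completeness, the actual restore is just another robocopy in the opposite direction. A rough example (the folder name here is purely a placeholder for whatever the job shows as deleted):

rem Copy an accidentally deleted folder back from the CloudDrive (E:) to the pool (P:).
rem "Holiday Photos" is only a placeholder - use whatever folder the output shows as missing.
robocopy "E:\Holiday Photos" "P:\Holiday Photos" /E /Z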

 

I'll be working on the batch file so that it outputs a proper log of all folder differences, to make spotting accidental deletes easier, but the gist of this is: it works!
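In case it's useful to anyone, here's a rough sketch of the logging pass I have in mind. It uses robocopy's list-only mode, so nothing is actually copied; files that exist only on the cloud drive show up as "EXTRA File" entries in the log. The log path is just an example:

rem List-only comparison of the pool (P:) against the cloud drive (E:) - nothing is copied.
rem Entries tagged "EXTRA File" exist only on E:, i.e. they were likely deleted from the pool.
robocopy P:\ E:\ /E /L /XA:H /NJH /NP /LOG:"C:\Logs\clouddrive-diff.log"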


I'm just running the robocopy job in a batch file on startup on one of my servers. Unless there's a better way?

Task Scheduler. 

 

If you're not doing anything complex, it may be a better solution, as it can run without logging in. 

Assuming that's not how you're running the batch/script file.

 

I use it to load a number of things (such as uTorrent, a PowerShell script for scraping all of our downloads, etc.).
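If it helps, registering the batch file as a scheduled task can be done from an elevated command prompt with something like the following (the task name and script path are placeholders):

rem Run the sync batch file at startup, without anyone needing to log in.
rem "CloudDriveSync" and C:\Scripts\cloudsync.bat are placeholders - adjust to your setup.
schtasks /Create /TN "CloudDriveSync" /TR "C:\Scripts\cloudsync.bat" /SC ONSTART /RU SYSTEM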
