
CloudDrive Disk Size and DrivePool Allocation


EricGRIT09

Question

A question I haven't seen documented anywhere, though I feel like it must have been asked before...

I have unlimited Google Drive and am currently migrating ~28TB of data from my local storage, which is managed with DrivePool.  My cache drive on CloudDrive is 3TB.  I'd like to simply set my Google Drive to a huge number (500TB, whatever) and let DrivePool/CloudDrive manage the move from the local drives to the cloud.  However, I'm not sure whether DrivePool is integrated enough with CloudDrive to know that there is, for example, only 2TB of CloudDrive cache available, and therefore only attempt to allocate that much data to Google Drive.  Or does DrivePool simply move data to wherever there is the most free space, regardless of whether that space is available cache or Google Drive space?

Currently I just keep increasing the size of my Google Drive in CloudDrive according to the amount of cache I have available.  So I'll upload 1.5-2TB, then increase the overall drive size by that amount - over and over.  With a 20 Mbps upload speed I'll be doing this for a while at this rate :). 
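(For a sense of scale, here's a back-of-the-envelope estimate of that migration time at the stated speed; it assumes TB = 10^12 bytes and ignores protocol overhead and throttling.)

```python
# Rough estimate: how long does uploading ~28 TB take at 20 Mbps?
data_bytes = 28 * 10**12          # ~28 TB to migrate
upload_bps = 20 * 10**6           # 20 Mbps upload, in bits per second

seconds = data_bytes * 8 / upload_bps
days = seconds / 86400
print(f"~{days:.0f} days")        # roughly 130 days of continuous uploading
```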

I figure this will depend on DrivePool rules and all that, which I can work out once I know the answer to the core question: does DrivePool know the difference between the CloudDrive cache and the overall (larger) Google Drive?  Will it throttle allocations to that drive (not CloudDrive throttling, which would ultimately be limited by the speed of my upload)?

 

Thanks in advance.


6 answers to this question


StableBit DrivePool treats a StableBit CloudDrive disk like any other disk, and will use it as a normal drive.  

However, StableBit DrivePool won't actively move data around, unless there is an actual reason to do so.   So adding a CloudDrive disk to the pool won't cause the pool to move stuff around. 

Also, StableBit DrivePool will actually prefer the local disks when reading, rather than the CloudDrive disks, when the data is duplicated.  This should reduce bandwidth and API usage of the CloudDrive disks.

 


Thank you.  

The reason for DrivePool to move the data around is that I want to migrate everything to CloudDrive - I want the data to move (either via an automated migration process or a drive removal process).  Also, if I understand correctly: if I were to set the CloudDrive size to something like 200TB and I have six 3TB drives (let's say 50% utilized each) with 10TB used on the CloudDrive, then DrivePool is going to want to put new files on the CloudDrive first.  Let's say my CloudDrive cache is 3TB... as soon as 3TB is cached, I would want DrivePool to see that there's no room left on the local cache drive and start placing files on the other available local space.  Otherwise writing to the DrivePool is going to be extremely slow (as slow as the upload speed, right?).

Maybe there's another way of achieving this currently?  If not, then I suppose this is a feature request.  It seems strange that I should have to increase the CloudDrive size by the size of my cache every time I upload that amount - repetitive resizing. If there were an option to temporarily stop placing files on the CloudDrive when cache utilization hits a certain value, I think that would do it, but from the sounds of it DrivePool just doesn't know the status of the cache.
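(To make the requested behavior concrete, here's a hypothetical sketch of the rule being asked for. None of these names are real DrivePool or CloudDrive APIs - it's just the decision logic: fall back to local disks once the CloudDrive's cache passes a threshold.)

```python
# Hypothetical sketch of the requested balancer rule - not a real
# DrivePool/CloudDrive API.  The idea: when the CloudDrive's local cache
# is (nearly) full, new writes land at upload speed, so redirect them to
# a local disk instead until the cache drains.

def choose_target(cache_used, cache_size, local_free, threshold=0.90):
    """Pick where the next new file should be placed: 'cloud' or 'local'."""
    if cache_used / cache_size >= threshold and local_free > 0:
        return "local"   # cache full: write locally at disk speed
    return "cloud"       # cache has room: write to the CloudDrive as usual

# Example: a 3 TB cache with 2.9 TB used would redirect writes locally.
print(choose_target(2.9e12, 3e12, local_free=5e12))   # "local"
print(choose_target(1.0e12, 3e12, local_free=5e12))   # "cloud"
```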


If you want to force everything out of the local disks, you could add the CloudDrive disk to the pool and then just remove the local disk(s). 

Otherwise, the "Disk Usage Limiter" balancer can also do this (again, forcefully). 

 

As for CloudDrive's cache, no, it doesn't work that way.  As you use the drive, data is cycled out of the cache: frequently used data is more likely to be retained, while older data is pruned to make room for new data. 
https://stablebit.com/Support/CloudDrive/Manual?Section=The Local Cache

 


Appreciate the reply - I'm not sure if my main point is getting across, or if I'm explaining it correctly.  If it is, and the functionality isn't there (which I don't believe it is), then is there a place to officially submit enhancement requests?  I understand I can remove a drive and, depending on rules, it will move the data to the CloudDrive.  I also understand the cache on the CloudDrive is dynamic.  My concern is more that DrivePool is never aware of the cache status on the CloudDrive in any way - it will keep attempting to write to the CloudDrive at whatever speed the pending-upload data is being evacuated.

Let's say I have a 10TB drive at 70% capacity and a 2TB CloudDrive cache drive for my 200TB CloudDrive.  If I attempt to download 5TB of data at once, DrivePool is of course going to put the first ~2TB onto the CloudDrive.  In reality it will attempt to place the remaining 3TB onto the CloudDrive first too, right (because the CloudDrive shows the lowest utilization / most free space)?  If that happens, though, downloading/writing the remaining 3TB is going to be as slow as the CloudDrive's upload - correct?  Even though there is 3TB left on the local disk(s) which could be written at disk write speed and then moved to the CloudDrive when the cache frees up.

That's the issue - the speed of transfers due to throttling or a full cache on the CloudDrive.  I don't believe DrivePool knows when the cache is full, right?  I would imagine it would make sense to have an option to temporarily offload writes from the CloudDrive to a local disk when the cache is full, maybe until the cache reaches 'x' amount of free space again?

EDIT:  I may have just come up with an example which explains it better and more succinctly.  If I have a DrivePool with a 10TB drive (empty) and a 500TB CloudDrive with a 3TB cache, wouldn't the 10TB drive essentially *always* be empty when using semi-default rules?  Even if the cache filled up I would, in my situation, have a 10TB drive sitting there doing nothing, with writes to my DrivePool now limited to whatever my upload speed is to my cloud provider.  In my case I have 450/20 Mbps internet - so my downloads to the DrivePool get throttled to 20 Mbps while the cache is full.
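(The mismatch above is easy to quantify: filling the 3 TB cache at the 450 Mbps download speed is fast, but draining it at the 20 Mbps upload is not. Same simplifying assumptions as before - TB = 10^12 bytes, no overhead.)

```python
# Fill vs. drain time for a 3 TB CloudDrive cache on a 450/20 Mbps line.
cache_bits = 3 * 10**12 * 8

fill_hours = cache_bits / (450 * 10**6) / 3600    # ~14.8 hours to fill
drain_days = cache_bits / (20 * 10**6) / 86400    # ~13.9 days to drain

print(f"fill: ~{fill_hours:.1f} h, drain: ~{drain_days:.1f} d")
```

So once the cache fills (well under a day), every further write to the pool is paced by a drain that takes roughly two weeks - which is exactly why writes falling back to the idle local disk would help.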

 


Sorry. 

This would be controlled by balancing, and unfortunately, no... there is no balancer that does this, and I don't think that the balancing system will even support something like this as-is (e.g., not enough integration). 

StableBit DrivePool does avoid reading from the CloudDrive, if the data is on a local disk, though. 

As for requesting it, anywhere is fine. But I'll treat this as a request and add a ticket for it. 

https://stablebit.com/Admin/IssueAnalysis/27868

