EricGRIT09

Members
  • Posts: 3
  • Joined
  • Last visited
EricGRIT09's Achievements

Newbie (1/3)

0 Reputation
  1. Appreciate the reply. I'm not sure my main point is getting across, or whether I'm explaining it correctly. If it is, and the functionality isn't there (which I don't believe it is), is there a place to officially submit enhancement requests? I understand I can remove a drive and, depending on the rules, DrivePool will move its data to the CloudDrive. I also understand the CloudDrive cache is dynamic. My concern is that DrivePool is never aware of the cache status on the CloudDrive in any way; it will keep trying to write to the CloudDrive at whatever rate the pending-upload data is being evacuated. Say I have a 10TB drive at 70% capacity and a 2TB CloudDrive cache drive for my 200TB CloudDrive. If I download 5TB of data at once, DrivePool will of course put the first ~2TB onto the CloudDrive. But it will also attempt to place the remaining 3TB onto the CloudDrive first, right (because the CloudDrive shows the lowest utilization / most free space)? If that happens, though, writing the remaining 3TB is going to be as slow as the CloudDrive's upload, correct? Even though there is 3TB left on the local disk(s) that could be written at local disk speed and then moved to the CloudDrive once the cache frees up. That's the issue: transfer speed collapses due to throttling when the CloudDrive cache is full. I don't believe DrivePool knows when the cache is full, right? It seems like it would make sense to have an option to temporarily redirect writes from the CloudDrive to a local disk when the cache is full, maybe until the cache reaches 'x' amount of free space again (see the placement sketch after this list).
     EDIT: I may have just come up with an example that explains it better and more succinctly. If I have a DrivePool with an empty 10TB drive and a 500TB CloudDrive with a 3TB cache, wouldn't the 10TB drive essentially *always* stay empty under semi-default rules? Even when the cache fills up, I'd have a 10TB drive sitting there doing nothing and writes to my DrivePool limited to whatever my upload speed to my cloud provider is. In my case I have 450/20 Mbps internet, so my downloads into the DrivePool get throttled to 20 Mbps whenever the cache is full.
  2. Thank you. The reason I want DrivePool to move the data around is that I'm migrating everything to the CloudDrive - I want the data to move (either as an automated migration or via the drive-removal process). Also, if I understand correctly: if I set the CloudDrive size to something like 200TB and I have six 3TB drives (let's say 50% utilized each) with 10TB used on the CloudDrive, then DrivePool is going to want to put new files on the CloudDrive first. Say my CloudDrive cache is 3TB... as soon as 3TB is cached, I'd want DrivePool to see that there's no room left on the local cache drive and start placing files on the other available local space. Otherwise, writing to the DrivePool is going to be extremely slow (as slow as the upload speed, right?). Maybe there's another way of achieving this currently? If not, I suppose this is a feature request. It seems strange that I should have to increase the CloudDrive size by the size of my cache every time I upload that much through the cache - repetitive resizing. An option to temporarily stop placing files on the CloudDrive when cache utilization hits a certain value would do it, but from the sounds of it, DrivePool just doesn't know the status of the cache.
  3. Question which I haven't seen documented anywhere, but I feel like it must have been asked before... I have unlimited Google Drive and am currently migrating ~28TB of data from my local storage, which is managed with DrivePool. My cache drive on CloudDrive is 3TB. I'd like to simply set my Google Drive to a huge size (500TB, whatever) and let DrivePool/CloudDrive manage the move from the local drives to the cloud. However, I'm not sure DrivePool is integrated enough with CloudDrive to know that, for example, only 2TB of CloudDrive cache is available, and therefore only allocate that much data to the Google Drive. Or does DrivePool simply try to move all data to wherever the most free space is, regardless of whether that space is available cache or just Google Drive capacity? Currently I keep increasing the size of my Google Drive in CloudDrive according to the amount of cache I have available: I upload 1.5-2TB, then increase the overall drive size by that amount, over and over. With a 20 Mbps upload speed I'll be doing this for a while at this rate :) (see the back-of-envelope numbers after this list). I figure this will depend on DrivePool rules and all that, which I can sort out once I know the answer to this question - does DrivePool know the difference between the CloudDrive cache and the overall (larger) Google Drive? Will it throttle allocations to that drive (not CloudDrive throttling, which ultimately would be the speed of my upload)? Thanks in advance.
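
A rough sketch of the cache-aware placement rule being asked for in item 1. This is purely illustrative: DrivePool and CloudDrive do not expose an API like this, and every name, structure, and threshold below is a made-up assumption, not the actual balancer logic.

```python
# Hypothetical, illustration only: a cache-aware placement rule like the one
# requested above. All names, thresholds, and the notion of "cache free space"
# here are assumptions, not DrivePool/CloudDrive behaviour or API.

TB = 1000**4  # decimal terabyte in bytes

def choose_target(file_size, local_disks, clouddrive):
    """Pick where a new file should land inside the pool.

    local_disks: list of dicts like {"name": "D:", "free": bytes_free}
    clouddrive:  dict like {"free": free_bytes_on_virtual_drive,
                            "cache_free": free_bytes_on_cache_disk}
    """
    min_cache_free = 0.25 * TB  # the "x amount of free space" before resuming cloud writes

    cloud_has_room = clouddrive["free"] >= file_size
    cache_has_room = clouddrive["cache_free"] - file_size >= min_cache_free

    if cloud_has_room and cache_has_room:
        # Fast path: the write lands in the local cache and uploads in the background.
        return "CloudDrive"

    # Cache is (nearly) full: fall back to whichever local disk still fits the file,
    # so the write runs at local disk speed instead of upload speed.
    for disk in sorted(local_disks, key=lambda d: d["free"], reverse=True):
        if disk["free"] >= file_size:
            return disk["name"]

    # Nothing else fits; accept upload-speed-limited writes to the CloudDrive.
    return "CloudDrive"

# Example: 3TB cache nearly full, so a new 50GB file goes to the local 10TB drive.
print(choose_target(50 * 1000**3,
                    [{"name": "D:", "free": 10 * TB}],
                    {"free": 490 * TB, "cache_free": 0.1 * TB}))
```

And some back-of-envelope numbers for the 20 Mbps upload mentioned in items 1 and 3, just to show why cache-full writes and the repeated resizing are so slow. This assumes decimal units and a perfectly saturated link; real throughput would be somewhat lower.

```python
# Rough arithmetic only; assumes a perfectly saturated 20 Mbps link and decimal units.
upload_mbps = 20                               # upload speed in megabits per second
upload_bytes_per_s = upload_mbps * 1e6 / 8     # = 2.5 MB/s

stuck_tb = 2                                   # data queued behind a full cache
days_to_drain = stuck_tb * 1e12 / upload_bytes_per_s / 86400
print(f"~{days_to_drain:.1f} days to push {stuck_tb}TB at {upload_mbps} Mbps")
# ~9.3 days - versus hours at local disk write speed

total_tb, chunk_tb = 28, 2                     # the ~28TB migration, ~2TB per resize cycle
print(f"{total_tb / chunk_tb:.0f} manual resize cycles, "
      f"~{total_tb * 1e12 / upload_bytes_per_s / 86400:.0f} days of uploading in total")
```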