Search the Community
Showing results for tags 'bandwidth'.
Is this where I should be submitting feature requests? I ask because I have just gone through the forums about the Google Drive API limits/throttling, having bumped up against the infamous userRateLimitExceeded issue - presumably after hitting their 750GB-per-day upload limit. What I noticed is that once this rate limit is hit there isn't really anything for the application "to do" except cache writes until Google lifts the ban/the quota resets, yet the write attempts just keep hammering Google, which wastes bandwidth unnecessarily. I was curious about the potential to simply stop making the attempts after a while and go dormant (though continuing to write data to the cache) until the throttle is lifted.

I would imagine the logic would be something like:
1. CloudDrive starts receiving userRateLimitExceeded responses and puts itself into a local-caching-only mode (opt-in or by default - doesn't really matter to me).
2. CloudDrive then starts sending some type of small "canary" data packet every few minutes to test whether the Google Drive API and/or Google Drive backend are accepting writes again.
3. Once a canary succeeds, it starts writing full chunks again whenever applicable. Rinse/repeat.

I realize there is a method people have used of throttling the traffic in the settings to basically make it impossible to hit the 750GB-per-day quota, but in my tests, for what I am using CloudDrive for, I expect to stumble upon this limit maybe only 10% of the time. The other 90% of the time I want to be able to use the full bandwidth. So while a fixed Mbps throttle can help 10% of the time, it ends up being an unnecessary bottleneck for the other 90%.

Does this sound useful to people, or am I crazy? I don't mind hitting the limit from Google every once in a while, but I don't really understand why CloudDrive cannot be more efficient once it becomes clear that the upload quota has been reached.
To me it looks like it keeps trying to write the same chunks over and over (sending the full chunks all the way to Google, only for them to be denied at the door). I think something like this could be helpful for bandwidth efficiency. But maybe this is just me trying to min/max the efficiency of the application too much in a rare situation. Thanks
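For what it's worth, the dormant/canary loop I'm picturing could be sketched roughly like this (purely illustrative - `upload_chunk`, `send_canary`, and `RateLimitExceeded` are hypothetical stand-ins for whatever the real client and provider responses look like, not CloudDrive internals):

```python
import time

CANARY_INTERVAL = 5 * 60  # probe every 5 minutes while throttled


class RateLimitExceeded(Exception):
    """Stand-in for the provider returning userRateLimitExceeded."""


def upload_with_backoff(chunks, upload_chunk, send_canary, sleep=time.sleep):
    """Upload chunks; on rate limiting, go dormant and probe with a canary.

    While dormant, the app would keep writing new data to the local cache;
    only the network upload attempts stop.
    """
    pending = list(chunks)
    while pending:
        try:
            upload_chunk(pending[0])
            pending.pop(0)  # chunk accepted, move to the next one
        except RateLimitExceeded:
            # Cache-only mode: stop full-chunk uploads and send only a
            # tiny canary every few minutes until one is accepted.
            while True:
                sleep(CANARY_INTERVAL)
                try:
                    send_canary()
                    break  # canary accepted: resume full-chunk uploads
                except RateLimitExceeded:
                    continue  # still throttled, stay dormant
    return True
```

The point is just that after the first userRateLimitExceeded, the only traffic on the wire is the tiny canary, instead of full chunks being re-sent and denied at the door.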
I'm trying out your software suite as I am about to replace my old WHS1 with a new home server. Pool & Scanner seem to run just fine, but CloudDrive seems to mess with me :-(

Problem 1: I set up a drive on box.com. Once the drive is set up I get a lot of error messages, in short: "I/O Error - CD drive h:\ having trouble uploading data. Error: The request was aborted: The request was canceled." The new drive also shows some MB of data marked "To Upload" - what is this? The drive is still empty?

Problem 2: I really need some way to control bandwidth. CD basically kills my internet connection trying to upload those MB of secret data. I just called my internet provider and yelled at them because my upload speed was crippled - turns out it was actually CD. I suggest an option for scheduled syncing in CD, with some extra options:
* Disable schedule for X hours
* Set max bandwidth to XXX (non-scheduled)
* Set max bandwidth to XXX for X hours (non-scheduled)

Problem 3: A minor thing. UAC asks every time I start the CD UI whether I want to allow CD to change my PC. Why? That doesn't happen when using the Scanner or Pool UI.

And maybe a stupid question: I assumed that CD would mirror my cloud account, but it seems to reserve space for a new "virtual drive" on my PC?
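The "set max bandwidth to XXX" option asked for above is usually implemented with a token bucket: bytes "refill" at the configured rate, and a send that exceeds the available tokens has to wait. A minimal sketch of that idea (names and rates are illustrative, not how CloudDrive actually throttles):

```python
import time


class TokenBucket:
    """Minimal upload throttle: tokens are bytes, refilled at a fixed rate."""

    def __init__(self, rate_bytes_per_sec, now=time.monotonic):
        self.rate = rate_bytes_per_sec
        self.tokens = float(rate_bytes_per_sec)  # allow a one-second burst
        self.now = now
        self.last = now()

    def consume(self, nbytes):
        """Reserve nbytes; return seconds to wait before sending them."""
        t = self.now()
        # Refill tokens for the time elapsed, capped at one second's worth.
        self.tokens = min(self.rate, self.tokens + (t - self.last) * self.rate)
        self.last = t
        self.tokens -= nbytes
        # A negative balance means we must wait until it refills to zero.
        return max(0.0, -self.tokens / self.rate)
```

A scheduler feature like the one requested would then just swap the bucket's rate depending on the time of day (or disable the limit entirely outside the scheduled window).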