
Nick


  1. Is this where I should be submitting feature requests? I ask because I have just gone through the forum threads about the Google Drive API limits/throttling, having bumped up against the infamous userRateLimitExceeded issue - presumably after hitting the 750 GB per day upload limit.

     What I noticed is that once this rate limit is hit there isn't really anything for the application "to do" except cache writes until Google lifts the ban / the quota resets, yet the write attempts just keep hammering Google, which wastes bandwidth unnecessarily. I was curious about the potential to simply stop making the attempts after a while and go dormant (while continuing to write data to the cache) until the throttle is lifted. I would imagine the logic would be something like this (a rough sketch follows after these posts):

       • CloudDrive starts receiving userRateLimitExceeded responses and puts itself into a local-caching-only mode (opt-in or by default - it doesn't really matter to me).
       • CloudDrive then sends some kind of small "canary" data packet every few minutes to test whether the Google Drive API and/or the Google Drive backend are accepting writes again, and starts writing full chunks again whenever applicable.
       • Rinse and repeat.

     I realize people have worked around this by throttling the traffic in the settings so that it is basically impossible to hit the 750 GB per day quota, but in my tests, for what I am using CloudDrive for, I expect to stumble upon this limit maybe only 10% of the time. The other 90% of the time I want to be able to use the full bandwidth. So while an Mbps throttle can help 10% of the time, it ends up being an unnecessary bottleneck for the other 90%.

     Does this sound useful to people, or am I crazy? I don't mind hitting the limit from Google every once in a while, but I don't really understand why CloudDrive cannot be more efficient once it becomes clear that the upload quota has been reached. To me it looks like it keeps trying to write the same chunks over and over, sending the full chunks all the way to Google only for them to be denied at the door. For bandwidth efficiency, I think something like this could be helpful. But maybe this is just me trying to min/max the efficiency of the application too much in a rare situation. Thanks.
  2. Thanks. I will test this soon and see. Cheers.
  3. Reading through these steps, I originally missed the tidbit at the end. So, to be clear, does this mean I am unable to switch an existing drive over to my newly set up API keys?

     I went through this process 3 or 4 days ago, and when I signed back into the Google Cloud Console the API monitoring section showed no usage, which is what triggered me to go reread the document. I am looking to confirm that the changes made in ProviderSettings.json will not kick in until I make a NEW drive connection, and what that means for a drive that was mounted/connected BEFORE going through this process.

     Basically, is there a way to use my keys with a drive that was originally set up using the application's embedded pool of API keys, or will I have to create a new drive and transfer everything from one to the other? Forgive me if this seems like a stupid question - I know enough to be dangerous, so I cannot imagine why I would have to go through a transfer process, but that could just be my ignorance showing. Thanks in advance, all!
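
Purely as an illustration of the "go dormant and probe with a canary" logic proposed in the first post above: the following is a minimal Python sketch, not how CloudDrive is actually implemented (CloudDrive is closed source), and drain_upload_queue, upload_chunk, send_canary_probe, and RateLimitExceeded are made-up placeholder names.

    import time

    class RateLimitExceeded(Exception):
        """Raised when the provider answers with userRateLimitExceeded."""

    CANARY_INTERVAL_SECONDS = 300  # probe every few minutes while dormant

    def drain_upload_queue(queue, upload_chunk, send_canary_probe):
        """Upload queued chunks, but stop hammering the API once throttled."""
        dormant = False
        while queue:
            if dormant:
                # The local cache keeps filling elsewhere; here we only send a
                # tiny probe to see whether the provider accepts writes again.
                time.sleep(CANARY_INTERVAL_SECONDS)
                try:
                    send_canary_probe()   # a few bytes, not a full chunk
                    dormant = False       # quota appears to be reset, resume
                except RateLimitExceeded:
                    continue              # still throttled, stay dormant
            chunk = queue[0]
            try:
                upload_chunk(chunk)       # full chunk upload
                queue.pop(0)
            except RateLimitExceeded:
                dormant = True            # stop retrying full chunks

The point of the sketch is that after the first userRateLimitExceeded response, no further full chunks are sent; only a small probe goes out every few minutes, so the bandwidth spent while throttled stays negligible.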