Covecube Inc.
IanN

Rate Limit Exceeded

Question

I keep getting "Error: Rate Limit Exceeded". Just wondering what it means, and whether I need to do something to reduce the number of errors I get?

 


Recommended Posts


Since I am probably going to post this a lot: 

http://community.covecube.com/index.php?/topic/2228-google-drive-rate-limits-and-threadingexp-backoff/

 

Basically, download 1.0.0.725 for now. It should fix/help the issue. 

 

But we either need to overhaul the provider, or appeal to Google for a rate limit increase.

There is a "hard" limit to the number of API calls made "per app", and we're hitting it. Either that limit needs to be increased, we need to optimize the code to reduce the number of API calls, or both.
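For context on the backoff approach from the linked thread: the usual pattern when a provider returns a rate-limit error is to retry with exponentially increasing delays plus random jitter, so many clients don't all retry at the same instant. A minimal Python sketch, with `RateLimitError` and `api_call` as hypothetical stand-ins (this is an illustration of the technique, not CloudDrive's actual code):

```python
import random
import time

class RateLimitError(Exception):
    """Raised when the provider reports a rate limit (hypothetical)."""

def call_with_backoff(api_call, max_retries=5, base_delay=1.0, max_delay=60.0):
    """Retry api_call with exponential backoff plus jitter on rate-limit errors."""
    for attempt in range(max_retries):
        try:
            return api_call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            # Delay doubles each attempt, capped at max_delay,
            # with random jitter to spread out retries across clients.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay * random.uniform(0.5, 1.0))
```

The jitter matters: without it, every client that got throttled at the same moment retries at the same moment, too.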

 

 

Why limit per app and not per user?

So they want to punish an app for getting popular?

Regarding lowering API calls, I say bigger chunks. :-D


> Regarding lowering API calls, I say bigger chunks. :-D

 

I say have the pre-fetcher consolidate consecutive reads into fewer, larger requests. That could dramatically lower API calls: instead of 10 calls downloading 1 MB each, a single call could fetch all 10 MB.

 

Also.... why not both?
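The consolidation idea above can be sketched as a range-coalescing step: sort the pending (offset, length) reads and merge any that are contiguous, up to a maximum request size. All names and limits here are assumptions for illustration, not CloudDrive internals:

```python
def coalesce_ranges(ranges, max_gap=0, max_size=10 * 1024 * 1024):
    """Merge (offset, length) read requests that are contiguous (or within
    max_gap bytes of each other) into fewer, larger requests, each capped
    at max_size bytes."""
    merged = []
    for offset, length in sorted(ranges):
        if merged:
            prev_off, prev_len = merged[-1]
            prev_end = prev_off + prev_len
            new_end = offset + length
            # Merge only if the new read starts close enough to the previous
            # one and the combined request stays under the size cap.
            if offset <= prev_end + max_gap and new_end - prev_off <= max_size:
                merged[-1] = (prev_off, max(prev_end, new_end) - prev_off)
                continue
        merged.append((offset, length))
    return merged
```

With this, the ten consecutive 1 MB reads in the example collapse into a single 10 MB request, i.e. one API call instead of ten.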


> Why limit per app and not per user?
>
> So they want to punish an app for getting popular?
>
> Regarding lowering API calls, I say bigger chunks. :-D

 

They do both. And the limit is decently high, IIRC.

And they don't punish, not really. As I said, we can appeal/apply/whatever the term is to increase the rate limit, which I believe Alex plans on doing.

 

> I say have the pre-fetcher consolidate consecutive reads into fewer, larger requests. That could dramatically lower API calls: instead of 10 calls downloading 1 MB each, a single call could fetch all 10 MB.
>
> Also.... why not both?

 

 

I'm pretty sure the plan is both, actually. 

 

Fewer API calls means less overhead, so it's a win-win. 


> They do both. And the limit is decently high, IIRC.
>
> And they don't punish, not really. As I said, we can appeal/apply/whatever the term is to increase the rate limit, which I believe Alex plans on doing.
>
> I'm pretty sure the plan is both, actually.
>
> Fewer API calls means less overhead, so it's a win-win.

 

It really sounds great. I'm pretty sure all the high-bandwidth users are pushing their limits :) Using it for cold storage, I would only benefit from larger chunks, as I rarely need to do much more than open a folder.

Really looking forward to these things getting in :) The rate-exceeded issue also seems to have gotten better.


> It really sounds great. I'm pretty sure all the high-bandwidth users are pushing their limits :) Using it for cold storage, I would only benefit from larger chunks, as I rarely need to do much more than open a folder.
>
> Really looking forward to these things getting in :) The rate-exceeded issue also seems to have gotten better.

 

 

Yup. And glad to hear it. Backing off when getting these errors definitely seems to have helped everyone.  

 

Now we just need to do some optimization, it seems. :)


Do we have any update on getting that "server side" rate limit increase? The newer beta certainly helped, but now the turtles (throttling indicators) are disappointingly frequent. It's definitely still having an effect on overall throughput. I'm not even getting close to my "personal" API call limit. Just wondering if we should expect some relief any time soon.


> Do we have any update on getting that "server side" rate limit increase? The newer beta certainly helped, but now the turtles are disappointingly frequent. It's definitely still having an effect on overall throughput. I'm not even getting close to my "personal" API call limit. Just wondering if we should expect some relief any time soon.

 

Check the changelog. :)

http://dl.covecube.com/CloudDriveWindows/beta/download/changes.txt

 

Alex is actively investigating this issue.

 

Specifically, Alex is planning to implement a file ID cache, so that we can significantly reduce the number of API calls being made.

However, until this appears in a numbered build (i.e., a build number above the entry in the changelog), this code change isn't ready for use. Please hold off until then, so that you get a working version instead.

(E.g., the latest version you should use is 1.0.0.726.)
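For the curious, the idea behind a file ID cache is to remember the provider's path-to-file-ID lookups, so repeated operations on the same file don't each cost an extra API call. A rough sketch of the concept, with every name hypothetical (this is not the actual implementation):

```python
class FileIdCache:
    """Cache path -> provider file ID lookups to avoid repeated API calls.

    `lookup_file_id` stands in for the real per-provider lookup call."""

    def __init__(self, lookup_file_id):
        self._lookup = lookup_file_id
        self._cache = {}
        self.api_calls = 0  # for illustration: count actual lookups made

    def get(self, path):
        if path not in self._cache:
            self.api_calls += 1  # cache miss: one real API call
            self._cache[path] = self._lookup(path)
        return self._cache[path]

    def invalidate(self, path):
        # Must be called when a file is renamed or deleted,
        # otherwise the cached ID goes stale.
        self._cache.pop(path, None)
```

The trade-off is the usual one for caches: fewer API calls in exchange for having to invalidate entries carefully when files change.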

 

 

 

> Do we have any update on getting that "server side" rate limit increase? The newer beta certainly helped, but now the turtles are disappointingly frequent. It's definitely still having an effect on overall throughput. I'm not even getting close to my "personal" API call limit. Just wondering if we should expect some relief any time soon.

 

And yes, it will affect throughput, as it treats the "rate limit exceeded" errors as throttling requests.

 

This is a quick fix to minimize the impact to everyone, and to handle it better. 


Ok, I'll just keep an eye out. Not trying to rush anything, was just curious.

 

Not a problem. I just wanted to clarify here, and let you guys know where we are at. 

 

The 730+ builds include this fix, but they're not ready for use yet. There are some issues that at least one person has run into, but Alex is actively working on it. :)


I was severely affected by this issue with GCD and am now testing build 730. It's working great in terms of stability: three days now without a drive unmount error, which is the longest I have ever gone.

 

Definitely headed in the right direction, thanks guys.


Not sure if I should put this here or just start a new thread, but I'll start here since it is related.

 

With OneDrive for Business, sometimes I get an error "The remote server returned an error: (429)". When it happens, it happens many times, for example just now it occurred 257 times. Then it goes back to running normally.

 

In the OneDrive Dev Center, there is documentation that says this means the API rate limit has been exceeded and so connections have been throttled. Additionally, a Retry-After header is returned indicating how long the throttling will last.

 

I was wondering if there is any plan to respect the Retry-After header rather than hitting it repeatedly? Maybe there is also a Retry-After header for Google Drive too?


> Not sure if I should put this here or just start a new thread, but I'll start here since it is related.
>
> With OneDrive for Business, sometimes I get an error "The remote server returned an error: (429)". When it happens, it happens many times, for example just now it occurred 257 times. Then it goes back to running normally.
>
> In the OneDrive Dev Center, there is documentation that says this means the API rate limit has been exceeded and so connections have been throttled. Additionally, a Retry-After header is returned indicating how long the throttling will last.
>
> I was wondering if there is any plan to respect the Retry-After header rather than hitting it repeatedly? Maybe there is also a Retry-After header for Google Drive too?

 

On OneDrive for Business, this "HTTP 429" error code means "too many requests", so it's definitely related.

It's a throttling request, which we do respect. Specifically, we do back off, and we listen to what the Retry-After header says. And if there isn't one, or we keep on hitting the error, we back off exponentially. That's why it "goes away".

 

Decreasing the number of threads may help, though. 

 

But this change (the File ID caching) would absolutely help with this as well. 
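The behavior described above (honor the server's Retry-After value when one is present, otherwise fall back to exponential backoff) can be sketched roughly like this; the helper name is made up for illustration:

```python
def retry_after_delay(headers, attempt, base=1.0, cap=60.0):
    """Pick a wait time after an HTTP 429: honor Retry-After if the
    server sent one, otherwise fall back to exponential backoff."""
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        try:
            # Retry-After as delta-seconds, per RFC 7231. The HTTP-date
            # form is also allowed but not handled in this sketch.
            return float(retry_after)
        except ValueError:
            pass
    # No usable header: back off exponentially, capped at `cap` seconds.
    return min(base * (2 ** attempt), cap)
```

A caller would sleep for the returned number of seconds before retrying the request.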


Would it be possible for us to somehow use our own "client ID" and "client secret"? Would this help stop the rate limit exceeded problems?


> Would it be possible for us to somehow use our own "client ID" and "client secret"? Would this help stop the rate limit exceeded problems?

 

It likely would, yes. However, there are other issues associated with doing this, both technical and otherwise.

It's something that Alex has thought about, but wants to avoid for now.

However, the file ID caching feature should be done soon, and we plan on testing it and then submitting the new build. Hopefully, this should happen rather quickly.


> Would it be possible for us to somehow use our own "client ID" and "client secret"? Would this help stop the rate limit exceeded problems?

 

Amazon made their API invite-only shortly after they did this with Amazon Drive.


> Amazon made their API invite-only shortly after they did this with Amazon Drive.

 

Sorry, I should have said I meant with Google Drive, similar to what you can do with rsync.

