
[.901] Google Drive - server side disconnects/throttling


borez

Question

I've been getting lots of throttling/server-side disconnect messages from Google Drive recently; see the log below. I've been using the same settings for some time, so it seems something has changed in their throttling mechanism?

 

If this helps, some comments:

 

1) I had an unsafe shutdown, and as a result I need to re-upload 18GB of data. From the technical details page, it seems I'm re-downloading chunks for partial re-writes. That seems fairly intensive compared with a straight upload?

2) The errors have started since I upgraded to the .900/901 builds.

3) I typically run 6 threads, with a max upload speed of 400 Mbps. Scaling down to 3-4 threads doesn't help.

0:10:29.5: Warning: 0 : [IoManager:32] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
0:15:35.5: Warning: 0 : [ReadModifyWriteRecoveryImplementation:62] [W] Failed write (Chunk:564, Offset:0x00000000 Length:0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
0:15:35.9: Warning: 0 : [TemporaryWritesImplementation:62] Error performing read-modify-write, marking as failed (Chunk=564, Offset=0x00000000, Length=0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
0:15:35.9: Warning: 0 : [WholeChunkIoImplementation:62] Error on write when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
0:15:35.9: Warning: 0 : [WholeChunkIoImplementation:62] Error when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
0:15:35.9: Warning: 0 : [WholeChunkIoImplementation:61] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
0:15:35.9: Warning: 0 : [IoManager:62] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
0:15:35.9: Warning: 0 : [IoManager:61] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
0:16:51.3: Warning: 0 : [ReadModifyWriteRecoveryImplementation:95] [W] Failed write (Chunk:2743, Offset:0x00000000 Length:0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
0:16:51.6: Warning: 0 : [TemporaryWritesImplementation:95] Error performing read-modify-write, marking as failed (Chunk=2743, Offset=0x00000000, Length=0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
0:16:51.6: Warning: 0 : [WholeChunkIoImplementation:95] Error on write when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
0:16:51.6: Warning: 0 : [WholeChunkIoImplementation:95] Error when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
0:16:51.6: Warning: 0 : [WholeChunkIoImplementation:96] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
0:16:51.6: Warning: 0 : [IoManager:95] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
0:16:51.6: Warning: 0 : [IoManager:96] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
0:16:56.1: Warning: 0 : [ApiGoogleDrive:102] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
0:16:56.1: Warning: 0 : [ApiHttp:102] HTTP protocol exception (Code=ServiceUnavailable).
0:16:56.1: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,517ms and retrying.
0:16:56.2: Warning: 0 : [ApiGoogleDrive:79] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
0:16:56.2: Warning: 0 : [ApiHttp:79] HTTP protocol exception (Code=ServiceUnavailable).
0:16:56.2: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,873ms and retrying.
0:16:56.3: Warning: 0 : [ApiGoogleDrive:100] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
0:16:56.3: Warning: 0 : [ApiHttp:100] HTTP protocol exception (Code=ServiceUnavailable).
0:16:56.3: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,561ms and retrying.
0:16:56.3: Warning: 0 : [ApiGoogleDrive:84] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
0:16:56.3: Warning: 0 : [ApiHttp:84] HTTP protocol exception (Code=ServiceUnavailable).
0:16:56.3: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,664ms and retrying.
0:16:57.3: Warning: 0 : [ApiGoogleDrive:96] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
0:16:57.3: Warning: 0 : [ApiHttp:96] HTTP protocol exception (Code=ServiceUnavailable).
0:16:57.3: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,546ms and retrying.
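The ~1.5-2 second waits in those last lines look like the standard exponential-backoff-with-jitter pattern that Google's API docs recommend for 403 rate-limit responses. A minimal sketch of that retry-delay calculation (the function name and default values are illustrative, not CloudDrive's actual internals):

```python
import random

def backoff_delay_ms(attempt, base_ms=1000, cap_ms=64000, jitter_ms=1000):
    """Delay before retry number `attempt` (0-based): base * 2^attempt,
    capped, plus random jitter so parallel threads don't retry in lockstep."""
    delay = min(base_ms * (2 ** attempt), cap_ms)
    return delay + random.randint(0, jitter_ms)
```

For a first retry (attempt 0) this yields 1,000-2,000 ms, which is consistent with the 1,517-1,873 ms waits shown in the log.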

Recommended Posts


Does it actually block downloading? I don't think I've noticed that, and I definitely haven't had the drive unmount when uploads are failing.

 

If you're getting "User rate limit exceeded", yes, it will block ALL API calls to the provider (that aren't from its official app), and that will prevent you from downloading from the provider.



According to some posts on the rclone thread (https://forum.rclone.org/t/sending-data-to-gdrive-403-rate-limit-exceeded/3469/148) about this issue:

 

 

 

With my Enterprise account I received this message 40 minute ago from the Google Cloud Platform support, API team:

 

Recently the Drive engineering team implemented new daily upload quotas on a user account basis, once these limits are hit they take 24 hours to reset. The Drive engineering team do not make their limits public and limits are subject to change at their discretion. Support have not been informed of the limit however recent cases indicate that it is indeed a 750GB daily upload limit per user account.

 

but who knows?
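If the 750GB/day figure quoted above is right, it's worth noting how quickly a fast link can burn through it. A back-of-envelope calculation (assumes a sustained rate and decimal units, 1 GB = 8000 megabits):

```python
def hours_to_exhaust_quota(quota_gb=750, upload_mbps=400):
    """Hours of sustained uploading needed to hit a daily upload quota."""
    return quota_gb * 8000 / upload_mbps / 3600
```

At the 400 Mbps cap the OP mentions, that works out to roughly 4.2 hours of sustained upload before the quota would bite.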



 

That was what I was seeing in the error logs. The errors kicked in after around 20-30 minutes of operation, so I'm definitely way below the bandwidth caps.

 

A couple of observations:

  • The issues arose when CD was working on partial re-read/write chunk operations. In that instance the GUI showed I was running far more threads than I had stipulated in the settings. For example, I was downloading with 6-7 threads where my settings indicated 4.
  • The errors seemed to stop when I was doing full writes to CD.

 

 

To recap my issue: I have been hitting the API call limits even though I've not hit the proverbial 750GB cap.

 

To provide some context: I'm trying to create hierarchical pools in DrivePool (backing up my local DrivePool to the cloud). I manually replicated my data in the cloud using GoodSync, and relocated the files into the PoolPart folder. I then created a combined pool and ran a duplication check.

 

This triggered a series of reads and partial writes (and re-uploads) in CD. Based on the technical details, I see the following:

 

  • Partial re-writes are extremely slow (< 130 kbps). I'm not sure whether it's the same chunk being written (and re-written) again. See below for the screenshot and error logs.
  • Once a partial write clears, the upload speeds are back up to 100-200 Mbps. Still not as fast as I used to get, but I'll leave that for another day.

So apart from the bandwidth caps, it seems there are also caps on the number of calls you can make for a specific file. That's just gut intuition, but I'd appreciate some thoughts. Pulling my hair out here!

2:09:05.8: Warning: 0 : [ReadModifyWriteRecoveryImplementation:96] [W] Failed write (Chunk:34168, Offset:0x00000000 Length:0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
2:09:06.0: Warning: 0 : [TemporaryWritesImplementation:96] Error performing read-modify-write, marking as failed (Chunk=34168, Offset=0x00000000, Length=0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:96] Error on write when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:96] Error when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:4] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:27] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:104] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
2:09:06.0: Warning: 0 : [IoManager:4] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
2:09:06.0: Warning: 0 : [IoManager:96] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
2:09:06.0: Warning: 0 : [IoManager:27] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
2:09:06.0: Warning: 0 : [IoManager:104] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host. 

[Screenshot: MS5e9Oz.png]
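For a sense of scale on those stalled partial re-writes: the `Length:0x01400500` in the log is roughly 20 MB per chunk, so at the observed < 130 kbps a single read-modify-write retry of one chunk would take on the order of 20 minutes. A rough estimate (assumes "kbps" means kilobits per second):

```python
def chunk_retry_minutes(length_hex="0x01400500", kbps=130):
    """Minutes to re-upload one chunk of the logged length at the
    observed partial-rewrite speed."""
    chunk_bytes = int(length_hex, 16)  # ~20 MB per the log lines above
    return chunk_bytes * 8 / (kbps * 1000) / 60
```

That would explain why a single retried chunk appears to hang for so long before the upload speed recovers.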



If you're getting "User rate limit exceeded", yes, it will block ALL API calls to the provider (that aren't from its official app), and that will prevent you from downloading from the provider.

 

To be clear, Christopher: that actually is not the case here. When you hit the threshold it will begin giving you 403 errors (User rate limit exceeded), but ONLY on uploads. Downloads will still function as normal. It isn't a wholesale API limit; they're just using the API to enforce an upload quota. Once you exceed the quota, all of your upload API calls fail but your download calls will succeed (unless you hit the 10TB/day download quota as well).

 

AH HA! It's because I have Arq running backups to the same GSuite account. I must be hitting the limit there.

 

Right. For the people who think they aren't hitting the 750GB/day limit, remember that it's a total limit across *every* application accessing that drive, so your aggregate cannot exceed it. But enough people have confirmed that the limit is right around 750GB/day that I think we can call it settled.



To be clear, Christopher: that actually is not the case here. When you hit the threshold it will begin giving you 403 errors (User rate limit exceeded), but ONLY on uploads. Downloads will still function as normal. It isn't a wholesale API limit; they're just using the API to enforce an upload quota. Once you exceed the quota, all of your upload API calls fail but your download calls will succeed (unless you hit the 10TB/day download quota as well).

 

 

Right. For the people who think they aren't hitting the 750GB/day limit, remember that it's a total limit across *every* application accessing that drive, so your aggregate cannot exceed it. But enough people have confirmed that the limit is right around 750GB/day that I think we can call it settled.

 

BTW, is the 750GB/day cap enforced on a per-user-account basis, or across the whole user group?



BTW, is the 750GB/day cap enforced on a per-user-account basis, or across the whole user group?

 

That actually seems to depend. I use an edu account (a legit edu account via my alma mater), for example, and I seem to get my own personal 750GB/day quota that exists independently of any other users. But *most* people, who are using their own personal GSuite accounts on their own domain, seem to get 750GB/day shared across all accounts on that domain. So it would seem that Google treats different organizations differently. I'm sure they're smart enough to recognize that 750GB/day is very low for larger organizations, but I don't know at what threshold they start treating it differently.



srcist, thank you for that clarification! 

 

It would be nice if Google outright documented this rather than letting the community find out the hard way.

 

 

AH HA! It's because I have Arq running backups to the same GSuite account. I must be hitting the limit there.

 

And that would do it. If you're hitting the limit between the two different products... I can imagine that not taking long.
