
Google Drive: The limit for this folder's number of children (files and folders) has been exceeded


KiLLeRRaT

Question

Hi All,

I've recently noticed that my drive has 40GB waiting to be uploaded, and saw there were errors in the top left. Clicking that showed:

Quote

The limit for this folder's number of children (files and folders) has been exceeded

What's the deal here, and is this going to be an ongoing issue with growing drives?

My drive is a 15TB drive, with 6TB free (so only using 9TB).

Attached is my drive's info screen.

EDIT: Cloud Drive Version: 1.1.5.1249 on Windows Server 2016.

 

Cheers,

2020-06-18 11_42_07-mRemoteNG - confCons.xml - oscar.gouws.org.png


Recommended Posts


I don't have my own API key configured and I've never encountered that error. Maybe Google has touched something related to the API again, causing the program to fail, probably for the reason you're describing. It looks like the developers are on vacation.


This issue appeared for me over two weeks ago (yay me) and it seems to be a gradual rollout.

The 500,000-item limit does make sense: CloudDrive stores your files in (up to) 20MB chunks on the provider, so the issue should appear somewhere around the 8-10TB mark depending on file duplication settings.
In this case, the error actually says 'NonRoot', meaning any folder, which apparently can now only have a maximum of 500,000 children.

I've been in contact with Christopher about this issue for over two weeks; he has informed Alex, and they have confirmed they are working on a solution.
(Other providers have already had such limits, so the fix will likely involve storing chunks in subfolders with ~100,000 chunks per folder, or something similar.)

It is very interesting that you mention reverting to the non-personal API keys resolved it for you, which does suggest it may depend on some (secret) API-level limit.
Though that would not explain why adding something to the folder manually also fails...

@JulesTop Have you tried uploading manually since the issue 'disappeared'? If that still fails, it would confirm that the CloudDrive API keys are what is 'overriding' the limit, and any personal access, including web, is limited.

Either way, hopefully there is a definitive fix coming soon; perhaps until then using the default keys is an option.
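For anyone wanting to sanity-check that estimate, the arithmetic is a quick back-of-the-envelope calculation (a sketch only; the 20MB chunk size and the apparent 500,000-children cap are taken from the discussion above):

```python
CHUNK_SIZE_MB = 20      # CloudDrive's (up to) 20MB chunk size on Google Drive
MAX_CHILDREN = 500_000  # apparent new per-folder children limit

# Largest drive a single flat chunk folder can hold before hitting the limit
max_drive_tb = CHUNK_SIZE_MB * MAX_CHILDREN / 1_000_000  # MB -> TB (decimal)
print(f"A flat chunk folder caps out around {max_drive_tb:.0f} TB")  # ~10 TB
```

Which matches the 8-10TB range where people started seeing the error, allowing for duplication settings and smaller chunks.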

 

1 hour ago, Jellepepe said:

This issue appeared for me over two weeks ago (yay me) and it seems to be a gradual rollout.

The 500,000-item limit does make sense: CloudDrive stores your files in (up to) 20MB chunks on the provider, so the issue should appear somewhere around the 8-10TB mark depending on file duplication settings.
In this case, the error actually says 'NonRoot', meaning any folder, which apparently can now only have a maximum of 500,000 children.

I've been in contact with Christopher about this issue for over two weeks; he has informed Alex, and they have confirmed they are working on a solution.
(Other providers have already had such limits, so the fix will likely involve storing chunks in subfolders with ~100,000 chunks per folder, or something similar.)

It is very interesting that you mention reverting to the non-personal API keys resolved it for you, which does suggest it may depend on some (secret) API-level limit.
Though that would not explain why adding something to the folder manually also fails...

@JulesTop Have you tried uploading manually since the issue 'disappeared'? If that still fails, it would confirm that the CloudDrive API keys are what is 'overriding' the limit, and any personal access, including web, is limited.

Either way, hopefully there is a definitive fix coming soon; perhaps until then using the default keys is an option.

 

You called it. I just tried uploading a small file to the content folder directly from the Google Drive web interface and got an 'upload failure (38)', same as before.

It looks like this restriction rollout is definitely happening. It will be fantastic if @Alex has a solution coming soon! But it's also good that the default keys are keeping us going for now.

4 hours ago, steffenmand said:

Maybe this is related to the issues we have with Internal Error etc. when already above the 10 TB mark; it then fails when trying to write new chunks because we are already above that mark.

So far, with Google's own API, I haven't had that kind of error, and my files take up more than 100 TB.

4 hours ago, steffenmand said:

Maybe this is related to the issues we have with Internal Error etc. when already above the 10 TB mark; it then fails when trying to write new chunks because we are already above that mark.

I also doubt this is related to this issue. I have well over 10TB used and I have never had an 'internal error'.

The only errors I have seen are user rate limit exceeded (when exceeding the upload or download limits) and, since about two weeks ago, this exceeded maximum number of children error.

On 6/21/2020 at 11:58 AM, JulesTop said:

Also, I realized that at some point I removed my personal Google Drive API credentials from the config file, and this is about the time the issue resolved. I just put back my personal API credentials and re-authorized, and the issue came back. I yet again switched back to 'null' (the default StableBit API credentials) and the issue went away again... I wonder if there are higher API limits with the StableBit credentials.

If that is the case, this makes things easier to fix, at least. 

And it's very possible that our API key is treated differently. However, it's something we'd rather not assume, just in case. 


I'm guessing this latest beta changelog is referencing the solution to this

.1305
* Added a progress indicator when performing drive upgrades.
* [Issue #28394] Implemented a migration process for Google Drive cloud drives to hierarchical chunk organization:
    - Large drives with > 490,000 chunks will be automatically migrated.
        - Can be disabled by setting GoogleDrive_UpgradeChunkOrganizationForLargeDrives to false.
    - Any drive can be migrated by setting GoogleDrive_ForceUpgradeChunkOrganization to true.
    - The number of concurrent requests to use when migrating can be set with GoogleDrive_ConcurrentRequestCount (defaults to 10).
    - Migration can be interrupted (e.g. system shutdown) and will resume from where it left off on the next mount.
    - Once a drive is migrated (or in progress), an older version of StableBit CloudDrive cannot be used to access it.
* [Issue #28394] All new Google Drive cloud drives will use hierarchical chunk organization with a limit of no more than 100,000 children per folder.
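For intuition, "hierarchical chunk organization" presumably means mapping each chunk index to a nested subfolder so that no folder exceeds the limit. A hypothetical sketch (the actual on-provider layout and folder naming are not documented; `chunk_folder` is an illustrative helper, not CloudDrive's code):

```python
CHILDREN_PER_FOLDER = 100_000  # limit stated in the .1305 changelog

def chunk_folder(chunk_index: int) -> str:
    """Map a chunk index to a subfolder so no folder exceeds the limit."""
    return f"chunks/{chunk_index // CHILDREN_PER_FOLDER:04d}"

print(chunk_folder(0))        # chunks/0000
print(chunk_folder(499_999))  # chunks/0004
```

Under a scheme like this, even a 500,000-chunk (~10TB) drive needs only five subfolders, each comfortably under both the 100,000 and 500,000 limits.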

Some questions: seeing as the limit appears to be around 500,000, is there an option to set the new hierarchical chunk organization folder limit higher than 100,000?

Has anyone performed the migration yet? Approximately how long does it take to convert a 500,000-chunk drive to the new format? Seeing as there are concurrency limit options, does the process also involve a large amount of upload or download bandwidth?

After migrating, is there any performance difference compared to the prior non-hierarchical chunk organization?

Edit: if the chunk limit is 500,000 and chunks are 20MB, shouldn't this be occurring on all drives over 10TB in size?

Note, I haven't actually experienced this issue and I have a few large drives under my own API key, so it may be a very slow rollout or an A/B test.

4 hours ago, Firerouge said:

I'm guessing this latest beta changelog is referencing the solution to this


.1305
* Added a progress indicator when performing drive upgrades.
* [Issue #28394] Implemented a migration process for Google Drive cloud drives to hierarchical chunk organization:
    - Large drives with > 490,000 chunks will be automatically migrated.
        - Can be disabled by setting GoogleDrive_UpgradeChunkOrganizationForLargeDrives to false.
    - Any drive can be migrated by setting GoogleDrive_ForceUpgradeChunkOrganization to true.
    - The number of concurrent requests to use when migrating can be set with GoogleDrive_ConcurrentRequestCount (defaults to 10).
    - Migration can be interrupted (e.g. system shutdown) and will resume from where it left off on the next mount.
    - Once a drive is migrated (or in progress), an older version of StableBit CloudDrive cannot be used to access it.
* [Issue #28394] All new Google Drive cloud drives will use hierarchical chunk organization with a limit of no more than 100,000 children per folder.

Some questions: seeing as the limit appears to be around 500,000, is there an option to set the new hierarchical chunk organization folder limit higher than 100,000?

Has anyone performed the migration yet? Approximately how long does it take to convert a 500,000-chunk drive to the new format? Seeing as there are concurrency limit options, does the process also involve a large amount of upload or download bandwidth?

After migrating, is there any performance difference compared to the prior non-hierarchical chunk organization?

Note, I haven't actually experienced this issue and I have a few large drives under my own API key, so it may be a very slow rollout or an A/B test.

I would guess the chunks just get moved, not actually downloaded/re-uploaded! The only upload, I suspect, is to update the chunk DB.
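That guess is plausible: in the Google Drive v3 API, moving a file is a metadata-only `files.update` call using the `addParents`/`removeParents` query parameters, so the chunk bytes are never transferred. A minimal sketch of how such a request is built (hypothetical helper; a real call additionally needs OAuth credentials and an HTTP client):

```python
def build_move_request(file_id: str, old_parent: str, new_parent: str):
    """Build the Drive v3 files.update request that re-parents (moves) a file.

    The move is metadata-only: no chunk data is downloaded or re-uploaded;
    only the file's parent reference changes on Google's side.
    """
    url = f"https://www.googleapis.com/drive/v3/files/{file_id}"
    params = {"addParents": new_parent, "removeParents": old_parent}
    return url, params

# Illustrative IDs, not real ones
url, params = build_move_request("chunk123", "oldFolderId", "newFolderId")
print(url, params)
```

So the migration cost is mostly API request volume (one update per chunk), not bandwidth, which fits the concurrency setting in the changelog.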

5 hours ago, steffenmand said:

I would guess the chunks just get moved, not actually downloaded/re-uploaded! The only upload, I suspect, is to update the chunk DB.

I'm going through the process now.
It started at 6:45am this morning and is currently at 2.62% complete... I think this will take some time. But I don't think it actually downloads/re-uploads the chunks, it just moves them.

I have 55TB in Google Drive, BTW.

 

Also, as far as I can tell, the drive is unusable during the process, and there is no way of pausing the process to access the drive content. However, it will be able to resume from where it left off after a shutdown.

 

Having said all that, I have to say thanks to @Alex and @Christopher (Drashna) for mobilizing quickly on this, before it becomes a really serious problem. At least right now, we can migrate on our schedule, and until then there is a workaround with the default API keys.


Have the developers made any formal announcement? Why the hurry to migrate to a new system on a beta version when you don't even know where it leads... If that is the case, I do not intend to adopt this system until the developers make an official announcement; with 130 TB I do not think it is a good idea. That said, with my current version, which is by no means the latest officially released stable one, I have no problem with Google Drive at the moment; everything is perfect. For the record, I am using the standard Google API keys.


Is anyone having issues creating drives on .1305?

It seems to create the drive, but is unable to display it in the application!

I also notice that all of you get a % counter on your "Drive is upgrading"... mine doesn't show that. Is there something up with my installation?

EDIT: A reboot made the new drive appear, but I can't unlock it. The service log is spammed with:

0:02:04.0: Warning: 0 : [ApiHttp:64] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).

 

EDIT 2:

After enabling verbose output I can see:

0:04:42.8: Information: 0 : [ApiGoogleDrive:69] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=962388314550

As I'm not using my own API keys, the issue must be with StableBit's.

Did the drive updates make Stablebit's API keys go crazy?
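For what it's worth, Google's documented remedy for `userRateLimitExceeded` is truncated exponential backoff with jitter. A minimal sketch of such a retry schedule (illustrative only; how StableBit's service actually handles this internally is not public):

```python
import random

def backoff_delays(max_retries: int = 5, cap: float = 64.0):
    """Yield truncated-exponential-backoff delays (seconds) with jitter."""
    for attempt in range(max_retries):
        # 1s, 2s, 4s, 8s, ... capped, plus up to 1s of random jitter
        yield min(cap, 2 ** attempt) + random.uniform(0, 1)

for i, delay in enumerate(backoff_delays()):
    print(f"retry {i + 1} after ~{delay:.1f}s")
```

Backoff only helps with transient quota spikes, though; if the shared project quota is simply saturated by everyone migrating at once, retries just spread the load out.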

2 hours ago, steffenmand said:

Is anyone having issues creating drives on .1305?

It seems to create the drive, but is unable to display it in the application!

I also notice that all of you get a % counter on your "Drive is upgrading"... mine doesn't show that. Is there something up with my installation?

EDIT: A reboot made the new drive appear, but I can't unlock it. The service log is spammed with:

0:02:04.0: Warning: 0 : [ApiHttp:64] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).

 

EDIT 2:

After enabling verbose output I can see:

0:04:42.8: Information: 0 : [ApiGoogleDrive:69] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=962388314550

As I'm not using my own API keys, the issue must be with StableBit's.

Did the drive updates make Stablebit's API keys go crazy?

I was wondering if this would happen as more and more people begin the upgrade process...

The issue is, I don't think you can re-authorize the drive once the upgrade process begins... at least I don't see a way to do this. Otherwise, I would switch back to my own key.

On 6/24/2020 at 6:00 PM, JulesTop said:

I'm going through the process now.
It started at 6:45am this morning and is currently at 2.62% complete... I think this will take some time. But I don't think it actually downloads/re-uploads the chunks, it just moves them.

I have 55TB in Google Drive, BTW.

 

Also, as far as I can tell, the drive is unusable during the process, and there is no way of pausing the process to access the drive content. However, it will be able to resume from where it left off after a shutdown.

 

Having said all that, I have to say thanks to @Alex and @Christopher (Drashna) for mobilizing quickly on this, before it becomes a really serious problem. At least right now, we can migrate on our schedule, and until then there is a workaround with the default API keys.

I have two drives, one with 10TB and the other with 20TB. I'm about 17 hours in; the smaller drive is at 58% and the bigger drive is at 30%.

 

How long did it take for you? Did everything work when it was done? I have a lot of large files (~7GB...media) but not many small files.

7 minutes ago, Chase said:

I have two drives, one with 10TB and the other with 20TB. I'm about 17 hours in; the smaller drive is at 58% and the bigger drive is at 30%.

 

How long did it take for you? Did everything work when it was done? I have a lot of large files (~7GB...media) but not many small files.

I'm currently at 12.02%, and I started on June 24th at 6:45am... so it's been over 2 days.

I have generally larger files, from 15GB to 60GB, but I'm not sure what effect that has.

1 hour ago, JulesTop said:

I'm currently at 12.02%, and I started on June 24th at 6:45am... so it's been over 2 days.

I have generally larger files, from 15GB to 60GB, but I'm not sure what effect that has.

I just restarted my computer... the percentage started over. Not sure if the whole process is starting again or if this is the percentage of what is left. FML

3 minutes ago, Chase said:

I just restarted my computer... the percentage started over. Not sure if the whole process is starting again or if this is the percentage of what is left. FML

My guess is it's the % of what remains, i.e. it will increase faster after each reboot than it did initially.

 

Mine, however, is stuck on the user rate limit.

42 minutes ago, steffenmand said:

My guess is it's the % of what remains, i.e. it will increase faster after each reboot than it did initially.

 

Mine, however, is stuck on the user rate limit.

What user limit are you referring to? It doesn't appear that it is downloading or uploading anything, so it shouldn't affect the daily upload cap.

 

Thanks in advance.

