Covecube Inc.
KiLLeRRaT

Google Drive: The limit for this folder's number of children (files and folders) has been exceeded

Question

Hi All,

I recently noticed that my drive has 40GB left to upload, and saw there were errors in the top left. Clicking that showed:

Quote

The limit for this folder's number of children (files and folders) has been exceeded

What's the deal here, and is this going to be an ongoing issue with growing drives?

My drive is a 15TB drive, with 6TB free (so only using 9TB).

Attached is my drive's info screen.

EDIT: Cloud Drive Version: 1.1.5.1249 on Windows Server 2016.

 

Cheers,

2020-06-18 11_42_07-mRemoteNG - confCons.xml - oscar.gouws.org.png


Recommended Posts

  • 0
7 hours ago, JulesTop said:

I was wondering if this would happen as more and more people begin the upgrade process...

The issue is, I don't think you can re-authorize the drive once the upgrade process begins... at least I don't see a way to do this. Otherwise, I would switch back to my own key.

I was on the StableBit API key and it locked me out at the limits. So I created my own API key, saved it in ProviderSettings.json, and restarted my machine. The race is back on now.

 

It started the percentage over again, the same as it did when I restarted my PC earlier. And it's not climbing any faster, as it would if the new percentage were "of what was left"... it's more like it completely started the full process over.

  • 0
1 hour ago, Chase said:

I was on the StableBit API key and it locked me out at the limits. So I created my own API key, saved it in ProviderSettings.json, and restarted my machine. The race is back on now.

 

It started the percentage over again, the same as it did when I restarted my PC earlier. And it's not climbing any faster, as it would if the new percentage were "of what was left"... it's more like it completely started the full process over.

What field does the key go in? And is it in ProviderSettings.json?

  • 0
33 minutes ago, steffenmand said:

What field does the key go in? And is it in ProviderSettings.json?

That is where I put it.

 

Now, upon closer look, the log is still giving me the same error, but it is no longer telling me it can't mount the drive, and the percentage is going up where it wasn't before.
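For anyone else hunting for the field: below is a sketch of what the relevant entries in ProviderSettings.json might look like. The key names GoogleDrive_ClientId and GoogleDrive_ClientSecret are an assumption based on community reports, not official documentation, and the values shown are placeholders; verify the names against the file your CloudDrive install actually ships before editing, and restart the service afterwards.

```json
{
    "GoogleDrive_ClientId": "123456789-example.apps.googleusercontent.com",
    "GoogleDrive_ClientSecret": "your-client-secret-here"
}
```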

  • 0
33 minutes ago, Chase said:

That is where I put it.

 

Now, upon closer look, the log is still giving me the same error, but it is no longer telling me it can't mount the drive, and the percentage is going up where it wasn't before.

Seems to make no difference here...

Same error and still no % counter. My guess is that you guys got to a certain point before the limit was reached, whereas mine never got started. Anyway, I hope they have a fix for this soon. It just sucks that stuff like this always has to happen on weekends.

  • 0

Soo..... I'm still on version 1.1.2.1178 (I've been reluctant to upgrade given the various reports I keep seeing), but my gdrive has over 100TB of data on it and I've never seen the error about the number of children being exceeded. I use my own API keys. Have I just been extremely lucky up until this point?
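One way to find out how close a drive's chunk folder actually is to the cap is to count its children with the Drive v3 API. A hedged sketch, assuming the google-api-python-client package and an already-authenticated `service` object; the folder ID is a placeholder you would take from your own drive:

```python
# Sketch: count the children of a Google Drive folder via the v3 API,
# to see how close it is to Google's 500,000-children cap.
# Assumes `service` is an authenticated googleapiclient Drive v3 client.

def total_from_pages(pages):
    """Sum the number of files across paginated files.list responses."""
    return sum(len(page.get("files", [])) for page in pages)

def iter_child_pages(service, folder_id):
    """Yield each page of a files.list query for a folder's direct children."""
    token = None
    while True:
        resp = service.files().list(
            q=f"'{folder_id}' in parents and trashed = false",
            fields="nextPageToken, files(id)",
            pageSize=1000,  # maximum page size for files.list
            pageToken=token,
        ).execute()
        yield resp
        token = resp.get("nextPageToken")
        if token is None:
            break

# Usage (requires authentication; the folder ID is hypothetical):
# count = total_from_pages(iter_child_pages(service, "0Bxxx_chunk_folder_id"))
```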

  • 0
2 hours ago, darkly said:

Soo..... I'm still on version 1.1.2.1178 (I've been reluctant to upgrade given the various reports I keep seeing), but my gdrive has over 100TB of data on it and I've never seen the error about the number of children being exceeded. I use my own API keys. Have I just been extremely lucky up until this point?

My advice: don't even think about upgrading. :D

  • 0
1 hour ago, kird said:

My advice: don't even think about upgrading. :D

I mean, I'd imagine I'll have to sooner or later. The child directory/file limit still exists on Google Drive, and 1178 isn't equipped to handle it in any capacity. I'm just wondering why I haven't had any issues yet, considering how much data I have stored.

  • 0
47 minutes ago, darkly said:

I mean, I'd imagine I'll have to sooner or later. The child directory/file limit still exists on Google Drive, and 1178 isn't equipped to handle it in any capacity. I'm just wondering why I haven't had any issues yet, considering how much data I have stored.

Here's one user with 121TB used and no problems at all, so I don't need the upgrade, much less want to risk losing my data to an upgrade I don't need. Perhaps the secret lies in not having configured any personal API key (I've always used the standard/default Google API key).

  • 0
5 minutes ago, kird said:

Here's one user with 121TB used and no problems at all, so I don't need the upgrade, much less want to risk losing my data to an upgrade I don't need. Perhaps the secret lies in not having configured any personal API key (I've always used the standard/default Google API key).

That doesn't change the fact that versions prior to the beta mentioned above don't respect the 500K child limit that is enforced directly by Google...

unless one of the devs wants to chime in and drop the bombshell that earlier versions actually did respect it, and only the later versions stopped respecting the limit for some reason, only to have to go back to respecting it in the betas... /s

  • 0

The logic suggests that the version you've been on most of the time has never had this problem... it's totally compatible with Google's limitation/imposition. In fact, the limit has always been there; it's not new. It's just that we (unlike the developers) didn't know about it.

  • 0

Is there no possibility of implementing a method of upgrading CloudDrives to the new format without making the data completely inaccessible? How about literally cloning the data over onto a new drive that follows the new format while leaving the current drive available and untouched? Going days without access to the data is quite an issue for me...

  • 0
6 minutes ago, kird said:

The logic suggests that the version you've been on most of the time has never had this problem... it's totally compatible with Google's limitation/imposition. In fact, the limit has always been there; it's not new. It's just that we (unlike the developers) didn't know about it.

Again, this makes no sense. Why would they conform to Google's limits, then release an update that DOESN'T CONFORM TO THOSE LIMITS, only to release a beta months later that forces an entire migration of data to a new format THAT CONFORMS TO THOSE LIMITS AGAIN?

  • 0

Look at how many clients have this problem with their Google Drive and StableBit CloudDrive. Everyone here (not too many) reporting the error has something in common: they all had their own API key configured. That is what really changed on Google's side to cause the error being reported. I haven't seen anyone say they have this bug with the standard API key. Without getting into technical details, it's clear this is not a CloudDrive incompatibility but an API incompatibility. In my case, I never needed my own API key to work with my Google Drive on a daily basis; perhaps others have. If you feel safer upgrading to the beta when you say you're not having problems yourself... go ahead and good luck.

  • 0

Just to clarify for everyone here, since there seems to be a lot of uncertainty:

  • The issue (thus far) is only apparent when using your own API key
    • However, we have confirmed that the CloudDrive keys are the exception, rather than the other way around; the web client, for instance, has the same limitation
  • Previous versions (also) do not conform to this limit (and go WELL over the 500k cap)
  • Yes, there has definitely been a change on Google's side that implemented this new limitation
  • Although there may be issues with the current beta (it is a beta, after all), it is still important to convert your drives sooner rather than later. Here's why:
    • Currently, all access (API or web/Google's own app) respects the new limits, except for the CloudDrive keys (probably because they are verified)
    • Since there was no announcement from Google that this change was happening, we can expect no announcement if (when) the CloudDrive keys stop working either
    • It may not be possible to (easily) convert existing drives once writing becomes completely impossible (if no API keys work)
  • If you don't have issues now, you don't have to upgrade; instead, wait for a proper release, but do be aware there is a certain risk associated with waiting.

I hope this helps clear up some of the confusion!
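To put rough numbers on why large drives trip the cap, here's a back-of-the-envelope sketch. It assumes CloudDrive stores a drive as fixed-size chunk files in a single folder and that the chunks are 20MB (your drive may well use a smaller chunk size, which lowers the ceiling proportionally):

```python
import math

GOOGLE_CHILD_LIMIT = 500_000  # files-and-folders cap per folder, per this thread

def max_data_per_folder_tb(chunk_mb: float) -> float:
    """Largest amount of chunk data one folder can hold under the cap, in TB (decimal units)."""
    return GOOGLE_CHILD_LIMIT * chunk_mb / 1_000_000

def chunks_needed(data_tb: float, chunk_mb: float) -> int:
    """How many chunk files a given amount of stored data requires."""
    return math.ceil(data_tb * 1_000_000 / chunk_mb)

print(max_data_per_folder_tb(20))  # 10.0 -> ~10TB before a 20MB-chunk drive hits the cap
print(chunks_needed(100, 20))      # 5000000 -> a 100TB drive needs 10x the allowed children
```

On these assumptions, any drive holding much more than ~10TB of data simply cannot keep all its chunks in one folder, which lines up with the point above that previous versions go well over the limit.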

  • 0

Has anyone with a large drive completed this process at this point? If you don't mind, can you folks drop a few notes on your experience? Setting aside the fact that the data is inaccessible and other quality of life issues like that, how long did the process take? How large was your drive? Did you have to use personal API keys in order to avoid rate limit errors? Anything else you'd share before someone began to migrate data?

Trying to get a feel for the downtime and prep work I'm looking at to begin the process.

  • 0

Personally, unless I am forced to do it, I will stay exactly as I am now. I don't want to think for a second about doing this with 121TB of data... even if everything goes well without any loss in the process, I am sure the whole thing would take months.

Note to the developers: for those of us who don't have any problems, we hope that future stable releases of CloudDrive don't force the migration because of the API issue. Please keep versions where we are not forced to take this step, since we have no need for it at the moment. We have been on Google Drive for years already, and that won't change as long as there is no need to apply our own API key.

  • 0
8 hours ago, kird said:

Personally, unless I am forced to do it, I will stay exactly as I am now

You will be forced to... eventually. I have six CloudDrives, 50+ TB each, certainly with more than 500k chunks in each of them. On Sunday only one of those six drives was unable to write/upload data; the rest were perfectly fine while uploading. This is on two different accounts, mind you. I suspect Google is rolling out these API changes gradually, not to all accounts at once, but sooner or later every account will be restricted by this new 500k-files-per-folder rule.

  • 0
13 hours ago, srcrist said:

Has anyone with a large drive completed this process at this point? If you don't mind, can you folks drop a few notes on your experience? Setting aside the fact that the data is inaccessible and other quality of life issues like that, how long did the process take? How large was your drive? Did you have to use personal API keys in order to avoid rate limit errors? Anything else you'd share before someone began to migrate data?

Trying to get a feel for the downtime and prep work I'm looking at to begin the process.

I'll tell you my path on this... yours may differ.

I upgraded to the dreaded beta on Wednesday at about 5pm. I have two drives, both 56TB in capacity. Drive "A" has 10TB of data and Drive "B" has 20TB. On Thursday I restarted my computer at around the 24-hour mark and the progress percentage started over; I can pretty confidently say that the whole process started over. About two hours later, one of my drives stopped even showing the percentage complete and just had an error. This is when I went into the JSON and added my own API key information, again restarting my computer and restarting the process. I don't know if adding my API key to that file actually did anything, because the logs still show I was being limited, but the percentage started going up again, so that was a sign of progress.

Drive A with the 10TB of actual data took approximately 48 hours to complete from the last time I restarted my computer. Now that it is complete, it appears that my data was unharmed and I can access it.

Drive B with the 20TB of data (as I write this, at the 60-hour mark) is only at 55.33%.

I'm refusing to be the first of us to restart my computer after it is "complete"... hell no.
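For anyone trying to gauge their own downtime from numbers like these, a naive linear extrapolation is sketched below. It assumes a constant conversion rate, which the restarts described above clearly violate, so treat it as a lower bound at best:

```python
def estimated_total_hours(elapsed_hours: float, percent_done: float) -> float:
    """Linear estimate of total conversion time from progress so far."""
    return elapsed_hours / (percent_done / 100.0)

# Drive B above: 55.33% done after 60 hours
total = estimated_total_hours(60, 55.33)
print(round(total, 1))       # 108.4 -> roughly 108 hours total
print(round(total - 60, 1))  # 48.4  -> roughly 48 hours remaining
```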

  • 0

On the drive that has completed the upgrade, I can see and access my data, but when I try to change my I/O performance settings on that drive it tells me "Error saving data. Cloud drive not found".

 

EDIT: Prefetch is not working

  • 0
1 hour ago, Chase said:

On the drive that has completed the upgrade, I can see and access my data, but when I try to change my I/O performance settings on that drive it tells me "Error saving data. Cloud drive not found".

 

EDIT: Prefetch is not working

OK, now that is concerning. Two of my six 10TB drives are now converting at around 25% and counting. No errors on these two. However, for those waiting for initialization, the log file is flooded with the following error:

Warning: 0 : [ApiHttp:148] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).

Default API settings, no unmounting errors for now

  • 0
1 hour ago, Chase said:

On the drive that has completed the upgrade, I can see and access my data, but when I try to change my I/O performance settings on that drive it tells me "Error saving data. Cloud drive not found".

 

EDIT: Prefetch is not working

Oh dear...

 

Did you submit a ticket to support? This will make sure @Alex can take a look and roll out a fix if it's on the software end. I don't think he's on the forums too often.

  • 0
6 hours ago, Chase said:

On the drive that has completed the upgrade, I can see and access my data, but when I try to change my I/O performance settings on that drive it tells me "Error saving data. Cloud drive not found".

 

EDIT: Prefetch is not working

This indeed sounds like a bug; the update was probably rushed quite a bit, and I doubt it was tested extensively.
Be sure to submit a support ticket so the developers are aware and can work on resolving all the issues.

