
Google Drive: The limit for this folder's number of children (files and folders) has been exceeded


KiLLeRRaT

Question

Hi All,

I've recently noticed that my drive has 40GB waiting to be uploaded, and saw there were errors in the top left. Clicking that showed:

Quote

The limit for this folder's number of children (files and folders) has been exceeded

What's the deal here, and is this going to be an ongoing issue with growing drives?

My drive is a 15TB drive, with 6TB free (so only using 9TB).

Attached is my drive's info screen.

EDIT: Cloud Drive Version: 1.1.5.1249 on Windows Server 2016.

 

Cheers,

[Attached: screenshot of the drive's info screen]


Recommended Posts

1 hour ago, kird said:

my advice, don't even think about upgrading.  :D

I mean, I'd imagine I'd have to sooner or later. The child directory/file limit still exists on Google Drive, and 1178 isn't equipped to handle it in any capacity. I'm just wondering why I haven't run into any issues yet, considering how much data I have stored.

47 minutes ago, darkly said:

I mean, I'd imagine I'd have to sooner or later. The child directory/file limit still exists on Google Drive, and 1178 isn't equipped to handle it in any capacity. I'm just wondering why I haven't run into any issues yet, considering how much data I have stored.

Here's one user with 121 TB used and no problems at all, so I don't need anything, and I certainly don't want to risk losing my data by doing an upgrade I don't need. Perhaps the secret lies in not having configured any personal API key (I've always used the standard/default Google API).

5 minutes ago, kird said:

Here's one user with 121 TB used and no problems at all, so I don't need anything, and I certainly don't want to risk losing my data by doing an upgrade I don't need. Perhaps the secret lies in not having configured any personal API key (I've always used the standard/default Google API).

That doesn't change the fact that versions prior to the beta mentioned above don't respect the 500K child limit that is enforced directly by Google...

Unless one of the devs wants to chime in and drop the bombshell that earlier versions actually did respect this, and only the later versions stopped respecting the limit for some reason, only to have to go back to respecting it in the betas... /s


Logic suggests that the version you've been on most of the time has never had this problem... it's fully compatible with Google's limitation/imposition. In fact, the limit has always been there; it's not new. It's just that we (unlike the developers) didn't know about it.


Is there no possibility of implementing a method of upgrading CloudDrives to the new format without making the data completely inaccessible? How about literally cloning the data over onto a new drive that follows the new format while leaving the current drive available and untouched? Going days without access to the data is quite an issue for me...

6 minutes ago, kird said:

Logic suggests that the version you've been on most of the time has never had this problem... it's fully compatible with Google's limitation/imposition. In fact, the limit has always been there; it's not new. It's just that we (unlike the developers) didn't know about it.

Again, this makes no sense. Why would they conform to Google's limits, then release an update that DOESN'T CONFORM TO THOSE LIMITS, only to release a beta months later that forces an entire migration of data to a new format THAT CONFORMS TO THOSE LIMITS AGAIN?


Look at how many clients have this problem with their Google Drive and StableBit CloudDrive: everyone here (not that many) reporting the error has something in common, namely that they all had their own API key configured. That is what really changed on Google's side to produce the error being reported. I haven't seen anyone say they have this bug with the standard API. Without getting into technical details, it's clear this is not a CloudDrive incompatibility but an API incompatibility. In my case I've never needed my own API key to work with my Google Drive on a daily basis; perhaps others have. If you feel safer upgrading to the beta even though you say you're not having problems yourself... go ahead and good luck.


Just to clarify for everyone here, since there seems to be a lot of uncertainty:

  • The issue (thus far) is only apparent when using your own API key.
    • However, we have confirmed that the CloudDrive keys are the exception, rather than the other way around; the web client, for instance, has the same limitation.
  • Previous versions (also) do not conform to this limit (and go WELL over the 500k limit).
  • Yes, there has definitely been a change on Google's side that introduced this new limitation.
  • Although there may be issues with the current beta (it is a beta, after all), it is still important to convert your drives sooner rather than later. Here's why:
    • Currently all access (API or web/Google's own app) respects the new limits, except for the CloudDrive keys (probably because they are verified).
    • Since there was no announcement from Google that this change was happening, we can expect no announcement if (when) the CloudDrive keys also stop working.
    • It may or may not be possible to (easily) convert existing drives once writing is completely impossible (i.e. if no API keys work at all).
  • If you don't have issues now, you don't have to upgrade; you can wait for a proper release instead, but do be aware that waiting carries a certain risk.

I hope this helps clear up some of the confusion!
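
For anyone who wants to sanity-check how close an existing drive's chunk folder already is to the cap before deciding when to convert, a rough check is possible with the public Google Drive API v3. This is a minimal sketch using the google-api-python-client library, not anything built into CloudDrive; the folder ID and token file are placeholders you'd have to supply yourself, and you'd need OAuth credentials for the account:

# Minimal sketch: count the immediate children of a Google Drive folder
# to see how close it is to the reported ~500,000-children limit.
# FOLDER_ID and token.json are placeholders (assumptions), not CloudDrive internals.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

FOLDER_ID = "your-chunk-folder-id-here"  # hypothetical placeholder

creds = Credentials.from_authorized_user_file(
    "token.json", ["https://www.googleapis.com/auth/drive.readonly"])
service = build("drive", "v3", credentials=creds)

count = 0
page_token = None
while True:  # pages of up to 1000 children; large folders take a while
    resp = service.files().list(
        q=f"'{FOLDER_ID}' in parents and trashed = false",
        fields="nextPageToken, files(id)",
        pageSize=1000,
        pageToken=page_token,
    ).execute()
    count += len(resp.get("files", []))
    page_token = resp.get("nextPageToken")
    if not page_token:
        break

print(f"Children in folder: {count} (reported limit is 500,000)")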


Has anyone with a large drive completed this process at this point? If you don't mind, can you folks drop a few notes on your experience? Setting aside the fact that the data is inaccessible, and other quality-of-life issues like that, how long did the process take? How large was your drive? Did you have to use personal API keys in order to avoid rate-limit errors? Anything else you'd share before someone begins to migrate data?

Trying to get a feel for the downtime and prep work I'm looking at to begin the process.


Personally, if I'm not forced to do it, I'm perfectly fine staying as I am now. I don't want to think for a second about doing this with 121TB of data... even if everything went well without any loss, I'm sure the whole process would take months.

Note to the developers: for those of us who don't have any problems, we hope that future stable releases of SCD don't force us to do the migration because of the API issue. Please keep releasing versions where we aren't forced to take this step, since we have no need for it at the moment and we've been on Google Drive for years; that won't change as long as there's no need to use our own API key.

8 hours ago, kird said:

Personally, if I'm not forced to do it, I'm perfectly fine staying as I am now.

You will be forced to... eventually. I have 6 CloudDrives, 50+ TB each, certainly with more than 500k chunks in each of them. On Sunday only one of those six drives was unable to write/upload data; the rest were perfectly fine uploading. This is across 2 different accounts, mind you. I suspect Google is rolling out these API changes gradually, not to all accounts at once, but sooner or later everyone and every account will be restricted by this new 500k-files-per-folder rule.
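
To put rough numbers on that (assuming, purely for illustration, a 20 MB chunk size; the real figure depends on how the drive was created), a quick back-of-the-envelope calculation shows both why a 50 TB drive is far past the limit and why the 8-10 TB estimate people keep quoting is plausible:

# Back-of-the-envelope chunk math. The 20 MiB chunk size is an assumption
# for illustration only; substitute the chunk size your drive actually uses.
CHUNK_SIZE = 20 * 1024**2   # 20 MiB per chunk (assumed)
LIMIT = 500_000             # reported per-folder children limit

stored = 50 * 1024**4                     # 50 TiB of stored data
print(stored // CHUNK_SIZE)               # 2,621,440 chunks -> roughly 5x over the limit

print(LIMIT * CHUNK_SIZE / 1024**4)       # ~9.5 TiB -> the point at which a drive hits the limit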

13 hours ago, srcrist said:

Has anyone with a large drive completed this process at this point? If you don't mind, can you folks drop a few notes on your experience? Setting aside the fact that the data is inaccessible, and other quality-of-life issues like that, how long did the process take? How large was your drive? Did you have to use personal API keys in order to avoid rate-limit errors? Anything else you'd share before someone begins to migrate data?

Trying to get a feel for the downtime and prep work I'm looking at to begin the process.

I'll tell you my path on this... yours may differ.

I upgraded to the dreaded beta on Wednesday at about 5pm. I have two drives, both 56TB in capacity. Drive "A" has 10TB of data and Drive "B" has 20TB of data. On Thursday I restarted my computer at around the 24-hour mark and the progress percentage started over; I can pretty confidently say that the whole process started over. About two hours later one of my drives stopped even showing the percentage complete and just had an error. This is when I went into the JSON and added my own API key information, again restarting my computer and restarting the process. I don't know if adding my API key to that file actually did anything, because the logs still show I was being limited, but the percentage started going up again, so that was a sign of progress.

Drive A, with the 10TB of actual data, took approximately 48 hours to complete from the last time I restarted my computer. Now that it is complete, it appears that my data was unharmed and I can access it.

Drive B, with the 20TB of data, is (as I write this, at the 60-hour mark) only at 55.33%.

I'm refusing to be the first of us to restart my computer after it is "complete"... hell no.


On the drive that has completed the upgrade, I can see and access my data, but when I try to change the I/O performance settings on that drive it tells me "Error saving data. Cloud drive not found".

 

EDIT: Prefetch is not working

1 hour ago, Chase said:

On the drive that has completed the upgrade, I can see and access my data, but when I try to change the I/O performance settings on that drive it tells me "Error saving data. Cloud drive not found".

 

EDIT: Prefetch is not working

OK, now that is concerning. Two of my six 10TB drives are now converting, at around 25% and counting, with no errors on those two. However, for the drives still waiting for initialization, the log file is flooded with the following error:

Warning: 0 : [ApiHttp:148] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).

Default API settings, no unmounting errors for now
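
As an aside, if you're running your own scripts against the Drive API while watching this (like the folder-count sketch earlier in the thread), ServiceUnavailable (HTTP 503) is the kind of transient error Google expects clients to retry with exponential backoff. Here's a minimal sketch of that client-side pattern, with the actual request left as a placeholder; this is not what CloudDrive does internally, just the standard approach for your own tooling:

# Minimal exponential-backoff sketch for transient Drive API errors
# (rate limits and 5xx responses such as 503 ServiceUnavailable).
import random
import time

from googleapiclient.errors import HttpError

def with_backoff(request, max_retries=5):
    """Execute a googleapiclient request, retrying transient failures."""
    for attempt in range(max_retries):
        try:
            return request.execute()
        except HttpError as err:
            if err.resp.status in (403, 429, 500, 502, 503, 504) and attempt < max_retries - 1:
                delay = (2 ** attempt) + random.random()  # 1s, 2s, 4s, ... plus jitter
                time.sleep(delay)
                continue
            raise

# Usage (assuming `service` and FOLDER_ID from the earlier counting sketch):
# resp = with_backoff(service.files().list(q=f"'{FOLDER_ID}' in parents", pageSize=100))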

1 hour ago, Chase said:

On the drive that has completed the upgrade, I can see and access my data, but when I try to change the I/O performance settings on that drive it tells me "Error saving data. Cloud drive not found".

 

EDIT: Prefetch is not working

Oh dear...

 

Did you submit a ticket to support? This will make sure @Alex can take a look and roll out a fix if it's on the software end. I don't think he's on the forums too often.

6 hours ago, Chase said:

On the drive that has completed the upgrade, I can see and access my data, but when I try to change the I/O performance settings on that drive it tells me "Error saving data. Cloud drive not found".

 

EDIT: Prefetch is not working

This does indeed sound like a bug; the update was probably rushed quite a bit, and I doubt it was tested extensively. Be sure to submit a support ticket so the developers are aware and can work on resolving all the issues.

On 6/28/2020 at 1:03 PM, JulesTop said:

Oh dear...

 

Did you submit a ticket to support? This will make sure @Alex can take a look and roll out a fix if it's on the software end. I don't think he's on the forums too often.

I submitted a ticket. Let me preface all of this by saying I'm tech savvy, but by no means a programmer.

Prefetch is working intermittently right now. The big problem is that it keeps telling me I'm throttled (double yellow arrows on the download and upload sides), but it is not acting consistently throttled. For example, I have my prefetch set to grab 1Gb, and it did so just now at 250 mbit/s without issue.

I still get the error "Error saving data. Cloud drive not found" when trying to change the I/O performance settings or when I press the pause button on upload. However, it does apply the new settings anyway, and the drive is clearly found, mounted, and active.

I added my own API key and reauthorized with it for the one drive that has completed the update. I still have the issues listed above, but I know that it has switched to my API key.

My second drive, which was still in the middle of updating, is now saying "Drive queued to perform recovery". It looks like my server shut down unexpectedly at some point this morning, so the recovery part is expected, but I've never seen it say "queued" for recovery. I'm pissed that my server crashed; it's all brand-new dedicated hardware hosted in a data center... ugh.

Any ideas on how to push it out of "queued" to recover and actually start recovery? 

 

Thanks in advance.


Hey guys,

So, to follow up after a day or two: the only person who says they have completed the migration is reporting that their drive is now non-functional. Is this accurate? To be clear, has nobody completed the process and ended up with a functional drive? I can't really tell if anyone trying to help Chase has actually completed a successful migration, or if everyone is just offering feedback based on hypothetical situations.

I don't even want to think about starting this unless a few people can confirm that they have completed the process successfully.


Unintentional Guinea Pig Diaries.

Day 5 - Entry 2

*The Sound of Crickets*

So I'm in the same spot as when I last posted. My second drive is still at "Queued to perform Recovery". If I knew how to force a reauthorization right now I would, so I could get it onto my API key, or at the very least get it out of "queued".

Perhaps our leaders will come back to see us soon. Maybe this is a test of our ability to suffer more during COVID. We will soon find out.

End Diary entry.

14 hours ago, srcrist said:

Hey guys,

So, to follow up after a day or two: the only person who says they have completed the migration is reporting that their drive is now non-functional. Is this accurate? To be clear, has nobody completed the process and ended up with a functional drive? I can't really tell if anyone trying to help Chase has actually completed a successful migration, or if everyone is just offering feedback based on hypothetical situations.

I don't even want to think about starting this unless a few people can confirm that they have completed the process successfully.

I don't have most of these answers, but something did occur to me that might explain why I'm not seeing any issues using my personal API keys with a CloudDrive over 70TB. I partitioned my CloudDrive into multiple partitions and pooled them all using DrivePool. I noticed earlier that each of my partitions only has about 7-8TB on it (in line with that earlier estimate that problems would start somewhere between 8-10TB of data). Can anyone confirm whether or not a partitioned CloudDrive keeps each partition's data in a different directory on Google Drive?
