
Google Drive: The limit for this folder's number of children (files and folders) has been exceeded


KiLLeRRaT

Question

Hi All,

I've recently noticed that my drive has 40GB waiting to be uploaded, and saw there were errors in the top left. Clicking that showed:

Quote

The limit for this folder's number of children (files and folders) has been exceeded

What's the deal here, and is this going to be an ongoing issue with growing drives?

My drive is a 15TB drive, with 6TB free (so only using 9TB).
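A quick back-of-the-envelope calculation shows why a drive this size can approach the limit - CloudDrive stores the drive as fixed-size chunk files, and with an assumed 20 MB chunk size (the actual chunk size is configured per drive, so treat this as illustrative only), 9TB of data is already close to 500,000 chunks:

```python
# Back-of-the-envelope only; the 20 MB chunk size is an assumption,
# since CloudDrive's chunk size is configured per drive.
MAX_CHILDREN = 500_000           # Google Drive's per-folder item limit
used_bytes = 9 * 1024**4         # ~9TB currently in use
chunk_bytes = 20 * 1024**2       # assumed 20 MB chunk size
chunks = used_bytes // chunk_bytes
print(chunks)                    # 471859 chunks, uncomfortably close
```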

Attached is my drive's info screen.

EDIT: Cloud Drive Version: 1.1.5.1249 on Windows Server 2016.

 

Cheers,



206 answers to this question

Recommended Posts

On 7/7/2020 at 10:04 PM, Chase said:

All of my drives are working and I have no errors... EXCEPT that the last beta, which changed GoogleDrive_ConcurrentRequestCount to "2", appears to be causing issues. My drives do a lot of reading and writing - it's for a Plex server - and 2 concurrent requests cause CloudDrive to tell me I don't have enough bandwidth. I have tried to change the setting in "settings.json" but it isn't accepting the change.

 

 

The newest beta works great for me - I still have great speeds and the drives are upgrading fine (25% per day)!

 

The only annoying bug I have at the moment is that if I try to attach a new drive, it fails and leaves me a folder I can't delete. After a reboot the drive mounts (despite having said it failed, because the folder is there), but it doesn't upgrade as it is supposed to.

So since I'm moving my drives to a new server, I'm only able to move and upgrade one drive at a time (even though my drives are split over 5 Google accounts and in theory each account has its own API limits).

 

So my wish would be a new version that supports mounting new drives while an existing one is upgrading :) - and a fix so that a drive that fails to mount doesn't leave behind an undeletable, in-use folder.



Greetings to all! With the transition to the new format, is there any way to display the progress of the procedure as a percentage? After a power failure, several disks have been showing "Mounting..." for a week now (version 1.2.0.1310).


6 hours ago, Patrik_Mrkva said:

Greetings to all! With the transition to the new format, is there any way to display the progress of the procedure as a percentage? After a power failure, several disks have been showing "Mounting..." for a week now (version 1.2.0.1310).

I had something similar happen. I did something very risky, but it did work: I deleted the main cache for the drive having that issue, which effectively unmounted it. I was then able to remount it, and thankfully everything reloaded correctly. Proceed at your own risk.



Try to reboot. That fixed it for me. Be careful with deleting any files.

After a week all my drives are back to normal and working fine. No missing or corrupt files so far. At first the update was very slow but after the latest beta it works just fine. I had to reboot several times to get the update process going but in the end it worked out. I did not delete any files.



Reboot is my middle name - I reboot twice a day, but without effect. 3 disks out of 8 recovered after 3 days; the remaining 5 are still in the "Mounting" state. In my opinion, deleting the cache is a very risky operation...

If only the percentages were at least displayed...



Update.

My drives are all now working fine. It would not let me change "concurrentrequestcount" above 2, but it did take the override setting of 10. My biggest issue was that I was running everything from a single HDD - downloading files to my computer, uploading them to CloudDrive, the cache for CloudDrive, etc. I had my server host add an SSD, and I now run the CloudDrive drives and caches from there. Absolutely no issues now - full speed and no errors. I also discovered that my HDD is failing, so my next quest will be to replace it and reinstall everything. Hopefully, now that the data is in the right format, there won't be any issues when I have to remount the drives on a fresh install.
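For reference, the override that finally stuck looked roughly like this in the advanced settings file. Treat the exact schema as an assumption and check your own install - StableBit's Default/Override layout may differ between versions:

```json
{
  "GoogleDrive_ConcurrentRequestCount": {
    "Default": null,
    "Override": 10
  }
}
```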



I made the mistake of installing the newer versions of the beta, which caused reboots, and now I have no clue where I am at. Both of my drives are still down and I have no idea if something is even happening or not, but I am hoping it is...

Actually, in my logs I am getting a Bad Gateway error for my Google Drive. Wahoo.

Edited by Burrskie
Added more information

1 minute ago, Burrskie said:

I made the mistake of installing the newer versions of the beta, which caused reboots, and now I have no clue where I am at. Both of my drives are still down and I have no idea if something is even happening or not, but I am hoping it is...

What version did you have and what are you on now?


2 minutes ago, Chase said:

What version did you have and what are you on now?

I am on 1312. I was on 1310, then upgraded to 1311, then 1312. I just looked and there is another update, 1314, but it hasn't prompted me to update. I guess I'll just download it and see, since who knows when I will ever get my data back.



So I just mounted a 16TB drive with around 12TB used (which has the issue) on .1314.
The 'upgrading' screen took less than 5 seconds, and the drive mounted perfectly within a few seconds.
It has now been 'cleaning up' for about 20 minutes, and it is editing and deleting some files as expected, but I'm a little worried it did not do any actual upgrading.

I noticed in the web interface that all it did was rename the chunk folder to '<id>-CONTENT-HIERARCHICAL' and create a new '<id>-CONTENT-HIERARCHICAL-P' folder, but this new folder is empty and no files are being (or have been) moved.

My guess is that the new system no longer moves existing files and instead just makes sure any new changes are put in new folders?
That would make sense, as it avoids having to move a shit ton of files, and it avoids the issue (which seems to only affect writing new files, not existing files).

Once the status is green and it has definitely stopped doing anything, I will test whether the drive is now functional on my own API keys...
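That theory, as a sketch - purely illustrative, with a made-up function name and fan-out, since I don't know CloudDrive's actual layout - would mean only new chunks get routed into capped subfolders:

```python
MAX_CHILDREN = 500_000  # Google Drive's per-folder item limit

def new_chunk_folder(chunk_index: int, fanout: int = 100_000) -> str:
    """Hypothetical sketch: place each *new* chunk in a numbered
    subfolder of the '-P' parent so no folder exceeds the cap.
    Existing chunks stay put in '<id>-CONTENT-HIERARCHICAL'."""
    assert fanout <= MAX_CHILDREN
    return f"CONTENT-HIERARCHICAL-P/{chunk_index // fanout:04d}"
```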


1 hour ago, Jellepepe said:

My guess is that the new system no longer moves existing files and instead just makes sure any new changes are put in new folders?
That would make sense, as it avoids having to move a shit ton of files, and it avoids the issue (which seems to only affect writing new files, not existing files).

Seems I was correct: existing files are untouched, and new files are stored according to the new system. Hopefully this keeps working, as it saves a lot of moving around.


On 7/12/2020 at 11:28 AM, Patrik_Mrkva said:

Eureka! Beta 1314 is the solution! All disks mounted and converted in a moment. No errors, everything works perfectly! Thanks, Chris!

Are there any other under-the-hood changes in 1314 vs. the current stable that we should be aware of? Someone mentioned a "concurrentrequestcount" setting on a previous beta - what does that affect? What else should we know before upgrading? I'm still on quite an old version and I've been hesitant to upgrade, partly because losing access to my files for over 2 weeks would be too costly. Apparently the new API limits are still not being applied to my API keys, so I've been fine so far, but I know I'll have to make the jump soon. Wondering if I should do it on 1314 or wait for the next stable - there doesn't seem to be an ETA on that yet, though.

Afterthought: any chance Google changes something in the future that affects existing files in Drive that break the 500,000 limit? Could a system be implemented to move those files slowly over time, while the CloudDrive remains accessible, in case something does get enforced later?
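Such a gradual migration could, in principle, be a simple rate-limited background loop - a pure sketch with hypothetical names; nothing like this exists in CloudDrive today:

```python
import time

def migrate_slowly(legacy_chunks, move_chunk, per_minute=60):
    """Hypothetical background migration: relocate a bounded number of
    legacy chunks per minute so the drive stays usable throughout.
    `move_chunk` is an assumed callback that moves one chunk file."""
    for i, chunk in enumerate(legacy_chunks):
        move_chunk(chunk)
        if (i + 1) % per_minute == 0:
            time.sleep(60)  # pause to stay under Google API quotas
```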


6 hours ago, srcrist said:

I finally bit the bullet last night and converted my drives. I'd like to report that even in excess of 250TB, the new conversion process finished basically instantly and my drive is fully functional. If anyone else has been waiting, it would appear to be fine to upgrade to the new format now.

I had the same experience the other night. I'm just worried about potential future issues with directories that are over the limit, as I mentioned in the comment above. Overall performance has also become much better for my drives shared over the network (but keep in mind I was upgrading from a VERY old version, so that probably played a factor in my performance previously).



I'm noticing that since upgrading to 1316, I'm getting a lot more I/O errors saying there was trouble uploading to Google Drive. Is there some under-the-hood setting that changed which would cause this? I've noticed it happen a few times a day now, where previously it would hardly ever happen.

Other than that, not noticing any issues with performance.

EDIT

Here's a screenshot of the error (attached).



I also upgraded to 1315, and it looks like everything is working the way that it should. I have not had the issues that @darkly reported with the insufficient bandwidth errors.

My disk upgrade went instantly, even though I was getting the API limit error before. I did revert back to the standard API keys (not using my own) before upgrading.

I haven't tried 1316 yet.

Cheers,


7 minutes ago, KiLLeRRaT said:

I also upgraded to 1315, and it looks like everything is working the way that it should. I have not had the issues that @darkly reported with the insufficient bandwidth errors.

My disk upgrade went instantly, even though I was getting the API limit error before. I did revert back to the standard API keys (not using my own) before upgrading.

I haven't tried 1316 yet.

Cheers,

I should probably mention I'm using my own API keys, though I don't see how that should affect this in this way (I was using my own API keys before the upgrade too). I'm also on a gigabit fiber connection and nothing about that has changed since the upgrade. As far as I can tell, this feels like an issue with CD.


On 7/20/2020 at 10:49 PM, srcrist said:

I haven't had any errors with 1316 either. It might have been a localized network issue.

Still going on - it happens several times a day, consistently, since I upgraded to 1316. I never had the error before unless I was actually having a connection issue with my internet.


16 hours ago, darkly said:

Still going on - it happens several times a day, consistently, since I upgraded to 1316. I never had the error before unless I was actually having a connection issue with my internet.

I'm still not seeing an issue here. Might want to open a ticket and submit a troubleshooter and see if Christopher and Alex can see a problem. It doesn't seem to be universal for this release.

