Covecube Inc.

Jellepepe

Members
  • Content Count: 40
  • Joined
  • Last visited
  • Days Won: 3

Jellepepe last won the day on June 27, 2020

Jellepepe had the most liked content!

About Jellepepe

  • Rank: Advanced Member

Recent Profile Visitors

498 profile views
  1. Seems I was correct: existing files are untouched and new files are stored according to the new system. Hopefully this keeps working, as it saves a lot of moving around.
  2. So I just mounted a 16TB drive with around 12TB used (which has the issue) on .1314. The 'upgrading' screen took less than 5 seconds, and the drive mounted perfectly within a few seconds. It has now been 'cleaning up' for about 20 minutes, and it is editing and deleting some files as expected, but I'm a little worried it did not do any actual upgrading. All I noticed in the web interface is that it renamed the chunk folder to '<id>-CONTENT-HIERARCHICAL' and created a new '<id>-CONTENT-HIERARCHICAL-P' folder, but this new folder is empty and no files are being moved or have been. (See the hierarchical-layout sketch below this list for what the rename suggests.)
  3. Two years ago, when one of the SSDs in my server corrupted and took almost a full 50TB of data in a CloudDrive with it, I had the wisdom to mirror all my data across multiple CloudDrives and multiple Google accounts. I am very happy with my past self at this stage. I will start a staged rollout of this process on my drives and keep you updated if I find any issues.
  4. Why do you say that? I am many times over the limit and never had any issues until a few weeks ago. I was also unable to find any documentation or even a mention of it when I first encountered the issue, and I even contacted Google support, who were totally unaware of any such limit. Fast forward 2 weeks, more people start having the same 'issue', and suddenly we can find documentation on it from Google's side. I first encountered the issue on June 4th, I believe, and I have found no mention of anyone having it before then anywhere; it was also the first time…
  5. This indeed sounds like a bug; the update was probably rushed quite a bit, and I doubt it was tested extensively. Be sure to submit a support ticket so the developers are aware and can work on resolving all the issues.
  6. Just to clarify for everyone here, since there seems to be a lot of uncertainty:
     • The issue (thus far) is only apparent when using your own API key (a sketch for checking a folder's child count with your own key is below this list).
     • However, we have confirmed that the CloudDrive keys are the exception, rather than the other way around; for instance, the web client does have the same limitation.
     • Previous versions (also) do not conform to this limit (and go WELL over the 500k limit).
     • Yes, there has definitely been a change on Google's side that implemented this new limitation.
     • Although there may be issues with the current beta (it is a…
  7. I also doubt this is related to this issue. I have well over 10TB used and I have never had an 'internal error'. The only errors I have seen are 'user rate limit exceeded' (when exceeding the upload or download limits) and, since about 2 weeks ago, this 'exceeded maximum number of children' error. (A backoff sketch for the rate-limit errors is below this list.)
  8. This issue appeared for me over 2 weeks ago (yay me) and it seems to be a gradual rollout. The 500,000-item limit does make sense: CloudDrive stores your files in (up to) 20MB chunks on the provider, so the issue should appear somewhere around the 8-10TB mark, depending on file duplication settings (a quick check of these numbers is below this list). In this case the error actually says 'NonRoot', so they mean any folder, which apparently can now only have a maximum of 500,000 children. I've actually been in contact with Christopher for over 2 weeks about this issue, and he has informed Alex of the issue and they…
  9. So I saw this notice, and I am curious to learn what it means. From what I understand, a duplicate part could not help when there is a write issue? I'm probably misunderstanding something, and the drive is fine otherwise; I am just curious what it is actually doing.
  10. Sounds to me like you may just have a failing drive in your system or a really shitty network connection causing corruption. A drive definitely should not normally introduce corruption without something affecting the data.
  11. I do not believe it is currently possible to use two different API keysets for separate drives within one CloudDrive installation. Back up the data; Google Drive cannot provide the amount of data-integrity assurance you seem to be looking for and should never be the only place you store any data you mind losing.
  12. Just set up your downloader to download to a local drive and move the completed downloads to the cloud drive? (A rough sketch of such a sweep is below this list.)
  13. Hi, if you (like me) are struggling to mount a CloudDrive on your system due to cache drive limitations, this might be a workaround:
  14. Hi everyone, after dreading that I would have to add an additional drive to my server to be able to mount my CloudDrives (which would cost me a lot, as the server is not local to me), I tried a lot of possible workarounds. While I understand the reasoning for not allowing certain drives to be pooled, and I do not recommend doing this unless you understand the risks, I am happy to report that it definitely works, with no issues in my testing. So as a TL;DR: you can create a VHD(X) file on a dynamic volume, mount that, then create a storage space with this mounted VHD(X) file, which can then be used as the cache drive. (A scripted sketch of the VHD(X) step is below this list.)
  15. This really is not an intended use case for this software... I guess I don't really understand why you need the extra step of the CloudDrive and cannot just directly download the files from your VPS to your computer. If storage is an issue, you could permanently mount the CloudDrive on the VPS and access the files through there from your main computer without mounting the drive, right? I might be misunderstanding the situation.
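Regarding item 2: the rename to '<id>-CONTENT-HIERARCHICAL' suggests the new system shards chunks into nested subfolders so that no single folder approaches Google's 500,000-children limit. CloudDrive's actual layout is not documented in these posts, so the following is only a hypothetical sketch of the general idea; the fanout value and path scheme are my own assumptions.

    # Hypothetical sketch only: shard a flat chunk number into nested
    # subfolders so no folder holds more than `fanout` children. The real
    # CloudDrive layout is undocumented here; names and fanout are invented.
    def hierarchical_path(chunk_id: int, fanout: int = 1000) -> str:
        top = chunk_id // (fanout * fanout)  # outermost bucket
        mid = (chunk_id // fanout) % fanout  # middle bucket
        return f"{top}/{mid}/chunk-{chunk_id}"

    # Chunk 1,234,567 lands in "1/234/chunk-1234567", so each folder holds
    # at most `fanout` entries instead of millions in one flat folder.
    print(hierarchical_path(1_234_567))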
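Regarding item 6: if you use your own API key and want to see how close a folder is to the 500,000-children limit, you can count its children with the Drive v3 files.list endpoint. A minimal sketch, assuming google-api-python-client is installed; `creds` (already-loaded OAuth credentials) and the folder ID are placeholders.

    # Minimal sketch: count the children of one Drive folder via the v3 API.
    # Assumes google-api-python-client and valid OAuth credentials (`creds`).
    from googleapiclient.discovery import build

    def count_children(creds, folder_id: str) -> int:
        service = build("drive", "v3", credentials=creds)
        total, token = 0, None
        while True:
            resp = service.files().list(
                q=f"'{folder_id}' in parents and trashed = false",
                pageSize=1000,  # maximum page size for files.list
                fields="nextPageToken, files(id)",
                pageToken=token,
            ).execute()
            total += len(resp.get("files", []))
            token = resp.get("nextPageToken")
            if token is None:
                return total  # compare this against the 500,000 ceiling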
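Regarding item 7: 'user rate limit exceeded' is a retryable 403 from the Drive API, and the standard remedy is exponential backoff. A sketch of that pattern; the `request` argument is whatever unexecuted Drive API call you want to retry, and for simplicity the sketch assumes any 403 is a rate-limit response.

    # Sketch: retry an unexecuted Drive API request with exponential backoff
    # when Google returns a 403 (assumed here to mean rate limiting).
    import random
    import time
    from googleapiclient.errors import HttpError

    def execute_with_backoff(request, max_tries: int = 6):
        for attempt in range(max_tries):
            try:
                return request.execute()
            except HttpError as err:
                if err.resp.status != 403:
                    raise  # not rate limiting; surface the real error
                time.sleep(2 ** attempt + random.random())  # wait, retry
        raise RuntimeError("still rate limited after retries")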
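Regarding item 8: the 8-10TB estimate falls straight out of the chunk math; 500,000 chunks of up to 20MB each is about 10TB in one folder, and duplicated chunks count toward the same 500,000 children, which pushes the practical threshold lower. A quick check:

    # Quick check of the numbers in item 8.
    chunk_mb = 20                    # CloudDrive chunks are up to 20MB each
    max_children = 500_000           # new per-folder limit on Google's side
    print(chunk_mb * max_children / 1_000_000)  # -> 10.0 (TB of chunks)
    # Duplicated chunks occupy children of the same folder, so with
    # duplication enabled the drive hits the limit somewhat below 10TB.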
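Regarding item 12: the idea is to let the downloader write to a local disk and periodically sweep completed files onto the mounted cloud drive. A rough sketch; both paths are placeholders, and you would run it on a schedule.

    # Rough sketch of item 12: sweep completed downloads onto the mounted
    # cloud drive. Both paths are placeholders.
    import shutil
    from pathlib import Path

    LOCAL_DONE = Path(r"D:\downloads\complete")  # downloader's output folder
    CLOUD_DRIVE = Path(r"X:\media")              # mounted CloudDrive volume

    for item in LOCAL_DONE.iterdir():
        # shutil.move copies across volumes, then removes the local original
        shutil.move(str(item), str(CLOUD_DRIVE / item.name))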
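Regarding item 14: the first half of the workaround (creating and attaching an expandable VHDX) can be scripted with diskpart; the storage-space step is then done separately in Storage Spaces. A sketch that drives diskpart from Python; the file path and size are placeholders, it needs an elevated prompt, and it is untested scaffolding rather than a supported procedure.

    # Sketch of item 14, step 1: create and attach an expandable VHDX with
    # diskpart so it can then be added to a storage space and used as the
    # CloudDrive cache. Path and size are placeholders; run elevated.
    import subprocess
    import tempfile

    VHDX = r"D:\cache\clouddrive-cache.vhdx"  # placeholder path
    commands = [
        f'create vdisk file="{VHDX}" maximum=512000 type=expandable',  # MB
        f'select vdisk file="{VHDX}"',
        "attach vdisk",
    ]

    # diskpart reads its commands from a script file passed with /s
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write("\n".join(commands) + "\n")
        script = f.name

    subprocess.run(["diskpart", "/s", script], check=True)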