Everything posted by Jellepepe

  1. Seems I was correct: existing files are untouched and new files are stored according to the new system. Hopefully this keeps working, as it saves a lot of moving around.
  2. So I just mounted a 16TB drive with around 12TB used (which has the issue) on .1314. The 'upgrading' screen took less than 5 seconds, and the drive mounted perfectly within a few seconds. It has now been 'cleaning up' for about 20 minutes, and it is editing and deleting some files as expected, but I'm a little worried it did not do any actual upgrading. I noticed in the web interface that all it did was rename the chunk folder to '<id>-CONTENT-HIERARCHICAL' and create a new '<id>-CONTENT-HIERARCHICAL-P' folder, but this new folder is empty and no files are being moved/have been moved. My guess is the new system no longer moves existing files and instead just makes sure any new changes are put in new folders? That does make sense, as it would avoid having to move a shit ton of files, and it avoids the issue (which seems to only affect writing new files, not existing files). Once the status is green and it has definitely stopped doing anything, I will test whether the drive is now functional on my own API keys...
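     If that guess is right, the split would look something like this. A minimal sketch, purely speculation on my part: the two folder names are from the post above, but the function, the bucket size, and the chunk naming are all assumptions, not confirmed CloudDrive behavior:

       # Speculative sketch: old chunks stay in the renamed flat folder, new chunks
       # get bucketed into subfolders of the '-P' folder so no single directory
       # ever approaches the 500,000-children limit. Bucket size is assumed.
       CHUNKS_PER_DIR = 100_000

       def chunk_path(drive_id: str, chunk_index: int, pre_upgrade: bool) -> str:
           if pre_upgrade:
               # Existing chunks are left untouched in the renamed flat folder.
               return f"{drive_id}-CONTENT-HIERARCHICAL/{chunk_index:010d}"
           bucket = chunk_index // CHUNKS_PER_DIR
           return f"{drive_id}-CONTENT-HIERARCHICAL-P/{bucket:04d}/{chunk_index:010d}"

       print(chunk_path("abc123", 1_234_567, pre_upgrade=False))
       # -> abc123-CONTENT-HIERARCHICAL-P/0012/0001234567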
  3. I had the wisdom 2 years ago, when one of the SSDs in my server corrupted and took almost a full 50TB of data in a CloudDrive with it, to mirror all my data over multiple CloudDrives and multiple Google accounts. I am very happy with my past self at this stage. I will start a staged rollout of this process on my drives and keep you updated if I find any issues.
  4. Why do you say that? I am many times over the limit and never had any issues until a few weeks ago. I was also unable to find any documentation, or even a mention of it, when I first encountered the issue, and I even contacted Google support, who were totally unaware of any such limits. Fast forward 2 weeks, and more people start having the same 'issue' and suddenly we can find documentation on it from Google's side. I first encountered the issue on June 4th, I believe, and I have found no mention of anyone having it before then anywhere; it was also the first time the devs had heard of it (I created a ticket on the 6th). It seems like it was a gradual rollout, as it initially happened on only one of my accounts and gradually spread, with others also reporting the same issue. It would be possible, but there is a 750GB/day upload limit, so even moving a (smallish) 50TB drive would take well over 2 months. Currently, when moving the chunks, the only bottleneck (should be) the API request limit, which corresponds to a lot more data per day. That said, it is possible to do this manually by disabling the migration for large drives in settings, creating a new drive on the newer version, and starting to copy.
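     For reference, the 'well over 2 months' figure follows directly from the upload cap; a quick back-of-the-envelope (ignoring API overhead and retries):

       # Days needed to re-upload a 50TB drive under Google's 750GB/day cap.
       drive_size_gb = 50 * 1000        # 50TB in GB
       daily_cap_gb = 750
       print(drive_size_gb / daily_cap_gb)  # ~66.7 days, i.e. well over 2 months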
  5. This indeed sounds like a bug; the update was probably rushed quite a bit, and I doubt it was tested extensively. Be sure to submit a support ticket so the developers are aware and can work on resolving all the issues.
  6. Just to clarify for everyone here, since there seems to be a lot of uncertainty:
     - The issue (thus far) is only apparent when using your own API key.
     - However, we have confirmed that the CloudDrive keys are the exception, rather than the other way around; the web client, for instance, has the same limitation.
     - Previous versions (also) do not conform with this limit (and go WELL over the 500k limit).
     - Yes, there has definitely been a change on Google's side that implemented this new limitation.
     Although there may be issues with the current beta (it is a beta after all), it is still important to convert your drives sooner rather than later. Here's why:
     - Currently all access (API or web/Google's own app) respects the new limits, except for the CloudDrive keys (probably because they are verified).
     - Since there was no announcement from Google that this change was happening, we can expect no announcement if (when) the CloudDrive keys stop working either.
     - It may or may not be possible to (easily) convert existing drives once writing is completely impossible (if no API keys work).
     If you don't have issues now, you don't have to upgrade; you can wait for a proper release instead, but do be aware there is a certain risk associated. I hope this helps clear up some of the confusion!
  7. I also doubt this is related to this issue. I have well over 10TB used and I have never had an 'internal error'. The only errors I have seen are 'user rate limit exceeded' (when exceeding the upload or download limits) and, since about 2 weeks ago, this 'exceeded maximum number of children' error.
  8. This issue appeared for me over 2 weeks ago (yay me) and it seems to be a gradual rollout. The 500,000-item limit does make sense, as CloudDrive stores your files in (up to) 20MB chunks on the provider, so the issue should appear somewhere around the 8-10TB mark depending on file duplication settings. In this case the error actually says 'NonRoot', so they mean any folder, which apparently can now only have a maximum of 500,000 children. I've actually been in contact with Christopher for over 2 weeks about this issue; he has informed Alex, and they have confirmed they are working on a solution. (Other providers already had such limits, so it will likely revolve around storing chunks in subfolders with ~100,000 chunks per folder, or something similar.) It is very interesting that you mention reverting to the non-personal API keys resolved it for you, which does suggest it may indeed depend on some (secret) API-level limit. Though that would not explain why adding something to the folder manually also fails... @JulesTop have you tried uploading manually since the issue 'disappeared'? If that still fails, it would confirm that the CloudDrive API keys are what is 'overriding' the limit, and that any personal access, including web, is limited. Either way, hopefully there is a definitive fix coming soon; perhaps until then using the default keys is an option.
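     The 8-10TB estimate is just the chunk math; a quick check (assuming every chunk is a full 20MB):

       # 500,000 children per folder, at (up to) 20MB per chunk.
       max_children = 500_000
       chunk_mb = 20
       print(max_children * chunk_mb / 1_000_000)  # 10.0 TB of chunks per folder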
  9. So I saw this notice, and I am curious to learn what it means. From what I understand, a duplicate part could not help when there is a write issue? I'm probably misunderstanding something, and the drive is fine otherwise; I am just curious what it is actually doing.
  10. Sounds to me like you may just have a failing drive in your system, or a really shitty network connection causing corruption. A drive should definitely not introduce corruption under normal circumstances without something affecting the data.
  11. I do not believe it is currently possible to use two different API key sets for separate drives within one CloudDrive installation. Back up the data; Google Drive cannot provide the amount of data-integrity assurance you seem to be looking for, and it should never be the only place you store any data you mind losing.
  12. Just set up your downloader to download to a local drive and move the completed downloads to the CloudDrive..?
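     A minimal sketch of that kind of post-download move step, assuming hypothetical paths (D:\downloads\complete for the downloader's output, X:\media on the mounted CloudDrive):

       # Move finished downloads from a local drive onto the CloudDrive mount.
       # Both paths are placeholders; point them at your own folders.
       import shutil
       from pathlib import Path

       SRC = Path(r"D:\downloads\complete")
       DST = Path(r"X:\media")
       DST.mkdir(parents=True, exist_ok=True)

       for item in SRC.iterdir():
           # shutil.move copies across drives, then removes the source.
           shutil.move(str(item), str(DST / item.name))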
  13. Hi, if you (like me) are struggling to mount a CloudDrive on your system due to cache drive limitations, this might be a (workaround) solution:
  14. Hi everyone,
      After dreading that I would have to add an additional drive to my server to be able to mount my CloudDrives (which would cost me a lot, as the server is not local to me), I tried a lot of possible workarounds. While I understand the reasoning for not allowing certain drives to be pooled, and I do not recommend doing this unless you understand the risks, I am happy to report it definitely works, with no issues in my testing.
      So as a TL;DR: you can create a VHD(X) file on a dynamic volume, mount it, then create a storage space with the mounted VHD(X) file, which can then be pooled and/or used as CloudDrive cache. The performance decrease is quite minor, as you can see from some quick testing on an old 840 EVO drive; [LEFT: local disk mounted directly, RIGHT: mounted VHDX -> Storage Space -> DrivePool]
      Step-by-step, doing this is easy, and it only takes a few steps:
      1. Create the VHD(X) file. If you do not have one yet, you can make it from Disk Management: Action -> Create VHD. You are then presented with a prompt in which you choose the size and location of the VHD(X). For optimal performance it is recommended to choose 'VHDX' and 'Fixed size'.
      2. Mount the VHD(X) file as a drive, if it hasn't been attached already; this can also be done from Disk Management: Action -> Attach VHD. We don't need to initialize or format the drive, as the next step will undo that anyway.
      3. Almost done! Create a storage space with the mounted VHD(X) drive(s), since the mounted VHD(X) still cannot be used by CloudDrive/DrivePool directly. Open the Storage Spaces configuration screen (search 'storage spaces') and choose 'Create a new pool and storage space'. In the following screen, select the mounted VHD drive ("Attached via VHD"), which should show up under "Unformatted drives"; be careful not to select a different drive, as this will wipe all its data! In the wizard you can now select the size, parity, and format settings; in this example I am using 'Simple', but you can choose other settings if you're using multiple VHDs.
      4. Use the storage space as CloudDrive cache or add it to a DrivePool pool. If everything went well, the new storage space drive should now show up in both CloudDrive and DrivePool, ready to be used!
      And we are done! For some reason, because we created a storage space, we no longer need to manually mount the VHD drives on reboot, so the system keeps working.
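     For reference, steps 1 and 2 can also be scripted with diskpart instead of clicking through Disk Management. A minimal sketch, assuming a placeholder path and size (the storage-space step itself still has to be done in the Storage Spaces UI, which this does not cover):

       # Sketch: create and attach a fixed-size 100GB VHDX via diskpart (run elevated).
       # The file path and size are placeholders, not recommendations.
       import subprocess, tempfile

       script = "\n".join([
           'create vdisk file="D:\\cache.vhdx" maximum=102400 type=fixed',
           'select vdisk file="D:\\cache.vhdx"',
           "attach vdisk",
       ])
       with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
           f.write(script)
       subprocess.run(["diskpart", "/s", f.name], check=True)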
  15. This really is not an intended use case for this software... I guess I don't really understand why you need the extra step of the CloudDrive and cannot just directly download the files from your VPS to your computer? If storage is an issue, you could permanently mount the CloudDrive on the VPS and access the files through there from your main computer without mounting the drive, right? I might be misunderstanding the situation.
  16. Open Task Manager, go to Services and find the service 'CloudDriveService'; it is probably 'Stopped'. Right-click it and choose Start, and that should fix the issue. If you get a message that it is disabled, press 'Open Services' at the bottom of the Task Manager window, then find 'StableBit CloudDrive Service' and open its properties. Here, make sure it is set to 'Automatic' instead of Disabled/Manual, then try to start it again. If that also fails, I'm not sure, but you could look in the service log to see if there are any reported issues. As long as the encryption key (if the drives are encrypted) is stored somewhere safe, you won't lose data. Do make sure not to delete or modify any files in the CloudDrive folders on whichever provider the drives are stored on.
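     The same fix also works from an elevated prompt via sc.exe; a small sketch (the service name 'CloudDriveService' is the one from the post above):

       # Set the service to start automatically, then start it (run elevated).
       import subprocess

       subprocess.run(["sc", "config", "CloudDriveService", "start=", "auto"], check=True)
       subprocess.run(["sc", "start", "CloudDriveService"], check=True)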
  17. (topic: Google Drive) You say the drive disconnects when uploading your files? What is the error you get when this happens? Have you tried running a speed test to see your actual connection speed? This sounds like either your connection isn't actually able to handle 100mbps, or some router/switch is having a hard time. CloudDrive will use as much as it can of the 85mbps limit you set, so if your real-world speeds are 90mbps or lower, the rest of your network will have a hard time keeping a solid connection. I'm currently running 2 100TB volumes from 2 separate Google Drives, mirrored using DrivePool (w/ SSD cache), so I feel like I have some experience, but it will be important to figure out the cause first.
  18. I was away all weekend, so sorry for not posting. I'm not sure; I checked and I'm still running .951, so I doubt it's related. The issues seem to have gone away. I can still see it struggle during the general 7-10pm window, but it's not enough to force a dismount of the drive. This mostly leads me to believe it was some sort of issue at Google, but it does disappoint me that we never managed to figure out exactly what the cause was. I will get back to this if there are any more issues, and if you or anyone else is having a similar issue, please report back as well.
  19. I had the same at that time, but not enough to dismount. It's 22:48 for me now, so past the window in which it dismounted on all previous days. It would seem like today is the first day in over a month that I haven't had any dismounts; I REALLY hope this means the issue was at Google and is now resolved. I also switched to using the latest beta builds around when I posted this thread, and I noticed performance seemingly being much better. I assume something changed in the way I/O works?
  20. Drives just dismounted again (19:42), so the issue definitely isn't gone. EDIT: They only dismounted the one time, so there seems to be an improvement :?
  21. I read that, yes; interesting that that is the only thing that changed. I'm running Server 2012 R2 myself and have had no such issues, so I doubt it's the same. Hopefully it was an issue at Google and is resolved now, but we'll see in a few hours, I suppose. I followed up on my support ticket with a summary of this thread 2 days ago; I haven't had any response since.
  22. They are under a pretty constant load; I'm using StableBit DrivePool to mirror one to the other, so one drive is downloading about 690GB/day and the other is uploading that again (different domains, different API). The only other thing accessing the drives is Radarr/Sonarr, which are only adding around 10-20GB a day (to both drives). I was already having the issue before I added the second drive and started doing this, so it's not the cause.
      And due to this constant load I can see exactly when it stops working; there are no jumps in load right before or anything, at least not that I can see. The speed for every thread just drops to about 150kbps each, then after about a minute, maybe two, CloudDrive decides it can't get a decent connection and dismounts the drive. If I try to remount right away (if I see it happen), the same issue is still there, but if I wait a little longer it will start working again. Yesterday this was only a few minutes before it worked again, but before I've had it take almost half an hour. Load on the drives yesterday compared to the day before was almost exactly identical.
      Mine usually dismount between 7-10pm, but if you're in a different timezone that might match up with my dismounts; it often seems to be around a full hour. From what I am able to tell from Resource Monitor and watching connections, CloudDrive connects through a domain, not an IP, and the IP behind it isn't static, so it would be hard to check. Also, it shows the connection as active, just throttled to 150kbps per thread; I doubt a ping would fail at that point.
      That's good to hear! (for you, at least) I've also had an HTTP 500 returned sometimes (which is what that error is), and it doesn't seem to cause a dismount for me either. I'm not really sure what it means; the translation 'internal server error' is pretty broad, but I don't think it's related, just a random request failing for unrelated reasons. I do have to note that I've only ever had this happen once or twice in a day at most, never more than that. Anyway, do keep us updated on whether the issue remains gone, or if you just got lucky last night! If it remains gone, it would be interesting to try and figure out what exactly changed...
  23. This issue, whatever it was, seems to be resolved. What is interesting is that the time they posted that they became aware of the issue corresponds exactly with a dismount for me. I doubt this was the issue, mainly because I don't want to get my hopes up, but it's interesting nonetheless.
  24. That's really interesting!! I have not received this email on either of my G Suite domains, nor on the backup email addresses. Internally, I think Drive and Docs are not the same, but this might be related nonetheless. Assuming I don't get the email myself, please do give updates on whatever they announce!
  25. Interesting; I did notice the drives were able to reconnect much quicker this time (only a few minutes), so depending on your usage during this time, it might not have disconnected since it wasn't active. (CloudDrive only disconnects when it tries downloading data and fails, so if there's no activity on the drive, it won't disconnect.) I'm running Server 2012 R2 myself and haven't had any issues aside from CloudDrive, so I'm not sure if this is related, but worth making a note of!
      Great to hear; would love to hear back on your setup, and if you run into anything, feel free to ask. Also, drop me a private message for Discord or whatever if you want some more tips; I will try to keep this topic focused on the dismounting issues. Again, since you seemingly didn't remount the drive (or there was no activity) during the rest of the 'trouble zone', it's hard to tell if it's the exact same issue, but it does seem very suspect that it's always this general 7-10pm timeframe.
      ----------------------------------------------------------------------------------
      My drives have been under a pretty high load due to DrivePool duplicating all the time, so my graph very clearly shows when the connection drops(?). It's an interesting trend that the issue is much less apparent than it was yesterday & the day before; hopefully this continues and the issue is completely gone by the end of the week.