Just to clarify for everyone here, since there seems to be a lot of uncertainty:
The issue (thus far) only appears when using your own API key
However, we have confirmed that the CloudDrive keys are the exception rather than the rule; the web client, for instance, is subject to the same limitation
Previous versions also do not conform to this limit (and go WELL over the 500k limit)
Yes, there has definitely been a change on Google's side that introduced this new limitation
Although there may be issues with the current beta (it is a beta, after all), it is still important to convert your drives sooner rather than later. Here's why:
Currently all access (API, web, or Google's own app) respects the new limits, except for the CloudDrive keys (probably because they are verified)
Since there was no announcement from Google that this change was coming, we can expect no announcement if (or when) the CloudDrive keys stop working either
It may or may not be possible to (easily) convert existing drives once writing becomes completely impossible (i.e. if no API keys work at all)
If you don't have issues now, you don't have to upgrade; you can wait for a proper release instead, but do be aware that there is a certain risk in doing so.
I hope this helps clear up some of the confusion!
They are not comparable products. Both applications are more similar to the popular rClone solution for Linux: file-based solutions that effectively act as frontends for Google's API. They do not support in-place modification of data; you must download and re-upload an entire file just to change a single byte. They also do not have access to genuine file system data, because they do not use a genuine drive image, they simply emulate one at some level. All of the above is why you do not need to create a drive beyond mounting your cloud storage with those applications. CloudDrive's solution and implementation is closer to a virtual machine's, in that it stores an image of the disk on your storage space.
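To make the distinction concrete, here is a minimal, purely illustrative sketch (not CloudDrive's actual code; the chunk size and class are hypothetical) of why a chunk-based disk image only needs to re-upload the small chunks a write touches, while a file-based frontend must re-upload the whole file:

```python
# Illustrative sketch: chunk-based storage vs. file-based re-upload.
# CHUNK_SIZE and ChunkedImage are hypothetical, for demonstration only.

CHUNK_SIZE = 1024 * 1024  # 1 MiB chunks (assumed size)

class ChunkedImage:
    """Emulates a disk image stored as fixed-size chunks on a provider."""

    def __init__(self, size: int):
        self.size = size
        self.dirty = set()  # indices of chunks that must be re-uploaded

    def write(self, offset: int, data: bytes):
        # Only the chunks overlapping the written range are marked dirty.
        first = offset // CHUNK_SIZE
        last = (offset + len(data) - 1) // CHUNK_SIZE
        self.dirty.update(range(first, last + 1))

img = ChunkedImage(size=100 * 1024**4)           # a 100 TiB drive image
img.write(offset=5 * CHUNK_SIZE + 17, data=b"\x00")  # change a single byte
print(len(img.dirty))  # 1 -> only one 1 MiB chunk needs re-uploading
# A file-based frontend would re-upload the entire file containing that byte.
```

The point of the design: upload cost scales with the size of the change, not the size of the file, which is what makes random writes to huge files practical at all.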
None of this really has anything to do with this thread, but since it needs to be said (again):
CloudDrive functions exactly as advertised, and it's certainly plenty secure. But, like all cloud solutions, it is vulnerable to modifications of data at the provider; security and reliability are two different things. And, in some cases, it is more vulnerable, because some of the data on your provider is the file system data for the drive itself. Google's service disruptions back in March caused it to return stale revisions of the chunks containing that file system data (read: revisions that had since been superseded by newer uploads). This probably happened because Google had to roll back some of their storage for one reason or another; we don't really know, and this is completely undocumented behavior on Google's part. Those chunks were cryptographically signed as authentic CloudDrive chunks, which means they passed CloudDrive's verifications, but they were old revisions, and they corrupted the file system.
This is not a problem that would be unique to CloudDrive, but it is a problem that CloudDrive is uniquely sensitive to. Those other applications you mentioned do not store file system data on your provider at all. It is entirely possible that Google reverted files from those applications during their outage, but it would not have resulted in a corrupt drive, it would simply have erased any changes made to those particular files since the stale revisions were uploaded. Since those applications are also not constantly accessing said data like CloudDrive is, it's entirely possible that some portion of the storage of their users is, in fact, corrupted, but nobody would even notice until they tried to access it. And, with 100TB or more, that could be a very long time--if ever.
Note that while some people, including myself, had volumes corrupted by Google's outage, none of the actual file data was lost any more than it would have been with another application. All of the data was accessible (and recoverable) with volume repair tools like TestDisk and Recuva. In my case it simply wasn't worth the effort to repair the volumes rather than discard the data and rebuild, because the data was expendable. Genuinely irreplaceable data could have been recovered, though, so it isn't even really accurate to call this data loss.
This is not a problem with a solution that can be implemented on the software side. At least not without throwing out CloudDrive's intended functionality wholesale and making it operate exactly like the dozen or so other Google API frontends already on the market, or storing an exact local mirror of all of your data on an array of physical drives; in which case, what's the point? It is, frankly, a problem that we will hopefully never have to deal with again, presuming Google has learned their own lessons from their service failure. But it's still a teachable lesson in the sense that any data stored on a provider is at the mercy of that provider's functionality, and there isn't anything to be done about that. So your options are to either a) only store data that you can afford to lose, or b) take steps to back up your data to account for losses at the provider. There isn't anything CloudDrive can do to account for that for you. They've taken some steps to add additional redundancy to the file system data and to track checksum values in a local database in order to detect a provider that returns authentic but stale data, but there is no guarantee that either of those measures will actually prevent corruption from a similar outage in the future, and nobody should operate on the assumption that they will.
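A minimal sketch of the idea above (an assumed design, not CloudDrive's actual implementation; the key, helpers, and chunk IDs are hypothetical): a cryptographic signature proves a chunk is *authentic*, but not that it is *current*, which is exactly why a rolled-back chunk can pass verification. A local checksum database of the last revision you uploaded can catch that case:

```python
# Sketch: why "cryptographically authentic" does not mean "fresh", and how
# a local checksum database can detect an authentic-but-stale chunk.
# KEY, sign(), upload(), and verify() are illustrative assumptions.
import hashlib
import hmac

KEY = b"drive-secret"  # hypothetical per-drive signing key

def sign(chunk: bytes) -> bytes:
    return hmac.new(KEY, chunk, hashlib.sha256).digest()

local_db = {}  # chunk_id -> checksum of the last revision we uploaded

def upload(chunk_id: str, chunk: bytes):
    local_db[chunk_id] = hashlib.sha256(chunk).hexdigest()
    return chunk, sign(chunk)

def verify(chunk_id: str, chunk: bytes, sig: bytes) -> str:
    if not hmac.compare_digest(sign(chunk), sig):
        return "forged"  # fails the authenticity check entirely
    if hashlib.sha256(chunk).hexdigest() != local_db[chunk_id]:
        return "authentic but stale"  # old revision rolled back by the provider
    return "ok"

old_rev = upload("fs-chunk-0", b"filesystem v1")  # earlier upload
new_rev = upload("fs-chunk-0", b"filesystem v2")  # later upload
print(verify("fs-chunk-0", *new_rev))  # ok
print(verify("fs-chunk-0", *old_rev))  # authentic but stale
```

Note the caveat from the paragraph above still applies: this detects the rollback after the fact, it does not prevent the provider from serving the stale data in the first place.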
The size of the drive is certainly irrelevant to CloudDrive and its operation, but it seems to be relevant to the users who are devastated about their losses. If you choose to store 100+ TB of data that you consider irreplaceable on cloud storage, that is a poor decision. Not because of CloudDrive, but because that's a lot of ostensibly important data to trust to something that is fundamentally and unavoidably unreliable. Conversely, if you can accept some level of risk in order to store hundreds of terabytes of expendable data at an extremely low cost, then this seems like a great way to do it. But it's up to each individual user to determine what functionality/risk tradeoff they're willing to accept for some arbitrary amount of data. If you want to mitigate volume corruption, you can do so with something like rClone, at a functionality cost. If you want the additional functionality, CloudDrive is here as well, at the cost of some degree of risk. But either way, your data will still be at the mercy of your provider, and neither you nor your application of choice have any control over that.
If Google decided to pull all developer APIs tomorrow, or to shut down Drive completely as Amazon did a year or two ago, your data would be gone and you couldn't do anything about it. And that is a risk you will have to accept if you want cheap cloud storage.
I was away all weekend, so sorry for not posting. I'm not sure; I checked and I'm still running .951, so I doubt it's related.
The issues seem to have gone away. I can see it struggle a bit during the general 7-10pm window, but it's not enough to force a dismount of the drive.
This mostly leads me to believe it was some sort of issue at Google, but it is disappointing that we never managed to figure out exactly what the cause was.
I will get back to this thread if there are any more issues, and if you or anyone else is having a similar issue, please do the same
I had the same at that time, but not enough to dismount. It's 22:48 for me now, so past the window in which it dismounted on all previous days.
It would seem that today is the first day in over a month that I haven't had any dismounts. I REALLY hope this means the issue was at Google and is now resolved.
I also switched to the latest beta builds around when I posted this thread, and I noticed performance seemingly being much better. I assume something changed in the way I/O works?
Thanks for responding. I seem to have fixed it by force-attaching the drive to a different PC (I was unable to detach it too), then reinstalling CloudDrive and reattaching it to the old PC. I've turned it down to 6 upload threads and 20 download threads now, which still seems to fill my bandwidth (33 Mbps up, 330 down).
It seems to have somehow crashed during authorization, in such a way that I was unable to reauthorize it.
If it stops working again, I will definitely report back