
Panicked Google Drive user: where is the assistance from Covecube?


Historybuff

Question

Hi all.

I have been a happy user of the CloudDrive software for 2 years now. After a 3-month proof of concept, I was happy.

I moved all my data away from local storage and currently have 50 TB stored on Google Drive.

I no longer have 50 TB of local storage and would hate to switch away, as that would take time and cost a lot of money.

 

Where is Covecube in this API issue?

 

Why the silence?

 

Why just blame Google?

 

If all that is needed is to create a private API key, why not just publish an updated version of the guide? Create a few tests that users could perform to validate that everything is OK again.

 

The internet is buzzing, and people are starting to scare others away from this service.

 

And all that is needed is to be a bit proactive!

 


11 answers to this question

Recommended Posts


If you want to contact Christopher, do so here: https://stablebit.com/Contact.

12 hours ago, Historybuff said:

If all that is needed is to create a private API key, why not just publish an updated version of the guide? Create a few tests that users could perform to validate that everything is OK again.

I'm not sure what "updated version of the guide" means, but plenty of us have been using personal keys for a long time. It works fine. We have our own quotas. You can see them in the Google API dashboard. This feature was added over a year ago precisely in anticipation of this very issue.
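If you want a quick way to confirm your own key is the one doing the work, here's a minimal sketch. This is my own illustration, not anything shipped with CloudDrive; it assumes you've installed google-api-python-client and google-auth-oauthlib and downloaded the OAuth client file for your own Cloud Console project as client_secret.json:

```python
# Minimal sketch: make a Drive API request with YOUR OAuth client, then
# watch the call appear under your own project in the Cloud Console
# (APIs & Services -> Dashboard -> Google Drive API).
# Assumes: pip install google-api-python-client google-auth-oauthlib
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]

flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
creds = flow.run_local_server(port=0)  # opens a browser for consent

drive = build("drive", "v3", credentials=creds)
about = drive.about().get(fields="user,storageQuota").execute()
print(about["user"]["emailAddress"], about["storageQuota"])
```

If CloudDrive is configured with the same client ID, its requests count against the same per-project quota, so the traffic graphs in that dashboard are the quickest confirmation that your key, and not the default one, is being used.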

12 hours ago, Historybuff said:

The internet is buzzing, and people are starting to scare others away from this service.

There is a subset of the user base that is using consumer-grade cloud storage for terabytes' worth of data that they do not consider to be expendable. And, honestly, those people should stop using this or any other cloud service. I don't think they were ever the target market, and there is nothing CloudDrive can do to guarantee the service level these people are expecting on the providers that they're using. Google's SLA (https://gsuite.google.com/terms/sla.html) does not even provide for that sort of data integrity on Drive (they do not have an SLA for Drive at all), and I have no idea what anyone thinks CloudDrive can do over and above the SLA of the provider you choose to use. Just a personal opinion, there.

13 hours ago, srcrist said:

If you want to contact Christopher, do so here: https://stablebit.com/Contact. [...] plenty of us have been using personal keys for a long time. It works fine. We have our own quotas. You can see them in the Google API dashboard.

What I wanted was basically a bit more detailed guide.

1. How can I be sure that my drives are using the API key? Is there a test? I cannot find that info under the drives.

2. Could someone show me where I can see the usage in the Google API dashboard?

 

And while I sounded angry, really I was just a bit flustered and worried, and I cannot really see how I could protect myself from losing my data.

Hope someone can help me with a kind of test.

Thanks

 

1 hour ago, kird said:

For those of us who have two Google Drive accounts in different domains in the same program (StablebitCD), how can we specify in the .json file the two API keys generated for each account? Or how should we do it in this case?

I do not believe it is currently possible to use two different API keysets for separate drives within one CloudDrive installation.

 

On 12/18/2019 at 11:09 AM, Historybuff said:

And while I sounded angry, really I was just a bit flustered and worried, and I cannot really see how I could protect myself from losing my data.

Back up the data. Google Drive cannot provide the amount of data integrity assurance you seem to be looking for, and it should never be the only place you store any data you mind losing.

8 hours ago, Jellepepe said:

I do not believe it is currently possible to use two different API keysets for separate drives within one CloudDrive installation.

Back up the data. Google Drive cannot provide the amount of data integrity assurance you seem to be looking for, and it should never be the only place you store any data you mind losing.

The developers will have to think a lot harder than they are doing to provide a solution for clients in my situation, and I'm sure I'm not the only one in this scenario.

Regarding the backup, it doesn't make any sense: for people who have more than 100 TB in their accounts, how do you expect them to keep extra copies spread over other accounts? I assure you that if you don't use SCD and you save your data directly in Google Drive accounts, the integrity of your data is safeguarded. Please, I hope the developers will get their act together and start delivering a program where the security of their clients' data is certified.


You do not need two separate API keys to access multiple drives. And it does not negatively impact security in the least, unless you are using CloudDrive to access someone else's data. API keys are for applications, not accounts.

5 hours ago, kird said:

Regarding the backup, it doesn't make any sense: for people who have more than 100 TB in their accounts, how do you expect them to keep extra copies spread over other accounts?

Perhaps do not store 100 TB of irreplaceable data on a consumer-grade cloud storage account? But, otherwise, yes. Other accounts with redundancy would be a good first step.

5 hours ago, kird said:

I assure you that if you don't use SCD and you save your data directly in Google Drive accounts, the integrity of your data is safeguarded.

I assure you that it is not. Google does not have a data integrity SLA for Drive at all. It simply does not exist. Google does an admirable job of maintaining data integrity, but we've already seen two issues where they lost users' data. It will happen again, and Drive users cannot do anything about it. If you don't have the space to back up your data, and you care about that data, then you're storing too much data. Period. The real question isn't, "how am I supposed to back up 100 TB," it's, "why are you storing 100 TB of data, that you do not consider to be expendable, in the cloud, where you cannot back it up?" That's on you, as the user.

5 hours ago, kird said:

Please, I hope the developers will get their act together and start delivering a program where the security of their clients' data is certified.

There is absolutely nothing--and I mean nothing whatsoever--that the developers of CloudDrive can do to "certify" the integrity and security of your data that they are not already doing. CloudDrive uses end-to-end, enterprise-grade encryption for the data, and has integrity verification built in at multiple points. And yet cloud storage is still cloud storage...and your data is (and will always be) vulnerable to loss by any cloud storage provider that you choose. And there is nothing they can do about that.

If that is not a level of risk that you are comfortable taking on...do not use cloud storage for your data, with CloudDrive or any other similar solution.


Thanks, srcrist, for your knowledge. I respect everything you've said about data protection, but I don't agree at all. I will not argue the point, since it is obvious that data loss incidents are happening only, or almost exclusively, to users who are SCD customers.

On the subject of the API, I don't quite understand the mechanism you say is per application and not per account, given that these are two totally different accounts in different domains, despite both being Google Drive. So simply generating an API key from one of the two accounts and editing the .json with its data would be enough?

3 hours ago, kird said:

Thanks, srcrist, for your knowledge. I respect everything you've said about data protection, but I don't agree at all. I will not argue the point, since it is obvious that data loss incidents are happening only, or almost exclusively, to users who are SCD customers.

 

That is just a passive-aggressive way of arguing that I am wrong, and that the data loss issues are a solvable problem for Covecube. Neither of which is correct. I'm sorry.

The reason that the data loss is experienced on CloudDrive and not other solutions is related to how CloudDrive operates by design. It is a consequence of CloudDrive storing blocks of an actual disk image with a fully functional file system and, as such, being more sensitive to revisions than something like rClone, which simply uploads whole files. This has been explained multiple times by Christopher and Alex, and it makes perfect sense if you understand both how a file system operates and how CloudDrive is intended to operate as a product. If anyone is not able to accept the additional sensitivities of a block-based cloud storage solution then, again, simply do not use it. rClone or something similar may very well better fit your needs. I don't think Covecube were ever intending this product to serve users who want to use it to store abusive amounts of media on consumer-grade cloud storage. It works for that purpose, but it is not the intended function. And removing the functionality that is responsible for these sensitivities also eliminates the intended functionality of a block-based solution, namely in-place read and write modifiability of cloud data. And CloudDrive is, to my knowledge, still the only product on the market with such capability.
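To make the distinction concrete, here is a toy sketch. It is purely illustrative and is not CloudDrive's actual chunk format:

```python
# Toy illustration (NOT CloudDrive's actual format): a block-based drive
# stores a disk image as fixed-size chunks, and file system metadata
# lives inside some of those chunks. Rolling back one metadata chunk can
# corrupt the whole volume; rolling back a whole-file upload merely
# loses recent edits to that one file.

CHUNK_SIZE = 10 * 1024 * 1024  # hypothetical 10 MB chunks

def split_disk_image(image: bytes) -> list[bytes]:
    """Slice a raw disk image into fixed-size chunks for upload."""
    return [image[i:i + CHUNK_SIZE] for i in range(0, len(image), CHUNK_SIZE)]

# Chunk 0 of an NTFS image holds the boot sector and points at the $MFT.
# Serve a stale revision of that one chunk and every file on the volume
# may become unreachable, even though the file contents are intact.
```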

But I would never use any other cloud solution for hundreds of TB of irreplaceable data either. There is simply no way that is an intelligent solution, and anyone who is doing it is, frankly, begging for inevitable catastrophe.

3 hours ago, kird said:

On the subject of the API, I don't quite understand the mechanism you say is per application and not per account, given that these are two totally different accounts in different domains, despite both being Google Drive. So simply generating an API key from one of the two accounts and editing the .json with its data would be enough?

As was explained in the other thread, an API key does not grant access to the data on a given account. It is a key for an application to request data from Google's services. A single API key can request data from any account that authorizes said access, as evidenced by the fact that Covecube's default API key, which was obviously created from the developer's Google account, can access the data on your Google Drive. You can use an API key that is, in fact, requested by an account completely unrelated to any account that any CloudDrive data is actually stored on.
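Here is a sketch of that point using the standard google-auth-oauthlib package. This illustrates the OAuth model in general, not CloudDrive's internal code:

```python
# One OAuth client (one "API key") authorized by two unrelated accounts.
# Assumes: pip install google-auth-oauthlib, plus a client_secret.json
# from a single Cloud Console project -- which can belong to an account
# that stores no data at all.
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ["https://www.googleapis.com/auth/drive"]

def authorize():
    """Run the consent flow; sign in with whichever Google account you like."""
    flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
    return flow.run_local_server(port=0)

creds_account_a = authorize()  # log in with the first account/domain
creds_account_b = authorize()  # log in with the second account/domain
# Same client ID both times; each account merely granted it access.
```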

It should be noted that Alex again removed Google Drive from the experimental providers in .1425, though, as it appears that Google approved their quota limit expansion after some delay. So all of this is moot, if you don't want to change the key. 


RaiDrive and NetDrive are two programs that behave similarly to SCD, to my knowledge. I don't know whether they differ in their deeper workings. I suppose those of us with unlimited Google Drive accounts subscribed precisely intending to use them, among other things, as data storage of different sorts, and SCD is a program that claims full support for working in the cloud in a secure manner regardless of whether you plan to store 1 MB or however many TB; if that were not the case, they shouldn't advertise it as such. That said, I appreciate your expert explanations.

12 hours ago, kird said:

RaiDrive and NetDrive are two programs that behave similarly to SCD, to my knowledge. I don't know whether they differ in their deeper workings.

They are not comparable products. Both applications are more similar to the popular rClone solution for linux. They are file-based solutions that effectively act as frontends for Google's API. They do not support in-place modification of data. You must download and reupload an entire file just to change a single byte. They also do not have access to genuine file system data because they do not use a genuine drive image, they simply emulate one at some level. All of the above is why you do not need to create a drive beyond mounting your cloud storage with those applications. CloudDrive's solution and implementation is more similar to a virtual machine, wherein it stores an image of the disk on your storage space.
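A back-of-the-envelope comparison of what that means for a small edit (the numbers below are hypothetical; CloudDrive's real chunk size is configurable):

```python
# Hypothetical numbers: change ONE byte of a 50 GB file stored in the cloud.
FILE_SIZE = 50 * 1024**3   # a 50 GB file
CHUNK_SIZE = 10 * 1024**2  # assume 10 MB storage chunks

file_based_reupload = FILE_SIZE    # rClone/NetDrive-style: rewrite the file
block_based_reupload = CHUNK_SIZE  # CloudDrive-style: rewrite one chunk

print(f"file-based : {file_based_reupload / 1024**2:>9,.0f} MB re-uploaded")
print(f"block-based: {block_based_reupload / 1024**2:>9,.0f} MB re-uploaded")
# file-based :    51,200 MB re-uploaded
# block-based:        10 MB re-uploaded
```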

12 hours ago, kird said:

as data storage of different sorts, and SCD is a program that claims full support for working in the cloud in a secure manner regardless of whether you plan to store 1 MB or however many TB

None of this really has anything to do with this thread, but since it needs to be said (again):

CloudDrive functions exactly as advertised, and it's certainly plenty secure. But it, like all cloud solutions, is vulnerable to modifications of data at the provider. Security and reliability are two different things. And, in some cases, is more vulnerable because some of that data on your provider is the file system data for the drive. Google's service disruptions back in March caused it to return revisions of the chunks containing the file system data that were stale (read: had been updated since the revision that was returned). This probably happened because Google had to roll back some of their storage for one reason or another. We don't really know. This is completely undocumented behavior on Google's part. These pieces were cryptographically signed as authentic CloudDrive chunks, which means they passed CloudDrive verifications, but they were old revisions of the chunks that corrupted the file system.
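To see why "cryptographically signed as authentic" does not imply "current", here is a toy sketch. This is a hypothetical scheme for illustration, not CloudDrive's actual verification code:

```python
# A MAC proves a chunk was written by the right application, but says
# nothing about WHICH revision of the chunk the provider returned.
import hashlib
import hmac

KEY = b"drive-secret"  # hypothetical per-drive key

def sign(chunk: bytes) -> bytes:
    return hmac.new(KEY, chunk, hashlib.sha256).digest()

def verify(chunk: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(chunk), tag)

rev1 = b"file system metadata, revision 1"
rev2 = b"file system metadata, revision 2"  # what was actually uploaded last

# Provider rolls back and serves the OLD revision: verification passes.
assert verify(rev1, sign(rev1))  # authentic... but stale.

# Detecting staleness needs local state -- e.g. a stored checksum of the
# last revision uploaded, which is what the later mitigations attempt.
last_uploaded = hashlib.sha256(rev2).hexdigest()
is_stale = hashlib.sha256(rev1).hexdigest() != last_uploaded
assert is_stale
```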

This is not a problem that would be unique to CloudDrive, but it is a problem that CloudDrive is uniquely sensitive to. Those other applications you mentioned do not store file system data on your provider at all. It is entirely possible that Google reverted files from those applications during their outage, but it would not have resulted in a corrupt drive, it would simply have erased any changes made to those particular files since the stale revisions were uploaded. Since those applications are also not constantly accessing said data like CloudDrive is, it's entirely possible that some portion of the storage of their users is, in fact, corrupted, but nobody would even notice until they tried to access it. And, with 100TB or more, that could be a very long time--if ever. 

Note that while some people, including myself, had volumes corrupted by Google's outage, none of the actual file data was lost any more than it would have been with another application. All of the data was accessible (and recoverable) with volume repair applications like testdisk and recuva. But it simply wasn't worth the effort to rebuild the volumes rather than just discard the data and rebuild, because it was expendable data. But genuinely irreplaceable data could be recovered, so it isn't even really accurate to call it data loss. 

This is not a problem with a solution that can be implemented on the software side. At least not without throwing out CloudDrive's intended functionality wholesale and making it operate exactly like the dozen or so other Google API frontends that are already on the market, or storing an exact local mirror of all of your data on an array of physical drives. In which case, what's the point? It is, frankly, not a problem that we will hopefully ever have to deal with again, presuming Google has learned their own lessons from their service failure. But it's still a teachable lesson in the sense that any data stored on the provider is still at the mercy of the provider's functionality, and there isn't anything to be done about that. So, your options are to either a) only store data that you can afford to lose or b) take steps to back up your data to account for losses at the provider. There isn't anything CloudDrive can do to account for that for you. They've taken some steps to add additional redundancy to the file system data and track checksum values in a local database to detect a provider that returns authentic but stale data, but there is no guarantee that either of those things will actually prevent corruption from a similar outage in the future, and nobody should operate based on the assumption that they will.

The size of the drive is certainly irrelevant to CloudDrive and its operation, but it seems to be relevant to the users who are devastated about their losses. If you choose to store 100+ TB of data that you consider to be irreplaceable on cloud storage, that is a poor decision. Not because of CloudDrive, but because that's a lot of ostensibly important data to trust to something that is fundamentally and unavoidably unreliable. Conversely, if you can accept some level of risk in order to store hundreds of terabytes of expendable data at an extremely low cost, then this seems like a great way to do it. But it's up to each individual user to determine what functionality/risk tradeoff they're willing to accept for some arbitrary amount of data. If you want to mitigate volume corruption then you can do so with something like rClone, at a functionality cost. If you want the additional functionality, CloudDrive is here as well, at the cost of some degree of risk. But either way, your data will still be at the mercy of your provider--and neither you nor your application of choice have any control over that.

If Google decided to pull all developer APIs tomorrow, or to shut down Drive completely like Amazon did a year or two ago, your data would be gone and you couldn't do anything about it. And that is a risk you will have to accept if you want cheap cloud storage.
