srcrist

Members
  • Content Count: 332
  • Joined
  • Last visited
  • Days Won: 23

srcrist last won the day on January 4

srcrist had the most liked content!

About srcrist
  • Rank: Advanced Member

Recent Profile Visitors
559 profile views
  1. If CloudDrive is indicating that your downstream performance is better than you're seeing for the file transfer, my first guess is that it might be drive I/O congestion. Are you, by chance, copying the data to the same drive that you're using as your CloudDrive cache?
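If you want to confirm that, here is a minimal sketch, assuming the psutil package is installed and a hypothetical "PhysicalDrive1" as the disk holding the CloudDrive cache. It samples per-disk I/O counters while the slow transfer is running; heavy simultaneous reads and writes on the cache disk would point to congestion.

```python
# Minimal sketch, assuming the 'psutil' package and that the CloudDrive cache
# lives on the hypothetical "PhysicalDrive1".
import time
import psutil

DISK = "PhysicalDrive1"  # hypothetical: the physical disk holding the cache

before = psutil.disk_io_counters(perdisk=True)[DISK]
time.sleep(10)  # sample window while the slow file transfer is running
after = psutil.disk_io_counters(perdisk=True)[DISK]

read_mb = (after.read_bytes - before.read_bytes) / 1_000_000
write_mb = (after.write_bytes - before.write_bytes) / 1_000_000
print(f"{DISK}: read {read_mb:.1f} MB, wrote {write_mb:.1f} MB in 10 s")
```

If both numbers are large at the same time, the copy and the cache are fighting over the same spindle, and moving one of them to a different disk should help.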
  2. CloudDrive is built on top of the Windows storage infrastructure and will not work with Android. I haven't heard anything about any ports in the works. You could always mount the drive on a Windows machine and share it with your Shield via SMB, though.
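If you go that route, here is a minimal sketch of publishing the mounted volume over SMB with the built-in Windows net share command. The drive letter (X:) and share name are hypothetical, and it needs to run from an elevated session.

```python
# Minimal sketch: share a mounted CloudDrive volume over SMB using the built-in
# Windows "net share" command. Drive letter and share name are hypothetical;
# run from an elevated (administrator) session.
import subprocess

subprocess.run(
    ["net", "share", "CloudDrive=X:\\", "/GRANT:Everyone,FULL"],
    check=True,
)
```

The Shield can then browse to \\<pc-name>\CloudDrive like any other SMB share.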
  3. Some people are lucky enough to get unlimited drives through their work or school, and some people use G Suite accounts, which include unlimited storage with more than 5 users on the domain, or 1TB with fewer than 5... but Google doesn't actually enforce that limit, as far as I know.
  4. They are not comparable products. Both applications are more similar to the popular rClone solution for Linux. They are file-based solutions that effectively act as frontends for Google's API. They do not support in-place modification of data; you must download and re-upload an entire file just to change a single byte. They also do not have access to genuine file system data, because they do not use a genuine drive image; they simply emulate one at some level. All of the above is why you do not need to create a drive beyond mounting your cloud storage with those applications. CloudDrive's solution and implementation is more similar to a virtual machine, wherein it stores an image of the disk on your storage space.
None of this really has anything to do with this thread, but since it needs to be said (again): CloudDrive functions exactly as advertised, and it's certainly plenty secure. But it, like all cloud solutions, is vulnerable to modifications of data at the provider. Security and reliability are two different things. And, in some cases, CloudDrive is more vulnerable because some of the data on your provider is the file system data for the drive.
Google's service disruptions back in March caused it to return revisions of the chunks containing the file system data that were stale (read: they had been updated since the revision that was returned). This probably happened because Google had to roll back some of their storage for one reason or another; we don't really know. This is completely undocumented behavior on Google's part. These pieces were cryptographically signed as authentic CloudDrive chunks, which means they passed CloudDrive's verifications, but they were old revisions of the chunks, and they corrupted the file system.
This is not a problem that would be unique to CloudDrive, but it is a problem that CloudDrive is uniquely sensitive to. Those other applications you mentioned do not store file system data on your provider at all. It is entirely possible that Google reverted files from those applications during their outage, but it would not have resulted in a corrupt drive; it would simply have erased any changes made to those particular files since the stale revisions were uploaded. Since those applications are also not constantly accessing that data the way CloudDrive is, it's entirely possible that some portion of their users' storage is, in fact, corrupted, but nobody would even notice until they tried to access it. And, with 100TB or more, that could be a very long time--if ever.
Note that while some people, including myself, had volumes corrupted by Google's outage, none of the actual file data was lost any more than it would have been with another application. All of the data was accessible (and recoverable) with volume repair applications like TestDisk and Recuva. It simply wasn't worth the effort to rebuild the volumes rather than discard the data and start over, because it was expendable data. Genuinely irreplaceable data could have been recovered, so it isn't even really accurate to call it data loss.
This is not a problem with a solution that can be implemented on the software side, at least not without throwing out CloudDrive's intended functionality wholesale and making it operate exactly like the dozen or so other Google API frontends that are already on the market, or storing an exact local mirror of all of your data on an array of physical drives. In which case, what's the point?
It is, frankly, a problem that we will hopefully never have to deal with again, presuming Google has learned their own lessons from their service failure. But it's still a teachable lesson in the sense that any data stored on the provider is at the mercy of the provider's functionality, and there isn't anything to be done about that. So your options are to either a) only store data that you can afford to lose, or b) take steps to back up your data to account for losses at the provider. There isn't anything CloudDrive can do to account for that for you. They've taken some steps to add additional redundancy to the file system data and to track checksum values in a local database to detect a provider that returns authentic but stale data, but there is no guarantee that either of those things will actually prevent corruption from a similar outage in the future, and nobody should operate on the assumption that they will.
The size of the drive is certainly irrelevant to CloudDrive and its operation, but it seems to be relevant to the users who are devastated about their losses. If you choose to store 100+ TB of data that you consider irreplaceable on cloud storage, that is a poor decision: not because of CloudDrive, but because that's a lot of ostensibly important data to trust to something that is fundamentally and unavoidably unreliable. Conversely, if you can accept some level of risk in order to store hundreds of terabytes of expendable data at an extremely low cost, then this seems like a great way to do it. But it's up to each individual user to determine what functionality/risk tradeoff they're willing to accept for some arbitrary amount of data. If you want to mitigate volume corruption, you can do so with something like rClone, at a functionality cost. If you want the additional functionality, CloudDrive is here as well, at the cost of some degree of risk. Either way, your data will still be at the mercy of your provider--and neither you nor your application of choice has any control over that. If Google decided to pull all developer APIs tomorrow, or to shut down Drive completely like Amazon did a year or two ago, your data would be gone and you couldn't do anything about it. That is a risk you will have to accept if you want cheap cloud storage.
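To make the "authentic but stale" failure mode concrete, here is a purely conceptual sketch, not Covecube's actual implementation; the chunk IDs, revision numbers, and helper names are hypothetical. It shows how tracking revisions in a local database can flag a provider that returns an older, but validly signed, copy of a chunk.

```python
# Conceptual sketch only (not CloudDrive's actual code): remember the newest
# revision uploaded for each chunk, and flag any download whose revision is
# older than what we know we already wrote.
import sqlite3

db = sqlite3.connect("chunk_state.db")  # hypothetical local tracking database
db.execute("CREATE TABLE IF NOT EXISTS chunks (chunk_id TEXT PRIMARY KEY, revision INTEGER)")

def record_upload(chunk_id: str, revision: int) -> None:
    """Record the newest revision we have uploaded for this chunk."""
    db.execute(
        "INSERT INTO chunks (chunk_id, revision) VALUES (?, ?) "
        "ON CONFLICT(chunk_id) DO UPDATE SET revision = excluded.revision",
        (chunk_id, revision),
    )
    db.commit()

def is_stale(chunk_id: str, downloaded_revision: int) -> bool:
    """True if the provider returned an older revision than we last uploaded."""
    row = db.execute("SELECT revision FROM chunks WHERE chunk_id = ?", (chunk_id,)).fetchone()
    return row is not None and downloaded_revision < row[0]

record_upload("chunk-0001", revision=7)
print(is_stale("chunk-0001", downloaded_revision=5))  # True: authentic but stale
```

The point of the sketch is only that staleness can be detected after the fact; it cannot stop the provider from handing back old data in the first place.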
  5. Double-check that it's correctly added to the subpool. I can't see that it is from your screenshots, and that folder should be created as soon as it's added to the pool. If it does look like it's correctly added and the folder still does not exist, I would just remove it and re-add it to the subpool and see if that causes it to be created. Beyond that, you'd have to open an actual support ticket, because I'm not sure why it wouldn't be created when the drive is added.
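If you want to check outside the UI, here is a quick sketch (the drive letter is hypothetical) that lists any hidden PoolPart folders in the root of the drive you added:

```python
# Minimal sketch: confirm that a hidden PoolPart folder exists in the root of
# the drive that was added to the subpool. The drive letter is hypothetical.
from pathlib import Path

drive = Path("E:/")  # hypothetical: the drive you just added
poolparts = [p.name for p in drive.iterdir() if p.is_dir() and p.name.lower().startswith("poolpart")]
print(poolparts if poolparts else "No PoolPart folder found; the drive may not be in the pool yet.")
```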
  6. That is just a passive-aggressive way of arguing that I am wrong, and that the data loss issues are a solvable problem for Covecube. Neither of those things is correct. I'm sorry. The reasons that data loss is experienced on CloudDrive and not other solutions are related to how CloudDrive operates by design. It is a consequence of CloudDrive storing blocks of an actual disk image with a fully functional file system and, as such, being more sensitive to revisions than something like rClone, which simply uploads whole files. This has been explained multiple times by Christopher and Alex, and it makes perfect sense if you understand both how a file system operates and how CloudDrive is intended to operate as a product.
If anyone is not able to accept the additional sensitivities of a block-based cloud storage solution then, again, simply do not use it. rClone or something similar may very well better fit your needs. I don't think Covecube was ever intending this product to serve users who want to store abusive amounts of media on consumer-grade cloud storage. It works for that purpose, but it is not the intended function. And removing the functionality that is responsible for these sensitivities also eliminates the intended functionality of a block-based solution: namely, in-place read and write modifiability of cloud data. CloudDrive is, to my knowledge, still the only product on the market with that capability. But I would never use any other cloud solution for hundreds of TB of irreplaceable data either. There is simply no way that is an intelligent solution, and anyone who is doing it is, frankly, begging for inevitable catastrophe.
As was explained in the other thread, an API key is not a key to access the data on a given account. It is a key for an application to request data from Google's services. A single API key can request data from any account that authorizes said access, as evidenced by the fact that Covecube's default API key, which was obviously created from the developer's Google account, can access the data on your Google Drive. You can use an API key that is requested by an account completely unrelated to any account that CloudDrive data is actually stored on. It should be noted that Alex removed Google Drive from the experimental providers again in .1425, though, as it appears that Google approved their quota limit expansion after some delay. So all of this is moot if you don't want to change the key.
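For anyone following along, here is a purely conceptual sketch of that block-based versus whole-file distinction (not CloudDrive's or rClone's actual code; the chunk size and function names are made up): a block-based drive only touches the chunk(s) covering a modified byte range, while a whole-file frontend has to re-upload the entire file.

```python
# Conceptual sketch only: why a block-based store is sensitive to per-chunk
# revisions while a whole-file frontend is not. Names and sizes are hypothetical.
CHUNK_SIZE = 10 * 1024 * 1024  # a disk image split into 10 MB chunks

def chunks_to_reupload(offset: int, length: int) -> range:
    """Block-based: only the chunks overlapping the modified byte range change."""
    first = offset // CHUNK_SIZE
    last = (offset + length - 1) // CHUNK_SIZE
    return range(first, last + 1)

def files_to_reupload(path: str) -> list:
    """File-based: changing a single byte means re-uploading the whole file."""
    return [path]

print(list(chunks_to_reupload(offset=123_456_789, length=1)))  # one 10 MB chunk
print(files_to_reupload("movie.mkv"))                          # the entire file
```

The upside of the block approach is exactly the in-place modifiability described above; the downside is that the file system itself lives in those chunks, so a stale chunk can hurt far more than a stale file.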
  7. You do not need two separate API keys to access multiple drives. And it does not negatively impact security in the least, unless you are using CloudDrive to access someone else's data. API keys are for applications, not accounts.
Perhaps do not store 100TB of irreplaceable data on a consumer-grade cloud storage account? But, otherwise, yes. Other accounts with redundancy would be a good first step.
I assure you that it is not. Google does not have a data integrity SLA for Drive at all. It simply does not exist. Google does an admirable job of maintaining data integrity, but we've already seen two incidents where they lost users' data. It will happen again, and Drive users cannot do anything about it. If you don't have the space to back up your data, and you care about that data, then you're storing too much data. Period. The real question isn't "how am I supposed to back up 100TB?" It's "why are you storing 100TB of data that you do not consider to be expendable in the cloud, where you cannot back it up?" That's on you, as the user.
There is absolutely nothing--and I mean nothing whatsoever--that the developers of CloudDrive can do to "certify" the integrity and security of your data that they are not already doing. CloudDrive uses end-to-end, enterprise-grade encryption for the data and has integrity verification built in at multiple points. And yet cloud storage is still cloud storage... and your data is (and will always be) vulnerable to loss by any cloud storage provider that you choose. There is nothing they can do about that. If that is not a level of risk that you are comfortable taking on... do not use cloud storage for your data, with CloudDrive or any other similar solution.
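To illustrate the point that API keys are per-application rather than per-account, here is a minimal sketch using Google's google-auth-oauthlib package (the file names are hypothetical, and this is not how CloudDrive itself manages keys): one set of OAuth client credentials can be authorized by any number of Google accounts, each producing its own token.

```python
# Minimal sketch, assuming the 'google-auth-oauthlib' package and a hypothetical
# client_secret.json: the same OAuth client (API key) can be authorized by any
# number of Google accounts; each account simply produces its own token.
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ["https://www.googleapis.com/auth/drive"]

def authorize(token_path: str) -> None:
    """Run the consent flow for whichever Google account signs in."""
    flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
    creds = flow.run_local_server(port=0)  # opens a browser; any account may approve
    with open(token_path, "w") as f:
        f.write(creds.to_json())

# One client_secret.json (one API key), two different accounts:
# authorize("token_account_a.json")
# authorize("token_account_b.json")
```

The key identifies the application making the requests; which account's data it can see is decided by whoever clicks "allow" during that consent flow.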
  8. You want all of your applications pointing at the hybrid pool now. Once the data is correctly moved, it will appear identically to how it appeared before you nested the pool. The structure of the underlying pool is, as always, transparent at the file system level to applications. Your sub-pool(s) do not even need drive letters/mount points, FYI. You can simply give the hybrid pool the localpool mount point, which, in your case, appears to be P:.
To move your data, here is the process. Let's say you have a hybrid pool (O:), consisting of a localpool (P:) which contains drives D:, E:, F:, and G:, and a cloudpool (M:) containing a single cloud drive (we'll just say H:). Right now, if you look at your actual drive file systems, you'll have a poolpart folder containing another poolpart folder. That is, D:, E:, F:, G:, and H: all have a hidden poolpart folder in root containing a second poolpart folder. All of the data on each drive that you want to be accessible to the pool needs to be moved into the second poolpart folder on that drive.
So, right now, for example, you probably have G:\Poolpart-XXXX\Poolpart-YYYY\ and G:\Poolpart-XXXX\<a bunch of other stuff> within your poolpart folder on that drive. All of the <a bunch of other stuff> simply needs to be cut and pasted into the Poolpart-YYYY folder instead of Poolpart-XXXX. It will then be accessible at O:, with a structure identical to how it is presently accessible via P:. Note that Poolpart-XXXX represents localpool (P:) and Poolpart-YYYY represents hybridpool (O:) in this example. Each level of nesting actually represents one pool level above the previous; thus, the master pool of any given hierarchy will be contained in the deepest nested poolpart folder.
You will repeat this move for each individual logical volume you are including in the pool. That is, E:\Poolpart-XXXX\Poolpart-YYYY, D:\Poolpart-XXXX\Poolpart-YYYY, etc. Just move everything on the drive to the corresponding Poolpart-YYYY folder on the same drive. Then restart the service, remeasure the hybrid pool, and it will all be within the pool.
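Here is the same move expressed as a script, purely as a sketch using the hypothetical Poolpart-XXXX/Poolpart-YYYY names from the example above. Check the real folder names on your drives, and stop the StableBit DrivePool service before moving anything.

```python
# Sketch of the move described above, using the hypothetical folder names from
# the example (Poolpart-XXXX = localpool member, Poolpart-YYYY = hybridpool member).
# Stop the DrivePool service first, and adjust the names to your real folders.
import shutil
from pathlib import Path

outer = Path(r"G:\Poolpart-XXXX")   # hypothetical outer poolpart (member of P:)
inner = outer / "Poolpart-YYYY"     # hypothetical inner poolpart (member of O:)

for item in outer.iterdir():
    if item != inner:               # move everything except the nested poolpart itself
        shutil.move(str(item), str(inner / item.name))
```

Run the equivalent on each drive in the pool, then restart the service and remeasure.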
  9. Once you've created the nested pool, you'll need to move all of the existing data into the poolpart hidden folder within the outer poolpart hidden folder before it will be accessible from the pool. It's the same process you'd follow if you added a drive that already had data on it to a non-nested pool: if you want the data to be accessible within the pool, you have to move it into the pool structure. Right now, each drive in your subpool should have a hidden poolpart folder alongside all of the other data on the drive. You need to take all of that other data and simply move it inside the hidden folder. See this older thread for a similar situation: https://community.covecube.com/index.php?/topic/4040-data-now-showing-in-hierarchical-pool/&sortby=date
  10. If you simply log into the Google Cloud API dashboard (https://console.cloud.google.com/apis/dashboard) you should be able to see the usage. Make sure that you're signed in with the Google account that you created your API key with. Not necessarily the account that has your drive data (unless they are the same).
  11. It looks like Google was, uh, less than communicative about the verification requirement changes. I've seen that error from a few other large apps this week, like Cozi, as well. They're really cracking down, and the verification process can apparently cost upwards of $15,000. IFTTT apparently reduced their Gmail support because of it when Google rolled out similar requirements for Gmail auth back in March. The good news for us is that you should be able to get around it by using your own API key with CloudDrive. See the other thread for discussion about doing so.
  12. You can specify which folders and files you want to duplicate with the duplication and balancing options in DrivePool. But a nested DrivePool setup is still what you'd want to use: a pool that contains your existing pool and the cloud storage, with the duplication configured at that level.
  13. Yeah, the only reason I know to do that is that, once upon a time, Amazon Cloud Drive was marked experimental and that was the only way to access it as well. Providers are moved to experimental in order to purposely hide them from general users, either because they are problematic or because there are more technical steps to using them than others.
  14. It sounds like you'd want a nested pool setup. That is, you'll want a pool that contains your existing pool and the CloudDrive (or a pool of cloud drives, if necessary), and then you can enable duplication between the pool and the drive, or the pool and pool of drives (depending on your needs). That will duplicate your entire existing pool to the cloud.
  15. You could use a more traditional file encryption tool to encrypt the files that are on your drive, if you wanted. Though the net effect is that all of the data will need to be downloaded, encrypted, and then uploaded again in its new encrypted form. That's true no matter what method you use. Even if you could, hypothetically, encrypt a CloudDrive drive after creation, it would still need to download each chunk, encrypt it, and then upload it again. There is no way to encrypt the chunks stored on your provider after drive creation, though. You do not need to use the same key. It will prompt you for a key for each drive when you attach the drive.
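If you go the traditional file-encryption route, here is a minimal sketch using the widely used cryptography package. The folder path is hypothetical, and each file is read fully into memory, so it is only illustrative for smaller files; as described above, every byte still has to come down and go back up.

```python
# Minimal sketch, assuming the 'cryptography' package: encrypt each file in a
# hypothetical folder in place. Every file is still read and rewritten in full,
# which on a cloud drive means downloading and re-uploading all of the data.
from pathlib import Path
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this key safe; without it the files are unrecoverable
cipher = Fernet(key)

for path in Path(r"X:\Media").rglob("*"):   # hypothetical folder on the cloud drive
    if path.is_file():
        path.write_bytes(cipher.encrypt(path.read_bytes()))

print("Encryption key:", key.decode())
```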