Covecube Inc.



Everything posted by srcrist

  1. To be clear: there is documentation on this feature in the change log.
  2. The change log seems to suggest that enabling it on a drive with existing data will only impact new data written to the drive:

     .1121
     * Added an option to enable or disable data duplication for existing cloud drives (Manage Drive -> Data duplication...).
       - Any new data written to the cloud drive after duplication was enabled will be stored twice at the storage provider.
       - Existing data on the drive that is not overwritten will continue to be stored once.

     .1118
     * Added an option to enable data duplication when creating a new cloud drive.
       - Data duplication stores your data t
  3. CloudDrive duplication is block-level duplication. It makes several copies of the chunks that contain your file system data (basically everything that gets "pinned"). If any of those chunks are then detected as corrupt or inaccessible, it will use one of the redundant chunks to access your file system data, and then repair the redundancy with the valid copy. DrivePool duplication is file-level duplication. It will make however many copies of whatever data you specify, and arrange those copies throughout your pool as you specify. DrivePool duplication is very customizable. You have full c
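A rough sketch of the block-level idea described above (illustrative only; this is not CloudDrive's actual on-provider format, and all names here are hypothetical): each chunk is stored twice, and a read that finds one copy corrupt falls back to the redundant copy and repairs the bad one from it.

```python
import hashlib

def checksum(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# (chunk_id, copy_index) -> (data, checksum); stands in for the storage provider.
store = {}

def put(chunk_id: int, data: bytes) -> None:
    record = (data, checksum(data))
    store[(chunk_id, 0)] = record  # primary copy
    store[(chunk_id, 1)] = record  # redundant copy

def get(chunk_id: int) -> bytes:
    for copy in (0, 1):
        data, digest = store[(chunk_id, copy)]
        if checksum(data) == digest:
            # Repair the sibling copy from the known-good one.
            store[(chunk_id, 1 - copy)] = (data, digest)
            return data
    raise IOError("all copies corrupt")

put(7, b"filesystem metadata")
# Simulate corruption of the primary copy:
store[(7, 0)] = (b"garbage", checksum(b"filesystem metadata"))
print(get(7))            # b'filesystem metadata'
print(store[(7, 0)][0])  # b'filesystem metadata' (repaired)
```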
  4. Sure thing. Best of luck with finding something that works. If you're looking for a file based solution, it's honestly difficult to do better than rClone. I would look that direction.
  5. CloudDrive itself does not support Team Drives because their API access is different. But DrivePool can certainly pool multiple CloudDrive volumes together; it can pool any volume your system can access. CloudDrive will not, however, let you use Team Drives to evade the Google upload limitations, if that was your intention. There is some news that they are banning accounts for doing so, as well. Just FYI. See here: https://old.reddit.com/r/DataHoarder/comments/emuu9l/google_gsuit_whats_it_like_and_is_it_still_worth/fdw1cri/ I also want to add that the other solutions are not immune to the
  6. OK. So, there is a lot here, so let's unpack this one step at a time. I'm reading some fundamental confusion here, so I want to make sure to clear it up before you take any additional steps forward. Starting here, which is very important: It's critical that you understand the distinction in methodology between something like Netdrive and CloudDrive, as a solution. Netdrive and rClone and their cousins are file-based solutions that effectively operate as frontends for Google's Drive API. They upload local files to Drive as files on Drive, and those files are then accessible from your Dr
  7. I can actually confirm this bug as well. The circumstances were very straightforward: I detached the drive from the machine because I had to take it down for a hardware test. When the test was completed, the drive said that it was already attached (to the same machine I detached it from), and I had to force the mount and reindex the drive. This was about a week ago, on build .1261. I do not, sadly, have any logs or records from the incident, and the drive functions as normal after the reindex. EDIT: I should add that attempting to force the mount once actually gave me an error about the cache di
  8. No. That's a Google API thing that's causing issues for way more applications than it should be. I didn't think CloudDrive should be affected, though, since they *have* been verified. Are you using the default API keys or did you install your own?
  9. Migrating Computers

     If your drive is properly detached and reattached, it should not have to reindex. You should be able to attach it and pick up where you left off after a few minutes of pinning and synchronization.
  10. If CloudDrive is indicating that your downstream performance is better than you're seeing for the file transfer, my first guess is that it might be drive I/O congestion. Are you, by chance, copying the data to the same drive that you're using as your CloudDrive cache?
  11. CloudDrive is built on top of the Windows storage infrastructure and will not work with Android. I haven't heard anything about any ports in the works. You could always mount the drive with a Windows machine and share it with your Shield via SMB, though.
  12. Some people are lucky enough to get unlimited drives through their work or school, and some people use G Suite accounts, which have unlimited storage with more than 5 users on the domain, or 1TB with fewer than 5...but Google doesn't actually enforce that limit, as far as I know.
  13. They are not comparable products. Both applications are more similar to the popular rClone solution for linux. They are file-based solutions that effectively act as frontends for Google's API. They do not support in-place modification of data. You must download and reupload an entire file just to change a single byte. They also do not have access to genuine file system data because they do not use a genuine drive image, they simply emulate one at some level. All of the above is why you do not need to create a drive beyond mounting your cloud storage with those applications. CloudDrive's soluti
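To illustrate why block-based storage matters for small edits (a toy sketch under assumed parameters; real chunk sizes are megabytes, not four bytes): slice the data into fixed-size chunks, and only the chunk containing a changed byte needs re-uploading, whereas a file-based frontend must re-upload the entire file.

```python
CHUNK = 4  # toy chunk size; a real block-based drive uses far larger chunks

data = bytearray(b"hello world, stored in the cloud")
before = [bytes(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

data[6] = ord("W")  # modify a single byte in place

after = [bytes(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

# Only the chunk containing the edit differs, so only it needs re-uploading.
dirty = [i for i, (a, b) in enumerate(zip(before, after)) if a != b]
print(dirty)  # [1]
```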
  14. Double-check that it's correctly added to the subpool. I can't see that it is from your screenshots, and that folder should be created as soon as it's added to the pool. If it does look like it's correctly added, and the folder still does not exist, I would just remove it and re-add it to the subpool and see if that causes it to be created. Beyond that, you'd have to open an actual support ticket, because I'm not sure why it wouldn't be created when the drive is added.
  15. That is just a passive-aggressive way of arguing that I am wrong, and that the data loss issues are a solvable problem for Covecube. Neither of which is correct. I'm sorry. The reasons that data loss is experienced on CloudDrive and not other solutions relate to how CloudDrive operates by design. It is a consequence of CloudDrive storing blocks of an actual disk image with a fully functional file system and, as such, being more sensitive to revisions than something like rClone, which simply uploads whole files. This has been explained multiple times by Christopher and Alex and
  16. You do not need two separate API keys to access multiple drives. And it does not negatively impact security in the least, unless you are using CloudDrive to access someone else's data. API keys are for applications, not accounts. Perhaps do not store 100TB of irreplaceable data on a consumer grade cloud storage account? But, otherwise, yes. Other accounts with redundancy would be a good first step. I assure you that it is not. Google does not have a data integrity SLA for Drive at all. It simply does not exist. Google does an admirable job of maintaining data integrity, but we
  17. You want all of your applications now pointing at the hybrid pool. Once the data is correctly moved, the data will appear identically to how it appeared before you nested the pool. The structure of the underlying pool is, as always, transparent at the file system level to applications. Your sub pool(s) do not even need drive letters/mount points, FYI. You can simply give the hybrid pool the localpool mount point. Which, in your case, appears to be P: To move your data, here is the process: So let's say you have a hybrid pool (O:), consisting of a localpool (P:) which contains drives
  18. Once you've created the nested pool, you'll need to move all of the existing data into the poolpart hidden folder within the outer poolpart hidden folder before it will be accessible from the pool. It's the same process that you need to complete if you simply added a drive to a non-nested pool that already had data on it. If you want the data to be accessible within the pool, you'll have to move the data into the pool structure. Right now you should have drives with a hidden poolpart folder and all of the other data on the drive within your subpool. You need to take all of that other data and
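The move described above can be sketched like this (hypothetical paths and GUID; on a real system the hidden folder at the drive root is named PoolPart.<GUID>, and this sketch uses a temp directory to stand in for the drive):

```python
import pathlib
import shutil
import tempfile

root = pathlib.Path(tempfile.mkdtemp())  # stands in for the drive root, e.g. D:\
poolpart = root / "PoolPart.00000000-0000-0000-0000-000000000000"  # hypothetical GUID
poolpart.mkdir()

# Existing data that currently lives outside the pool structure:
(root / "Media").mkdir()
(root / "Media" / "movie.mkv").write_text("data")

# Move everything except the PoolPart folder itself into it:
for entry in list(root.iterdir()):
    if not entry.name.startswith("PoolPart"):
        shutil.move(str(entry), str(poolpart / entry.name))

print((poolpart / "Media" / "movie.mkv").exists())  # True
```

After the move, the data shows up inside the pool exactly where it sat on the underlying drive before.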
  19. If you simply log into the Google Cloud API dashboard (https://console.cloud.google.com/apis/dashboard) you should be able to see the usage. Make sure that you're signed in with the Google account that you created your API key with. Not necessarily the account that has your drive data (unless they are the same).
  20. It looks like Google was uh...less than communicative about the verification requirement changes. I've seen that error from a few other large apps this week, like Cozi, as well. They're really cracking down, and the verification process can apparently cost upwards of $15,000. IFTTT apparently reduced its Gmail support because of it when Google rolled out similar requirements for Gmail auth back in March. The good news for us is that you should be able to get around it by using your own API key with CloudDrive. See the other thread for discussion about doing so.
  21. You can specify which specific folders and files you want to duplicate with the duplication and balancing options in DrivePool. But a nested DrivePool setup is still what you'd want to use. You want a pool that contains your existing pool and the cloud storage, and then configure the duplication at that level.
  22. Yeah, the only reason I know to do that is that once upon a time Amazon Cloud Drive was marked experimental and that was the only way to access that as well. Providers are moved to experimental in order to purposely hide them from general users. Either because they are problematic, or because there are more technical steps to use them than others.
  23. It sounds like you'd want a nested pool setup. That is, you'll want a pool that contains your existing pool and the CloudDrive (or a pool of cloud drives, if necessary), and then you can enable duplication between the pool and the drive, or the pool and pool of drives (depending on your needs). That will duplicate your entire existing pool to the cloud.
  24. You could use a more traditional file encryption tool to encrypt the files that are on your drive, if you wanted. Though the net effect is that all of the data will need to be downloaded, encrypted, and then uploaded again in its new encrypted format. That's really true no matter what method you use. Even if you could, hypothetically, encrypt a CloudDrive drive after creation, it would still need to download each chunk, encrypt it, and then upload it again. There is no way to encrypt the chunks stored on your provider after drive creation, though. You do not need to use the same ke
  25. Since Google Drive is now marked as experimental, you'll have to enable experimental providers under the troubleshooting options at the top of the CloudDrive UI in order to see it. Google Cloud Storage is Google's enterprise storage provider, so that won't work for Google Drive.