Covecube Inc.

srcrist

Members
  • Content Count: 349
  • Joined
  • Last visited
  • Days Won: 23

srcrist last won the day on January 4

srcrist had the most liked content!

About srcrist
  • Rank: Advanced Member

Recent Profile Visitors
  • 618 profile views
  1. Sounds like a Plex problem. The data read from the drive is the same regardless of whether or not it's transcoding, and "Remux" releases can have wildly different bitrates, so there isn't really anything unique to those files. CloudDrive is more than capable of achieving the throughput required, in any case. Might want to ask on their forums to see if anyone has relevant experience.
  2. srcrist

    Question about my setup

     Option one would use more data, but you could simply extract directly to the cloud drive without using any more data. The move will take place at the file system level and will not use any additional data. Downloading the compressed files to the cloud and then extracting them on the cloud drive, though, does use additional data, because the data from the extraction has to be created during the extraction process. Once extracted, no new data will need to be uploaded for a move, the same as on a local drive. There is a rough sketch of the difference below.
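     To make the distinction concrete, here is a minimal Python sketch of the two operations being compared. The paths are hypothetical, with "D:" standing in for the CloudDrive mount: extracting an archive onto the cloud drive creates new data that still has to be uploaded, while moving a file that is already on the drive is just a rename at the file system level.

     import shutil
     import zipfile

     # Hypothetical paths: "D:" is the CloudDrive mount, "C:\downloads" is local storage.
     local_archive = "C:\\downloads\\release.zip"
     cloud_media = "D:\\media"

     # Extracting an archive directly onto the cloud drive creates brand-new data,
     # so every extracted byte still has to be uploaded to the provider.
     with zipfile.ZipFile(local_archive) as archive:
         archive.extractall(cloud_media)

     # Moving a file that already lives on the cloud drive to another folder on the
     # same volume is just a file-system rename; no new data is written or uploaded.
     shutil.move("D:\\media\\episode.mkv", "D:\\library\\episode.mkv")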
  3. srcrist

    Question about my setup

    I don't see anything wrong with that. The thing you want to avoid for optimization is doing other work on the same drive that's being used for the cache. You're not doing that, so you should be good.
  4. I believe that this is related to your Windows settings for how to handle "removable media." CloudDrive shows up as a removable drive, so if you have set Windows to open Explorer when removable media is inserted, it will open when CloudDrive mounts. Check that Windows setting. A sketch of turning the behavior off programmatically is below.
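     If you would rather switch AutoPlay off entirely instead of changing the per-media action in Settings, the same toggle can be flipped in the registry. This is a minimal Python sketch, assuming the standard per-user DisableAutoplay value under HKCU; run it as the affected user, and you may need to sign out and back in (or restart Explorer) for it to take effect.

     import winreg

     # Hedged sketch: set the per-user DisableAutoplay value to 1, which is
     # equivalent to turning AutoPlay off in Settings > Devices > AutoPlay.
     key_path = r"Software\Microsoft\Windows\CurrentVersion\Explorer\AutoplayHandlers"
     with winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path, 0, winreg.KEY_SET_VALUE) as key:
         winreg.SetValueEx(key, "DisableAutoplay", 0, winreg.REG_DWORD, 1)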
  5. Almost certainly related to this issue: File a ticket and submit a troubleshooter. @Christopher (Drashna) was asking if this still existed in the latest beta. It would appear that it does.
  6. Perhaps the other person can, but I can't really afford to dismount the drive on a prod server just to test it. Particularly considering that it could lead to downtime of almost an entire day if it dismounts incorrectly again. My apologies. I know that isn't helpful.
  7. Isn't that the entire purpose of a changelog? To explain the functionality of new features? The changes that have been made to the operation of the application?

     I copied roughly 100 TB over the course of several months from one drive to another via CloudDrive, and with the cache drive being full that entire time it simply throttled the writes to the cache and accepted data as room became available, which is the intended functionality as far as I am aware. You may have encountered a bug, or there may have been a technical issue with your cache drive itself--but it should do little more than copy slowly. (There is a rough sketch of this throttling behavior below.)

     I'm not even sure what you mean by CloudDrive being unable to read chunks because the cache is full. How would a full cache prevent reads? The full cache is storing your writes, and the reads are from the cloud unless you configure your cache to maintain some read availability...but even without it, CloudDrive can still read chunks. It's possible that you're simply misattributing the cause of your data loss. The only way in which I can think that you might lose data via the cache is if the drive you're using for cache space isn't storing valid data. The cache, in any case, will not be meaningfully affected by any sort of data duplication.
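     To illustrate the behavior described above, here is a minimal Python sketch of the concept only; it is not CloudDrive's actual code, and the sizes and timings are made up. A bounded cache simply blocks new writes while it is full and accepts them again as the uploader frees space, so a full cache slows the copy rather than breaking it.

     import queue
     import threading
     import time

     cache = queue.Queue(maxsize=4)          # pretend the cache drive holds 4 chunks

     def uploader():
         while True:
             chunk = cache.get()             # pull the oldest chunk from the cache
             time.sleep(0.5)                 # stand-in for the upload to the provider
             print(f"uploaded {chunk}")
             cache.task_done()

     threading.Thread(target=uploader, daemon=True).start()

     for i in range(10):
         cache.put(f"chunk-{i}")             # blocks here whenever the cache is full
         print(f"cached chunk-{i}")

     cache.join()                            # wait until everything has been uploaded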
  8. To be clear: there is documentation on this feature in the change log.
  9. The change log seems to suggest that enabling it on a drive with existing data will only impact new data written to the drive:

     .1121
     * Added an option to enable or disable data duplication for existing cloud drives (Manage Drive -> Data duplication...).
       - Any new data written to the cloud drive after duplication was enabled will be stored twice at the storage provider.
       - Existing data on the drive that is not overwritten will continue to be stored once.

     .1118
     * Added an option to enable data duplication when creating a new cloud drive.
       - Data duplication stores your data twice at the storage provider.
       - It consumes twice the upload bandwidth and twice the storage space at the provider.
       - In case of data corruption or loss of the primary data blocks, the secondary blocks will be used to provide redundancy for read operations.

     So it should not impact the cache drive in any immediate sense. CloudDrive is generally smarter than that about the cache though. I would expect it to throttle writes to the cache as it was processing the data, as it does with large volume copies from other sources like DrivePool. Large writes from other sources do not corrupt or dismount the drive. It simply throttles the writes until space is available. A moot point, in any case, as it will not duplicate your existing chunks unless you manually download and reupload the data.
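     Rough arithmetic for what duplication costs, using the rules quoted from the change log (the sizes here are hypothetical):

     # Per the change log: data written after duplication is enabled is stored and
     # uploaded twice; existing, unmodified data stays stored once.
     existing_data_gb = 2000      # hypothetical data already on the drive
     new_data_gb = 500            # hypothetical data written after enabling duplication

     storage_at_provider = existing_data_gb + 2 * new_data_gb
     upload_for_new_data = 2 * new_data_gb

     print(f"Provider storage used: {storage_at_provider} GB")           # 3000 GB
     print(f"Upload needed for the new data: {upload_for_new_data} GB")  # 1000 GB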
  10. CloudDrive duplication is block-level duplication. It makes several copies of the chunks that contain your file system data (basically everything that gets "pinned"). If any of those chunks are then detected as corrupt or inaccessible, it will use one of the redundant chunks to access your file system data, and then repair the redundancy with the valid copy.

      DrivePool duplication is file-level duplication. It will make however many copies of whatever data you specify, and arrange those copies throughout your pool as you specify. DrivePool duplication is very customizable. You have full control over where and when it duplicates your data. If you want it to duplicate your entire pool, that is a setting you control, as is whether it does so at regular intervals or immediately. That's all up to your balancer settings in DrivePool.

      Despite the name similarity, their functionality really has nothing to do with one another. CloudDrive's duplication makes copies of very specific chunks on your cloud provider. It doesn't have anything to do with duplicating your actual data. It's intended to prevent corruption from arbitrary rollbacks and data loss on the provider's part, like we saw back in March and June of last year.

      EDIT: It slipped my mind that full duplication can also be enabled in CloudDrive. This is still block-level duplication on your cloud provider. Rather than storing one copy of each chunk, it would store two, for the same purpose mentioned above. If one chunk is corrupt or unavailable, it will use the other and repair the redundancy. The net effect is that 100GB of data on your cloud storage would take up 200GB worth of chunks, of course, and also twice the upload time per byte. You would still only see ONE copy of the data in your file system, though. A conceptual sketch of the read-and-repair behavior is below.
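      The following is only a conceptual Python sketch of block-level redundancy with read-repair, as described above; it is not CloudDrive's actual code, and the chunk names and checksums are invented for illustration.

      import hashlib

      # Two copies of the same chunk stored at the provider (names are made up).
      store = {
          "chunk-1-a": b"file system metadata...",   # primary copy
          "chunk-1-b": b"file system metadata...",   # redundant copy
      }
      checksums = {name: hashlib.sha256(data).hexdigest() for name, data in store.items()}

      def read_chunk(primary, secondary):
          data = store[primary]
          if hashlib.sha256(data).hexdigest() == checksums[primary]:
              return data
          # The primary copy is corrupt: serve the redundant copy and repair the primary.
          good = store[secondary]
          store[primary] = good
          checksums[primary] = hashlib.sha256(good).hexdigest()
          return good

      store["chunk-1-a"] = b"corrupted!"             # simulate corruption at the provider
      print(read_chunk("chunk-1-a", "chunk-1-b"))    # still returns the valid data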
  11. Sure thing. Best of luck with finding something that works. If you're looking for a file based solution, it's honestly difficult to do better than rClone. I would look that direction.
  12. CloudDrive itself does not support Team Drives because their API access is different. But DrivePool can certainly pool multiple CloudDrive volumes together; it can pool any volume your system can access. CloudDrive will not, however, let you use Team Drives to evade the Google upload limitations, if that was your intention. There is some news that they are banning accounts for doing so, as well. Just FYI. See here: https://old.reddit.com/r/DataHoarder/comments/emuu9l/google_gsuit_whats_it_like_and_is_it_still_worth/fdw1cri/

      I also want to add that the other solutions are not immune to the data loss issue any more than CloudDrive; it's just that data loss can manifest differently. If a file is corrupted on Google's end with rClone or Netdrive, you lose that file. If a chunk containing CloudDrive's file system is corrupted, you can lose access to the file system itself, and have to rebuild it. Ultimately, though, nobody should ever use cloud storage for any data that they consider irreplaceable without backups.
  13. OK. So, there is a lot here, so let's unpack this one step at a time. I'm reading some fundamental confusion here, so I want to make sure to clear it up before you take any additional steps forward.

      Starting here, which is very important: it's critical that you understand the distinction in methodology between something like Netdrive and CloudDrive as a solution. Netdrive and rClone and their cousins are file-based solutions that effectively operate as frontends for Google's Drive API. They upload local files to Drive as files on Drive, and those files are then accessible from your Drive--whether online via the web interface, or via the tools themselves. That means that if you use Netdrive to upload a 100MB File1.bin, you'll have a 100MB file called File1.bin on your Google Drive that is identical to the one you uploaded. Some solutions, like rClone, may upload the file with an obfuscated file name like Xf7f3g.bin, and even apply encryption to the file as it's being uploaded and decryption when it is retrieved. But they are still uploading the entire file, as a file, using Google's API.

      If you understand all of that, then understand that CloudDrive does not operate the same way. CloudDrive is not a frontend for Google's API. CloudDrive creates a drive image, breaks that up into hundreds, thousands, or even millions of chunks, and then uses Google's infrastructure and API to upload those chunks to your cloud storage. This means that if you use CloudDrive to store a 100MB file called File1.bin, you'll actually have some number of chunks (depending on your configured chunk size) called something like XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-1, XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-2, XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-3, etc., as well as a bunch of metadata that CloudDrive uses to access and modify the data on your drive. Note that these "files" do not correspond to the file size, type, or name that you uploaded, and cannot be accessed outside of CloudDrive in any meaningful way. CloudDrive actually stores your content as blocks, just like a physical hard drive, and then stores chunks of those blocks on your cloud storage. Though it can accomplish similar ends to Netdrive, rClone, or any related piece of software, its actual method of doing so is very different in ways that are important for you to understand. (There is a rough sketch of the chunk layout at the end of this post.)

      So what, exactly, does this mean for you? It means, for starters, that you cannot simply use CloudDrive to access information that is already located in your cloud storage. CloudDrive only accesses information that has been converted to the format that it uses to store data, and CloudDrive's format cannot be accessed by other applications (or Google themselves). Any data that you'd like to migrate from your existing cloud storage to a CloudDrive volume must be downloaded and moved to the CloudDrive volume just as it would need to be if you were to migrate the data to a new physical drive on your machine--for the same reasons. It may be helpful to think of CloudDrive as a virtual machine drive image. It's the same general concept. Just as you would have to copy data within the virtual machine in order to move data to the VM image, you'll have to copy data within CloudDrive to move it to your CloudDrive volume.

      There are both benefits and drawbacks to using this approach:

      Benefits

      • CloudDrive is, in my experience, faster than rClone and its cousins, particularly around jumping to granular data locations, as you would for, say, jumping to a specific location in a media file.
      • CloudDrive stores an actual file system in the cloud, and that file system can be repaired and maintained just like one located on a physical drive. Tools like chkdsk and Windows' own built-in indexing systems function on a CloudDrive volume just as they do on your local drive volumes. In your case this means that Plex's library scans will take seconds, and will not get you locked out by Google's API limitations.
      • CloudDrive's block-based storage means that it can modify portions of files in place, without downloading the entire file and reuploading it.
      • CloudDrive's cache is vastly more intelligent than those implemented by file-based solutions, and is capable of, for example, storing the most frequently accessed chunks of data, such as those containing the metadata information in media files, rather than whole media files. This, like the above, also translates to faster access times and searches.
      • CloudDrive's block-based solution allows for a level of encryption and data security that other solutions simply cannot match. Data is completely AES encrypted before it is ever even written to the cache, and not even Covecube themselves can access the data without your key. Neither your cloud provider, nor unauthorized users and administrators for your organization, can access your data without consent.

      Drawbacks (read carefully)

      • CloudDrive's use of an actual file system also introduces vulnerabilities that file-based solutions do not have. If the file system data itself becomes corrupted on your storage, it can affect your ability to access the entire drive--in the same way that a corrupted file system can cause data loss on a physical drive as well. The most common sorts of corruption can be repaired with tools like chkdsk, but there have been incidents caused by Google's infrastructure that have caused massive data loss for CloudDrive users in the past--and there may be more in the future, though CloudDrive has implemented redundancies and checks to prevent them going forward. Note that tools like testdisk and recuva can be used on a CloudDrive volume just as they can on a physical volume in order to recover corrupt data, but this process is very tedious and generally only worth using for genuinely critical and irreplaceable data. I don't personally consider media files to be critical or irreplaceable, but each user must consider their own risk tolerance.
      • A CloudDrive volume is not accessible without CloudDrive. Your data will be locked into this ecosystem if you convert to CloudDrive as a solution.
      • Your data will also only be accessible from one machine at a time. CloudDrive's caching system means that corruption can occur if multiple machines could access your data at once, and, as such, it will not permit the volume to be mounted by multiple instances simultaneously.
      • And, as mentioned, all data must be uploaded within the CloudDrive infrastructure to be used with CloudDrive. Your existing data will not work.

      So, having said all of that, before I move on to helping you with your other questions, let me know that you're still interested in moving forward with this process. I can help you with the other questions, but I'm not sure that you were on the right page with the project you were signing up for here. rClone and NetDrive both also make fine solutions for media storage, but they're actually very different beasts than CloudDrive, and it's really important to understand the distinction. Many people are not interested in the additional limitations.
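      For the chunk layout mentioned above, here is a rough Python sketch of the idea only; it is not CloudDrive's actual on-provider format, and the chunk size and naming are simplified. A drive image is carved into fixed-size, opaquely named chunks, so nothing stored at the provider resembles your original file names, types, or sizes.

      import uuid

      CHUNK_SIZE = 20 * 1024 * 1024          # hypothetical 20 MB chunk size
      drive_id = uuid.uuid4()                # opaque ID, like the GUID in the chunk names

      def chunk_names(total_bytes):
          """Names of the provider-side objects a blob of this size would occupy."""
          count = (total_bytes + CHUNK_SIZE - 1) // CHUNK_SIZE
          return [f"{drive_id}-chunk-{i}" for i in range(1, count + 1)]

      # A 100 MB File1.bin written to the drive ends up spread across objects like
      # ['<guid>-chunk-1', '<guid>-chunk-2', ..., '<guid>-chunk-5'] at the provider.
      print(chunk_names(100 * 1024 * 1024))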
  14. I can actually confirm this bug as well. The circumstances were very straightforward: I detached the drive from the machine because I had to take it down for a hardware test. When the test was completed, the drive said that it was already attached (to the same machine I detached it from), and I had to force the mount and reindex the drive. This was about a week ago, on 1261. I do not, sadly, have any logs or records from the incident, and the drive functions as normal after the reindex. EDIT: I should add that attempting to force the mount once actually gave me an error about the cache directory already existing, but forcing it a second time allowed it to mount and start the indexing process. In any case, something does seem borked with the detach and reattach process.