Leaderboard

Popular Content

Showing content with the highest reputation since 11/04/21 in all areas

  1. My backup pool has a 1TB SSD that I use to speed up my backups over a 10Gb/s link. Works great, as I can quickly back up my deltas from the main pool --> backup pool, and in slow time it pushes the files to the large set of spinning rust (100TB). However, when I try to copy a file that is larger than the 1TB SSD, I get a message saying there is not enough available space. Ideally, the SSD Optimiser should just push the file directly to an HDD in such cases (feature request?), but for now, what would be the best way of copying this file into the pool? - Manually copy directly to one of the HDDs behind DrivePool, then rescan? - Turn off the SSD Optimiser, then copy the file? or - Some other method? Thanks Nathan
    2 points
  2. I would also like to see dark mode added to all StableBit products.
    1 point
  3. Not currently, but I definitely do keep on bringing it up.
    1 point
  4. AFAIK, copying, even cloning, does not work. The easiest way is: 1. Install/connect the new HDD. 2. Add the HDD to the Pool. 3. Now you can either click Remove for the 6TB HDD, or use Manage Pool -> Balancing -> Balancers -> Drive Usage Limiter -> uncheck Duplicated and Unduplicated -> Save -> Remeasure and Rebalance. 4. Wait. Until. It. Is. Done. (Though you can reboot normally if you want/need to, and it'll continue after the reboot. Not entirely sure about that if you go the Remove route.)
    1 point
  5. Whenever I have problems with duplication, I go to the Settings Cog > Troubleshooting > Recheck Duplication and let DrivePool try to figure it out. Honestly, when there are duplication problems with DrivePool (like after removal of a failed HDD), it takes me a couple of runs of the Recheck Duplication and Balancing tasks. Last time that happened to me, it literally took a few days for DrivePool to clean itself up, but I have 80TB in my DrivePool. To be fair to DrivePool, it did fix itself given time. I only have a few folders set for duplication, so out of my 80TB pool, only about 20TB are duplicated.
Also, DrivePool duplication is good for some things, but it does not ensure that your files are actually intact and complete. It is possible to have a corrupted file/folder, and DrivePool is happy to duplicate the corruption. If there is a mismatch between the original and the copy, DrivePool cannot tell you which file/folder is true and which may have been corrupted. For example, my DrivePool is mainly used as my home media storage. If I have an album folder with 15 tracks, and one or two tracks get deleted or corrupted, DrivePool cannot tell me if the original directory is complete, if the duplicate directory is complete, or if neither copy is complete. Because of this, I now add 10% .par2 files to my folders for verification and possible rebuild. With the .par2 files, I can quickly determine if the folder is complete, if any missing or corrupted files can be rebuilt from the .par2 files in that folder, or if I have to take my backup HDDs out of the closet to rebuild the corrupted data in DrivePool. For this reason, I don't consider DrivePool duplication in any way a backup solution: it cannot verify whether the original or the duplicate copy is complete and intact, and it cannot resolve mismatches between copies.
In theory, from what I understand, duplication is mainly good for rebuilding your DrivePool if you have a HDD failure and the bad drive is a complete loss. Then DrivePool will still have a copy of the files on other drives in the pool and can rebuild the failed data. That is a great option, and of course I do use duplication myself, but DrivePool duplication still lacks any ability to verify that the files are complete and uncorrupted. For that reason, I have gone to using those .par2 files for file verification, with my backup HDDs stored in my closet. It's not the best solution to my backup needs, but it is the best I have found for me at this point. In a more perfect world, DrivePool would have the ability to duplicate folders for faster pool recovery, and there would also be some way to verify and rebuild lost data, like the .par2 files.
In your case, if you have good backups of your data, I might consider turning off duplication in DrivePool, rebalancing the pool and/or forcing a Recheck Duplication to clean up the data, and then turning duplication back on for the folders/pool as you want. But before I did that, I would contact the developer directly for support and ask for his recommendation(s). DrivePool is a great program, and data recovery is much better than with other methods I have used, such as RAID systems and Windows Storage Spaces. But I do run into errors like you are experiencing, and I cannot always understand the corrective action to take. Mostly, I have found that DrivePool is able to correct itself with its various troubleshooting tasks, but it might take a long time on a large pool.
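The folder-verification idea above can be sketched in a few lines. This is a simplified analogue using SHA-256 manifests rather than .par2 (hashes can detect missing or corrupted files, but unlike par2 they cannot rebuild them); the function and file names are my own, purely for illustration:

```python
import hashlib
import os

def build_manifest(folder):
    """Record a SHA-256 hash for every file in the folder."""
    manifest = {}
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        if os.path.isfile(path):
            with open(path, "rb") as f:
                manifest[name] = hashlib.sha256(f.read()).hexdigest()
    return manifest

def verify(folder, manifest):
    """Compare the folder against a saved manifest.

    Returns (missing, corrupted): files that vanished, and files whose
    current hash no longer matches the recorded one.
    """
    current = build_manifest(folder)
    missing = [n for n in manifest if n not in current]
    corrupted = [n for n in manifest
                 if n in current and current[n] != manifest[n]]
    return missing, corrupted
```

Running build_manifest once per album folder and verify later answers exactly the question DrivePool can't: which copy is still intact. The 10% .par2 approach goes one step further by also being able to reconstruct the damaged files.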
    1 point
  6. OK. So, there is a lot here, so let's unpack this one step at a time. I'm reading some fundamental confusion here, so I want to make sure to clear it up before you take any additional steps forward. Starting here, which is very important: it's critical that you understand the distinction in methodology between something like NetDrive and CloudDrive as a solution.
NetDrive and rClone and their cousins are file-based solutions that effectively operate as frontends for Google's Drive API. They upload local files to Drive as files on Drive, and those files are then accessible from your Drive--whether online via the web interface, or via the tools themselves. That means that if you use NetDrive to upload a 100MB File1.bin, you'll have a 100MB file called File1.bin on your Google Drive that is identical to the one you uploaded. Some solutions, like rClone, may upload the file with an obfuscated file name like Xf7f3g.bin, and even apply encryption to the file as it's being uploaded and decryption when it is retrieved. But they are still uploading the entire file, as a file, using Google's API.
If you understand all of that, then understand that CloudDrive does not operate the same way. CloudDrive is not a frontend for Google's API. CloudDrive creates a drive image, breaks that up into hundreds, thousands, or even millions of chunks, and then uses Google's infrastructure and API to upload those chunks to your cloud storage. This means that if you use CloudDrive to store a 100MB file called File1.bin, you'll actually have some number of chunks (depending on your configured chunk size) called something like XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-1, XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-2, XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-3, etc, as well as a bunch of metadata that CloudDrive uses to access and modify the data on your drive. Note that these "files" do not correspond to the file size, type, or name that you uploaded, and cannot be accessed outside of CloudDrive in any meaningful way. CloudDrive actually stores your content as blocks, just like a physical hard drive, and then stores chunks of those blocks on your cloud storage. Though it can accomplish similar ends to NetDrive, rClone, or any related piece of software, its actual method of doing so is very different in ways that are important for you to understand.
So what, exactly, does this mean for you? It means, for starters, that you cannot simply use CloudDrive to access information that is already located in your cloud storage. CloudDrive only accesses information that has been converted to the format that it uses to store data, and CloudDrive's format cannot be accessed by other applications (or Google themselves). Any data that you'd like to migrate from your existing cloud storage to a CloudDrive volume must be downloaded and moved to the CloudDrive volume, just as it would need to be if you were migrating the data to a new physical drive on your machine--and for the same reasons. It may be helpful to think of CloudDrive as a virtual machine drive image. It's the same general concept: just as you would have to copy data within the virtual machine in order to move data to the VM image, you'll have to copy data within CloudDrive to move it to your CloudDrive volume.
There are both benefits and drawbacks to this approach:
Benefits
- CloudDrive is, in my experience, faster than rClone and its cousins, particularly at jumping to granular data locations, as you would when seeking to a specific position in a media file.
- CloudDrive stores an actual file system in the cloud, and that file system can be repaired and maintained just like one located on a physical drive. Tools like chkdsk and Windows' own built-in indexing systems function on a CloudDrive volume just as they do on your local drive volumes. In your case, this means that Plex's library scans will take seconds and will not lock you out via Google's API limitations.
- CloudDrive's block-based storage means that it can modify portions of files in place, without downloading the entire file and reuploading it.
- CloudDrive's cache is vastly more intelligent than those implemented by file-based solutions, and is capable of, for example, storing the most frequently accessed chunks of data, such as those containing the metadata in media files, rather than whole media files. This, like the above, also translates to faster access times and searches.
- CloudDrive's block-based solution allows for a level of encryption and data security that other solutions simply cannot match. Data is completely AES encrypted before it is ever even written to the cache, and not even Covecube themselves can access the data without your key. Neither your cloud provider, nor unauthorized users and administrators in your organization, can access your data without consent.
Drawbacks (read carefully)
- CloudDrive's use of an actual file system also introduces vulnerabilities that file-based solutions do not have. If the file system data itself becomes corrupted on your storage, it can affect your ability to access the entire drive--in the same way that a corrupted file system can cause data loss on a physical drive. The most common sorts of corruption can be repaired with tools like chkdsk, but there have been incidents caused by Google's infrastructure that caused massive data loss for CloudDrive users in the past--and there may be more in the future, though CloudDrive has implemented redundancies and checks to prevent them going forward. Note that tools like TestDisk and Recuva can be used on a CloudDrive volume just as on a physical volume to recover corrupt data, but this process is very tedious and generally only worth it for genuinely critical and irreplaceable data. I don't personally consider media files to be critical or irreplaceable, but each user must consider their own risk tolerance.
- A CloudDrive volume is not accessible without CloudDrive. Your data will be locked into this ecosystem if you convert to CloudDrive as a solution.
- Your data will also only be accessible from one machine at a time. CloudDrive's caching system means that corruption could occur if multiple machines accessed your data at once, and, as such, it will not permit the volume to be mounted by multiple instances simultaneously.
- And, as mentioned, all data must be uploaded within the CloudDrive infrastructure to be used with CloudDrive. Your existing data will not work as-is.
So, having said all of that, before I move on to helping you with your other questions, let me know that you're still interested in moving forward with this process. I can help you with the other questions, but I'm not sure that you were on the right page about the project you were signing up for here. rClone and NetDrive both also make fine solutions for media storage, but they're very different beasts from CloudDrive, and it's really important to understand the distinction. Many people are not interested in the additional limitations.
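To make the chunked-storage model above concrete, here is a toy sketch of the general idea: a drive image split into opaque, fixed-size chunks named only by a drive ID and index. This is my own illustration, not CloudDrive's actual on-disk format, and the names and sizes are assumptions:

```python
CHUNK_SIZE = 10 * 1024 * 1024  # assumed 10MB chunks; the real size is configurable

def split_into_chunks(drive_image: bytes, drive_id: str,
                      chunk_size: int = CHUNK_SIZE) -> dict:
    """Break a drive image into fixed-size chunks with opaque names.

    The chunk names reveal nothing about the files inside the image,
    which is why the uploaded pieces are meaningless outside the tool
    that created them.
    """
    chunks = {}
    for i in range(0, len(drive_image), chunk_size):
        name = f"{drive_id}-chunk-{i // chunk_size + 1}"
        chunks[name] = drive_image[i:i + chunk_size]
    return chunks

def reassemble(chunks: dict) -> bytes:
    """Rebuild the image by concatenating chunks in index order."""
    ordered = sorted(chunks, key=lambda n: int(n.rsplit("-", 1)[1]))
    return b"".join(chunks[n] for n in ordered)
```

The key consequence, as described above, is that only something holding the full set of chunks (plus the metadata and, with encryption, the key) can reconstruct the image; individual chunks on the provider's side are just anonymous blobs.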
    1 point
  7. It says Gsuite in the title; I'm assuming that means Google Drive. Correct me if all of the following is wrong, Middge.
Hey Middge, I use a CloudDrive for Plex with Google Drive myself. I can frequently do 5-6 remux-quality streams at once. I haven't noticed a drop in capacity aside from Google's relatively new upload limits.
Yes. But this is going to require that you remake the drive. My server also has a fast pipe, and I've also raised the minimum download to 20MB. I really haven't noticed any slowdown in responsiveness because the connection is so fast, and it keeps the overall throughput up.
This is fine. You can play with it a bit. Some people like higher triggers like 5 or 10 MB, but I've tried those and I keep going back to 1 MB, because I've just found it to be the most consistent, performance-wise, and I really want it to grab a chunk of *all* streaming media immediately.
This is way too low for a lot of media. I would raise it to somewhere between 150-300MB. Think of the prefetch as a rolling buffer: it will continue to fill as the data in it is used. The higher the number, the more tolerant your stream will be of periodic network hiccups. The only real danger is that if you make it too large (and I'm talking more than 500MB here) it will basically always be prefetching, and you'll congest your connection if you hit 4 or 5 streams.
I would drop this to 30, no matter whether you go with a 1MB, 5MB, or 10MB trigger. 240 seconds almost makes the trigger amount pointless anyway--you're going to hit all of those benchmarks in 4 minutes if you're streaming most modern media files. A 30-second window should be fine.
WAAAAY too many. You're almost certainly throttling yourself with this, particularly with the smaller-than-maximum chunk size, since it already has to make more requests than if you were using 20MB chunks. I use three CloudDrives in a pool (a legacy arrangement from before I understood things better--don't do it; just expand a single CloudDrive with additional volumes), and I keep them all at 5 upload and 5 download threads. Even if I had a single drive, I'd probably not exceed 5 upload, 10 download. 20 and 20 is *way* too high and entirely unnecessary with a 1Gbps connection.
These are all fine. If you can afford a larger cache, bigger is *always* better, but it isn't necessary. The server I use only has 3x 140GB SSDs, so my caches are even smaller than yours and they still function great. The fast connection goes a long way toward making up for a small cache size...but if I could have a 500GB cache I definitely would.
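The "rolling buffer" sizing advice above reduces to simple arithmetic: a full prefetch buffer covers prefetch-size x 8 / bitrate seconds of playback, which is how long a network hiccup can last before the stream stalls. A quick sketch (the bitrates are illustrative assumptions, not CloudDrive internals):

```python
def buffer_seconds(prefetch_mb: float, stream_mbps: float) -> float:
    """Seconds of playback a full prefetch buffer covers.

    prefetch_mb: prefetch buffer size in megabytes
    stream_mbps: stream bitrate in megabits per second
    """
    return prefetch_mb * 8 / stream_mbps

# Assuming a ~40Mbps remux stream:
#   a 200MB prefetch rides out up to 40 seconds of network hiccups,
#   while a 50MB prefetch gives only 10 seconds of slack.
```

This is why the 150-300MB recommendation above is tolerant of brief outages, while a very small prefetch leaves high-bitrate remuxes with only a few seconds of headroom.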
    1 point