
igobythisname

Members
  • Posts: 6
  • Joined
  • Last visited
Reputation Activity

  1. Like
    igobythisname reacted to Umfriend in Best way to replace a drive in my Drivepool to a larger drive?   
    AFAIK, copying, or even cloning, does not work.
    The easiest way is:
    1. Install/connect new HDD
    2. Add HDD to the Pool
    3. And now you can either click on Remove for the 6TB HDD or use Manage Pool -> Balancing -> Balancers -> Drive Usage Limiter -> uncheck Duplicated and Unduplicated -> Save -> Remeasure and Rebalance.
    4. Wait. Until. It. Is. Done. (You can reboot normally if you want/need to and it will continue after the boot, though I'm not entirely sure that holds if you go the Remove route.)
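    Before kicking either route off, it can be worth a quick sanity check that the rest of the pool has room to absorb everything on the disk you're evacuating. A minimal sketch in Python (the drive letters are placeholders for your own pooled disks, not anything from this thread):
        # Sketch: can the remaining pool members absorb the data on the disk being removed?
        # Drive letters below are hypothetical -- substitute your own pooled disks.
        import shutil

        drive_being_removed = "F:\\"               # e.g. the old 6TB disk
        remaining_pool_drives = ["D:\\", "E:\\"]   # disks staying in the pool (including the new one)

        used_on_old = shutil.disk_usage(drive_being_removed).used
        free_elsewhere = sum(shutil.disk_usage(d).free for d in remaining_pool_drives)

        print(f"Data to relocate: {used_on_old / 2**40:.2f} TiB")
        print(f"Free space on the rest of the pool: {free_elsewhere / 2**40:.2f} TiB")
        print("Looks OK" if free_elsewhere > used_on_old else "Not enough room -- add the new disk first")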
  2. Like
    igobythisname reacted to Shane in Replace 8TB drive with 16TB drive   
    That would work (presuming you have enough free space).
    Alternatively you could use a USB drive dock to connect and add one of the new drives before removing one of the old drives, then repeat the process with the other new and old drives. Though this assumes you have a spare USB port and a USB dock to plug into it.
    There are also manual tricks you can use to replace pooled drives with new ones more quickly (it still takes a while), involving copying from inside the pool drives' hidden PoolPart folders, but they require a certain level of "knowing what you're doing" in case anything doesn't go according to plan.
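    For the curious, that manual trick boils down to copying the contents of the old disk's hidden PoolPart folder into the new disk's PoolPart folder (after the new disk has been added to the pool), then removing the old disk. A rough sketch only -- the folder names below are hypothetical, and you'd want to check the current recommended procedure (and ideally have the pool idle) before relying on it:
        # Illustration of the manual PoolPart copy; not an official procedure.
        # Every pooled disk has a hidden PoolPart.<guid> folder at its root --
        # the GUIDs below are made-up placeholders.
        import shutil

        src = r"F:\PoolPart.aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # old 8TB disk
        dst = r"G:\PoolPart.11111111-2222-3333-4444-555555555555"   # new 16TB disk, already added to the pool

        # Copy the whole tree; existing directories on the destination are merged.
        shutil.copytree(src, dst, dirs_exist_ok=True)
        print("Copy finished -- remeasure the pool in DrivePool afterwards.")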
  3. Like
    igobythisname reacted to Sirkassad in Replace 8TB drive with 16TB drive   
    I have a pool with two 8TB drives. I am going to upgrade both drives to 16TB, but I cannot add a new 16TB without first removing an 8TB. I wasn't doing any pool duplication, so several days ago I kicked it off, and it is 25% done. Takes forever. My plan was to wait until this is done, then remove one of the 8TB drives and replace it with a 16TB, wait for it to re-write the pool to the 16TB, then remove the second 8TB and add the other 16TB. To me this is the only way, given I cannot have all the drives connected at the same time. Does this make sense?
  4. Like
    igobythisname reacted to Rob Manderson in Moving disks from inside computer case to Probox   
    As long as DrivePool can see the drives, it'll know exactly which pool they belong to. I move drives around regularly between my server and an external disk shelf, and they always reconnect to the correct pools.
  5. Like
    igobythisname reacted to dsteinschneider in Moving disks from inside computer case to Probox   
    Thanks Rob, DrivePool correctly found the drives. Am I correct that if I wanted to re-assign the drive letters of the physical drives it will work the same way?
    EDIT - went ahead and re-assigned drive letters - DrivePool adjusted itself immediately 
  6. Like
    igobythisname reacted to MitchellStringer in I delete files on the Cloud drive, but the changes are not reflected in the software   
    See the image below.
     
    I deleted about 200GB, but CloudDrive is still showing the Google Drive as full and isn't updating; it still shows as 500+GB if I log into Google Drive.
     
    I have tried cleanup, clearing the cache, and re-authorizing (just in case).
     

  7. Like
    igobythisname reacted to Christopher (Drashna) in How can I use StableBit DrivePool with StableBit CloudDrive?   
    Because we've already had a couple of questions about this: in their current forms, StableBit DrivePool already works VERY well with StableBit CloudDrive.
     
    The StableBit CloudDrive disks appear as normal, physical disks. This means that you can add them to a Pool without any issues or workarounds.
     
     
    Why is this important and how does it affect your pool?
    • You can use the File Placement Rules to control what files end up on which drive. This means that you can place specific files on a specific CloudDrive.
    • You can use the "Disk Usage Limiter" to only allow duplicated data to be placed on specific drives, which means you can place only duplicated files on a specific CloudDrive disk.
     
    These are already some very useful tricks for integrating the two products.
    And if anyone else finds some neat tips or tricks, we'll add them here as well.
     
  8. Thanks
    igobythisname reacted to srcrist in New convert! Need help planning :)   
    OK. So, there is a lot here, so let's unpack this one step at a time. I'm reading some fundamental confusion here, so I want to make sure to clear it up before you take any additional steps forward.
    Starting here, which is very important: It's critical that you understand the distinction in methodology between something like Netdrive and CloudDrive, as a solution. Netdrive and rClone and their cousins are file-based solutions that effectively operate as frontends for Google's Drive API. They upload local files to Drive as files on Drive, and those files are then accessible from your Drive--whether online via the web interface, or via the tools themselves. That means that if you use Netdrive to upload a 100MB File1.bin, you'll have a 100MB file called File1.bin on your Google drive that is identical to the one you uploaded. Some solutions, like rClone, may upload the file with an obfuscated file name like Xf7f3g.bin, and even apply encryption to the file as it's being uploaded, and decryption when it is retrieved. But they are still uploading the entire file, as a file, using Google's API.
    If you understand all of that, then understand that CloudDrive does not operate the same way.
    CloudDrive is not a frontend for Google's API. CloudDrive creates a drive image, breaks that up into hundreds, thousands, or even millions of chunks, and then uses Google's infrastructure and API to upload those chunks to your cloud storage. This means that if you use CloudDrive to store a 100MB file called File1.bin, you'll actually have some number of chunks (depending on your configured chunk size) called something like XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-1, XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-2, XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-3, etc, as well as a bunch of metadata that CloudDrive uses to access and modify the data on your drive. Note that these "files" do not correspond to the file size, type, or name that you uploaded, and cannot be accessed outside of CloudDrive in any meaningful way. CloudDrive actually stores your content as blocks, just like a physical hard drive, and then stores chunks of those blocks on your cloud storage. Though it can accomplish similar ends to Netdrive, rClone, or any related piece of software, its actual method of doing so is very different in important ways for you to understand.
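    To make the chunking idea concrete, here is a purely illustrative sketch -- not CloudDrive's actual format, chunk size, or naming scheme -- of what it means to store a drive image as fixed-size, opaquely named chunks:
        # Illustrative only: split a local drive image into fixed-size, opaquely
        # named chunks, the way a block-based solution stores data with a provider.
        # The 10 MiB chunk size and the naming pattern are invented for this example.
        import uuid
        from pathlib import Path

        CHUNK_SIZE = 10 * 1024 * 1024       # 10 MiB per chunk (example value)
        drive_id = uuid.uuid4()             # stands in for the drive's identifier

        image = Path("drive_image.bin")     # the local "drive" being uploaded
        out_dir = Path("uploaded_chunks")
        out_dir.mkdir(exist_ok=True)

        with image.open("rb") as f:
            index = 1
            while block := f.read(CHUNK_SIZE):
                # Each stored object is just "<drive id>-chunk-<n>"; nothing about the
                # original file names, sizes, or types is visible to the provider.
                (out_dir / f"{drive_id}-chunk-{index}").write_bytes(block)
                index += 1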
    So what, exactly, does this mean for you? It means, for starters, that you cannot simply use CloudDrive to access information that is already located in your cloud storage. CloudDrive only accesses information that has been converted to the format that it uses to store data, and CloudDrive's format cannot be accessed by other applications (or Google themselves). Any data that you'd like to migrate from your existing cloud storage to a CloudDrive volume must be downloaded and moved to the CloudDrive volume just as it would need to be if you were to migrate the data to a new physical drive on your machine--for the same reasons.
    It may be helpful to think of CloudDrive as a virtual machine drive image. It's the same general concept. Just as you would have to copy data within the virtual machine in order to move data to the VM image, you'll have to copy data within CloudDrive to move it to your CloudDrive volume.
    There are both benefits and drawbacks to using this approach:
    Benefits
    • CloudDrive is, in my experience, faster than rClone and its cousins, particularly around jumping to granular data locations, as you would for, say, jumping to a specific location in a media file.
    • CloudDrive stores an actual file system in the cloud, and that file system can be repaired and maintained just like one located on a physical drive. Tools like chkdsk and Windows' own in-built indexing systems function on a CloudDrive volume just as they will on your local drive volumes. In your case this means that Plex's library scans will take seconds and will not run you into Google's API limitations.
    • CloudDrive's block-based storage means that it can modify portions of files in place, without downloading the entire file and re-uploading it.
    • CloudDrive's cache is vastly more intelligent than those implemented by file-based solutions, and is capable of, for example, storing the most frequently accessed chunks of data, such as those containing the metadata in media files, rather than whole media files. This, like the above, also translates to faster access times and searches.
    • CloudDrive's block-based solution allows for a level of encryption and data security that other solutions simply cannot match. Data is completely AES encrypted before it is ever even written to the cache, and not even Covecube themselves can access the data without your key. Neither your cloud provider, nor unauthorized users and administrators for your organization, can access your data without your consent.
     
    Drawbacks (read carefully)
    • CloudDrive's use of an actual file system also introduces vulnerabilities that file-based solutions do not have. If the file system data itself becomes corrupted on your storage, it can affect your ability to access the entire drive--in the same way that a corrupted file system can cause data loss on a physical drive. The most common sorts of corruption can be repaired with tools like chkdsk, but there have been incidents caused by Google's infrastructure that caused massive data loss for CloudDrive users in the past--and there may be more in the future, though CloudDrive has implemented redundancies and checks to prevent them going forward. Note that tools like testdisk and recuva can be used on a CloudDrive volume just as they can on a physical volume in order to recover corrupt data, but this process is very tedious and generally only worth using for genuinely critical and irreplaceable data. I don't personally consider media files to be critical or irreplaceable, but each user must consider their own risk tolerance.
    • A CloudDrive volume is not accessible without CloudDrive. Your data will be locked into this ecosystem if you convert to CloudDrive as a solution.
    • Your data will also only be accessible from one machine at a time. CloudDrive's caching system means that corruption can occur if multiple machines were to access your data at once, and, as such, it will not permit the volume to be mounted by multiple instances simultaneously.
    • And, as mentioned, all data must be uploaded within the CloudDrive infrastructure to be used with CloudDrive. Your existing data will not work as-is.
    So, having said all of that, before I move on to helping you with your other questions, let me know that you're still interested in moving forward with this process. I can help you with the other questions, but I'm not sure that you were on the right page with the project you were signing up for here. rClone and NetDrive both also make fine solutions for media storage, but they're actually very different beasts than CloudDrive, and it's really important to understand the distinction. Many people are not interested in the additional limitations. 
  9. Thanks
    igobythisname reacted to srcrist in Clouddrive, Gsuite unlimited, and Plex   
    It says Gsuite in the title. I'm assuming that means Google Drive. Correct me if all of the following is wrong, Middge. 
     
    Hey Middge,
    I use a CloudDrive for Plex with Google Drive myself. I can frequently do 5-6 remux quality streams at once. I haven't noticed a drop in capacity aside from Google's relatively new upload limits. 
    Yes. But this is going to require that you remake the drive. My server also has a fast pipe, and I've also raised the minimum download to 20MB as well. I really haven't noticed any slowdown in responsiveness because the connection is so fast, and it keeps the overall throughput up.
    This is fine. You can play with it a bit. Some people like higher numbers like 5 or 10 MB triggers, but I've tried those and I keep going back to 1 MB as well, because I've just found it to be the most consistent, performance-wise, and I really want it to grab a chunk of *all* streaming media immediately.
    This is way too low, for a lot of media. I would raise this to somewhere between 150-300MB. Think of the prefetch as a rolling buffer. It will continue to fill the prefetch as the data in the prefetch is used. The higher the number, then, the more tolerant your stream will be to periodic network hiccups. The only real danger is that if you make it too large (and I'm talking like more than 500MB here) it will basically always be prefetching, and you'll congest your connection if you hit like 4 or 5 streams. 
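    As a rough sanity check on those prefetch sizes (the bitrates below are example figures, not anything from the original post), a 300MB rolling buffer covers roughly two minutes of a typical 20 Mbps remux, and proportionally more for lighter streams:
        # Back-of-the-envelope: seconds of playback a prefetch buffer covers at a
        # given stream bitrate. Bitrates are illustrative examples only.
        def buffer_seconds(prefetch_mb: float, stream_mbps: float) -> float:
            return prefetch_mb * 8 / stream_mbps   # MB -> megabits, divided by Mb/s

        for bitrate in (8, 20, 40):                # e.g. 1080p web, typical remux, heavy 4K remux
            print(f"{bitrate} Mbps stream: 300 MB prefetch ~ {buffer_seconds(300, bitrate):.0f} s of cushion")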
    I would drop this to 30, no matter whether you want to go with a 1MB, 5MB, or 10MB trigger. 240 seconds almost makes the trigger amount pointless anyway--you're going to hit all of those benchmarks in 4 minutes if you're streaming most modern media files. A 30 second window should be fine.
    WAAAAY too many. You're almost certainly throttling yourself with this, particularly with the smaller-than-maximum chunk size, since it already has to make more requests than if you were using 20MB chunks. I use three CloudDrive drives in a pool (a legacy arrangement from before I understood things better; don't do it, just expand a single CloudDrive with additional volumes), and I keep them all at 5 upload and 5 download threads. Even if I had a single drive, I'd probably not exceed 5 upload, 10 download. 20 and 20 is *way* too high and entirely unnecessary with a 1Gbps connection.
    These are all fine. If you can afford a larger cache, bigger is *always* better. But it isn't necessary. The server I use only has 3x 140GB SSDs, so my caches are even smaller than yours and still function great. The fast connection goes a long way toward making up for a small cache size...but if I could have a 500GB cache I definitely would. 