Covecube Inc.

srcrist

Members
  • Content Count: 426
  • Joined
  • Last visited
  • Days Won: 32

srcrist last won the day on September 23

srcrist had the most liked content!

1 Follower

About srcrist
  • Rank: Advanced Member

Recent Profile Visitors

770 profile views
  1. Note that deleting files off of the drive and deleting the drive itself are not the same thing. Simply removing files from the drive's file system will not remove the drive data from your provider, because a drive of X capacity still exists, just as removing files from a physical drive does not cause it to physically shrink. If you want to remove the storage files located on the provider, then you have to either shrink the drive or destroy it. Also note that, with CloudDrive, the "files" stored on the provider are not representative of the files stored on the drive in the first place. They are chunks of data that comprise the drive structure that CloudDrive uses to store information (see the sketch below). Once they exist on the provider, there is no need to delete them unless the drive structure itself changes. CloudDrive will simply overwrite and modify them as data is added and removed.
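     To make the chunk idea concrete, here is a minimal sketch in Python of how a chunked block store maps drive offsets onto fixed-size chunk files. This is an illustrative model only, not CloudDrive's actual on-provider format; the chunk size and function name are assumptions for the example.

        CHUNK_SIZE = 20 * 1024 * 1024  # 20 MB chunks, assumed for illustration

        def locate(offset: int) -> tuple[int, int]:
            """Map a drive byte offset to (chunk index, offset within that chunk)."""
            return offset // CHUNK_SIZE, offset % CHUNK_SIZE

        # Deleting a file only rewrites filesystem metadata at some offsets;
        # the chunk files backing the drive's full capacity stay on the provider.
        print(locate(50 * 1024 * 1024))  # -> (2, 10485760)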
  2. At least there is a silver lining. That sucks that CenturyLink's service and hardware is so locked down and convoluted, though. I use a wired router and separate wireless access points, personally, and that would be a nightmare for my setup. Hope it works out.
  3. Generally speaking, even if other router functions are locked down by the ISP on their hardware, consumers typically have access to change the router to bridge mode themselves. Are you sure that you don't have access to that via the web configuration for the router? Is the fiber gateway also integrated into the router, or can you just replace it with your own and bypass their hardware entirely?
  4. There really isn't anything unique about CloudDrive's network usage that should impact the router that you use over and above any other high-use network application. So any general router recommendations should be fine. I personally rely on Small Net Builder for reviews and ratings for home network hardware. Might be worth a look for you.
  5. There will definitely be a performance hit, but it won't necessarily be prohibitive. The extent to which it matters will depend largely on your specific use case, and the issues revolve predominantly around the same areas where hard drives perform more poorly than their solid-state cousins in general. For example: if you have a drive wherein you are constantly writing and reading simultaneously, an SSD is going to make a profound difference. If you have a drive where you're predominantly just reading contiguous larger files, the SSD shouldn't have as big an impact. As a general rule, just imagine that your CloudDrive *is* whatever sort of drive you use for the cache, and the limitations will be similar.
  6. Writes to your CloudDrive drive will be throttled when there is less than (I believe) 5GB remaining on the cache drive. This is normal and intended functionality. Just get used to throttled writes until all of the data is duplicated. Your local write speed is obviously faster than your upload. Completely expected. You can install whatever you want to a pool.
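     If you want to see when that threshold would kick in, here is a quick check in Python. The ~5GB figure is the approximate value from the post, and the cache path is an assumption; substitute your own.

        import shutil

        THROTTLE_THRESHOLD = 5 * 1024**3  # ~5 GB, approximate figure from the post
        CACHE_DRIVE = "C:\\"              # assumed path of your cache drive

        free = shutil.disk_usage(CACHE_DRIVE).free
        if free < THROTTLE_THRESHOLD:
            print("Expect CloudDrive to throttle writes until the cache drains.")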
  7. A weird question

     If I am reading this correctly, you want to be able to upload the data to your cloud storage remotely, and then access it later locally. You could do this with CloudDrive by using, as mentioned above, a (Windows) VPS to upload the data, and then detaching the CloudDrive drive from the VPS and reattaching it locally. But this is tedious. Since CloudDrive only allows your drive to be mounted from a single machine at any given time, it really isn't the best tool for what you're trying to accomplish. A tool like rClone, which simply accesses Google Drive's native file structure, would be of much more use, I think. You could use any cheap VPS (and several other services, for that matter) to add files to your drive, and then use rClone to access them locally as a folder. I suspect that your question is, overall, based on a misconception of how CloudDrive works. It will not access the files that exist on your Google Drive space, only its own storage files that are contained on your Drive.
  8. Now that I read this again: make sure that duplication is enabled on the master pool, not the local subpool. Duplication enabled on the subpool will only duplicate to drives contained within that pool.
  9. I'm not 100% sure what the issue is, as I think it could be multiple things based on this information. One common mistake is to not enable duplication on both pools. Both pools need to be allowed to accept duplicated data, while only the local pool should be allowed to accept unduplicated data. If only one pool is allowed to contain duplicates, there will be no pool to duplicate to. So maybe check that setting in your balancers. Balancer logic can get pretty complicated, so I may not be able to be as helpful as I'd like with respect to your specific aims and configuration. But it should be relatively simple. You need two pools (local and cloud), both configured to accept duplicated data, the local pool configured to accept your new (unduplicated) data, and 2x duplication enabled on the master pool in order to duplicate between them.
  10. Yeah, sure thing. The only other thing that I would point out is that chkdsk has a hard limit of 60TB per volume. So you'll probably not want to exceed that limit. A single CloudDrive drive can, however, be partitioned into multiple volumes if you need more space, and those volumes can be recombined using DrivePool to create a unified filesystem that can still be error checked and repaired by chkdsk.
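     As a rough guide to volume counts, here is the arithmetic in Python. The 60TB figure is the chkdsk limit mentioned above; the example drive sizes are arbitrary.

        import math

        CHKDSK_VOLUME_LIMIT_TB = 60  # per-volume chkdsk limit noted above

        for drive_tb in (50, 100, 256):
            volumes = math.ceil(drive_tb / CHKDSK_VOLUME_LIMIT_TB)
            print(f"{drive_tb} TB drive -> {volumes} volume(s) of <= 60 TB each")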
  11. This should not be the case. Check to make sure that this isn't a Plex configuration issue. If you have Plex set to automatically scan on a filesystem change, it should discover additions within minutes at most. I have noticed that this feature does not seem to function correctly unless a full scan has been completed since the Plex server software was last started, though. I think it's just buggy, but it's a Plex issue. In any case, this is not a CloudDrive issue, and CloudDrive doesn't actually have to download the whole file to the cache for the file to be visible to applications or for Plex to identify it. If it isn't a library scanning issue, it may be that you have Plex set to generate video thumbnails when new media is added, in which case the analysis won't be "complete" until it has done so, which requires that Plex read the entire file to generate those thumbnails. I personally recommend only using chapter thumbnails, at most, if you want to store your media in the cloud. This feature can be turned on or off and adjusted in the Plex settings.
  12. I'm having a little trouble parsing exactly what your overall goal is here, but, if I'm reading you correctly, I should note that the CloudDrive cache is a single consolidated file, and storing it on a DrivePool pool wouldn't really provide any benefit over simply storing it on a single drive. You can do so using the advanced settings for CloudDrive, but it really doesn't add anything: it still wouldn't be able to scale any larger than a single drive, and it can't be duplicated to increase performance. Unfortunately, a slow upload is really just a hard limitation on the functionality of CloudDrive. A larger cache drive is a band-aid, but, in the long term, there really isn't any way to add data faster than your upload can send it to your provider (the arithmetic below makes this concrete).
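     A quick back-of-the-envelope in Python shows why the cache only delays the problem; the upload speed and backlog size here are assumed example numbers.

        upload_mbps = 20   # assumed upload speed, in megabits per second
        backlog_gb = 500   # assumed data sitting in the cache awaiting upload

        seconds = backlog_gb * 8_000 / upload_mbps  # GB -> megabits, then divide by rate
        print(f"~{seconds / 3600:.1f} hours to drain the backlog")  # ~55.6 hours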
  13. Either of these options is workable, depending on your needs. It's really up to you. You're probably just overthinking this. Just use whatever settings you need to get a drive of the size you require that can serve the data you store.

     So you'll want a cluster size that can accommodate your volume size, depending on the maximum size you'd like for your volume. The larger the files you store, the more efficient a larger chunk size will be. If you have a bunch of files larger than 20MB, I'd probably just suggest using 20MB chunks. If most of your files are *smaller*, then it will be more efficient to use smaller chunks. The larger the chunks, the larger the maximum size of the CloudDrive drive as well: a drive with 20MB chunks can expand up to a maximum of 1 petabyte (see the estimate below). You'll just need to play with the prefetcher settings to find something that works for your specific use case, hardware, and network. Those settings are different for everyone.

     You will want a nested pool setup with your CloudDrive/CloudDrive pool (whichever you choose to create) in a master pool with your existing DrivePool pool. You can then set any balancing settings you like between those two in the master pool. There are a lot of ways to handle the specific balancing configuration from there, depending on what, exactly, you want to accomplish. But it sounds to me like you have the basic concept right, yes.

     You won't have to. If you use DrivePool and nest a few pools, as you're planning here, you'll still have one mount point for the master pool to point your applications to. Everything else will be transparent to the OS and your applications. That is: you will automatically be accessing both the cloud and local duplicates simultaneously, and DrivePool will pull data from whichever it can access when you request it (using the hierarchy coded into the service, which is smart enough to prioritize faster storage).
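     For a sense of how chunk size bounds drive size, here is a linear extrapolation in Python from the one data point above (20MB chunks allow up to 1 petabyte). Treat the other rows as estimates, not official figures.

        REFERENCE_CHUNK_MB = 20  # chunk size from the post
        REFERENCE_MAX_TB = 1000  # 1 petabyte maximum at that chunk size

        for chunk_mb in (1, 10, 20):
            max_tb = REFERENCE_MAX_TB * chunk_mb / REFERENCE_CHUNK_MB
            print(f"{chunk_mb} MB chunks -> ~{max_tb:.0f} TB maximum drive size")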
  14. Your upload threshold is the point at which CloudDrive will upload new data to the cloud. It isn't a limit, it's a delay. I think you're sorta confusing what that setting does. Your upload threshold says "start uploading new data to the cloud when I have X amount of new data, or when it has been Y minutes since the last upload, whichever comes first." It is not a limit that says, "upload X amount of data, then stop." The upload throttling, on the other hand, is the setting that will limit your upload speed. And it works fine to keep your usage below Google's API limits. So I'm not sure what you mean that it doesn't really work. I use it every day. A throttle limit of roughly 70 Mbps will keep you at Google's 750GB/day limitation.
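     The arithmetic behind that throttle figure, as a quick Python check: 70 Mbps sustained around the clock lands right at the cap, and the exact break-even is about 69.4 Mbps, which is why "roughly 70" works in practice when real throughput sits a bit under the throttle ceiling.

        def gb_per_day(mbps: float) -> float:
            return mbps / 8 * 86_400 / 1000  # Mb/s -> MB/s -> MB/day -> GB/day

        print(f"{gb_per_day(70):.0f} GB/day at 70 Mbps")         # ~756 GB/day
        print(f"{750 * 1000 * 8 / 86_400:.1f} Mbps break-even")  # ~69.4 Mbps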