Everything posted by srcrist

  1. This should not be the case. Check to make sure that this isn't a Plex configuration issue. If you have Plex set to automatically scan on a filesystem change, it should discover additions within minutes at most. I have noticed that this feature does not seem to function correctly unless a full scan has been completed since the Plex server software was last started, though. I think it's just buggy, but it's a Plex issue. In any case, this is not a CloudDrive issue, and CloudDrive doesn't actually have to download the whole file to the cache for the file to be visible to applications or for Plex to identify it. If it isn't a library scanning issue, it may be that you have Plex set to generate video thumbnails when new media is added, in which case the analysis won't be "complete" until it has done so, and that requires reading the entire file. I personally recommend using chapter thumbnails at most if you want to store your media in the cloud. This feature can be turned on or off and adjusted in the Plex settings.
  2. I'm having a little trouble parsing exactly what your overall goal is here, but, if I'm reading you correctly, I should note that the CloudDrive cache is a single consolidated file, and storing it on a DrivePool pool wouldn't provide any real benefit over simply storing it on a single drive. You can do so using the advanced settings for CloudDrive, but the cache still can't scale beyond a single drive, and it can't be duplicated to increase performance. Unfortunately, a slow upload is really just a hard limitation on the functionality of CloudDrive. A larger cache drive is a band-aid; in the long term, there is no way to add data faster than your upload can send it to your provider.
  3. Either of these options is workable, depending on your needs. It's really up to you. You're probably just overthinking this. Just use whatever settings you need to get a drive of the size you require that can serve the data you store. You'll want a cluster size that can accommodate the maximum size you'd like for your volume.

     The larger the files you store, the more efficient a larger chunk size will be. If you have a bunch of files larger than 20MB, I'd probably just suggest using 20MB chunks. If most of your files are *smaller*, then smaller chunks will be more efficient. The larger the chunks, the larger the maximum size of the CloudDrive drive as well: a drive with 20MB chunks can expand up to a maximum of 1PB. You'll just need to play with the prefetcher settings to find something that works for your specific use case, hardware, and network. Those settings are different for everyone.

     You will want a nested pool setup, with your CloudDrive/CloudDrive pool (whichever you choose to create) in a master pool alongside your existing DrivePool pool. You can then set any balancing settings you like between those two in the master pool. There are a lot of ways to handle the specific balancing configuration from there, depending on what, exactly, you want to accomplish. But it sounds to me like you have the basic concept right, yes.

     You won't have to. If you use DrivePool and nest a few pools, as you're planning here, you'll still have one mount point for the master pool to point your applications to. Everything else will be transparent to the OS and your applications. That is: you will automatically be accessing both the cloud and local duplicates simultaneously, and DrivePool will pull data from whichever it can access when you request the data (using the hierarchy coded into the service, which is smart enough to prioritize faster storage).
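     If it helps to see why the cluster size matters, here's a quick back-of-the-envelope sketch in Python, using the usual NTFS ceiling of roughly 2^32 clusters per volume. Treat it as an illustration, not an official sizing chart:

         # Rough NTFS limit: a volume can address about 2^32 clusters,
         # so the cluster size chosen at format time caps the volume size.
         NTFS_MAX_CLUSTERS = 2**32  # approximate; the exact limit is 2^32 - 1

         for kb in (4, 16, 64):
             max_bytes = kb * 1024 * NTFS_MAX_CLUSTERS
             print(f"{kb:>2}KB clusters -> ~{max_bytes / 1024**4:,.0f}TB max volume")

     That works out to roughly 16TB for 4KB clusters, 64TB for 16KB, and 256TB for 64KB, which is why the cluster size you pick has to accommodate the volume size you eventually want.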
  4. Your upload threshold is the point at which CloudDrive will upload new data to the cloud. It isn't a limit, it's a delay. I think you're sorta confusing what that setting does. Your upload threshold says "start uploading new data to the cloud when I have X amount of new data, or when it has been Y minutes since the last upload--whichever comes first." It is not a limit that says "upload X amount of data, then stop." The upload throttling, on the other hand, is the setting that will limit your upload speed. And it works fine to keep your usage below Google's API limits, so I'm not sure what you mean when you say that it doesn't really work. I use it every day. A throttle limit of roughly 70mbps will keep you below Google's 750GB/day limitation.
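     To show where that ~70mbps figure comes from, here's the quick math in Python (assuming decimal gigabytes, which is how I understand Google to count the quota):

         # Sustained upload rate that exactly fills Google's 750GB/day quota.
         quota_bytes = 750 * 1000**3       # 750GB per day
         seconds_per_day = 24 * 60 * 60

         mbps = quota_bytes * 8 / seconds_per_day / 1000**2
         print(f"~{mbps:.1f}mbps")         # ~69.4mbps, hence the ~70mbps throttle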
  5. Once you've reauthorized the drive or reestablished the connection after modifying the JSON file, you should be using the new key. There shouldn't be anything else to do. As far as performance is concerned, I've never seen a difference between using a personal key and the default key. Google may have changed something since the last time I used a personal key, but there was no difference whatsoever. Do you see any API errors in your logs? If there is something API related impacting throughput, it should be in the logs. For what it's worth, there really isn't a compelling reason to use a personal API key over the default key. I'd probably just suggest switching back to the default key if it is, for some reason, giving you issues. You can always switch to your personal key when and if the default key actually hits capacity or experiences some other issue. But that hasn't been happening any time recently.
  6. How much cache have you given it? If there is room, and there isn't a need to replace the content in the local cache, I don't believe that CloudDrive will flush the cache just for the sake of flushing it. There really isn't any need for it to do so. There is no reason, for example, that the local cache would ever be stale, since the only way to modify the drive is via CloudDrive itself, which is always aware of the content residing in the cache vs the provider. If your previous frame of reference is something like rClone, it has a cache expiry because its cache can be stale relative to the provider. That's not the case here. The local cache only ever holds content that mirrors the provider--data that has already been uploaded or data that has been fetched--plus any new data that has yet to be uploaded. If you want to force it to clear the cache entirely for testing purposes, I would suggest either detaching and reattaching the drive, or setting the cache size to something exceptionally low so there isn't room locally to cache even a single large file.
  7. I already responded to your problem in the other unrelated Plex thread that you bumped to ask about it. I think it would be nice if you stopped bumping old threads about it, and if you simply submitted a support request to Stablebit via the website. They'll get you sorted.
  8. I don't have an answer for you, and I don't think Box is a particularly popular platform that a lot of users would be familiar with. You'd almost certainly be better off submitting this question directly to Christopher and Alex via the contact form on the web page.
  9. I don't believe that change would impact this limitation. The limit on Team/Shared drives was not a per-folder limitation in the first place.
  10. It probably would have been better to submit a new thread for this question, as it only tangentially relates to the topic of this thread, and the Plex information in this particular thread is a bit outdated. Anyone stumbling on this thread for up-to-date Plex information will likely want to continue their search.

      Now, that being said, the errors that you're seeing are data integrity check errors. CloudDrive is telling you that the data that it's getting from Google does not match the data that it expected to receive. That leaves us with three possibilities: 1. CloudDrive has a bug and is, for some reason, expecting the wrong data; 2. Google is returning the wrong data when CloudDrive requests it; or 3. the data on Google's servers is genuinely corrupt. My intuition says that 2 is the most likely--both because a bug like that should be more widely reported, and because genuine data corruption shouldn't intermittently resolve itself--but determining which of them is actually the problem will require that you submit a ticket to the official support channel at https://stablebit.com/Contact so that they can troubleshoot with you.

      Those are not, notably, API ban errors. And CloudDrive is effectively immune to the API bans that tools like rClone get, simply by virtue of its method of operation. So whatever is going on here is likely unrelated to the use of the API, unless Google has changed something without warning (again).
  11. As mentioned, the example image on the wiki DOES show the quotes, as does the default advanced config file from which the image is taken. But, that being said, if you think a more explicit mention is warranted, you'll want to send it to Stablebit via the contact form on the website.
  12. You're missing the quotation marks around the client ID and secret. That's why it's treating the value as a number. Notice that the example and the screencap on the instructional page you linked both have quotes.
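      Just to illustrate the difference (the key name here is a placeholder--copy the real names from the example in the config file, not from here):

          "ClientId": 1234567890            <- wrong: parsed as a number
          "ClientId": "1234567890-abc123"   <- right: a quoted JSON string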
  13. Submitting a ticket was the right call, here. It may be a bug, it may be a conflict with another service. It may even be a hardware or Windows issue. So you'll need to work with support via the ticket system. In any case, you should go ahead and download and run the Troubleshooter (https://wiki.covecube.com/StableBit_Troubleshooter), which will gather a bunch of information and submit it for your ticket. Just enter the ticket number you were given in the requisite field when you run the tool. Christopher and Alex will let you know if they need anything else from there.
  14. I have active instances of both rClone and CloudDrive myself as well. I use an rClone mirror for my data, actually, just as insurance against data loss and to provide flexibility. CloudDrive is my daily driver storage solution, though. I've simply found that it's faster and more compatible, with fewer quirks. This is the main thread that I can think of where the Team Drive issue has been addressed: The biggest technical issues mentioned there are drive integrity (because Team Drives allow simultaneous access) and the hard limits on file and folder counts.
  15. Well, you can delete the service folder, but I wouldn't recommend it. Simply choosing to reauthorize the drive, or removing the connection from your providers list in the UI and re-adding it, should bring up the window to choose your credentials again.
  16. Sure thing. Also, just double-check very carefully that you are actually using the same account that created the drive to authorize it now. You'll be authorizing with whatever Google account is logged in to your browser. As someone who uses multiple Google accounts for a few things, I have received this same error because I thought that I had authorized with one account but didn't notice that I was actually signed in with another.
  17. The error, as you can probably deduce, is simply saying that it cannot locate the data for the volume when logging in with the provided credentials. But if you're absolutely sure that you're using the same Google credentials that the data was created with, and that the data is visible via other tools like the Google web UI, you might have just stumbled on a bug of some sort. Submit a ticket to the official support channel here: https://stablebit.com/Contact and they'll get you sorted.
  18. When drives are first created, there will be some amount of file system data uploaded to your cloud provider. It should not be much more than a few hundred MB per drive, though, and should only take a few minutes to upload for most people. It's possible that something else, perhaps even Windows itself, is creating some sort of data on the drive. Make sure that Windows' indexing service, page file, and recycle bin are disabled on the CloudDrive volumes.

      In any case, if you are just getting started, I would highly discourage you from using eight separate CloudDrive drives unless there is some very specific reason that you need individual drives instead of one large drive split into multiple volumes. CloudDrive can be surprisingly resource intensive with respect to caching and bandwidth, and the difficulty of managing those things compounds pretty quickly with each additional drive you tax your local resources with. If you have, say, eight separate SSDs to use as cache drives and a 1gbps connection with little concern for local bandwidth management, then by all means, feel free. Otherwise, be aware that a single CloudDrive can be resized up to 1PB total and can be partitioned into multiple volumes just like a local drive, and that CloudDrive itself provides bandwidth management tools within the UI to throttle a single drive.
  19. We should clarify some terms here. A partition, or volume, is different from the size of the drive. You can expand a CloudDrive drive up to a maximum of 1PB irrespective of the cluster size. But the cluster size will limit the size of the individual volumes (or partitions) on the drive--with 64KB clusters, for example, an NTFS volume tops out at 256TB. A CloudDrive can have multiple volumes, so if you're running out of space, you might just have to add a second volume to the drive. Note that multiple volumes can then be merged to recreate a single unified file system with tools like StableBit's own DrivePool.
  20. And to be very clear: though you cannot add your drive to multiple pools, you can add your pool to another pool, which is the proposal here. Thus, you can have, say, your pool of local drives and your pool of cloud provider drives both together in a master pool which can be used to duplicate between both subpools.
  21. No. I'm sorry. I'm not trying to be difficult; I'm just not understanding the problem. If you want to nest your existing pool, you can just add it to a new pool and move the data into the new PoolPart folder that will be created within the pool you already have. If you want to, for some reason, break your existing pool apart, delete the pool you have, and create a new pool to nest with another pool... you'd just move all of the data out of your PoolPart folder, remove the drive from your existing pool, create the new one, and move the data to the PoolPart folder that you want it to be in. The major point here is that since DrivePool works at the file system level, all of the data is accessible (and modifiable) as regular files and folders on your drive. You can move and shuffle that around however you choose, simply using Windows Explorer or another file manager. No matter what structure you want your pools to have, it's just a matter of moving data from one folder to another. It's entirely up to you.
  22. I guess I'm really not sure what you mean. When you add any drive to a pool, a hidden folder called PoolPart-XXXXXXXXXX is created on that drive. Any data on that drive that you want to be accessible from within the pool just needs to be placed in that folder. If you're nesting pools, you'll have a hidden PoolPart folder within a hidden PoolPart folder, and you'll just need to move all of the data there (see the sketch below). In every case, getting data into a given pool simply requires moving it into the PoolPart folder for that pool on that drive. Any change you want to make is just a matter of moving data at the file system level into or out of the PoolPart folder for a given pool. You can restructure any pool you have completely at will.
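      To make that concrete, here's roughly what the nesting looks like on one of the underlying physical disks (the PoolPart IDs are made up; yours will differ):

          D:\
            PoolPart-aaaa1111\          <- hidden; belongs to the local pool
              PoolPart-bbbb2222\        <- hidden; belongs to the master pool
                Media\                  <- files here appear in the master pool

      A file placed directly inside PoolPart-aaaa1111 but outside PoolPart-bbbb2222 shows up only in the local pool; move it one level deeper and it appears in the master pool as well.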
  23. I use CloudDrive for a few things, but the largest drive, which is the one that I just gave you settings for, is for media, yes. It's basically constantly uploading as well. I am not currently using my own API keys for that drive; it shouldn't really impact this. There isn't really an "optimal" prefetcher setting--it just depends on your needs.
  24. Submit a ticket via the support contact form. It sounds like this might be resolved as simply as force mounting the drive after clearing the service folder, but I don't want to tell you anything that might cause you data loss. The form is the official support channel. It's located here: https://stablebit.com/Contact