Covecube Inc.

srcrist

Members
  • Content Count

    466
  • Joined

  • Last visited

  • Days Won

    34

Posts posted by srcrist

  1. 11 hours ago, kye said:

    Well, the problem is that, when Plex discovers the new movie, I have to wait until, I believe, it downloads the whole movie to the cache so it can identify it. I see StableBit start downloading a lot, then it stops and Plex can successfully analyze the movie.

    This should not be the case. Check to make sure that this isn't a Plex configuration issue. If you have Plex set to automatically scan on a filesystem change, it should discover additions within minutes at the most. I have noticed that this feature does not seem to function correctly unless a full scan has been completed since the Plex server software was last started, though. I think it's just buggy, but it's a Plex issue.

    In any case, though, this is not a CloudDrive issue, and CloudDrive doesn't actually have to download the whole file to the cache in order for the file to be visible to applications or for Plex to identify it.

    If it isn't a library scanning issue, it may be that you have Plex set to generate video thumbnails or something when new media is added, in which case the analysis won't be "complete" until it has done so, which requires that it read the entire file to generate those thumbnails. I personally recommend only using chapter thumbnails, at the most, if you want to store your media in the cloud. This feature can be turned on or off and adjusted in the Plex settings.

  2. I'm having a little trouble parsing exactly what your overall goal is here, but, if I'm reading you correctly, I should note that the CloudDrive cache is a single consolidated file, so storing it on a DrivePool pool doesn't really provide any benefit over storing it on a single drive. You can do so using the advanced settings for CloudDrive, but the cache still can't scale any larger than a single drive, and it can't be duplicated to increase performance.

    Unfortunately, a slow upload is really just a hard limitation on the functionality of CloudDrive. A larger cache drive is a band-aid, but, in the long term, there really isn't any way to add data faster than your upload can send it to your provider.

  3. 2 hours ago, ryan74 said:

    1. Do I create 1 big GDrive? +-100TB or more
    2. Create smaller ones and put them in a DrivePool of their own?

    Either of these options is workable, depending on your needs. It's really up to you.

     

    2 hours ago, ryan74 said:

    Section I'm not so sure on when creating a CloudDrive:

    You're probably just overthinking this. Just use whatever settings you need to get a drive of the size you require, that can serve the data you store. You'll want a cluster size that can accommodate the maximum size you'd like for your volumes. The larger the files you store, the more efficient a larger chunk size will be. If you have a bunch of files larger than 20 MB, I'd probably just suggest using 20 MB chunks. If most of your files are *smaller*, then it will be more efficient to use smaller chunks. The larger the chunks, the larger the maximum size of the CloudDrive drive as well; a drive with 20 MB chunks can expand up to a maximum of 1 PB. You'll just need to play with the prefetcher settings to find something that works for your specific use case, hardware, and network. Those settings are different for everyone.
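    For a rough sense of how cluster size caps volume size, here's a quick back-of-the-envelope sketch. It assumes NTFS's usual ceiling of roughly 2^32 clusters per volume; CloudDrive's own chunk-size-to-drive-size limit is a separate, software-defined cap.

    ```python
    # Illustrative only: approximate NTFS ceiling of ~2^32 clusters per volume.
    # CloudDrive's chunk-size limit on total drive size is separate from this.
    MAX_CLUSTERS = 2**32

    for cluster_kib in (4, 8, 16, 32, 64):
        max_bytes = cluster_kib * 1024 * MAX_CLUSTERS
        print(f"{cluster_kib:>2} KB clusters -> max volume ~{max_bytes / 1024**4:.0f} TB")

    # 4 KB clusters  -> ~16 TB per volume
    # 64 KB clusters -> ~256 TB per volume
    ```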

    2 hours ago, ryan74 said:

    I take it that once this is set up, the next step will be to create a hybrid pool with localpool + cloudpool and set up folder duplication on the cloudpool, using the Drive Usage Limiter... Is that correct?

    You will want a nested pool setup with your CloudDrive/CloudDrive Pool (whichever you choose to create) in a master pool with your existing DrivePool. You can then set any balancing settings you like between those two in the master pool. There are a lot of ways to handle the specific balancing configuration from there, depending on what, exactly, you want to accomplish. But it sounds to me like you have the basic concept right, yes.

     

    2 hours ago, ryan74 said:

    *BTW is it possible to add local and cloud path directories to a Plex library, with the same data?

    You won't have to. If you use DrivePool and nest a few pools, as you're planning here, you'll still have one mount point for the master pool to point your applications to. Everything else will be transparent to the OS and your applications.

    That is: you will automatically be accessing both the cloud and local duplicates simultaneously, and DrivePool will pull data from whichever it can access when you request the data (using the hierarchy coded into the service, which is smart enough to prioritize faster storage).

  4. 5 hours ago, GdriveUnknown said:

    I'm also hitting the limits daily, as I just got gigabit fiber. I'm trying to back up 15TB to my Gdrive but I've only gotten up to 3TB so far. What would be the best way to go about this? After the backup I should be fine, as I never upload more than 20GB-50GB daily. I tried entering an upload throttling amount but that doesn't really work. I've tried upload threshold but that only goes to 10mb. I tried to enter 740GB, but it says the max is 10mb. Do I put 1440 minutes in a day? Do I change the size of my cache? I've set that to 1TB.

    Your upload threshold is the point at which CloudDrive will upload new data to the cloud. It isn't a limit, it's a delay. I think you're sorta confusing what that setting does. Your upload threshold says "start uploading new data to the cloud when I have X amount of new data, or it has been Y minutes since the last upload--whichever comes first." It is not a limit that says, "upload X amount of data, then stop."
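    To put that decision rule another way, here's a minimal sketch of the behavior described above. This is illustrative only, not CloudDrive's actual code, and the names are made up:

    ```python
    # Hypothetical sketch of the "whichever comes first" rule described above.
    def should_start_upload(pending_bytes, minutes_since_last_upload,
                            threshold_bytes, threshold_minutes):
        # Upload as soon as enough new data has accumulated OR enough time has
        # passed since the last upload, whichever happens first.
        return (pending_bytes >= threshold_bytes
                or minutes_since_last_upload >= threshold_minutes)

    # Example: with a 10 MB / 15 minute threshold, even a 2 MB write gets
    # uploaded once 15 minutes have elapsed since the last upload.
    print(should_start_upload(2_000_000, 16, 10_000_000, 15))  # True
    ```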

    The upload throttling, on the other hand, is the setting that will limit your upload speed. And it works fine to keep your usage below Google's API limits. So I'm not sure what you mean when you say that it doesn't really work. I use it every day. A throttle limit of roughly 70 Mbps will keep you below Google's 750 GB/day limitation.
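    If you want to check that figure yourself, the arithmetic is simple:

    ```python
    # How fast can you upload around the clock without exceeding 750 GB/day?
    daily_cap_bytes = 750 * 1000**3      # 750 GB, decimal
    seconds_per_day = 24 * 60 * 60
    max_mbps = daily_cap_bytes * 8 / seconds_per_day / 1000**2
    print(f"{max_mbps:.1f} Mbps")        # ~69.4 Mbps, so a ~70 Mbps throttle sits right at the cap
    ```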

  5. Once you've reauthorized the drive or reestablished the connection after modifying the JSON file, you should be using the new key. There shouldn't be anything else to do.

    As far as performance is concerned, I've never seen a difference between using a personal key and the default key. Google may have changed something since the last time I used a personal key, but there was no difference whatsoever.

    Do you see any API errors in your logs? If there is something API related impacting throughput, it should be in the logs.

    For what it's worth, there really isn't a compelling reason to use a personal API key over the default key. I'd probably just suggest switching back to the default key if it is, for some reason, giving you issues. You can always switch to your personal key when and if the default key actually hits capacity or experiences some other issue. But that hasn't been happening recently.

  6. How much cache have you given it? If there is room, and there isn't a need to replace the content in the local cache, I don't believe that CloudDrive will simply flush the cache just to flush it. There really isn't any need for it to do so. There is no reason, for example, that the local cache would ever be stale, since the only way to modify the drive is via CloudDrive itself, which is always aware of the content residing in the cache vs the provider. If your previous frame of reference is something like rClone, it has a cache expiry because its cache can become stale relative to the provider. That's not the case here. The local cache only ever holds content that mirrors what has been uploaded to the provider, content that has been fetched from the provider, or content that has yet to be uploaded to the provider.

    If you want to force it to clear the cache entirely for testing purposes, I would suggest either detaching and reattaching the drive, or setting the cache size to something exceptionally low so there isn't room locally to cache even a single large file. 

  7. On 8/3/2020 at 8:08 PM, Firerouge said:

    With the new hierarchical chunk organization, shouldn't this now be technically possible?

    I don't believe that change would impact this limitation. The limit on Team/Shared drives was not a per-folder limitation in the first place.

  8. It probably would have been better to submit a new thread for this question, as it only tangentially relates to the topic of the thread, and the Plex information in this particular thread is a bit outdated. So for anyone stumbling on this thread for up-to-date Plex information, you'll likely want to continue your search.

    Now, that being said, the errors that you're seeing are data integrity check errors. CloudDrive is telling you that the data that it's getting from Google does not match the data that it expected to receive. So that leaves us with three possibilities: 1. CloudDrive has a bug and is, for some reason, expecting the wrong data; 2. Google is returning the wrong data when CloudDrive requests it; or, 3. The data on Google's servers is genuinely corrupt. My intuition says that 2 is the most likely--both because a bug like that should be more widely reported, and because genuine data corruption shouldn't intermittently resolve itself--but determining which of them is actually the problem will require that you submit a ticket to the official support channel at https://stablebit.com/Contact so that they can troubleshoot with you.

    Those are not, notably, API ban errors. And CloudDrive is effectively immune from the API bans that tools like rClone get simply by virtue of its method of operation anyway. So whatever is going on here is likely unrelated to the use of the API, unless Google has changed something without warning (again).

  9. 7 hours ago, Gijs said:

    I just had this same problem as well. I would recommend adding a warning to the https://wiki.covecube.com/StableBit_CloudDrive_Q7941147 article that quotes are required.

    As mentioned, the example image on the wiki DOES show the quotes, as does the default advanced config file from which the image is taken. But, that being said, if you think a more explicit mention is warranted, you'll want to send it to Stablebit via the contact form on the website. 

  10. Submitting a ticket was the right call, here. It may be a bug, it may be a conflict with another service. It may even be a hardware or Windows issue. So you'll need to work with support via the ticket system.

    In any case, you should go ahead and download and run the Troubleshooter (https://wiki.covecube.com/StableBit_Troubleshooter), which will gather a bunch of information and submit it for your ticket. Just enter the ticket number you were given in the requisite field when you run the tool. Christopher and Alex will let you know if they need anything else from there.

  11. I have active instances of both rClone and CloudDrive myself as well. I use an rClone mirror for my data, actually, just as insurance against data loss and to provide flexibility. CloudDrive is my daily driver storage solution, though. I've simply found that it's faster and more compatible, with fewer quirks.

    This is the main thread that I can think of where the Team Drive issue has been addressed:

    The biggest technical issues mentioned are related to drive integrity because Team Drives allow simultaneous access, and the hard limits on files and folders.

  12. 20 minutes ago, Jean-pierre.erasmus@outloo said:

    That is what I am thinking.

    After the reboot and reinstall, it appears to have cached the credentials somewhere.
    Do you know where, so I can delete it? The reinstall remembered all of the configuration.

    Well, you can delete the service folder, but I wouldn't recommend it. Simply choosing to reauthorize the drive, or removing the connection from your providers list in the UI and re-adding it, should bring up the window to choose your credentials again.

  13. 16 minutes ago, Jean-pierre.erasmus@outloo said:

    Thank you for the prompt reply @srcrist. I have logged a ticket. I will quickly try uninstalling clouddrive -> reboot -> clear all files -> reinstall to see if that fixes it.

    Sure thing. Also, just double-check very carefully that you are actually using the same account that created the drive to authorize it now. You'll be using whatever Google account is logged in to your browser to authorize the service. As someone who uses multiple Google accounts for a few things, I have received this same error because I thought that I had authorized with one account but didn't notice that I was actually signed in with another.

  14. The error, as you can probably deduce, is simply saying that it cannot locate the data for the volume when logging in with the provided credentials. But, if you're absolutely sure that you're using the same Google credentials that the data was created with, and that the data is available when looking via other tools like the Google web UI, you might have just stumbled on a bug of some sort.

    Submit a ticket to the official support channel here: https://stablebit.com/Contact

    They'll get you sorted.

  15. When a drive is first created, some amount of file system data will be uploaded to your cloud provider. It should not be much more than a few hundred MB per drive, though, and should only take a few minutes to upload for most people.

    It's possible that something else, perhaps even Windows itself, is creating some sort of data on the drive. Make sure that Windows' indexing service, page file, and recycle bin are disabled on the CloudDrive volumes.

    In any case, if you are just getting started, I would highly discourage you from using eight separate CloudDrive drives unless there is some very specific reason that you need individual drives instead of one large drive split into multiple volumes. CloudDrive can be surprisingly resource intensive with respect to caching and bandwidth, and the difficulty of managing those things compounds pretty quickly for each additional drive you tax your local resources with. If you have, say, eight separate SSDs to use as cache drives and a 1 Gbps connection with little concern for local bandwidth management, then, by all means, feel free. Otherwise, though, you should be aware that a single CloudDrive can be resized up to 1 PB total, and can be partitioned into multiple volumes just like a local drive, and that CloudDrive itself provides bandwidth management tools within the UI to throttle a single drive.

  16. On 7/27/2020 at 3:02 PM, ndr said:

    Hi,

    When I started with CloudDrive I created a 60TB volume and thought that would be enough, but soon I am out of space. I need to increase this volume but I'm not sure what options I've got.
    What is the easiest way to solve this?

    I tried the "resize" option, but it seems like the sector size setting (4kb) for my drive won't let me increase it any more.

    Is it possible to recreate the drive from scratch without removing the uploaded content in Gdrive?

    We should clarify some terms here. A partition, or volume, is different from the size of the drive. You can expand a CloudDrive drive up to a maximum of 1 PB irrespective of the sector size. But the sector size will limit the size of individual volumes (or partitions) on the drive. A CloudDrive can have multiple volumes. So if you're running out of space, you might just have to add a second volume to the drive.

    Note that multiple volumes can then be merged to recreate a single unified file system with tools like Stablebit's own DrivePool.

  17. 18 minutes ago, Burrskie said:

    Yeah, the issue I have is that the cloud drive that has everything is already part of a pool, so I can't remove it from the pool to add it to a pool I will nest. Make sense?

    And to be very clear: though you cannot add your drive to multiple pools, you can add your pool to another pool, which is the proposal here.

    Thus, you can have, say, your pool of local drives and your pool of cloud provider drives both together in a master pool which can be used to duplicate between both subpools.

  18. 5 minutes ago, Burrskie said:

    Yeah, the issue I have is that the cloud drive that has everything is already part of a pool, so I can't remove it from the pool to add it to a pool I will nest. Make sense?

    No. I'm sorry. I'm not trying to be difficult, but I'm just not understanding the problem. If you want to nest your existing pool, you can just add it to a new pool and move the data to the new PoolPart folder that will be created within the pool you already have. If you want to, for some reason, break your existing pool apart, delete the pool you have, and create a new pool to nest with another pool... you'd just move all of the data out of your PoolPart folder, remove the drive from your existing pool, create the new one, and move the data to the PoolPart folder that you want it to be in.

    The major point here is that since DrivePool works at the file system level, all of the data is accessible (and modifiable) as existing files and folders on your drive. You can move and shuffle that around however you choose simply using Windows Explorer or another file manager. No matter what structure you want your pools to have, it's just a matter of moving data from one folder to another. It's entirely up to you.

  19. 3 hours ago, Burrskie said:

    Could you explain a little more on how I would move things? I can't assign a cloud drive to multiple drive pools, so I can't take the stuff from one drive pool folder on the cloud drive to a second drive pool folder on the same cloud drive. I am either thinking way too hard about this or there's something I am missing. Please help me out just a little more!

    I guess I'm really not sure what you mean. When you add any drive to a pool, a hidden folder called PoolPart-XXXXXXXXXX will be created on that drive. Any data on that drive that you want to be accessible from within the pool just needs to be placed in that folder. If you're nesting pools, you'll have a hidden PoolPart folder within a hidden PoolPart folder, and you'll just need to move all of the data there. In every case, putting data into a given pool simply requires moving it into that pool's PoolPart folder on that drive; there's a quick illustration below. Any possible change you want to make simply involves moving data at the file system level either into or out of the PoolPart folder for a given pool. You can restructure any pool you have completely at will.
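    Purely as an illustration of that manual move (the drive letter and folder names here are placeholders; use the actual hidden PoolPart folders DrivePool created on your drive), something like this is all that's happening, whether you do it in Explorer or script it:

    ```python
    # Hypothetical sketch: move existing pool data into the nested pool's
    # hidden PoolPart folder on the same drive. Folder names are placeholders.
    import shutil
    from pathlib import Path

    inner = Path(r"D:\PoolPart-1111111111")          # existing pool's hidden folder
    nested = inner / "PoolPart-2222222222"           # created when that pool was added to a new pool
    nested.mkdir(exist_ok=True)

    for item in list(inner.iterdir()):
        if item.name.startswith("PoolPart-"):        # don't try to move the nested folder into itself
            continue
        shutil.move(str(item), str(nested / item.name))  # an ordinary file-system move, same as Explorer
    ```

    As with any manual shuffle like this, it's safest to do it while nothing is writing to the pool, and a remeasure in DrivePool afterwards is a good idea.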
