Everything posted by srcrist

  1. It's almost certainly a router problem. There isn't really anything that CloudDrive does that is abnormal as far as network traffic goes, other than being somewhat taxing on your network infrastructure. So the first thing that I would test is another high-traffic application to see if it causes the same problem. Smallnetbuilder has a good article on router testing here: https://www.smallnetbuilder.com/wireless/wireless-howto/31679-how-to-test-a-wireless-router Try that first, and then we can go from there.
  2. When there is a drive issue, once you reconnect it will upload your cache to make sure that the content on the cloud is synced. Is that what you're seeing it upload? Does the size match your cache size?
  3. When it's attached via USB, it would use the local disk provider--which I believe is a separate format from the FTP provider. I'm not sure if there is a way to do a quick conversion between the two. If you can share your drive from the router as a network drive instead of an FTP, both of those could use the local disk provider. I haven't seen Christopher on the forums lately, but if you submit a ticket he can let you know if there is an easier way to convert the data without just creating a new local disk drive and copying it over.
  4. Few requests

    I do not. I have one very large drive divided up into five 55TB volumes--all on one account, combined with DrivePool. It's about 190TB of data. When it isn't pinning, CPU usage is negligible. When it is pinning, it's less than 5%.
  5. Few requests

    For what it's worth, I see less than 5% usage, generally, on my E3-1245v2. So that might be a bug of some form. I don't think that CPU usage is typical. Certainly not for an 8700k.
  6. Cache question

    What about checking Resource Monitor? Is anything shown to be writing to the drive?
  7. Cache question

    You just need to open a ticket for that one. The fixed cache shouldn't grow beyond its setting at all; writes to the drive should simply be throttled. You didn't *change* the cache from another setting at some point, did you? If so, it might be in the process of shrinking.

    That being said, your problem here isn't actually the cache--which is that small darker blue section of the pie. According to your image you have over 5 TB to upload. Have you checked the drive usage in Resource Monitor or a similar tool to see if anything is indeed writing to the drive? That upload data is technically a part of the cache, but it isn't representative of the cache growing to fill the disk. CloudDrive thinks, for one reason or another, that 5 TB of data have been written to it that still need to be uploaded to the cloud. That's what's taking up all of that space.

    The only non-bug explanation I can think of, based on this image, is that a sustained 10 MB/s of writes *will* exceed Google's 750GB/day upload limit and give you an API lock-out every day. So if something *is* writing to the drive, CloudDrive won't be able to upload it all in the time available each day--which will continue to fill the drive. With a fixed cache, though, it ostensibly shouldn't fill the disk regardless.

    In any case, this needs a proper ticket for Alex and Christopher to take a look at. Unless you changed your cache from expandable or something while a ton of data was already waiting to be uploaded, that behavior is abnormal. Once you submit a ticket and have a number, you should also just go ahead and run the troubleshooter (http://wiki.covecube.com/StableBit_Troubleshooter) and attach the ticket number so they can take a look at the environment. Submit everything, and it'll collect about 300MB of information and upload it for them to poke around in.

    Out of curiosity: does Windows also show that the drive is full? It isn't a UI or reporting error with CloudDrive, is it?
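
    A rough back-of-the-envelope check on that limit (my own arithmetic, not anything CloudDrive reports), sketched in Python:

        # Approximate sustained write rate that saturates Google's 750GB/day upload quota.
        SECONDS_PER_DAY = 24 * 60 * 60      # 86,400 s
        QUOTA_BYTES = 750 * 10**9           # 750 GB, decimal

        max_rate_mb_s = QUOTA_BYTES / SECONDS_PER_DAY / 10**6
        print(f"~{max_rate_mb_s:.1f} MB/s sustained hits the quota")  # ~8.7 MB/s

        # A steady 10 MB/s of new writes is ~864 GB/day, so the surplus (~114 GB/day)
        # just piles up in the local cache as "to upload" data.
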
  8. Unfortunately, because of the way CloudDrive operates, you'll have to download the data and reupload it to use CloudDrive. CloudDrive is a block-based solution that creates an actual drive image, chops it up into chunks, and stores those chunks on your cloud provider. CloudDrive's data is not accessible directly from the provider--by design. The flip side is that CloudDrive also cannot access data that you already have on your provider, because it isn't stored in the format that CloudDrive requires. There are other solutions, including Google's own Google File Stream application, that can mount your cloud storage and make it directly accessible as a drive on your PC. Other similar tools are rClone, ocaml-fuse, NetDrive, etc. There are pros and cons to both approaches. I'll list some below to help you make an informed decision.

    Block-based pros: A block-based solution creates a *real* drive (as far as Windows is concerned). It can be partitioned like a physical drive, you can use file-system tools like chkdsk to preserve data integrity, and literally any program that can access any other drive in your PC works natively without any hiccups. You can even use tools like DrivePool or Storage Spaces to combine multiple CloudDrive drives or volumes into one larger pool. A block-based solution also enables end-to-end encryption: an encrypted drive is completely obfuscated both from your provider and from anyone who might access your data by hacking your provider's services. Not even the number of files, let alone the file names, is visible unless the drive is mounted, and CloudDrive's built-in encryption encrypts the data before it is even written to your local disk. Finally, a block-based solution enables more sophisticated data handling, such as accessing parts of files without first downloading the entire file, and caching sections of data locally, which can greatly reduce API calls to your provider.

    Block-based cons: Data is obfuscated even if unencrypted, and cannot be accessed directly from the provider. We already discussed this above, but it's definitely one of the negatives, depending on your use case. The only thing you'll see on your provider is thousands of chunks a few dozen megabytes in size, and the drive is inaccessible in any way unless mounted by the drivers that decrypt the data and provide it to the operating system. You'll be tethered to CloudDrive for as long as you keep the data on the cloud; moving the data outside of that ecosystem would require it to again be downloaded and reuploaded in its native format.

    Hope that helps.
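
    To make the chunking idea concrete, here's a toy sketch of what "chopping a drive image into chunks" means (purely illustrative--the chunk size, naming scheme, and lack of encryption here are assumptions, not CloudDrive's actual on-provider format):

        # Toy example: split a raw drive image into fixed-size chunks for upload.
        CHUNK_SIZE = 20 * 1024 * 1024  # e.g. 20 MB per chunk (made-up value)

        def chunk_image(image_path):
            with open(image_path, "rb") as f:
                index = 0
                while True:
                    block = f.read(CHUNK_SIZE)
                    if not block:
                        break
                    # Each chunk would be (optionally encrypted and) uploaded to the
                    # provider under an opaque name; nothing about your files is visible.
                    yield f"chunk-{index:06d}", block
                    index += 1

        # Usage sketch: for name, data in chunk_image("drive.img"): upload(name, data)
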
  9. Cache question

    Your post is missing a few bits of important information needed to determine if there's actually a problem. Notably, you never mention what you actually have your cache set to. What cache mode are you using, and what size? CloudDrive will *always* fill the cache drive to *at least* the size that you specify when you mount the drive. So if you're using a 6TB drive and you've set a 6TB cache size, it'll store 6TB of information and never any less. It will cache the most beneficial data, as determined by the algorithm, until it reaches the size you've set. If you've set your cache to something smaller, like a few hundred GB, and you've selected the "expandable" cache mode that is recommended for most use cases, it will fill up to whatever amount you've set and then continue to fill the drive only while it has additional content that needs to be uploaded to the cloud. As an example, if you have a 250GB expandable cache, you'll always see the cache as *at least* 250GB--but if there is an additional 100GB that needs to be uploaded to the cloud, the cache will grow to 350GB and shrink as that additional data is uploaded, until it reaches its 250GB minimum again. So if the following doesn't answer your question, share those details.

    Now, that being said, if you *are* using a 6TB drive with a 6TB cache size, that's excessive and counterproductive. I'd say a 500GB cache or so, at the most, is all you would ever need for even the heaviest uses (video streaming from your drive or something similar). That means that CloudDrive will store the 500GB of the most accessed/most useful data locally, and move the rest to the cloud for later access.

    But here is the good news: the detail that actually matters for whether or not you can disconnect the drive right away isn't the cache size at all. It's the "To upload:" number that is displayed right below the pie chart in the UI (if you have any data to upload). That number is what actually represents the data that is in your cache but *not* in your cloud storage yet--and that is the only number that will prevent CloudDrive from letting you detach the drive and attach it to another system. As long as you don't have any data "to upload," you can detach the drive in less than a minute and your entire cache--whether it's 6GB or 6TB--will instantly poof. It's all just duplicated data from your cloud storage.

    If you actually have several TB of data *to upload*, then you're almost certainly running into other problems, like daily upload limits set by your cloud provider. And there isn't much CloudDrive can do about that. You'll just have to stop putting data on the drive and wait until it's done. Does that make sense?
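
    To restate the expandable cache behavior as a quick sketch (my own illustration of the rule above, not CloudDrive's actual code):

        # Rough model of the "expandable" cache size described above.
        def cache_size_gb(minimum_gb, to_upload_gb):
            # Never smaller than the configured minimum; grows by whatever
            # is still waiting to be uploaded to the provider.
            return minimum_gb + max(0, to_upload_gb)

        print(cache_size_gb(250, 0))    # 250 -- idle, the cache sits at its minimum
        print(cache_size_gb(250, 100))  # 350 -- shrinks back to 250 as uploads finish
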
  10. The Manage Drive menu is located directly below the pie chart in the CloudDrive UI. You'll find both the "destroy" and "detach" options within that menu. It does not matter whether you're using the trial or not.
  11. You'll likely need to adjust the filtering level to see the exponential back-off messages. Change your ApiHttp tracing level to Information and you'll see messages like this: "3:10:38.5: Information: 0 : [ApiHttp:98] Server is throttling us, waiting 1,928ms and retrying." But there is no way that the exponential back-offs aren't working for you. If they stopped working, you'd quickly hit Google's API "ban" and no longer be able to submit API requests for 24 hours. Even if they *weren't* working, though, it would not be a constant 100mbps or 75mbps stream. Google rejects the API requests once you hit your threshold, so you can't actually transfer any data no matter what CloudDrive wants to do. Aside from the data required to make the API request, which is minimal, there would be no data transferred. It just isn't a technical possibility.
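
    For anyone unfamiliar with the pattern, this is roughly what exponential back-off means (a generic sketch, not CloudDrive's actual code--the base delay, cap, and jitter values are made up):

        import random

        # Generic exponential back-off with jitter: each consecutive throttled
        # attempt waits roughly twice as long as the last, up to a cap.
        def backoff_delay(attempt, base=1.0, cap=120.0):
            delay = min(cap, base * (2 ** attempt))
            return delay * random.uniform(0.5, 1.0)  # jitter spreads retries out

        for attempt in range(6):
            print(f"retry {attempt}: waiting ~{backoff_delay(attempt):.1f}s")
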
  12. The exponential backoff is hardcoded. If you check your logs you can see how long it pauses every time it receives a throttling request. It will continue trying periodically, but that period gets longer and longer while you're at the threshold. But, like Christopher said, dropping your upload to 75mbps from 100mbps will prevent you from hitting the 750GB/day limit altogether. You're only getting those messages because you're allowing it to exceed the limit for no reason. In any case, there isn't a lot of harm in letting it keep trying. CloudDrive will not exceed Google's limits, and it will not result in an API ban no matter what.
  13. It says Gsuite in the title. I'm assuming that means Google Drive. Correct me if all of the following is wrong, Middge.

    Hey Middge, I use a CloudDrive for Plex with Google Drive myself. I can frequently do 5-6 remux-quality streams at once. I haven't noticed a drop in capacity aside from Google's relatively new upload limits.

    Yes. But this is going to require that you remake the drive. My server also has a fast pipe, and I've also raised the minimum download to 20MB. I really haven't noticed any slowdown in responsiveness because the connection is so fast, and it keeps the overall throughput up.

    This is fine. You can play with it a bit. Some people like higher triggers, like 5 or 10 MB, but I've tried those and I keep going back to 1 MB as well, because I've just found it to be the most consistent, performance-wise, and I really want it to grab a chunk of *all* streaming media immediately.

    This is way too low for a lot of media. I would raise it to somewhere between 150-300MB. Think of the prefetch as a rolling buffer: it will continue to fill as the data in it is used, so the higher the number, the more tolerant your stream will be to periodic network hiccups. The only real danger is that if you make it too large (and I'm talking like more than 500MB here) it will basically always be prefetching, and you'll congest your connection if you hit 4 or 5 streams.

    I would drop this to 30, whether you go with a 1MB, 5MB, or 10MB trigger. 240 seconds almost makes the trigger amount pointless anyway--you're going to hit all of those benchmarks in 4 minutes if you're streaming most modern media files. A 30 second window should be fine.

    WAAAAY too many. You're almost certainly throttling yourself with this, particularly with the smaller-than-maximum chunk size, since it already has to make more requests than if you were using 20MB chunks. I use three CloudDrives in a pool (a legacy arrangement from before I understood things better--don't do it; just expand a single CloudDrive with additional volumes), and I keep them all at 5 upload and 5 download threads. Even if I had a single drive, I'd probably not exceed 5 upload, 10 download. 20 and 20 is *way* too high and entirely unnecessary with a 1gbps connection.

    These are all fine. If you can afford a larger cache, bigger is *always* better, but it isn't necessary. The server I use only has 3x 140GB SSDs, so my caches are even smaller than yours and still function great. The fast connection goes a long way toward making up for a small cache size...but if I could have a 500GB cache I definitely would.
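
    To put a rough number on the "rolling buffer" point (my own arithmetic, using an assumed ~40 Mbit/s remux bitrate as the example):

        # How long a prefetch-forward buffer lasts at a given stream bitrate
        # (illustrative values; substitute your own media bitrate and prefetch size).
        bitrate_mbit_s = 40        # a typical high-bitrate remux
        prefetch_forward_mb = 300  # prefetch forward amount

        seconds_buffered = prefetch_forward_mb * 8 / bitrate_mbit_s
        print(f"~{seconds_buffered:.0f} seconds of playback buffered")  # ~60 s
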
  14. Want to bump this. I'm also seeing a lot of these in the logs. Is this just a new throttling message?
  15. Great, thanks for adding that. It'd be nice to just be able to check a box and create a new volume myself.
  16. This might be a feature request, but is it presently possible to expand a drive without also expanding the existing volume, so that I can add a new volume in the new space instead? I want to keep my volumes at around 50TB or so, but I'd like to add additional volumes over time without creating new CloudDrives. In testing, it looks like "resize" will always resize the volume as well as the drive. Am I correct?
  17. This is the CloudDrive forum, just fyi. But, setting that aside, I'm not entirely sure how to answer your question. You are using the up-to-date stable versions of both of those products. There has been a significant amount of development, particularly for DrivePool, in the most recent BETA releases. But, as with all BETA software, you'll have to do your own risk assessment to decide if you want to accept the risk and responsibility of using pre-release versions. I use the beta versions of all three products, and I don't have any issues. But your mileage, of course, may vary. You can find the BETAs and their associated changelogs here: http://dl.covecube.com/ScannerWindows/beta/download/ http://dl.covecube.com/CloudDriveWindows/beta/download/ http://dl.covecube.com/DrivePoolWindows/beta/download/ Hope that helps.
  18. CloudDrive essentially cannot hit the API ban because of its implementation of exponential back-off. rClone used to hit the bans because it did not respect the back-off requests Google's servers were sending, though I think they've solved that problem at this point. In any case, don't worry about that with CloudDrive. The only "ban" you'll need to worry about is the ~750GB/day upload threshold.
  19. Can you be more explicit about what sort of scanning plex is doing? What are the other settings for CloudDrive? What is your chunk size? I use Google Drive, not OneDrive, but scanning for changes takes no time at all. I once lost my library though, and re-adding all of the content took over a day--so if that's what it's doing, that's normal. But give some more details.
  20. Right. But no matter what you set that to, the data will *still* be in the cache at the end of that period. I don't know why the UI only shows the prefetched amount for that duration. You might just be noticing a UI bug. But as long as the functionality is as Christopher says, it shouldn't really matter what the UI is saying, the data is still available indefinitely.
  21. The indexing process, when it has to happen, can take a very long time. I have a drive that takes over 10 hours. It will complete eventually. You can use the service log under technical details to watch it, if you want to see some sort of indication of the progress. It will start at your highest numbered block and work its way down to 0. Fortunately, now that the bug that existed a few months ago is fixed, it very rarely has to reindex.
  22. I think Christopher already answered that question above. The data is kept in the cache as long as there is space, so it doesn't have anything to do with duration. Once the prefetcher grabs the data, it's in the cache until the caching algorithm needs the space. So the time is a window.
  23. I'm not sure what you're referring to. It doesn't reupload the drive on an unclean shutdown. It *does* reindex the chunks on the provider if it detects some sort of a problem with the local database. It also uploads everything that was in the cache when the unclean shutdown occurred. Is your cache the size of the entire drive?
  24. I use Google Drive, and my server host is OVH's SoYouStart brand.
  25. I've never really tested the limits, but I regularly have 5 to 8 simultaneous streams without any issues. The server is an E3-1245v2 with 32GB of RAM on a 250mbit connection, and media is stored predominantly in 20,000 kb/s files or higher. I've been quite happy with it.