srcrist

Everything posted by srcrist

  1. If you haven't uploaded much, go ahead and change the chunk size to 20MB. You'll want the larger chunk size both for throughput and capacity. Go with these settings for Plex:
      • 20MB chunk size
      • 50+ GB expandable cache
      • 10 download threads
      • 5 upload threads, with background I/O turned off
      • Upload threshold of 1MB or 5 minutes
      • 20MB minimum download size
      • 20MB prefetch trigger
      • 175MB prefetch forward
      • 10 second prefetch time window
  2. Google has a 750GB/day limit on uploads. If you upload 750GB, you will be locked out of the upload API until the limit resets each day. Are you uploading more than that amount within a single 24-hour period?
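     A quick back-of-the-envelope conversion of that cap into a sustained upload rate (just the unit math, assuming 1 GB = 10^9 bytes):

         # Convert Google's 750 GB/day upload cap into a sustained upload rate.
         CAP_BYTES_PER_DAY = 750 * 10**9       # 750 GB in decimal bytes
         SECONDS_PER_DAY = 24 * 60 * 60

         bytes_per_second = CAP_BYTES_PER_DAY / SECONDS_PER_DAY
         megabits_per_second = bytes_per_second * 8 / 10**6
         print(f"{bytes_per_second / 10**6:.2f} MB/s ~= {megabits_per_second:.1f} Mbps")
         # ~8.68 MB/s, or ~69.4 Mbps -- which is why throttling uploads to roughly
         # 70 Mbps keeps a continuously uploading drive right at the daily cap.
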
  3. Again, there is a service log option in the technical details window that can be adjusted to record various tracing levels for numerous aspects of the application. The logs can be found in your service directory. That directory is %programdata%\StableBit CloudDrive\Service by default. You will need to apply the tracing levels to record the information you're looking for, and then load the logs to parse for your purposes. Note that verbose tracing of the sort you're requesting will generate very lengthy logs.
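     If it helps, here's a minimal sketch of pulling matching lines out of those service logs once tracing is enabled (the *.log pattern and the search keyword are assumptions on my part; adjust them to whatever your tracing actually writes):

         # Scan the StableBit CloudDrive service directory for log lines
         # containing a keyword. The directory is the default mentioned above;
         # the "*.log" pattern and the keyword are assumptions.
         import os
         from pathlib import Path

         service_dir = Path(os.environ["ProgramData"]) / "StableBit CloudDrive" / "Service"
         keyword = "Error"  # hypothetical search term

         for log_file in sorted(service_dir.rglob("*.log")):
             with open(log_file, encoding="utf-8", errors="replace") as f:
                 for line in f:
                     if keyword in line:
                         print(f"{log_file.name}: {line.rstrip()}")
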
  4. I'm pretty sure that the service log under technical details can be adjusted to show this information, but Chris or Alex would probably need to chime in about what tracing levels need to be set to which settings to get the result you're looking for.
  5. Do not move the directory Plex is pointed at. Leave Plex pointed at the DrivePool drive, but move the content to the hidden folder. Plex will see it just fine. That hidden directory is where DrivePool stores the pool content on that drive. It will appear in the DrivePool drive once you've moved the content there.
  6. When you say that you created multiple partitions from the same google drive, I'm not exactly sure what that means. Are you saying that you are using CloudDrive to make a drive and partitioning that? If so, it might simply be your performance settings. I can help you, but this is probably a less helpful place on that front than the CloudDrive forums.
  7. My first thought is that if you cleared out the DB, it's going to need to reindex the chunks from the cloud. For 53TB that will take at least 6 hours. Are you saying you've let it go that long and it's just dismounting and remounting? I'd be glad to remote in and take a look if you'd like, but I'm guessing here that this is what is going on. Note that this is not normal drive recovery, and you can see that this process is underway by opening the service log in the technical details. You'll see hundreds of numbers scroll by as it indexes all of the chunks.
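     For a sense of the scale involved, here's the rough arithmetic (the 20MB chunk size and the enumeration rate are assumptions for illustration; the actual chunk size of that drive isn't stated):

         # Rough estimate of how many chunks a large drive has to re-enumerate
         # and how long that takes. Chunk size and rate are assumed values.
         DRIVE_SIZE_TB = 53
         CHUNK_SIZE_MB = 20          # assumed chunk size
         CHUNKS_PER_SECOND = 125     # illustrative enumeration rate

         total_chunks = DRIVE_SIZE_TB * 10**12 / (CHUNK_SIZE_MB * 10**6)
         hours = total_chunks / CHUNKS_PER_SECOND / 3600
         print(f"~{total_chunks / 1e6:.2f} million chunks, ~{hours:.1f} hours")
         # ~2.65 million chunks; at ~125 chunks/s that's roughly 6 hours.
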
  8. Google Drive slow

    Your prefetcher settings are effectively useless, and your minimum download is far too low. Your upload threads are probably too high as well. Change your minimum download to 20MB, drop your upload threads to no more than 5. Use more reasonable prefetcher settings like a 20MB trigger in 10 seconds with a 500MB prefetch. All of that should make a huge difference.
  9. I'm glad you were able to saturate, but lower that thread count. 20 is too high and you'll get throttled by Google. 10 down and 5 up should be more than enough. 20 is more than the API will even allow you to have in *both* directions.
  10. CloudDrive and Plex

    Yeah, it's perfectly fine. If you fake it, you'll risk having it terminated for fraud. Google doesn't care what you put on your space as long as it isn't facially illegal (read: child pornography), and CloudDrive is completely encrypted anyway. The only thing I've ever heard of Google getting mad at is uploading raw copyrighted video files and sharing them with others right off of your space. Don't do that. Using CloudDrive is perfectly fine, though. And CloudDrive respects all of Google's throttling requests etc., so the account won't get API banned unless you exceed the 750GB/day limit, in which case you'll be locked out of the upload API until it resets.
  11. CloudDrive and Plex

    That is absolutely not why those accounts were terminated. Google isn't going to terminate an account for exceeding a limit that they could easily enforce with technical measures if they actually wanted to. Plenty of people have GSuite for business accounts well in excess of 1TB and they aren't getting terminated. The people who said that are either mistaken or lying. I don't know what threads you're talking about, but I can tell you from personal experience that Google does not terminate accounts for using too much space. They could easily simply stop you from uploading anything else, but they don't. Don't worry about it. Of course, anyone who is using an account via another institution, like a university, is at the whims of the policies of their institution, but that's a different story.

    I think I might have misunderstood your desire here. You can duplicate the content from one Google account on another Google account and mount the drive, but I'm not sure what you'd gain by doing it this way. If you simply share the content from one account to another, and the original account gets terminated, the shared content will still vanish. So you'd still have to upload all of the data twice, whether by downloading it all from one drive and uploading it again, or simply by just making a second CloudDrive and mirroring it with DrivePool or some other utility. Personally, I would suggest the latter. It's an automated process and you don't need to worry about moving the data yourself. The different UUIDs generated for the two drives are irrelevant if you're mirroring at the filesystem level.
  12. It isn't bad. You can probably do better. What are your performance settings? I use 5 upload threads and 10 download threads. I can saturate my Google Drive connection. But, remember, Google has a cap of 750GB/day, so I just throttle it at 70mbps so I never exceed the cap.
  13. CloudDrive and Plex

    Don't use or pay for the fake .edu accounts. They will absolutely, 100%, get terminated. You can simply pay for your own GSuite for business account and still have access to unlimited storage (they advertise that you need at least 5 users, but even a single user has unlimited as of now). It's only $10 a month, it's unlimited storage, and, to my knowledge, I haven't heard of a single one of these accounts being terminated. The .edu fakes, on the other hand, are terminated all the time. Not to mention that they're just run by shady people willing to do shady things and they shouldn't be trusted at all.

    Your university account is subject to your institution's policies, but I have never heard of an institution imposing some arbitrary data cap on its users. I don't even know why they would. The terms of Google for Education don't place any such limits on them, and there's really no reason that they should care how much data anyone uses. Google takes care of it all, and it's actually completely free for the university no matter how much data the students use. You'd have to have some draconian and completely unnecessary IT policies at your institution for them to arbitrarily enforce policies that do not affect them in any way, and do not cost them any more money. They have effectively no responsibility for the data or its maintenance at all. It's just a service that Google provides to educational institutions because of their background as a university research project.

    Now, that being said, not all institutions permit alumni to keep GSuite accounts after they graduate at all, or vice versa. My institution only uses GSuite for alumni, for example, which is what I use for my Plex. Our student accounts were another email provider entirely. I've also heard of schools that will provide you with a *different* GSuite account for alumni. I'm sure you can contact your school's IT department to verify their policies around the accounts, in any case.

    Now, as far as your questions go: It's important to understand that your CloudDrive is completely attached to the account that you upload your data to. Again: do not use a purchased, fake .edu account for your CloudDrive. Once the data is uploaded to that location, it is gone as soon as you lose the account. For good. You would have to set up multiple CloudDrive drives on multiple accounts and mirror them to have any sort of redundancy to protect against this, and you'll be doing X times the uploading of the same data for each account you need to do this for, as well as increasing the overhead on your server. Just don't do it. Use a legit .edu account or pay for GSuite for business.

    But CloudDrive drives are portable. As long as the integrity of the GSuite account remains intact, you can simply detach it from one machine and reattach it to a new machine. Takes less than 5 minutes to do.

    As covered above, no. And simply do not buy .edu accounts. Why are you even supporting those people and their shady fraud? If you're willing to pay, GSuite for business isn't that expensive. You would have to either upload the data twice from a mirrored drive (which you *could* do with DrivePool, if you wanted), or reupload the data from a local backup when (not if) the account gets nuked.

    Ultimately, though, CloudDrive works wonderfully for Plex and I've been using it for exactly that purpose for over two years now. Just stay as far away as you can from the people setting up fraudulent .edus for free and then selling them on the internet.
    For the record, I have around 215 TB on my legit .edu account and I haven't heard a peep from either Google or my institution.
  14. Count me among the unfortunate souls who also lost a drive. I had a 90TB ReFS volume and a Windows update killed it over two years ago, and I ditched it right then. My solution is to use 55TB NTFS volumes (so that they stay below the 60TB limit for chkdsk). I have one large CloudDrive divided up into 55TB partitions, and then I combine them with DrivePool into one unified storage pool. The end effect is identical to having one giant drive, but I can repair volumes individually. I've been using this setup for a few years now without issue. Never had any unresolvable file corruption at all. I just expand my CloudDrive and add another 55TB volume when and if I need more space.
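     The sizing arithmetic for a layout like that, if you want to adapt it (the 275TB target is just an example; the point is keeping each volume under the ~60TB chkdsk ceiling):

         # How many 55TB NTFS volumes are needed to pool a given amount of
         # storage while keeping each volume under the ~60TB chkdsk ceiling.
         # The 275TB target below is only an example.
         import math

         VOLUME_SIZE_TB = 55
         target_pool_tb = 275

         volumes_needed = math.ceil(target_pool_tb / VOLUME_SIZE_TB)
         print(f"{volumes_needed} x {VOLUME_SIZE_TB}TB volumes for a {target_pool_tb}TB pool")
         # -> 5 x 55TB volumes
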
  15. That's actually a problem with ReFS and not CloudDrive. See the many examples on Google: https://www.google.com/search?q=windows+10+reFS+RAW&oq=windows+10+reFS+RAW&aqs=chrome..69i57.7817j0j7&sourceid=chrome&ie=UTF-8 The Fall Creators Update even removed the ability to create ReFS volumes from Windows 10 Pro because of the stability issues, signalling that even MS doesn't necessarily see it as an NTFS successor any longer. At least not any time in the near future. Going with NTFS is advisable, for now.
  16. CloudDrive is not sync software like rClone. It is a virtual drive. Though they can be used to accomplish similar things, this is a very important distinction. You could use some sort of backup tool on top of CloudDrive to easily accomplish the same thing, but the concept of a "source" and a "destination" really has no relevance with CloudDrive. CloudDrive simply *is* the destination. There is no local copy of the data (outside of the cache) unless you want to set up some sort of cloning or sync software to provide one. Once the data is on your CloudDrive drive you can do anything with it that you could do if it was on a physical hard drive, so you could easily use a tool to copy data from a local directory to the server and leave it there even if you delete it locally--if that's what you're looking to do here. As Christopher mentioned above, you can even use data recovery on the drive just like a physical disk if you were to *accidentally* delete something from the CloudDrive drive--which is something that *cannot* be done with a file-based solution like rClone.

      But as far as what happens to the data on the drive relative to some local copy, that's entirely up to you. CloudDrive doesn't operate based on interacting with actual copies of individual files stored on your cloud storage, so it doesn't really know or care what you do with those files. All it does is whatever you tell it to do, just like your hard drive. If you want redundancy or deletion protection, any tool that provides such features on a physical drive can easily be used to do the same for a CloudDrive drive.
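     To illustrate, here's a minimal sketch of that kind of one-way copy onto a mounted CloudDrive volume (the folder paths and the drive letter are hypothetical; any backup or sync tool would accomplish the same thing):

         # One-way copy from a local folder onto a mounted CloudDrive volume.
         # Both paths are hypothetical examples; to CloudDrive this is just an
         # ordinary file copy onto an ordinary drive letter.
         import shutil
         from pathlib import Path

         source = Path(r"D:\Media\Movies")    # hypothetical local folder
         destination = Path(r"G:\Movies")     # hypothetical CloudDrive mount

         for file in source.rglob("*"):
             if file.is_file():
                 target = destination / file.relative_to(source)
                 target.parent.mkdir(parents=True, exist_ok=True)
                 if not target.exists():
                     shutil.copy2(file, target)
         # The local copy can then be deleted; the data stays on the drive.
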
  17. CloudDrive is not the solution you're looking for, in that case. It fundamentally operates in a way incompatible with your goals. There are some other solutions that might meet your needs. See this post for some more information:
  18. The file errors make me wonder if there is something wrong with the installation. Go ahead and run the troubleshooter (http://wiki.covecube.com/StableBit_Troubleshooter) and include your ticket number. I think you'll have to wait for Christopher to handle this one. I'm not sure what's going on.
  19. That makes me wonder if there is a corruption in the installation. Do a search and see if you can find the service folder. Search for a directory called "ChunkIds", which is where your DB is stored.
  20. While your internet connection may be fine, that error signifies that the connection between you and google's servers is not. That is what that error means. CloudDrive is not able to contact Google's servers and upload and download your data. The causes of such a problem can vary, but the fact that you can connect to *other* sites reliably has no bearing on this problem. I'm sure Christopher can walk you through troubleshooting the connection in your ticket. Check for it at C:\ProgramData\StableBit CloudDrive\Service\Logs. The ProgramData directory will be hidden.
  21. I don't speak German, so I'm not sure what the first error is telling you, but the second error is simply telling you that Google Drive was unreachable for a little while. If you're sure that everything is fine with your connection, it was probably just a temporary network hiccup. They happen. I get that error every once in a while. Sometimes Google Drive itself even has network problems. You can check that status here: https://www.google.com/appsstatus#hl=en&v=status This is completely unrelated to the issue discussed in that other thread you linked. Log files are located in %ProgramData%\StableBit CloudDrive\Service\Logs EDIT: Upon a second look, it appears that the first error is just the upload equivalent of the second (download) error. Both simply indicate a network problem between you and Google. Neither is a long-term concern as long as it is temporary.
  22. I use mine on a headless OVH server. I don't have this issue. Might it be a video driver problem? Have you tried a 3rd party RDP client? You can also just skip that process entirely and open the ports required for remote access within CloudDrive and run the UI locally. That might be an option for you. See: https://stablebit.com/Support/CloudDrive/Manual?Section=Remote Control
  23. Again, I'm simply not even sure how that would be possible from a technical perspective. I suspect that any high-traffic application might cause the same problem for you. CloudDrive doesn't do anything to modify the WAN settings on your router--nor could it. If your router is dropping the connection to your modem, that's almost certainly your router, your modem, or your ISP. Stranger things have happened, I suppose, but I have no idea what could cause that problem. No application on your PC should ever be able to cause that problem. I know this probably isn't what you want to hear, but it's almost *certainly* not CloudDrive. CloudDrive is just demanding the data required to trigger the problem.
  24. Does the router still have a WAN IP when this happens? You'll have to look at the router's config page when it drops out to see. It sounds to me like the router is losing its connection to the modem when this happens, and I have no idea how any application could possibly cause that unless there is a problem with the router, the modem, or your ISP. That setup works fine for both. I don't use torrents very much, but that setup works fine when I do. If you're hosting a drive specifically for torrents, you can probably adjust the prefetcher down somewhat. Torrent pieces are only around 4-8MB apiece even for a very large torrent. So caching 500MB at a time is unnecessary. Honestly, with a minimum download size of 10MB, I'd probably just disable the prefetcher altogether for torrents. Torrents grab pieces in a somewhat arbitrary order, based on the availability of pieces being seeded. So there isn't a real advantage to caching a lot of contiguous data. It won't likely need the data contiguously. If you're seeding, torrents are also comparatively slow. You'll rarely need more than 1MB/sec or so, and your connection is fast enough to handle that on the fly--as is mine.
  25. OK. In that case, I think my follow up question is for some more detail about what you mean when you say that the router is "dropping" your internet connection. Is it simply not responding? Are you losing your DHCP to your modem? What specifically is it doing when it drops? I'm trying to think of things that *any* application on your PC could be doing that would cause a problem with the connection between your router and your WAN without there being an underlying router issue, but I'm having trouble thinking of anything. Some more detail might help.

      You're welcome. Some of them are probably out of date compared to what I'm using today. I had written up a little tutorial on reddit to help people, but I ended up deleting that account and wiping all of the posts. Nothing will influence your upload more than the number of upload threads. Personally, I throttle my upload to 70mbps so that I never exceed Google's 750GB/day limit, but my current settings easily saturate that. My settings are as follows, and I highly recommend them:
      • 10 download threads
      • 5 upload threads
      • No background I/O (I disable this because I want to actually prioritize the reads over the writes for the Plex server)
      • Uploads throttled to 70mbps
      • Upload threshold set to 1MB or 5 minutes
      • 10MB minimum download size (seems to provide a relatively responsive drive while fetching more than enough data for the server in most cases)
      • 20MB prefetch trigger
      • 500MB prefetch forward
      • 10 second prefetch window

      I actually calculated the prefetch settings based on the typical remux-quality video file on my server. 20MB in 10 seconds is roughly equivalent to a 16mbps video file, if I remember the standard that I used correctly. That's actually a very low bitrate for a remuxed Blu-ray, which is typically closer to 35mbps. The 10MB minimum download seems to handle any lower-quality files just fine. I still play with the settings once in a while, but this setup successfully serves up to 6 or 7 very high quality video streams at a time. I've been extremely happy with it.

      I'm using one large drive (275TB) divided up into five 55TB volumes (so I can use chkdsk), and it has about 195TB of content on it. I combine all of the volumes using DrivePool and point Plex at the DrivePool drive. Works wonderfully. The drive structure is 20MB chunks with a 100MB chunk cache, running with 50GB expandable cache on an SSD.
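     The prefetch math from that last part, written out (assumes decimal megabytes; the 35mbps figure is the approximate remux bitrate mentioned above):

         # Bitrate implied by a 20MB prefetch trigger within a 10 second window,
         # and how much a ~35 Mbps remux reads in that same window.
         TRIGGER_MB = 20
         WINDOW_SECONDS = 10

         implied_mbps = TRIGGER_MB * 8 / WINDOW_SECONDS
         print(f"20 MB in 10 s ~= {implied_mbps:.0f} Mbps")        # ~16 Mbps

         remux_mbps = 35
         mb_in_window = remux_mbps / 8 * WINDOW_SECONDS
         print(f"A {remux_mbps} Mbps remux reads ~{mb_in_window:.0f} MB in {WINDOW_SECONDS} s")
         # ~44 MB -- comfortably past the 20MB trigger, so prefetching kicks in.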