Everything posted by srcrist

  1. Gonna cross my fingers that this helps to solve my issue with Google and reindexing. Haven't been able to mount my drive in like four days because I can't reindex the drive in less than 8 or 9 hours and I get at least ONE error during that time. EDIT: No dice. Still unable to mount the drive in the time that I have between errors.
  2. So my service crashed last night. I opened a ticket and sent you the logs to take a look at, so we can set that aside. But while it was reindexing it got one of the Internal Server Error responses from Google Drive. Just one. Then it started reindexing the entire drive again starting at chunk 4,300,000 or so. Does it really have to do that? This wouldn't have been a big deal when this drive was small...but this process takes about 8 hours now every time it has to reindex the drive, and it happened at around the halfway mark. Four hours of lost time is frustrating. Does it HAVE to start over? Can it not just retry at the point where it got the error? Am I missing something? Just wanted to see what the thought was on this.
  3. My understanding is that CloudDrive is SUPPOSED to do this already. The shutdown service that it runs exists specifically for this purpose. We just need to get some attention to the issue and see if Alex can come up with a fix.
  4. You did not answer any of my questions. And if you come to a forum and ask questions or make statements, you don't get to call the people trying to understand you "smartypants." You sound fundamentally confused about how this software works in a multitude of ways, and it's difficult to even understand what you're talking about, as a result. CloudDrive has no inherent risk of losing your data, and it's still unclear how you think that it might. Stops and slowdowns are an inherent part of using this sort of application, and you've still not demonstrated anything abnormal. I watched your video. There might be some slowdown that can be solved by modifying your caching solution, but you've said nothing about it. And, frankly, I have no idea why you would even think that CloudDrive is causing internet connectivity issues with Windows itself. It simply does not touch anything that might be capable of doing so. It adds itself to the firewall, and, when running, it's just a bunch of HTTP requests. Does your web browser cause your internet to disconnect? Because that's doing the same thing. This is the least informative response you could have made. Do you want help, or not?
  5. No idea what this means. What does this have to do with CloudDrive, and what indication did CloudDrive give you that you might have lost data? Are you just talking about the reindexing that happens when the drive is uncleanly dismounted? It isn't software based. It's happening because you're using a cloud provider as a drive. It will never be as consistent as your data speeds off of a physical drive, both because your provider will set certain limits and throttling points on your connection, and because internet connectivity is volatile. You just need to adjust your expectations. It also might be I/O related, depending on your caching setup. In any case, your video does not demonstrate anything unexpected or abnormal. This almost certainly has nothing to do with CloudDrive. I wouldn't even know where to start diagnosing this, though. I notice that you're using your drive for Plex. Take a look at this guide and see if it can help with some performance issues: https://www.reddit.com/r/PleX/comments/61ppfi/stablebit_clouddrive_plex_and_you_a_guide/
  6. I've seen this as well. My interim solution for my very large drive has been to simply detach the drive before reboots and then reattach it. It's much easier than doing 8 hours of reindexing when the server comes back online.
  7. Is there a way?

    CloudDrive is a block-based solution, so the software doesn't really know which files it's accessing. You should be able to use any Windows tool, like Resource Monitor, to see what is accessing the drive and which files are being accessed, though.
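
    If you'd rather script it, here is a rough Python sketch using psutil (a third-party module, assumed installed) that lists which processes hold files open on a given drive letter. The drive letter is just a placeholder, and you'll want to run it elevated or it will miss handles owned by other accounts.

        import psutil

        DRIVE = "D:\\"   # placeholder CloudDrive mount point -- change to yours

        # Walk every process we're allowed to see and print any open file that
        # lives on the target drive, along with the owning process.
        for proc in psutil.process_iter(["pid", "name"]):
            try:
                for f in proc.open_files():
                    if f.path.upper().startswith(DRIVE.upper()):
                        print(proc.info["pid"], proc.info["name"], f.path)
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                continue   # skip processes we can't inspect or that just exited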
  8. I've had some serious issues with corruption on ReFS with Windows 10. It seems to be a Windows issue. I just went back to NTFS.
  9. Hi, I actually spoke with you on the Reddit thread. I would go ahead and submit logs and open a proper ticket for your issue. It does not seem to be a settings issue, so I think you need more troubleshooting.
  10. Yes and no. I'd say this sorta depends on the use case too. Remember that a fixed cache will also throttle all write requests once the cache is full. So you'll only be able to write to the drive at the speed of the upstream connection. If you're using a torrent client to interact directly with the drive, that could slow everything down overall. That being said, if we're talking about a drive that is predominantly tuned for seeding, I think you're right.
  11. The drive needs to be detached and reattached to the system. This is only an option you can change when creating a drive or reattaching one.
  12. So CloudDrive creates a real filesystem on a real (though not physical) drive structure. That means that NTFS on your CloudDrive will behave just like NTFS on a physical hard drive. So, just like a physical drive, when you delete data, NTFS simply marks that data as deleted and the drive space as available for future use. The data remains on the drive structure until overwritten by something else. So, to directly answer your questions:

    1) Sure. It will "go away" once it is overwritten by new data. If some sort of information security is important to you (beyond that provided by end-to-end drive encryption), you'd want to use one of the many tools available to overwrite hard drive data with zeros or random binary.

    2) Yes, it can. Just like any physical drive, you can use recovery tools to recover "deleted" or "lost" data off of your mounted CloudDrive. I think, on the balance, this is a huge plus for CloudDrive as a storage solution.

    3) You've already reclaimed the space, at least as far as the operating system and filesystem are concerned. Windows will freely write to any drive space that NTFS has marked as available.

    What's probably confusing you a little is that, unlike a physical drive, where all of the sectors and data space are available from the day you purchase the drive by virtue of being stored on a literal, physical platter, CloudDrive only uploads a block once something has written to it for the first time. This is default behavior for all online storage providers for fairly obvious reasons. You wouldn't want to have to upload, say, an entire 256TB drive structure to Google Drive BEFORE you could start using it. Nevertheless, when you created your CloudDrive the software DID actually create the full filesystem and make it accessible to your OS. So your OS will treat it as if all of that space already exists--even if it only exists conceptually until CloudDrive uploads the data. If you used a local disk space provider to create a drive, by the way, you would see that it creates all of the blocks at drive creation--since local storage doesn't have the same concerns as online providers.
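
    If you ever do want to scrub that free space, the idea behind those tools is simple enough that a rough Python sketch can show it. The drive letter, file name, and chunk size below are just placeholders, and keep in mind that on a CloudDrive the zeroed blocks will themselves get uploaded to the provider.

        import errno
        import os

        TARGET = r"D:\zero_fill.tmp"     # placeholder path on the volume to scrub
        CHUNK = 64 * 1024 * 1024         # write 64 MB of zeros per pass

        def zero_free_space(path: str, chunk_size: int) -> None:
            """Fill the volume's free space with zeros, then delete the filler file."""
            buf = bytes(chunk_size)           # chunk_size bytes of 0x00
            f = open(path, "wb", buffering=0)
            try:
                while True:
                    f.write(buf)              # keep writing zeros...
            except OSError as e:
                if e.errno != errno.ENOSPC:   # ...until the volume reports "disk full"
                    raise
            finally:
                f.close()
                os.remove(path)               # release the space again afterwards

        if __name__ == "__main__":
            zero_free_space(TARGET, CHUNK)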
  13. That setting is specifically for drives hosted on the local disk and shares provider. It won't do anything in the overwhelming majority of use cases. In general, the advanced settings target either very specific scenarios, or the quirks of particular providers. I don't really recommend messing with them unless there is a problem. I DO think Alex should consider revisiting the default number of I/O failures before dismounting the drive, as I'm not sure I've ever had a system where that did not need to be adjusted out of the box. I've also never experienced any negative side-effects (timeouts and system freezes) by raising it to a reasonable number like 8 or so. So I'm not sure what's really going on there. Aside from that value, though, I don't really think there are any general tweaks to be made that shouldn't actually be fixed as a matter of development. That is, if you find an issue that is regularly fixed by tweaking something like the ioManager settings, for example, it's probably better to just send it in as a bug report and open a ticket so Alex can take a look at why the software isn't handling the issue automatically.
  14. If your minimum download is set higher it should not be able to download only 1MB at a time. Mine is set to 10MB, for example. It simply cannot download less than a full chunk on that particular drive at a time. That's one of the reasons that I find your hashing prefetcher settings to be a bit redundant. If you're not prefetching any more than one chunk at a time, the minimum download could handle that setting all by itself. No disk usage should ever dismount your drive. That indicates other problems. Specifically, the drive dismounts because of read/write errors to the provider. If it's happening during heavy usage it's probably related to timeouts from your system I/O. Adjust the relevant setting in the config file in your CloudDrive directory and see if that helps. See my guide here, near the bottom, for specifics: https://www.reddit.com/r/PleX/comments/61ppfi/stablebit_clouddrive_plex_and_you_a_guide/
  15. Right. So, what's important is not the 1MB part, but the 1MB in relation to the time window you've set. YOUR setup will only prefetch if 1MB of data is requested in less than 3 seconds. That's a pretty big request, particularly for a torrent client--where many downloads are still measured in the KB/sec range. But you say you have different settings for seeding, so I guess that's fine. I honestly think I would just disable the prefetcher for hashing files. I'm not sure if it really adds anything there. In any case, I think you're both dramatically overestimating the importance of file data being stored in sequential chunks, and underestimating the intelligence of the CloudDrive prefetch algorithms. I think you're making assumptions about the nature of the prefetcher that may not be true, though; until documentation is completed, we can probably only speculate. For what it's worth, you can defragment a CloudDrive--if you just want to eliminate the problem altogether.
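
    To make the trigger/time window/forward relationship concrete, here is roughly how a heuristic like that tends to work. This is pure speculation on my part, not StableBit's actual code, and every name and number below is made up for illustration.

        import time
        from collections import deque

        TRIGGER_BYTES = 1 * 1024 * 1024     # "prefetch trigger" (1 MB)
        WINDOW_SECONDS = 3.0                # "prefetch trigger time window" (3 s)
        FORWARD_BYTES = 10 * 1024 * 1024    # "prefetch forward" (10 MB)

        class PrefetchHeuristic:
            def __init__(self) -> None:
                self.reads = deque()        # (timestamp, length) of recent reads

            def on_read(self, length: int) -> int:
                """Record a read and return how many bytes to prefetch (0 = none)."""
                now = time.monotonic()
                self.reads.append((now, length))
                # Forget reads that fall outside the time window.
                while self.reads and now - self.reads[0][0] > WINDOW_SECONDS:
                    self.reads.popleft()
                # If enough data was requested inside the window, fetch ahead.
                recent_bytes = sum(n for _, n in self.reads)
                return FORWARD_BYTES if recent_bytes >= TRIGGER_BYTES else 0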
  16. Your prefetcher settings are probably too conservative, depending on what you're trying to accomplish. Mine are set up this way because I want CloudDrive to start pulling down content to the local disks when someone starts actively downloading one of my seeds. As such, I want it to respond at a much lower rate than 1MB in 3 secs, because many people download from my seeds much slower than that. I don't want CloudDrive to have to poll the content from the provider for every bittorrent chunk someone needs. I'd like to have it on the drive so that their download can be (relatively) independent of the amount of downstream bandwidth that my server has access to at any given moment. I also see no reason for you to limit yourself to a single CloudDrive chunk of prefetch unless your storage situation is so dire that you simply do not have any overhead to spare.
  17. My seed drive can hash FAR more than 20GB/day. Now I'm just wondering about your settings. What are they? Mine are as follows: 10MB chunk size, 10MB minimum download, 25GB expandable cache (on an SSD), full encryption, 10 download threads, 5 upload threads, upload throttled at 25mbps, 1MB prefetch trigger, 10MB prefetch forward, 180 second prefetch window. The server is 1gbps downstream, 250mbps upstream, and it also runs Plex (on a different CloudDrive for media storage, which also seeds). My server can hash a full remuxed Blu-ray movie in about 30-45 minutes. That's around 25-35GB.
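
    Quick back-of-envelope on that rate, if you want to sanity check it yourself (the figures are rough middles of the ranges above, and obviously nothing sustains that around the clock):

        # ~30 GB hashed in ~40 minutes, from the rough figures above
        gb = 30
        minutes = 40
        rate_mb_s = gb * 1024 / (minutes * 60)
        per_day_gb = rate_mb_s * 86400 / 1024
        print(f"{rate_mb_s:.0f} MB/s  ->  ~{per_day_gb:.0f} GB/day if sustained")
        # ~13 MB/s, or on the order of 1 TB/day -- far more than 20GB/day,
        # even after allowing for throttling and other traffic on the drive.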
  18. Right. Like I said, it makes a great drive to hold long-term seeds. You're talking about the inarguable issues that it has downloading torrent content. I'm talking about long-term storage for seeding for months or even years after a download. Many old torrents are rarely downloaded and poorly seeded. A CloudDrive, particularly paired with one of the unlimited cloud providers, can host that content essentially indefinitely. Incomplete downloads are not an issue, since seeds are already downloaded. Hash checks take time, but like any server-based storage solution, proper management can minimize, if not eliminate, the need for them after the initial hash. Ultimately, though, if you just want a drive to sit there, store content, and seed it to your trackers, CloudDrive works just fine. The simple solution is obviously to just download to a local drive, upload the completed content to your CloudDrive, hash it (once), and seed from there forever more.
  19. If you hover over the drive size at the top of the UI, it will tell you what your minimum download size is set to. You can change it by detaching the drive and reattaching it. I would drop your prefetch time window to somewhere around 30-150secs. I generally suggest a cache at least as large as your largest media file. There is a good guide here: https://www.reddit.com/r/PleX/comments/61ppfi/stablebit_clouddrive_plex_and_you_a_guide/ It was written with Plex in mind, but should work well for any media-based drive.
  20. Plex playback issues

    On the Plex forums, there seem to be a number of complaints about the 1.6.X PMS versions. I would stick with 1.5.X for now until they work out those kinks. Even people who are not using CloudDrive are reporting connectivity and responsiveness issues.
  21. Much of this is wrong or a bit misinformed. Using a CloudDrive as a torrent drive does not result in any additional API calls. It will result in additional reads and writes to the cache drive, but CloudDrive will still upload and download the chunks with the same amount of API usage as any other use. Beyond this, rClone results in API bans because it neither caches filesystem information locally, nor respects Google's throttling requests with incremental backoff. CloudDrive does both of these things, and will do so regardless of its use-case--torrents or otherwise. In any case, CloudDrive DOES work for torrents. In particular, it makes a great drive to hold long-term seeds. The downside, as observed, is that hash checks and such will take a long time initially, but once that's completed you should notice few differences as long as your network speeds can accommodate the overhead for CloudDrive.
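
    For what it's worth, "incremental backoff" just means something like the following. This is a generic sketch against a rate-limited HTTP API, using the third-party requests library, not StableBit's actual code; the status codes and delays are illustrative.

        import random
        import time

        import requests

        def fetch_with_backoff(url: str, max_attempts: int = 8) -> requests.Response:
            """GET a URL, backing off exponentially when the API throttles us."""
            delay = 1.0
            for _ in range(max_attempts):
                resp = requests.get(url, timeout=30)
                # Treat rate-limit responses and transient server errors as retryable.
                if resp.status_code not in (403, 429, 500, 502, 503):
                    return resp
                time.sleep(delay + random.uniform(0, 1))   # jitter avoids bursts
                delay = min(delay * 2, 64)                 # double the wait, capped
            resp.raise_for_status()
            return resp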
  22. If I'm reading him correctly, he's suggesting you drop the 1800secs to 300secs, not the 400MB to 300MB.
  23. Yes. It can read portions of a file.
  24. I should have grabbed logs...but I completely wiped the system and started over (With WS2016 and NTFS) after the corruption. I apologize. I was just frustrated and wanted to get started on a rebuild. This is to say, I have no logs to give you, unfortunately.