Covecube Inc.



About Soaringswine

  1. Everything looks fine, though I set minimum download size to 50 MB.
  2. heh.. every link was already purple on that search for me, as I've been trying to figure out a cause all year. The only recourse is logical data recovery software like R-Studio, but even then, good luck getting it to fully scan a 200 TB ReFS partition hosted on a cloud service without crashing or taking 3 months. Honestly, ReFS support could work better with CloudDrive if Storage Spaces were used underneath with Integrity Streams enabled, but then I have no idea how the CloudDrive cache would handle that.
  3. CloudDrive seems to have major issues with ReFS drives. I just lost my fifth drive (something close to 50 TB lost to this over the year), and it only shows as RAW - even with upload verification enabled. Going to move back to NTFS; at least you can run chkdsk on that.
  4. Hi, I know rclone can utilize something like PG Blitz (https://plexguide.com/news/wiki/pg-blitz-instructions/) or Supertransfer2 (https://plexguide.com/wikis/supertransfer2/) to bypass G Suite Business's 750 GB daily upload limit. Could this be implemented in CloudDrive, or would it be too much work?
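For what it's worth, the 750 GB/day ceiling mentioned above is easy to model on the client side. Below is a minimal Python sketch of a quota tracker; the rolling 24-hour accounting is my assumption about how Google enforces the limit, and none of this is an actual CloudDrive or rclone API:

```python
from collections import deque
import time

DAILY_LIMIT = 750 * 1000**3  # 750 GB; assumed to apply over a rolling 24 h


class UploadQuota:
    """Tracks bytes uploaded in the last 24 hours and says when to pause."""

    def __init__(self, limit=DAILY_LIMIT, window=24 * 3600):
        self.limit = limit
        self.window = window
        self.events = deque()  # (timestamp, nbytes) pairs, oldest first
        self.total = 0

    def record(self, nbytes, now=None):
        """Account for a completed upload of nbytes."""
        now = time.time() if now is None else now
        self.events.append((now, nbytes))
        self.total += nbytes
        self._expire(now)

    def _expire(self, now):
        # Drop uploads that fell out of the 24-hour window.
        while self.events and self.events[0][0] <= now - self.window:
            _, n = self.events.popleft()
            self.total -= n

    def can_upload(self, nbytes, now=None):
        """True if another nbytes would still fit under the daily limit."""
        now = time.time() if now is None else now
        self._expire(now)
        return self.total + nbytes <= self.limit
```

An uploader built on this would simply sleep until `can_upload` turns true again instead of hammering the API into `User Rate Limit Exceeded` errors.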
  5. Right, I understand what is being stated, but the observed behavior is different. If you change the prefetch window to X seconds and then trigger the prefetcher, the prefetcher seems to hold the data for X seconds. Please see the two gifs I made illustrating the behavior: 20 second prefetch window: 60 second prefetch window:
  6. Awesome! Could you maybe ping Alex and get some clarification on the 30s bit? Because the behavior of the window isn't exactly as it's described elsewhere (it seems to control how long data is held in the prefetcher).
  7. heh.. now I'm trying to find the best settings for 4K UHD remuxes (400 - 500 megabit/s bitrate..). Any tips? I can't seem to get more than 350 Mb/s prefetching (a "cache this specific file ahead of time" option would be amazing!). Wondering if I'm hitting crypto limitations.. my CPU has AES-NI, but it's an older implementation and each thread is only 2.0 GHz (20 threads on the CloudDrive VM total). Edit: hmm, looks like it was just a media client issue. 4K remuxes seem to stream fine now.
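As a sanity check on the numbers in the post above: whether 350 Mb/s of prefetch throughput keeps up with a 400-500 Mb/s remux is just the ratio of the two rates. A quick back-of-the-envelope sketch, not a measurement:

```python
# Real-time margin: prefetch throughput divided by stream bitrate.
# Below 1.0x the prefetcher falls behind the player in real time,
# and playback only survives on whatever the client has buffered.
prefetch_mbps = 350
for bitrate_mbps in (400, 500):
    margin = prefetch_mbps / bitrate_mbps
    print(f"{bitrate_mbps} Mb/s remux: {margin:.2f}x real time")
```

At the top end of that bitrate range the margin is well under 1.0x, which is why a per-file "cache ahead of time" option (or a large client-side buffer) matters more than raw prefetch tuning.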
  8. The trigger is happening regardless; it's the "Prefetched" amount that stays around longer, indicating to me that the prefetched data is being kept in the prefetch cache longer. This also means you can easily accrue 10-30 GB of prefetch cache if you set your window high. Try the experiment I described above and you'll see.
  9. Thanks guys, that all makes sense. I still think there's something funky with the "Prefetched:" figure in the UI, then. Seriously, try modifying the Prefetch Time Window to something like 10 seconds, then prefetch something. That number will go up to the amount that you prefetched (150 MB or whatever), then drop down to 0 MB (and disappear) 10 seconds later. Now set it to 300 seconds and prefetch. It'll keep that data for 300 seconds, then drop down to 0 MB (and disappear). So either there's a bug in the UI, the documentation is wrong, or the functionality is different.
  10. The window does not appear to function like that. You can change the window and watch the prefetched data get held in the UI for the time you set. Try setting the window to 30 minutes and set a timer; it'll hold that prefetched data for 30 minutes and then drop it.

      How would 1 MB in 30 seconds work? If it worked the way you're describing, how would you ever take 30 seconds to access 1 MB? It would take a couple of seconds at most to access that much data, so that "window" would never trigger. The other issue with a low trigger size is that if Plex or any other program is analyzing file metadata, it triggers the prefetcher and clogs it up even when it only needs to read parts of a file.

      Do you have any proof that the Transcoder default throttle buffer is CPU time? "Amount in seconds to buffer before throttling back the transcoder speed." is what the Plex site says, and I can't find any other sources aside from suggested values of 10 minutes on the Rclone forums. Even if it is CPU time, it tells Plex to keep transcoding additional data, and the Plex client will buffer and cache it.

      In any case, I am certainly getting better performance with these settings. I can see spikes of data streaming / transcoding when the large prefetch gets downloaded and then processed by Plex and sent to the client. I watched 4x 1080p streams get handled flawlessly. That being said, I do have gigabit and 20x 2 GHz cores on the server.
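The back-and-forth here boils down to a simple model: the observed behavior is that of a time-expiring cache, where each prefetched chunk is dropped `window` seconds after it was fetched. A small Python sketch of that observed model (an illustration of the behavior described in these posts, not CloudDrive's actual implementation):

```python
import time


class PrefetchCacheModel:
    """Model of the observed behavior: prefetched chunks are held for
    `window` seconds after being fetched, then dropped."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.chunks = {}  # offset -> (fetch_time, data)

    def prefetch(self, offset, data, now=None):
        """Record a chunk as freshly prefetched."""
        now = time.time() if now is None else now
        self.chunks[offset] = (now, data)

    def prefetched_bytes(self, now=None):
        """What the 'Prefetched:' figure in the UI would show under this model."""
        now = time.time() if now is None else now
        # Drop anything older than the window, keep the rest.
        self.chunks = {off: (t, d) for off, (t, d) in self.chunks.items()
                       if now - t < self.window}
        return sum(len(d) for t, d in self.chunks.values())
```

Under this model a 30-minute window with a 3000 MB forward prefetch really would pin multiple gigabytes in the prefetch cache at once, which matches the 10-30 GB accumulation reported earlier in the thread.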
  11. Just FYI: Google File Stream still has a ~700 GB / day upload limit, and way more things work with StableBit CloudDrive than with File Stream. Try running portable apps from File Stream; they rarely work. File Stream is a bit of a black box atm with limited features, but it could potentially be great.
  12. Ok, did some experimenting! So, the prefetcher from what I understand now is essentially another, higher priority, "cache", and the window is how long the prefetcher keeps that data around. I've observed DRASTICALLY better performance by increasing the trigger to 100 MB and the window to around 30 minutes. I also increased Transcoder default throttle buffer in Plex to 10 minutes so that it will go ahead and keep requesting additional data. Another question: does data ever move from the prefetcher into the normal cache or is it dropped after your set window no matter what?
  13. Hi all, I have a home 1 Gb/s up/down connection and a very beefy Plex server. I use CloudDrive with G Suite and a 500 GB SSD cache drive. 90% of my content is high-bitrate 1080p video. I was wondering what the ideal I/O performance settings were. I get pretty good performance as it is with these settings, but some things are a bit unclear and I'd like to optimize:

      Download threads: 10
      Minimum download size: 100 MB
      Prefetch trigger: 10 MB
      Prefetch forward: 3000 MB
      Prefetch time window: 30 seconds

      It seems the prefetcher isn't too intelligent at the file level. That is, if I'm accessing a 1000 MB file, it still prefetches 3000 MB of data. But from where? Just the next contiguous 2000 MB of blocks? Ideally I'd want to set the prefetch forward to something like 15000 MB, which would be the size of my largest media files, but then I think it'd prefetch 15 GB of data on every prefetch, regardless of whether it's relevant.

      How does the minimum download size work with the prefetcher? It's my understanding that if your cloud block size < minimum download size, it always downloads however many blocks it needs to fill up the minimum download size. How does that interact with the prefetcher?

      I'm still unclear how the prefetch trigger and the time window relate. Another thread essentially says: if 10 MB of data is accessed within 30 seconds, prefetch 3000 MB? I'm not exactly sure how that would work with a gigabit connection, as 10 MB would never take 30 seconds to download.

      All that aside, if I have a gigabit connection, does the prefetcher actually help? It seems that sometimes there are download activities going on alongside prefetch activities, and I wonder if they cause issues. Any other optimization tips? Thanks!
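The "10 MB in 30 seconds" puzzle above is mostly arithmetic. Assuming the trigger means "this many bytes read within the time window" (that interpretation is an assumption, not confirmed documentation), at gigabit speeds the trigger is effectively always hit:

```python
# Back-of-the-envelope numbers for the settings quoted above,
# assuming the trigger means "this many bytes read within the window".
link_MBps = 1000 / 8          # 1 Gb/s link -> 125 MB/s

trigger_MB = 10
window_s = 30
prefetch_forward_MB = 3000

# Reading 10 MB at line rate takes well under a second, so on a gigabit
# connection the trigger fires long before the 30-second window expires.
print(f"{trigger_MB / link_MBps:.2f} s to read the trigger vs a {window_s} s window")

# Filling the 3000 MB forward prefetch at line rate:
print(f"{prefetch_forward_MB / link_MBps:.0f} s to fill the prefetch")
```

In other words, under this reading the window only matters on links slow enough that the trigger amount takes a meaningful fraction of it to arrive, which may be why its effect is hard to observe on a gigabit connection.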
  14. AH HA! it's because I have Arq running backups to the same Gsuite account. must be hitting the limit there.
  15. I'm getting User Rate Limit Exceeded on downloads (prefetches) and there's no way I've uploaded 750 GB within 24 hours.