Covecube Inc.



About Soaringswine

  1. Everything looks fine, though I set minimum download size to 50 MB.
  2. Heh... every link was already purple on that search for me, as I've been trying to figure out a cause all year. The only recourse is logical data recovery software like R-Studio, but even then, good luck getting it to fully scan a 200 TB ReFS partition hosted on a cloud service without crashing or taking 3 months. Honestly, ReFS might work better with CloudDrive if it utilized Storage Spaces underneath with Integrity Streams enabled, but then I have no idea how the CloudDrive cache would handle that.
  3. CloudDrive seems to have major issues with ReFS drives. I just lost my fifth drive (something close to 50 TB lost from this happening this year), and it now only shows as RAW, even with upload verification enabled. Going to move back to NTFS; at least you can run chkdsk on that.
  4. Hi, I know rclone can use something like PG Blitz (https://plexguide.com/news/wiki/pg-blitz-instructions/) or Supertransfer2 (https://plexguide.com/wikis/supertransfer2/) to bypass G Suite Business's 750 GB/day upload limit. Could this be implemented in CloudDrive, or would it be too much work?
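For context on what that daily cap implies, here's a quick back-of-the-envelope calculation. The arithmetic is purely illustrative and not from CloudDrive or Google's documentation:

```python
# What Google's 750 GB/day upload cap means in practice.
# Illustrative arithmetic only.
CAP_GB = 750
DAY_S = 24 * 60 * 60

cap_mbit = CAP_GB * 1000 * 8        # cap expressed in megabits
sustained = cap_mbit / DAY_S        # Mbit/s that exactly hits the cap
print(f"Sustained rate at the cap: {sustained:.1f} Mbit/s")  # ~69.4

# At a full 1 Gbit/s upload, how long until the cap is hit?
hours_at_gigabit = cap_mbit / 1000 / 3600
print(f"Time to hit cap at 1 Gbit/s: {hours_at_gigabit:.1f} h")  # ~1.7
```

In other words, a gigabit upload link can exhaust the daily quota in under two hours, which is why tools that rotate across accounts exist.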
  5. Right, I understand what is being stated, but the observed behavior is different. If you change the prefetch window to X seconds and then trigger the prefetcher, the prefetcher seems to hold the data for X seconds. Please see the two GIFs I made illustrating the behavior (20 second prefetch window and 60 second prefetch window).
  6. Awesome! Could you maybe ping Alex and get some clarification on the 30 s bit? The behavior of the window isn't exactly as it's described elsewhere (it seems to control how long data is held in the prefetcher).
  7. Heh... now I'm trying to find the best settings for 4K UHD remuxes (400-500 Mbit/s bitrate). Any tips? I can't seem to get more than 350 Mbit/s while prefetching (a "cache this specific file ahead of time" option would be amazing!). Wondering if I'm hitting crypto limitations; my CPU has AES-NI, but it's an older implementation and each thread is only 2.0 GHz (20 threads on the CloudDrive VM total). Edit: hmm, looks like it was just a media client issue. 4K remuxes seem to stream fine now.
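To see why a 350 Mbit/s prefetch rate is marginal for a 500 Mbit/s remux, a quick sketch of the buffer drain math. The 3000 MB buffer size is an assumption, borrowed from the prefetch-forward setting discussed elsewhere in this thread:

```python
# If prefetch fills at ~350 Mbit/s while a 4K remux drains at
# ~500 Mbit/s, the buffer shrinks at the difference. How long
# until it runs dry? (Buffer size is an assumed value.)
fill_mbit_s = 350
drain_mbit_s = 500
buffer_mb = 3000                      # megabytes of prefetched data

deficit = drain_mbit_s - fill_mbit_s  # net drain, Mbit/s
buffer_mbit = buffer_mb * 8           # buffer in megabits
seconds_until_empty = buffer_mbit / deficit
print(f"Buffer lasts ~{seconds_until_empty / 60:.1f} minutes")  # ~2.7
```

So even a large prefetch buffer only papers over a throughput deficit for a few minutes before playback stalls.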
  8. The trigger is happening regardless; it's the "Prefetched" amount that stays around longer, indicating to me that the prefetched data is being kept in the prefetch cache longer. This also means you can easily accrue 10-30 GB of prefetch cache if you set your window high. Try the experiment I described above and you'll see.
  9. Thanks guys, that all makes sense. I still think there's something funky with the "Prefetched:" figure in the UI, then. Seriously, try setting the Prefetch Time Window to something like 10 seconds, then prefetch something. The number will go up to the amount you prefetched (150 MB or whatever), then drop to 0 MB (and disappear) 10 seconds later. Now set it to 300 seconds and prefetch: it'll keep that data for 300 seconds, then drop to 0 MB (and disappear). So either there's a bug in the UI, the documentation is wrong, or the functionality is different.
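One way to picture the behavior being described is a simple TTL cache: each prefetched chunk is held for the window duration and then dropped. This is only a toy model of the observed UI behavior, not CloudDrive's actual implementation:

```python
class PrefetchModel:
    """Toy model of the observed behavior: prefetched data is held
    for `window_s` seconds after it is fetched, then dropped.
    NOT CloudDrive's real implementation -- just an illustration."""

    def __init__(self, window_s):
        self.window_s = window_s
        self.chunks = []  # list of (fetch_time, size_mb)

    def prefetch(self, size_mb, now):
        self.chunks.append((now, size_mb))

    def prefetched_mb(self, now):
        # Expire anything older than the window, like the UI counter
        self.chunks = [(t, s) for t, s in self.chunks
                       if now - t < self.window_s]
        return sum(s for _, s in self.chunks)

model = PrefetchModel(window_s=10)
model.prefetch(150, now=0)
print(model.prefetched_mb(now=5))    # 150 -- still inside the window
print(model.prefetched_mb(now=11))   # 0   -- dropped after 10 s
```

Under this model, raising the window from 10 s to 300 s would keep the "Prefetched:" figure nonzero thirty times longer, which matches the experiment described above.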
  10. The window does not appear to function like that. You can change the window and watch the prefetched data get held in the UI for the time you set. Try setting the window to 30 minutes and set a timer; it'll hold that prefetched data for 30 minutes and then drop it. And how would "1 MB in 30 seconds" even work? It would take a couple of seconds at most to access that much data, so it would never trip that window. The other issue with a low trigger size is that if Plex or any other program is analyzing file metadata, it triggers
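To make the "1 MB in 30 seconds" point concrete: the access rate such a trigger implies is tiny compared to any video stream, so it would fire essentially always. Illustrative arithmetic only, with an assumed 40 Mbit/s 1080p bitrate:

```python
# A trigger of "1 MB read within a 30 s window" corresponds to a
# very low access rate -- virtually any playback exceeds it.
trigger_mb, window_s = 1, 30
trigger_rate = trigger_mb * 8 / window_s      # Mbit/s needed to trip it
print(f"Trigger rate: {trigger_rate:.2f} Mbit/s")   # ~0.27

# Time for an assumed 40 Mbit/s 1080p stream to read 1 MB:
stream_mbit_s = 40
seconds_to_read_1mb = trigger_mb * 8 / stream_mbit_s
print(f"{seconds_to_read_1mb:.1f} s")   # 0.2 s, far inside the window
```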
  11. Just FYI, Google File Stream still has a ~700 GB/day upload limit, and way more things work with StableBit CloudDrive than with File Stream. Try running portable apps from File Stream; they rarely work. File Stream is a bit of a black box at the moment with limited features, but it could potentially be great.
  12. OK, did some experimenting! From what I understand now, the prefetcher is essentially another, higher-priority "cache", and the window is how long the prefetcher keeps that data around. I've observed DRASTICALLY better performance by increasing the trigger to 100 MB and the window to around 30 minutes. I also increased the transcoder default throttle buffer in Plex to 10 minutes so that it keeps requesting additional data. Another question: does data ever move from the prefetcher into the normal cache, or is it dropped after your set window no matter what?
  13. Hi all, I have a home 1 Gb/s up/down connection and a very beefy Plex server. I use CloudDrive with G Suite and a 500 GB SSD cache drive. 90% of my content is high-bitrate 1080p video. I was wondering what the ideal I/O performance settings are. I get pretty good performance as it is with these settings, but some things are a bit unclear and I'd like to optimize:
     Download threads: 10
     Minimum download size: 100 MB
     Prefetch trigger: 10 MB
     Prefetch forward: 3000 MB
     Prefetch time window: 30 seconds
     It seems the prefetcher isn't too intelligent at the file level. That is
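A quick sanity check on what those settings buy for the content described above. The 40 Mbit/s figure is an assumed representative bitrate for high-bitrate 1080p, not from the post:

```python
# How much playback a 3000 MB prefetch-forward buffer covers,
# assuming ~40 Mbit/s for high-bitrate 1080p (assumed figure).
prefetch_mb = 3000
bitrate_mbit_s = 40

playback_s = prefetch_mb * 8 / bitrate_mbit_s
print(f"~{playback_s / 60:.0f} minutes of playback")  # ~10

# And how long a 1 Gbit/s link needs to fill that buffer:
fill_s = prefetch_mb * 8 / 1000
print(f"~{fill_s:.0f} s to fill at 1 Gbit/s")  # 24
```

So on this connection the buffer fills in under half a minute but covers roughly ten minutes of playback, which is why prefetching ahead aggressively works well here.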
  14. AH HA! it's because I have Arq running backups to the same Gsuite account. must be hitting the limit there.
  15. I'm getting User Rate Limit Exceeded on downloads (prefetches) and there's no way I've uploaded 750 GB within 24 hours.