Covecube Inc.

red
Members · Content Count: 42 · Days Won: 1
Everything posted by red

  1. Sorry about the tinted screengrab (f.lux). What's this about?
  2. Optimal settings for Plex

     Just out of interest, why do you split the Cloud drive into multiple volumes?
  3. To briefly return to this topic: if I point Plex at HybridPool, scanning my library takes well over 2 hours. If I point Plex only at LocalPool, it's over in a couple of minutes. So something is routing Plex's reads through CloudDrive instead of just serving them fast from LocalPool, perhaps the sheer number of read requests Plex generates? Any ideas on how I could debug why it's doing this?
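     One way to narrow this down: time a stat-heavy walk of the same folder on both pools, which roughly mimics the metadata reads a Plex scan issues. A minimal sketch, assuming hypothetical mount points for the two pools:

     ```python
     # Compare how long a full directory walk takes on each pool.
     # The paths below are hypothetical; substitute your own mounts.
     import os
     import time

     POOLS = {
         "HybridPool": r"H:\plexstuff",  # hypothetical mount point
         "LocalPool": r"L:\plexstuff",   # hypothetical mount point
     }

     def walk_time(root: str) -> tuple[float, int]:
         """Walk the tree, stat every file, return (seconds, file count)."""
         start = time.perf_counter()
         count = 0
         for dirpath, _dirs, files in os.walk(root):
             for name in files:
                 try:
                     os.stat(os.path.join(dirpath, name))  # metadata read, like a scan
                     count += 1
                 except OSError:
                     pass
         return time.perf_counter() - start, count

     for label, root in POOLS.items():
         seconds, count = walk_time(root)
         print(f"{label}: {count} files statted in {seconds:.1f}s")
     ```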
  4. I've had to restart the service daily now. I'm running the troubleshooter before restarting the service yet again. I think I'll need to schedule automatic restarts for it every 12 hours or so if this keeps up. Since the last restart, I was able to upload 340GB successfully (100Mbit up).
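     If the restarts do become routine, they can be scripted rather than clicked through. A minimal sketch of a 12-hour restart loop; the service name is an assumption (check the real one in services.msc), and the script must run elevated:

     ```python
     # Restart the StableBit CloudDrive Windows service every 12 hours.
     # The service name is an assumption -- verify it in services.msc first.
     import subprocess
     import time

     SERVICE = "StableBit.CloudDrive.Service"  # assumed name

     def restart_service(name: str) -> None:
         subprocess.run(["sc", "stop", name], check=False)  # sc returns immediately
         time.sleep(30)                                     # let the service wind down
         subprocess.run(["sc", "start", name], check=True)

     while True:
         restart_service(SERVICE)
         time.sleep(12 * 60 * 60)  # 12 hours
     ```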
  5. I'm still experiencing this on 1.1.2.1177. I wiped my cloud drive and started from scratch on 1174 a couple of weeks ago, and have since uploaded 9TB of data, but the issue arose again two days ago. After restarting my PC yesterday the uploads continued, until today they are stuck with I/O errors again after half a TB or so. Instead of a reboot I tried restarting the CloudDrive service, and now it's uploading again.
  6. I was finally able to fix the situation by detaching the cloud drive, removing it from the pool, rebooting, and then re-attaching it and adding it back to the pool. The erroneous 0KB files that could not be accessed are now gone.
  7. Turns out the files actually do show up on the cloud drive, but Windows says they are corrupt and I cannot access them. I'll attempt to remove the folders after I back the data up, if possible. Edit: there's actually around 150GB of data in this state, but the warning only shows a dozen files at a time; the rest come after "..."
  8. Hi, I'm duplicating my local pool to a cloud drive and receiving the following error for a dozen or so files: The files are around 1 to 2GB in size, I have over 1000GB of free space on my local pool, and around 7TB in my cloud drive. Rebooting and updating to the latest beta did not resolve the issue; those files have been stuck for around a week now. What should I do to resolve this?
  9. Yes. Just in case, I'll clarify what my suggestion meant: I point Plex there so that all reads come 100% from LocalPool, and anything that exists only in the cloud isn't visible, while LocalPool is duplicated as-is to CloudDrive. I'd love a non-hacky way to achieve something similar: a slave parity disk in a pool that is never read from.
  10. As for the problem, it's due to pointing Plex at LocalPool's PoolPart/plexstuff folder. I know I'm using it in an unintended way, which is why I made the suggestion: then I could just point Plex directly at pooldrive:/plexstuff while having DrivePool duplicate pooldrive to the cloud on the side. If I point Plex at the combined HybridPool, the speeds take a hit (I have an open issue on this but haven't been able to dedicate time to gathering proper logs etc.). A feature like this would make the issue moot too, though.
  11. This is a common warning for me: It happens because I point Plex at LocalPool, which is part of HybridPool and just functions as local-to-cloud duplication. This way I can use all my physical drives to 100%, and if something breaks, I can grab the data from the cloud. I suppose I could do this by getting rid of HybridPool and just rsyncing data daily from LocalPool to the CloudDrive, but I just love how well DrivePool handles this 99.9% of the time without my intervention. So, for the suggestion:

      1) It would be awesome to have a setting that would always let me overwrite the cloud data with local data if the file parts differ and the local modified time for the file is newer. Doubly so if there were a flag to enable this only for files smaller than X megabytes.

      or 2) A setting to mark a certain drive in DrivePool as a slave of sorts. Data from the slave is never pushed to other drives unless the user manually triggers "duplicate from slave"; all other data is pushed to the slave volume normally. This would let users put things on the slave disk that they never want stored locally, without having to write folder-by-folder rules and have duplication throw a fit because things can't be duplicated, as they are never allowed on disk X.
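     To make suggestion 1) concrete, here is a minimal sketch of the newer-wins rule, using a size mismatch as a cheap stand-in for "the file parts differ". Both paths and the size cap are hypothetical:

     ```python
     # One-way mirror: overwrite the cloud copy only when the local file is
     # newer and differs (size used as a cheap proxy). Paths are hypothetical.
     import os
     import shutil

     LOCAL = r"L:\plexstuff"            # hypothetical LocalPool path
     CLOUD = r"C:\CloudPool\plexstuff"  # hypothetical CloudDrive path
     MAX_BYTES = 50 * 1024 * 1024       # the "files smaller than X MB" flag

     for dirpath, _dirs, files in os.walk(LOCAL):
         rel = os.path.relpath(dirpath, LOCAL)
         for name in files:
             src = os.path.join(dirpath, name)
             dst = os.path.join(CLOUD, rel, name)
             s = os.stat(src)
             if s.st_size > MAX_BYTES:
                 continue  # leave big files for manual handling
             if (not os.path.exists(dst)
                     or (s.st_size != os.path.getsize(dst)
                         and s.st_mtime > os.path.getmtime(dst))):
                 os.makedirs(os.path.dirname(dst), exist_ok=True)
                 shutil.copy2(src, dst)  # copy2 preserves the modified time
     ```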
  12. From personal experience, both VirtualBox's and VMware's disk sharing between host and guest are atrociously slow by today's standards. Large numbers of small files in particular are just unusable for things like searches without indices, or web development where many small files need to be read in a short timespan. I would advise against this sort of setup for anything but testing, but that's just me.
  13. By the way, I started experiencing quite long read delays with my 100GB SSD cache and 20TB of files in my "HybridPool". Despite having local copies of the files, double-clicking a video would wait a good 3-4 seconds before starting playback, even though I could see in DrivePool that the file was being accessed only locally and no bits were being downloaded. Same thing with browsing the folder structure: a white background in Explorer for many seconds after opening a folder. I mainly use my setup with Plex, so I just pointed Plex to read everything directly from LocalPool. The CloudDrive part just acts as simple parity in case a disk dies; cheaper and easier than RAID. Just remember you can't delete files from LocalPool directly, or they'll just reappear on the next re-balance/duplication pass.
  14. I had a similar problem after adding new disks to my pool yesterday. Today the problem solved itself after a couple of reboots. Curious...
  15. ~24 hours and a reboot or two later: I unchecked the Ordered File Placement plugin and re-checked the SSD Optimizer's "Ordered File Placement". The software is functioning as expected again, so my problem is solved.
  16. Ordered File Placement via the SSD Optimizer is not working properly: files from the Western Digital drive won't move out, and for some reason DrivePool is filling the 2nd disk on the list first. ...but when I uncheck the option in the SSD Optimizer and use the separate plugin instead, it seems to function properly: disks are filled in the correct order. But since I'm using the separate plugin, all fresh writes seem to go to Archive #1 instead of the buffer SSD. Here are my balancing settings: What have I set up wrong? This began after removing one old 1TB drive and adding two new 8TB drives to the pool; the same settings worked properly before.
  17. When I replied I was still on 974, since, like I said in the other thread, the download link you provided for 976 was timing out. I realized it had probably already been released, and found the installer on the actual download page. I've had zero issues since updating! Before I updated, I was indeed getting actual unmounts.
  18. This may now have started disconnecting my drive too. I've had to reconnect three times today, and I get no messages other than the constant "no authorization" one.
  19. Then it's the same as for me. Windows, for some reason, reports the actual space used (in Explorer) in the first case, while the latter is correct and programs are actually able to write to the disk. As I understand it, this has to do with the inner workings of sparse files, and I don't think there's going to be a fix for it apart from restarting the CloudDrive (and DrivePool, if you use that) service every now and then; that's what I do when my dedicated SSD has been filled to the brim like this. At least by restarting just the services via `services.msc` you don't have to reboot. A fix would be nice, but I can live without it.
  20. Try setting the cache type to fixed. I think they're using sparse files for the cache, and it often shows a lot of space as allocated even though it's actually free. Check the cache folder by right-clicking it and selecting Properties: for me, "Size" is often something crazy, while "Size on disk" (the actual usage) is way smaller.
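     If you'd rather script that check than click through Properties, the Win32 call GetCompressedFileSizeW returns the allocated on-disk size, which for sparse files can be far below the logical size. A minimal sketch; the cache path is hypothetical:

     ```python
     # Sum logical size vs. size on disk for every file in the cache folder.
     # GetCompressedFileSizeW reports actual allocation for sparse files.
     import ctypes
     import os

     kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
     GetCompressedFileSizeW = kernel32.GetCompressedFileSizeW
     GetCompressedFileSizeW.argtypes = [ctypes.c_wchar_p, ctypes.POINTER(ctypes.c_ulong)]
     GetCompressedFileSizeW.restype = ctypes.c_ulong
     INVALID_FILE_SIZE = 0xFFFFFFFF

     def size_on_disk(path: str) -> int:
         """Return the allocated on-disk size (handles sparse/compressed files)."""
         high = ctypes.c_ulong(0)
         low = GetCompressedFileSizeW(path, ctypes.byref(high))
         if low == INVALID_FILE_SIZE and ctypes.get_last_error() != 0:
             raise ctypes.WinError(ctypes.get_last_error())
         return (high.value << 32) + low

     CACHE = r"D:\CloudDriveCache"  # hypothetical cache folder
     logical = allocated = 0
     for dirpath, _dirs, files in os.walk(CACHE):
         for name in files:
             p = os.path.join(dirpath, name)
             logical += os.path.getsize(p)
             allocated += size_on_disk(p)

     print(f"size: {logical / 2**30:.1f} GiB, size on disk: {allocated / 2**30:.1f} GiB")
     ```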
  21. I too am suffering from this: multiple re-authorizations each day. I cannot download from the link above; the page just times out ("dl.covecube.com took too long to respond"):

      Pinging dl.covecube.com [64.111.101.68] with 32 bytes of data:
      Request timed out.
  22. I'll chime in here, as I've been fiddling around a lot with settings to find what's fastest for Plex library scans. I have 250Mbit downstream and 50Mbit up.

      Provider: GDrive (G Suite account)
      Cache size: I keep changing this based on other activities, as I'm limited by quite a small SSD; 60GB for now.
      Minimum download size: 5MB. We're running a high thread count with a small minimum download size so that we have many threads for scanning that die fast once they've done their job.
      Download threads: 15. This lets Plex read a lot of files concurrently.
      Prefetch trigger: 50MB. I want Plex to prefetch whenever it's doing a deep analysis of a file or when I'm watching something, but I don't want partial scans triggering prefetch.
      Prefetch forward: 500MB
      Prefetch time window: 30s (I don't really know the function of this setting, though)

      For me, prefetch on/off doesn't really affect scans at all; scans don't trigger it at my current settings. It does kick in when playing files from the cloud, though, and I've been wondering whether it would be a good idea to set it even higher, as most of my files are a lot larger than 500MB.
  23. For anyone reading this: Plex is smart enough, if you move an existing library file from path A to path B, to realize you actually moved it and that it's not a new duplicate. So while what @josefilion did worked, it doesn't need to be done that elaborately, or all in one go. I recently moved from one huge folder of subfolders to one split into alphabetical subfolders (A-C and so on) because Windows was taking a long time to list the contents. I basically just grabbed a few hundred folders at a time with cut & paste into the new subfolders, and having "Detect changes" enabled in Plex made it pick up the moved files almost instantly. So basically: no need to turn off Plex. Just move your files how you want, as long as they are not currently being played or scanned and you move them within the library root folder.
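     For anyone wanting to script that kind of split instead of cutting and pasting by hand, here is a minimal sketch that moves each subfolder into an alphabetical bucket under the same library root. The root path and bucket scheme are hypothetical:

     ```python
     # Move media subfolders into alphabetical buckets under the same root,
     # so Plex's "Detect changes" picks the moves up. Paths are hypothetical.
     import os
     import shutil

     ROOT = r"D:\Media\Movies"  # hypothetical library root
     BUCKETS = ["A-C", "D-F", "G-I", "J-L", "M-O", "P-R", "S-U", "V-Z"]

     def bucket_for(name: str) -> str:
         first = name.upper()[0]
         for b in BUCKETS:
             lo, hi = b.split("-")
             if lo <= first <= hi:
                 return b
         return BUCKETS[-1]  # digits/symbols: simplification, last bucket

     for entry in os.listdir(ROOT):
         src = os.path.join(ROOT, entry)
         if not os.path.isdir(src) or entry in BUCKETS:
             continue  # skip plain files and the bucket folders themselves
         dst_dir = os.path.join(ROOT, bucket_for(entry))
         os.makedirs(dst_dir, exist_ok=True)
         shutil.move(src, os.path.join(dst_dir, entry))
     ```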