Covecube Inc.

red

Members

  • Content Count: 42
  • Joined
  • Last visited
  • Days Won: 1

red last won the day on January 11 2018

red had the most liked content!

About red

  • Rank: Advanced Member


  1. Sorry about the tinted screengrab (f.lux). What's this about?
  2. Optimal settings for Plex: Just out of interest, why do you split the Cloud drive into multiple volumes?
  3. To briefly return to this topic: if I set Plex to read from HybridPool, scanning my library takes well over 2 hours. If I point Plex only at LocalPool, it's over in a couple of minutes. So something is causing the drive to make Plex chug through CloudDrive instead of just getting it done fast via LocalPool, maybe the number of read requests Plex is generating? Any ideas how I could debug why it's doing this? (A read-timing sketch for comparing the two pools follows this list.)
  4. I've had to restart the service daily now. I'm running the troubleshooter before restarting the service yet again. I think I'll need to schedule automatic restarts for it every 12 hours or so if this keeps up (a restart-script sketch follows this list). Since the last restart, I was able to upload 340GB successfully (100Mbit up).
  5. I'm still experiencing this on 1.1.2.1177. I wiped my cloud drive and started from scratch on 1174 a couple of weeks ago, and have uploaded 9TB of data so far, but this issue came up again two days ago. After restarting my PC yesterday the uploads continued, until today they are stuck with I/O errors again after half a TB or so. Instead of a reboot I tried restarting the CloudDrive service, and now it's uploading again.
  6. I was finally able to fix the situation by detaching the cloud drive, removing the cloud drive from the pool, rebooting, and then re-attaching it and adding it back to the pool. The erroneous 0kb files that could not be accessed are now gone.
  7. Turns out the files actually show up on the cloud drive, but Windows says they are corrupt and I cannot access them. I'll attempt to remove the folders after I back the data up, if possible. Edit: There's actually around 150GB of data in this state, but the warning only shows a dozen files at a time; the rest come after "..."
  8. Hi, I'm duplicating my local pool to a cloud drive and receiving the following error for a dozen files or so: The files are around 1 to 2 GB in size, I have over 1000GB of free space on my local pool, and around 7TB on my cloud drive. Rebooting or updating to the latest beta did not resolve this issue; those files have been stuck for around a week now. What should I do to resolve the issue?
  9. Yes. Just in case, I'll clarify what my suggestion meant: I point Plex there so that all reads are always 100% from LocalPool, and anything that exists only in the cloud isn't visible, while the LocalPool is duplicated as-is to CloudDrive. I'd love a "non-hacky" way to achieve something similar; that is, a slave parity disk in a pool that is never read from.
  10. For the problem, it's due to pointing Plex at LocalPool's PoolPart/plexstuff folder. Using it in an unintended way, I know. Which is why I made the suggestion, so I could just point Plex directly at pooldrive:/plexstuff while having DrivePool duplicate pooldrive to the cloud on the side. If I point Plex at the combined HybridPool, the speeds take a hit (I have an open issue on this but haven't been able to dedicate time to gather proper logs etc.). A feature like this would make that issue moot too, though.
  11. This is a common warning for me: It happens because I point Plex at the LocalPool, which is part of HybridPool, which just functions as local-to-cloud duplication. This way I can use all my physical drives to 100%, and if something breaks, I can grab the data from the cloud. I suppose I could do this by getting rid of HybridPool and just rsyncing data daily from LocalPool to the CloudDrive (a sketch of that approach follows this list), but I just love how well DrivePool handles this 99.9% of the time without my intervention. So for the suggestion: 1) It would be awesome to have a setting that always lets me overwrite the cloud data with local data if the file parts differ and the local modified time for the file is newer. Doubly so if there could be a flag to only enable this for files smaller than X megabytes. Or 2) A setting to mark a certain drive in DrivePool as a slave of sorts. Data from the slave is never pushed to other drives unless the user manually triggers "duplicate from slave". All other data is pushed to the slave volume normally. This would let users put some things on the slave disk that they never want to store locally, without having to set up folder-by-folder rules and have duplication throw a fit because things can't be duplicated, as they are not allowed on disk X ever.
  12. From personal experience, both VirtualBox's and VMware's disk sharing between host and guest are atrociously slow by today's standards. Especially with a large number of smaller files it's just unusably slow, for things like searches without indices or web development where many small files need to be read in a short timespan. I would advise against using this sort of setup for anything but testing, but that's just me.
  13. Btw, I started experiencing quite long read delays with my 100GB SSD cache and 20TB of files in my "HybridPool". Despite having local copies of the files, double-clicking a video would just wait a good 3-4 seconds before starting playback, even though I could see from DrivePool that the file was being accessed only locally and no bits were being downloaded. Same thing with browsing the folder structure: a white background in Explorer for many seconds after opening a folder. I mainly use my setup with Plex, so I just pointed Plex to read everything from the LocalPool directly. The CloudDrive part just acts as simple parity in case a disk dies. Cheaper and easier than RAID. Just remember you can't delete files from the LocalPool directly or they'll just re-appear on the next re-balance/duplication pass.
  14. I had a similar problem after adding new disks to my pool yesterday. Today the problem solved itself after a couple of reboots. Curious..
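
Regarding the HybridPool vs. LocalPool scan times in item 3 above, a minimal sketch of how the difference could be measured, assuming Python is available; the paths below are placeholders for wherever the two pools are mounted:

    # Read the same file once through each pool mount and report how long it takes.
    # Both paths are hypothetical; point them at an identical file in each pool.
    import time

    PATHS = {
        "HybridPool": r"H:\Media\Movies\sample.mkv",  # placeholder drive letter/path
        "LocalPool":  r"L:\Media\Movies\sample.mkv",  # placeholder drive letter/path
    }

    CHUNK = 1024 * 1024  # read in 1 MiB chunks

    for label, path in PATHS.items():
        start = time.perf_counter()
        total = 0
        with open(path, "rb") as f:
            while True:
                data = f.read(CHUNK)
                if not data:
                    break
                total += len(data)
        elapsed = time.perf_counter() - start
        print(f"{label}: {total / 1e6:.1f} MB in {elapsed:.2f} s")

If the same file is consistently much slower through HybridPool even though a local copy exists, that would support the theory that the reads are being routed through CloudDrive.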
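
For the 12-hour restarts mentioned in item 4 above, a minimal sketch of a script that an elevated Task Scheduler job could run; "CloudDriveService" is a placeholder service name to be checked in services.msc before use:

    # Restart the CloudDrive Windows service. Intended to be run from an elevated
    # scheduled task, e.g. every 12 hours.
    import subprocess
    import time

    SERVICE = "CloudDriveService"  # placeholder; verify the real name in services.msc

    subprocess.run(["net", "stop", SERVICE], check=False)  # ignore "not started" errors
    time.sleep(10)  # give the service a moment to shut down cleanly
    subprocess.run(["net", "start", SERVICE], check=True)

A scheduled task set to run this every 12 hours with highest privileges would match the cadence mentioned above.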
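
On the rsync-style alternative mentioned in item 11 above, a minimal sketch of a daily one-way mirror from LocalPool to the CloudDrive volume using robocopy, driven from Python; the drive letters and folder names are placeholders:

    # One-way mirror of a LocalPool folder to the CloudDrive volume.
    # /MIR also deletes cloud-side files that no longer exist locally,
    # so only point it at data you intend to mirror.
    import subprocess

    SRC = r"L:\Media"  # placeholder: folder on the LocalPool drive
    DST = r"V:\Media"  # placeholder: matching folder on the CloudDrive volume

    subprocess.run(
        ["robocopy", SRC, DST, "/MIR", "/R:2", "/W:5"],
        check=False,  # robocopy exit codes below 8 indicate success or partial copies
    )

This trades DrivePool's automatic duplication for an explicit daily copy, which is exactly the hands-off behavior the post above prefers to keep.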