Covecube Inc.

red

Members
  • Content Count: 46
  • Days Won: 1

red last won the day on January 11 2018

red had the most liked content!

About red

  • Rank: Advanced Member

  1. I thought I'd found the issue by sorting the Cloud Drive contents in Windows Search by latest changes: there were a few thousand small changed files that should have been going to a different drive entirely. I fixed that, and I've seen no actual changes in the data for the past 30 minutes. Yet the upload counter is sitting at around 50 MB now and pushing upload at 40 Mbit. The upload size left does seem to be decreasing, though very slowly, like one megabyte a minute. I'll continue investigating. Thanks for the info about the block-level stuff, makes sense. This is most likely a mistake by yours truly.
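     (For anyone wanting to do the same check without Windows Search: a minimal Python sketch, assuming the cloud drive is mounted at a placeholder D:\ and that "recently changed" means the last 30 minutes. It walks the volume and prints files by modification time, newest first.)

        import os
        import time

        ROOT = "D:\\"          # hypothetical CloudDrive mount point; adjust as needed
        WINDOW_MINUTES = 30    # how far back to look

        cutoff = time.time() - WINDOW_MINUTES * 60
        recent = []

        for dirpath, _dirnames, filenames in os.walk(ROOT):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    mtime = os.path.getmtime(path)
                except OSError:
                    continue  # file vanished or is inaccessible; skip it
                if mtime >= cutoff:
                    recent.append((mtime, path))

        # Newest first, so the most recently touched files surface at the top.
        for mtime, path in sorted(recent, reverse=True):
            print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(mtime)), path)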
  2. I'm seeing some weird constant uploading, while "to upload" is staying at around 20 to 30 MB. I tried looking through the technical details, toggling many things to verbose in the service log, and looking at Windows Resource Monitor, but all I can see are the chunk names. So is there something I can enable to see what exact files are causing the I/O? Edit: I was able to locate the culprit by other means, but I'll leave the thread up since I'm interested to know whether what I asked is possible via Cloud Drive.
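     (As far as I know, the service log only speaks in chunk names, so a filesystem-level watch is one workaround: a rough sketch using the third-party watchdog package (pip install watchdog), with the mount point D:\ as a placeholder. It prints every create/modify/move/delete event the OS reports under the drive.)

        # pip install watchdog
        import time

        from watchdog.events import FileSystemEventHandler
        from watchdog.observers import Observer

        WATCH_ROOT = "D:\\"  # hypothetical CloudDrive mount point

        class LogChanges(FileSystemEventHandler):
            """Print every filesystem event reported under WATCH_ROOT."""

            def on_any_event(self, event):
                print(f"{time.strftime('%H:%M:%S')} {event.event_type:<8} {event.src_path}")

        observer = Observer()
        observer.schedule(LogChanges(), WATCH_ROOT, recursive=True)
        observer.start()
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            observer.stop()
        observer.join()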
  3. Updated to the latest beta and it's the same thing. I've issued a bug report now.
  4. Quick details:
     2x Cloud Drives = CloudPool
     5x Storage Drives = LocalPool
     CloudPool + LocalPool = HybridPool
     The only balancer enabled on HybridPool is the usage limiter, which forces unduplicated files to be stored in the cloud. I recently upgraded my motherboard and CPU, and upgraded to 2.2.4.1162. Today I noticed that if I disable read striping, no video files opened from the HybridPool via Windows File Explorer (or other means) play at all. The file appears in DrivePool's "Disk Performance" section for a moment with a 0 B/s read rate. If I open a small file, like an image, it works. If I enable read striping, the file is primarily read from LocalPool as expected, but I never want it read from the cloud when it exists locally, and sometimes, under a bit of load, playback from the cloud is triggered. I'm at my wits' end on how to debug this issue myself. Any help would be greatly appreciated. Edit: the new beta fixed this.
  5. Sorry about the tinted screengrab (f.lux). What's this about?
  6. Optimal settings for Plex

     Just out of interest, why do you split the Cloud drive into multiple volumes?
  7. To briefly return to this topic: if I set Plex to read from HybridPool, scanning my library takes well over 2 hours. If I point Plex only at LocalPool, it's over in a couple of minutes. So something is causing the drive to make Plex chug through CloudDrive instead of just getting it done fast via LocalPool; maybe the number of read requests Plex is generating? Any ideas how I could debug why it's doing this?
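     (One way to quantify this before gathering full logs: time a bare metadata walk of each pool. A minimal Python sketch, with H:\ and L:\ as placeholder mount points for HybridPool and LocalPool; if the HybridPool walk alone is slow, the bottleneck is directory enumeration rather than Plex's read pattern.)

        import os
        import time

        # Placeholder drive letters -- substitute the actual pool mount points.
        POOLS = {"HybridPool": "H:\\", "LocalPool": "L:\\"}

        for label, root in POOLS.items():
            start = time.perf_counter()
            files = 0
            for _dirpath, _dirnames, filenames in os.walk(root):
                files += len(filenames)
            elapsed = time.perf_counter() - start
            print(f"{label}: walked {files} files in {elapsed:.1f}s")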
  8. I've had to restart the service daily now. I'm running the troubleshooter before restarting the service yet again. I think I'll need to schedule automatic restarts for it every 12 hours or so if this keeps up. Since the last restart, I was able to upload 340 GB successfully (100 Mbit up).
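     (A sketch of what such a scheduled restart could look like in Python; the service name "CloudDriveService" is an assumption, so confirm the real name in services.msc first. Both commands need an elevated prompt; run the script from Windows Task Scheduler on a 12-hour trigger.)

        import subprocess

        # Assumed service name -- verify the exact name in services.msc.
        SERVICE = "CloudDriveService"

        # check=False because net.exe returns non-zero if the service
        # was already stopped or started.
        subprocess.run(["net", "stop", SERVICE], check=False)
        subprocess.run(["net", "start", SERVICE], check=False)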
  9. I'm still experiencing this on 1.1.2.1177. I wiped my cloud drive and started from scratch on 1174 a couple of weeks ago, and have since uploaded 9 TB of data, but this issue arose again two days ago. After restarting my PC yesterday the uploads continued, until today they got stuck with I/O errors again after half a TB or so. Instead of a reboot I tried restarting the CloudDrive service, and now it's uploading again.
  10. I was finally able to fix the situation by detaching the cloud drive, removing it from the pool, rebooting, then re-attaching it and adding it back to the pool. The erroneous 0 KB files that could not be accessed are now gone.
  11. Turns out the files actually show up on the cloud drive, but Windows says they are corrupt and I cannot access them. I'll attempt to remove the folders after I back the data up, if possible. Edit: There's actually around 150 GB of data in this state, but the warning only shows a dozen files at a time; the rest are hidden behind "..."
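     (To enumerate all affected files rather than the dozen the warning shows: a minimal Python sketch, with D:\ as a placeholder for the cloud drive. It tries to read one byte of each file and logs the ones that error out. Note that on a cloud drive even a one-byte read can pull the chunk from the provider, so this can be slow and bandwidth-hungry.)

        import os

        ROOT = "D:\\"  # hypothetical cloud drive mount point
        bad = []

        for dirpath, _dirnames, filenames in os.walk(ROOT):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as fh:
                        fh.read(1)  # one byte is enough to trigger the read error
                except OSError as exc:
                    bad.append((path, exc))
                    print(f"UNREADABLE: {path} ({exc})")

        print(f"{len(bad)} unreadable files found")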
  12. Hi, I'm duplicating my local pool to a cloud drive and receiving the following error for a dozen files or so: The files are around 1 to 2 GB in size, and I have over 1000 GB of free space on my local pool and around 7 TB on my cloud drive. Rebooting and updating to the latest beta did not resolve the issue; those files have been stuck for around a week now. What should I do to resolve this?
  13. Yes. Just in case, I'll clarify what my suggestion meant: I point Plex there so that all reads always come 100% from LocalPool, and anything that exists only in the cloud shouldn't be visible, while the LocalPool is duplicated as-is to the CloudDrive. I'd love a "non-hacky" way to achieve something similar: essentially a slave parity disk in a pool that is never read from.
  14. For the problem: it's due to pointing Plex at LocalPool's PoolPart/plexstuff folder, which I know is an unintended use. That's why I made the suggestion, so I could just point Plex directly at pooldrive:/plexstuff while having DrivePool duplicate pooldrive to the cloud on the side. If I point Plex at the combined HybridPool, the speeds take a hit (I have an open issue on this, but haven't been able to dedicate time to gathering proper logs etc.). A feature like this would make the issue moot too, though.
  15. This is a common warning for me: It happens because I point Plex at the LocalPool, which is a part of HybridPool that just functions as local-to-cloud duplication. This way I can use all my physical drives to 100%, and if something breaks, I can then grab the data from the cloud. I suppose I could do this by ridding myself of HybridPool and just rsyncing data daily from LocalPool to the CloudDrive, but I just love how well DrivePool handles this 99.9% of the time without my intervention. So, for the suggestion:
     1) It would be awesome to have a setting that always lets me overwrite the cloud data with local data when the file parts differ and the local modified time is newer. Doubly so if there were a flag to enable this only for files smaller than X megabytes.
     2) Alternatively, a setting to mark a certain drive in DrivePool as a slave of sorts: data from the slave is never pushed to other drives unless the user manually triggers "duplicate from slave", while all other data is pushed to the slave volume normally. This would let users put things on the slave disk that they never want stored locally, without having to set folder-by-folder rules and have duplication throw a fit because things are never allowed on disk X.
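     (For the rsync-style alternative mentioned above, a minimal sketch driving robocopy from Python; L:\ and D:\Backup are placeholder paths for LocalPool and a folder on the cloud drive. /MIR mirrors the tree including deletions, so point it carefully. Scheduled daily via Task Scheduler, this would approximate the same local-to-cloud duplication.)

        import subprocess

        # Placeholder paths: LocalPool volume and a target folder on the cloud drive.
        SRC = "L:\\"
        DST = "D:\\Backup"

        # /MIR mirrors the tree (including deletions), /R:2 /W:5 caps retries,
        # /LOG+ appends a run log. robocopy exits 0-7 on success and 8+ on
        # failure, hence check=False.
        subprocess.run(
            ["robocopy", SRC, DST, "/MIR", "/R:2", "/W:5", "/LOG+:C:\\mirror.log"],
            check=False,
        )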