Everything posted by red

  1. Simple question. I've been hoping to get dark mode support for ages, so I finally thought to toss the question here as well.
  2. I thought I'd found the issue by sorting the Cloud Drive contents in Windows Search by latest changes: there were a few thousand small changed files which should have been going to a whole other drive. I fixed that, and I've seen no actual changes in the data for the past 30 minutes. Yet the upload counter is sitting at around 50 MB now and pushing upload at 40 Mbit. The amount left to upload does seem to be decreasing, though very slowly, around one megabyte a minute. I'll continue investigating. Thanks for the info about the block-level stuff; that makes sense. This is most likely a mistake by yours truly.
  3. I'm seeing some weird constant uploading, while "to upload" is staying at around 20 to 30 MB. I tried looking through the technical details, toggling many things to verbose in the service log, and looking at Windows Resource Monitor, but all I can see are the chunk names. So is there something I can enable to see which exact files are causing the I/O? Edit: I was able to locate the culprit by other means, but I'll leave the thread up since I'm interested to know whether what I asked is possible via Cloud Drive.
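For anyone hitting the same question: CloudDrive's own logs only show chunk names, but the "sort by latest changes" approach I used can be sketched in plain Python by scanning the drive for recently modified files. This is just a minimal sketch; the drive path and time window below are placeholders, not anything CloudDrive-specific:

```python
import os
import time

def recently_modified(root, minutes=30):
    """Walk `root` and return files modified within the last `minutes`,
    newest first, so the busiest files surface at the top."""
    cutoff = time.time() - minutes * 60
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > cutoff:
                    hits.append(path)
            except OSError:
                continue  # file vanished or is locked mid-upload
    return sorted(hits, key=os.path.getmtime, reverse=True)

# Example (hypothetical drive letter):
# for p in recently_modified(r"D:\\", minutes=30):
#     print(p)
```

Running this against the cloud drive right after the upload counter jumps should point at whatever keeps rewriting data.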
  4. Updated to the latest beta and same thing. I've filed a bug report now.
  5. Quick details: 2x Cloud Drives = CloudPool; 5x Storage Drives = LocalPool; CloudPool + LocalPool = HybridPool. The only balancer enabled on HybridPool is the usage limiter, which forces unduplicated files to be stored in the cloud. So I recently upgraded my motherboard and CPU, and after upgrading, today I noticed that if I disable read striping, no video files opened from the HybridPool via Windows File Explorer (or other means) play at all. The file appears in DrivePool's "Disk Performance" section for a moment with a 0 B read rate. If I open a small file, like an image, it works. If I enable read striping, the file is primarily read from LocalPool as expected, but I never want it read from the cloud if it exists locally, and sometimes under a bit of load, playback from the cloud is triggered. I'm at my wits' end on how to debug / sort this issue out by myself. Any help would be greatly appreciated. Edit: the new beta fixed this.
  6. Sorry about the tinted screengrab (f.lux). What's this about?
  7. Just out of interest, why do you split the Cloud drive into multiple volumes?
  8. To briefly return to this topic: if I set Plex to read from HybridPool, scanning my library takes well over 2 hours. If I point Plex only at LocalPool, it's over in a couple of minutes. So something is making Plex chug through CloudDrive instead of just getting it done fast via LocalPool; maybe it's the amount of read requests Plex is generating? Any ideas how I could debug why it's doing so?
  9. I've had to restart the service daily now. I'm currently running the troubleshooter before restarting the service yet again. I think I'll need to schedule automatic restarts for it every 12 hours or so if this keeps up. Since the last restart, I was able to upload 340 GB successfully (100 Mbit up).
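In case anyone wants the scheduled-restart workaround: it can be set up with the Windows Task Scheduler. Here's a minimal sketch that builds the `schtasks` command from Python. Note the service name `CloudDriveService` is my assumption; check the real name with `sc query` before using anything like this:

```python
SERVICE = "CloudDriveService"  # assumed name; verify with `sc query` first

def restart_task_command(task_name, service, hours=12):
    """Build a schtasks command that stops and restarts `service`
    every `hours` hours via a scheduled task."""
    action = f'cmd /c "net stop {service} & net start {service}"'
    return [
        "schtasks", "/Create",
        "/TN", task_name,       # task name shown in Task Scheduler
        "/TR", action,          # the command the task runs
        "/SC", "HOURLY",
        "/MO", str(hours),      # every N hours
        "/RU", "SYSTEM",        # stopping services needs elevated rights
        "/F",                   # overwrite if the task already exists
    ]

cmd = restart_task_command("RestartCloudDrive", SERVICE, hours=12)
# On the actual machine you would run: subprocess.run(cmd, check=True)
print(" ".join(cmd))
```

Printing the command first and pasting it into an elevated prompt is probably safer than running it blind.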
  10. I'm still experiencing this. I went and wiped my cloud drive and started from scratch on 1174 a couple of weeks ago, and have so far uploaded 9 TB of data, but this issue arose again two days ago. After restarting my PC yesterday the uploads continued, until today they got stuck with I/O errors again after half a TB or so. Instead of a reboot I attempted to restart the CloudDrive service, and now it's uploading again.
  11. I was finally able to fix the situation by detaching the cloud drive, then removing the cloud drive from the pool, rebooting, and re-attaching it and adding it back to the pool. The erroneous 0 KB files that could not be accessed are now gone.
  12. Turns out the files actually show up on the cloud drive, but Windows says they are corrupt and I cannot access them. I'll attempt to remove the folders after backing the data up, if possible. Edit: there's actually around 150 GB of data in this state, but the warning only shows a dozen files at a time; the rest hides behind "..."
  13. Hi, I'm duplicating my local pool to a cloud drive and receiving the following error for a dozen or so files: The files are around 1 to 2 GB in size, and I have over 1000 GB of free space on my local pool and around 7 TB in my cloud drive. Rebooting or updating to the latest beta did not resolve this issue; those files have been stuck for around a week now. What should I do to resolve the issue?
  14. Yes. Just in case, I'll clarify what my suggestion meant: I point Plex there so all reads are always 100% from LocalPool, and anything that exists only in the cloud isn't visible, while the LocalPool is duplicated as-is to CloudDrive. I'd love a non-hacky way to achieve something similar; that is, a slave parity disk in a pool that is never read from.
  15. For the problem, it's due to pointing Plex at LocalPool's PoolPart/plexstuff folder. I know I'm using it in an unintended way, which is why the suggestion: so I could just point Plex directly at pooldrive:/plexstuff while having DrivePool duplicate pooldrive to the cloud on the side. If I point Plex to the combined HybridPool, the speeds take a hit (I have an open issue on this but haven't been able to dedicate time to gather proper logs etc.). A feature like this would make the issue moot too, though.
  16. This is a common warning for me: It happens because I point Plex to the LocalPool, which is part of HybridPool, which just functions as local-data-into-cloud duplication. This way I can use all my physical drives to 100%, and if something breaks, I can grab it from the cloud. I suppose I could do this by ridding myself of HybridPool and just rsyncing data daily from LocalPool to the CloudDrive, but I just love how well DrivePool handles this 99.9% of the time without my intervention. So for the suggestion: 1) It would be awesome to have a setting that would always let me overwrite the cloud data with local data if the file parts differ and the local modified time for the file is newer. Even doubly so if there could be a flag to only enable this for files smaller than X megabytes. Or 2) a setting to mark a certain drive in DrivePool as a slave of sorts. Data from the slave is never pushed to other drives unless the user manually triggers "duplicate from slave". All other data is pushed to the slave volume normally. This would let users put some things on the slave disk that they never want to store locally, without having to set folder-by-folder rules and have duplication throw a fit because things can't be duplicated, as they are never allowed on disk X.
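For reference, the daily rsync-style alternative I mentioned would look roughly like this: a one-way mirror that copies files only when they're missing or newer on the local side, and never deletes or reads back from the cloud side, so the cloud copy acts purely as parity. The paths are hypothetical and this is just a sketch, not a replacement for DrivePool's duplication:

```python
import os
import shutil

def mirror_newer(src, dst):
    """One-way mirror: copy a file from src to dst when it is missing
    from dst or has a newer modification time in src. Never deletes
    anything in dst and never copies in the other direction."""
    copied = []
    for dirpath, _dirs, files in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        target_dir = os.path.join(dst, rel) if rel != "." else dst
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(dirpath, name)
            t = os.path.join(target_dir, name)
            if not os.path.exists(t) or os.path.getmtime(s) > os.path.getmtime(t):
                shutil.copy2(s, t)  # copy2 preserves mtime, so the next run skips it
                copied.append(t)
    return copied

# Example (hypothetical mounts): mirror_newer(r"L:\\", r"C:\\CloudDriveMount")
```

Run daily from a scheduled task, this gives the same "local data into cloud" behavior, just without DrivePool's automatic handling.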
  17. From personal experience, both VirtualBox's and VMware's disk sharing between host and guest are atrociously slow by today's standards. Especially a large number of small files is just unusably slow for things like searches without indices, or web development where multiple small files need to be read in a short timespan. I would advise against using this sort of setup for anything but testing, but that's just me.
  18. Btw, I started experiencing quite long read delays with my 100 GB SSD cache and 20 TB of files in my "HybridPool". Despite having local copies of files, double-clicking on a video would just wait a good 3-4 seconds before starting playback, even though I could see from DrivePool that the file was being accessed only locally and no bits were being downloaded. Same thing with browsing the folder structure: a white background in Explorer for many seconds after opening a folder. I mainly use my setup with Plex, so I just pointed Plex to read everything from the LocalPool directly. The CloudDrive part just acts as simple parity in case a disk dies. Cheaper and easier than RAID. Just remember you can't delete files from the LocalPool directly or they'll just re-appear on the next re-balance/duplication.
  19. I had a similar problem after adding new disks to my pool yesterday. Today the problem solved itself after a couple of reboots. Curious...
  20. ~24 hours and a reboot or two later: I unchecked the Ordered File Placement plugin and re-checked the SSD Optimizer's "Ordered File Placement". The software is functioning as expected again, so my problem is solved.
  21. Ordered File Placement via the SSD Optimizer is not working properly: files from the Western Digital drive won't move out, and for some reason DrivePool is filling the 2nd disk on the list first. ...but when I uncheck the option in the SSD Optimizer and use the separate plugin instead, it seems to function properly: disks are being filled in the correct order. But since I'm using the separate plugin, all fresh writes seem to be going to Archive #1 instead of the buffer SSD. Here are my balancing settings: What have I set up wrong? This began after removing one old 1 TB drive and adding two new 8 TB drives to the pool; the same settings worked properly before.
  22. When I replied I was still on 974, since, like I said in the other thread, the download link you provided for 976 was timing out. I realized it was probably already released then, and found the installer on the actual download page. I've had zero issues since updating! Before I updated, I was indeed getting actual unmounts.
  23. This may now have started to disconnect my drive too. I've had to reconnect three times today, and I have no other messages apart from constant "no authorization" errors.