Posts posted by red

  1. 21 hours ago, tphank said:

    Did you set your cloud drive to read only? Still able to download files?


    No, I could not, as it was part of a DrivePool and CloudDrive wouldn't let me. I just paused the uploads (they unpaused a few times by themselves, though). I was able to get all data out and have now removed the disks.

  2. Thanks for the added info. I'll add for anyone else reading this that everything seems to work just fine even though Google itself has marked the drive read-only. I've now set balancing rules so that the LocalPool can contain both unduplicated and duplicated data, and CloudPool may only contain duplicated data. Triggering "Balance" after that change has continued the process of moving files from cloud to local. So far in that mode, I've been able to move a bit over 3TB, with around 4TB to go 👍

  3. I'm going to be hit by this issue on the 28th of August myself, and I was planning to set the drive read-only a few days before, but to my dismay, once I tried it, StableBit CloudDrive notified me that "A cloud drive that is part of a StableBit DrivePool pool cannot be made read-only."

    I've been downloading data (that's only stored in the cloud) for four weeks now, but Google is giving me around 40-50mbit/s downstream, and at this pace I'm still fifteen days shy of grabbing all my data (rough math sketched at the end of this post). It's almost as if they anticipated people attempting to migrate away from their ecosystem. Before I started the process, I would easily hit 500-600mbit/s with multiple threads.

    Now I'm wondering whether I just need to take the scary route of hitting "Remove disk" and letting DrivePool handle fetching any unduplicated data from CloudPool to LocalPool.

    Does anyone know how exactly "Duplicate files later" works in conjunction with a CloudPool?
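    For a quick sanity check of that estimate, here's back-of-the-envelope math as a small Python sketch. The 45 Mbit/s figure comes from the post above; the ~7TB remaining is an assumption based on the "3TB moved, 4TB to go" figures mentioned elsewhere in this thread:

    ```python
    # Rough time-to-completion at the throttled download rate.
    rate_mbit_s = 45                              # observed downstream (40-50 Mbit/s)
    tb_per_day = rate_mbit_s / 8 * 86_400 / 1e6   # MB/s times seconds/day -> ~0.49 TB/day
    remaining_tb = 7                              # assumed amount still cloud-only
    print(f"~{remaining_tb / tb_per_day:.0f} days to go")  # ~14, matching "fifteen days shy"
    ```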

  4. In the past few months, electricity prices have skyrocketed, so I've set my PC to sleep automatically after 10 minutes of idle time. I use wake-on-lan to wake my PC up so that Plex is available when I want to watch stuff. My library consists of 20TB of local data and around 40TB of cloud data.

    I've set my CloudDrive cache at 500GB to a dedicated SSD. This means that when I want to watch a series that's purely in the cloud, there's ample space to cache whatever I play, and it stays cached for a long time. This setup has served me well for a few years.

    Cue setting a sleep timer for my PC. No issues.

    Then I added a smart power plug as well. I remotely tell my PC to shut down, and after 30 minutes, AC power is cut, so the setup draws no electricity at all.

    This has resulted in a scenario where, most days when I cold boot my PC, the drive decides to undergo recovery. I'm not sure why, because when I shut down, I don't abort the service, and my PC shuts down at its own leisure. EDIT: Unless the issue is that 30 minutes is not enough time for the CloudDrive service to gracefully shut down? (A way to check is sketched at the end of this post.)

    I kind of understand why recovery uploads the cache back to the cloud, but a 500GB upload every few days? Whoa. How can I sort this out without shrinking my cache to a measly few GBs?

    The large cache size has really sped things up when Plex decides to do a full scan of the complete folder structure. I have around 25k series files and 2k movie files, plus assorted .srt and .jpg files per folder for metadata.
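    Since the questions above involve wake-on-lan and whether 30 minutes is enough for a graceful shutdown, here's a minimal Python sketch of both pieces. The MAC address and the "CloudDriveService" service name are placeholders, not confirmed names; check `sc query` on your own machine:

    ```python
    import socket
    import subprocess
    import time

    def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        # Wake-on-LAN magic packet: 6 bytes of 0xFF, then the MAC repeated 16 times.
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(packet, (broadcast, port))

    def wait_until_stopped(service: str, timeout_s: int = 1800) -> bool:
        # Poll `sc query` until the service reports STOPPED; run this after
        # initiating shutdown, before letting the smart plug cut AC power.
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            state = subprocess.run(["sc", "query", service],
                                   capture_output=True, text=True).stdout
            if "STOPPED" in state:
                return True
            time.sleep(10)
        return False

    send_magic_packet("AA:BB:CC:DD:EE:FF")          # placeholder MAC
    print(wait_until_stopped("CloudDriveService"))  # placeholder service name
    ```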

  5. I thought I had found the issue by sorting a Windows Search of the Cloud Drive contents by latest changes: there were a few thousand small changed files which should have been going to a different drive entirely. I fixed that, and I've seen no actual changes in the data for the past 30 minutes. Yet the upload counter is sitting at around 50MB and pushing upload at 40Mbit. The upload size left does seem to be decreasing, though very slowly, like one megabyte a minute.

    I'll continue investigating. Thanks for the info about the block-level stuff, it makes sense. This is most likely a mistake by yours truly.

  6. I'm seeing some weird constant uploading while "to upload" stays at around 20 to 30MB. I tried looking through the technical details, toggling many things to verbose in the service log, and looking at Windows Resource Monitor, but all I can see are the chunk names.

    So is there something I can enable to see what exact files are causing the I/O?

    Edit: I was able to locate the culprit by other means (a sketch of one such approach follows this post), but I'll leave the thread up since I'm interested to know whether what I asked is possible via Cloud Drive.
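    For anyone with the same question: since CloudDrive works at the block level, it only knows chunk names, but the file system side can be checked directly. A hypothetical Python sketch (the drive letter is an assumption) that lists what changed on the drive recently:

    ```python
    import os
    import time

    def recently_modified(root: str, minutes: int = 30):
        # Walk the drive and collect files modified inside the window,
        # newest first -- a quick way to spot what is generating writes.
        cutoff = time.time() - minutes * 60
        hits = []
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    mtime = os.path.getmtime(path)
                except OSError:
                    continue  # file vanished or is locked; skip it
                if mtime >= cutoff:
                    hits.append((mtime, path))
        return sorted(hits, reverse=True)

    for mtime, path in recently_modified("D:\\", minutes=30):  # assumed cloud drive letter
        print(time.strftime("%H:%M:%S", time.localtime(mtime)), path)
    ```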

  7. Quick details:

    • 2x Cloud Drives = CloudPool
    • 5x Storage Drives = LocalPool
    • CloudPool + LocalPool = HybridPool

    The only balancer enabled on HybridPool is the usage limiter, which forces unduplicated files to be stored in the cloud.

    So I recently upgraded my motherboard and CPU, and updated to 2.2.4.1162. Today I noticed that if I disable read striping, no video files opened from the HybridPool via Windows File Explorer (or other means) play at all. The file appears in DrivePool's "Disk Performance" section for a moment with a 0 B/s read rate. If I open a small file, like an image, it works.

    If I enable read striping, the file is primarily read from LocalPool as expected, but I never want it read from the cloud if it exists locally, and sometimes, under a bit of load, playback from the cloud is triggered.

    I'm at my wits' end on how to debug or sort this issue out by myself. Any help would be greatly appreciated.

     

    Edit: new beta fixed this :)

  8. To briefly return to this topic: if I set Plex to read from HybridPool, scanning my library takes well over 2 hours. If I point Plex only at LocalPool, it's over in a couple of minutes. So something is causing the drive to let Plex chug through CloudDrive instead of just getting it done fast via LocalPool; maybe it's the number of read requests Plex is generating? Any ideas on how I could debug why it's doing so?
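    One low-effort way to narrow this down is to time a bare metadata walk of both pools outside of Plex: if HybridPool is slow even for plain enumeration, Plex is off the hook. A sketch, with the pool drive letters as assumptions:

    ```python
    import os
    import time

    def time_walk(root: str) -> tuple[int, float]:
        # Stat every file under root -- roughly the I/O pattern of a library scan.
        start = time.perf_counter()
        count = 0
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                try:
                    os.stat(os.path.join(dirpath, name))
                    count += 1
                except OSError:
                    pass
        return count, time.perf_counter() - start

    for pool in ("H:\\", "L:\\"):  # assumed HybridPool and LocalPool letters
        files, secs = time_walk(pool)
        print(f"{pool} {files} files in {secs:.1f}s")
    ```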

  9. I'm still experiencing this on 1.1.2.1177.

    I went and wiped my cloud drive and started from scratch on 1174 a couple of weeks ago, and have so far uploaded 9TB of data, but this issue arose again two days ago. After restarting my PC yesterday the uploads continued, until today they got stuck with I/O errors again after half a TB or so.

    Instead of a reboot, I attempted to restart the CloudDrive service, and now it's uploading again.
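    For reference, that service restart can be scripted so it doesn't take a full reboot next time. "CloudDriveService" is a placeholder; confirm the real name with `sc query` first, and run this from an elevated prompt:

    ```python
    import subprocess

    SERVICE = "CloudDriveService"  # placeholder; confirm with `sc query`
    # `net stop` / `net start` block until the service actually changes state.
    subprocess.run(["net", "stop", SERVICE], check=True)
    subprocess.run(["net", "start", SERVICE], check=True)
    ```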

  10. Turns out the files actually show up on the cloud drive, but Windows says they are corrupt and I cannot access them. I'll attempt to remove the folders after I back the data up, if possible.

    Edit: There's actually around 150GB of data in this state, but the warning only shows a dozen files at a time; the rest are hidden behind "..."

  11. Hi, I'm duplicating my local pool to a cloud drive and receiving the following error for a dozen files or so:

     

    [screenshot of the error message]

     

    The files are around 1 to 2 GB in size, and I have over 1000GB of free space on my local pool and around 7TB on my cloud drive. Rebooting and updating to the latest beta did not resolve this; those files have been stuck for around a week now. What should I do to resolve the issue?

  12. Yes. Just in case, I'll clarify what my suggestion meant:

    I point Plex there so that all reads always come 100% from LocalPool, and anything that's only in the cloud shouldn't be visible, while LocalPool is duplicated as-is to the cloud drive. I'd love a "non-hacky" way to achieve something similar: a slave parity disk in a pool that is never read from.

  13. As for the problem, it's due to pointing Plex at LocalPool's hidden PoolPart/plexstuff folder. I know I'm using it in an unintended way, which is why I made the suggestion: then I could just point Plex directly at pooldrive:/plexstuff while having DrivePool duplicate pooldrive to the cloud on the side. If I point Plex at the combined HybridPool, the speeds take a hit (I have an open issue on this but haven't been able to dedicate time to gather proper logs etc.). A feature like this would make that issue moot, too.

  14. This is a common warning for me:

    [screenshot of the warning]

    It happens because I point Plex at the LocalPool, which is part of HybridPool, which in turn just functions as duplication of local data into the cloud. This way I can use all my physical drives to 100%, and if something breaks, I can grab it from the cloud. I suppose I could do this by getting rid of HybridPool and just rsyncing data daily from LocalPool to the CloudDrive (roughly the mirror sketched at the end of this post), but I just love how well DrivePool handles this 99.9% of the time without my intervention.

    So for the suggestion:

    1) It would be awesome to have a setting that would always let me overwrite the cloud data with local data if the file parts differ and the local modified time of the file is newer. Doubly so if there could be a flag to only enable this for files smaller than X megabytes.

    or

    2) A setting to mark a certain drive in DrivePool as a slave of sorts. Data from the slave is never pushed to other drives unless the user manually triggers "duplicate from slave". All other data is pushed to the slave volume normally. This would let users put some things on the slave disk that they never want stored locally, without having to write folder-by-folder rules and have duplication throw a fit because things are never allowed on disk X.

     
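    If anyone does want the no-HybridPool route from the post above, a scheduled one-way mirror is enough. A sketch driving robocopy from Python, with both drive letters as assumptions; note that /MIR also deletes cloud-side files that no longer exist locally:

    ```python
    import subprocess

    SRC = "L:\\"  # assumed LocalPool drive letter
    DST = "D:\\"  # assumed CloudDrive drive letter
    # /MIR mirrors the tree (copies new and changed files, deletes removed ones);
    # /R:2 /W:5 keeps retries short instead of robocopy's huge defaults.
    result = subprocess.run(["robocopy", SRC, DST, "/MIR", "/R:2", "/W:5"])
    # robocopy exit codes below 8 mean success (0 = nothing to do, 1 = files copied).
    print("ok" if result.returncode < 8 else "failed")
    ```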

  15. From personal experience, both VirtualBox's and VMware's disk sharing between host and guest are atrociously slow by today's standards. Large numbers of small files in particular are unusably slow for things like searches without indices, or web development where many small files need to be read in a short timespan. I would advise against using this sort of setup for anything but testing, but that's just me.

  16. Btw, I started experiencing quite long read delays with my 100GB SSD cache and 20TB of files in my "HybridPool". Despite having local copies of files, double-clicking a video would just wait a good 3-4 seconds before starting playback, even though I could see from DrivePool that the file was being accessed only locally and no bits were being downloaded. Same thing with browsing the folder structure: many seconds of white background in Explorer after opening a folder.

    I mainly use my setup with Plex, so I just pointed Plex to read everything from the LocalPool directly. The CloudDrive part just acts as simple parity in case a disk dies; cheaper and easier than RAID. Just remember that you can't delete files from the LocalPool directly, or they'll just re-appear on the next re-balance/duplication pass.
