Posts posted by red

  1. Ordered file placement via the SSD Optimizer isn't working properly: files on the Western Digital drive won't move out, and for some reason DrivePool is filling the 2nd disk on the list first.

    yf08JG.png

    ...but when I uncheck the option in the SSD Optimizer and use the separate Ordered File Placement plugin instead, it seems to work properly: the disks are filled in the correct order. But with the separate plugin, all fresh writes seem to go to Archive #1 instead of the buffer SSD.

    x9u9W3.png

    Here are my balancing settings:

     Q1V0p7.png

    What have I set up wrong? This began after removing one old 1TB drive and adding two new 8TB drives to the pool; the same settings worked properly before.

  2. Quote

     as you mentioned. 24TB, but 4GB 

    Then it's the same as for me. For some reason Windows (in Explorer) reports the first figure as the space used, while the latter is the true one and programs are actually able to write to the disk. As I understand it, this comes down to the inner workings of sparse files, and I don't think there's going to be a fix apart from restarting the CloudDrive (and DrivePool, if you use it) service every now and then; that's what I do when my dedicated SSD has been filled to the brim like this. At least by restarting just the services via `services.msc` you don't have to reboot.

    A fix would be nice but I can live without it. :P
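
    In case it helps, here's roughly how I script that restart. The service display names below are assumptions, so check services.msc on your own machine for the exact names, and run it from an elevated prompt; Python here just wraps the usual net stop / net start commands:

    # Sketch: restart the StableBit services without a reboot.
    # The service names are assumptions -- verify the exact names in services.msc first.
    import subprocess

    SERVICES = ["StableBit CloudDrive Service", "StableBit DrivePool Service"]  # assumed display names

    for name in SERVICES:
        subprocess.run(["net", "stop", name], check=False)   # may already be stopped; don't raise

    for name in reversed(SERVICES):
        subprocess.run(["net", "start", name], check=False)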

  3. Try setting the cache type to fixed. I think they're using sparse files for the cache, and it often shows a lot of space as allocated even though it's actually free.

    Check the cache folder by right-clicking it and selecting Properties. For me, "Size" is often something crazy, but "Size on disk" (the actual usage) is way smaller.
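
    If you'd rather check this than trust Explorer, here's a rough sketch that sums "Size" versus "Size on disk" over a folder. The cache path is a placeholder, and it uses the Win32 GetCompressedFileSizeW call, which reports the allocated size for sparse files (error handling omitted for brevity):

    # Sketch: compare "Size" (logical) with "Size on disk" (allocated) for a folder.
    # CACHE_DIR is a placeholder -- point it at your CloudDrive cache folder.
    import ctypes
    import os

    CACHE_DIR = r"C:\CloudDriveCache"  # placeholder

    kernel32 = ctypes.windll.kernel32
    kernel32.GetCompressedFileSizeW.restype = ctypes.c_uint32

    def size_on_disk(path):
        """Bytes actually allocated on disk (small for mostly-empty sparse files)."""
        high = ctypes.c_uint32(0)
        low = kernel32.GetCompressedFileSizeW(path, ctypes.byref(high))
        return (high.value << 32) | low

    logical = allocated = 0
    for root, _dirs, files in os.walk(CACHE_DIR):
        for name in files:
            full = os.path.join(root, name)
            logical += os.path.getsize(full)
            allocated += size_on_disk(full)

    print(f'"Size":         {logical / 2**30:.1f} GiB')
    print(f'"Size on disk": {allocated / 2**30:.1f} GiB')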

  4. I too am suffering from this: multiple re-authorizations each day. I cannot download from the link above; the page just times out ("dl.covecube.com took too long to respond").

     

    Pinging dl.covecube.com [64.111.101.68] with 32 bytes of data:
    Request timed out.

  5. I'll chime in here, as I've been fiddling around a lot with settings to find what's fastest for Plex library scans. I have 250 Mbit downstream and 50 Mbit up.

    Provider: GDrive (Gsuite account)
    Cache size: I keep changing this based on other activities as I'm limited by quite a small SSD, 60GB for now.
    Minimum download size: 5MB - combined with the high thread count, the small minimum download size gives many scan threads that finish quickly once they've done their job.
    Download threads: 15 - this lets Plex read a lot of files concurrently.
    Prefetch trigger: 50MB - I want Plex to prefetch whenever it's doing a deep analyze of a file or when I'm watching something, but I don't want partial scans triggering prefetch.
    Prefetch forward: 500MB
    Prefetch time window: 30s (I don't really know the function of this setting, though)

    For me, prefetch on/off doesn't really affect scans at all; they don't trigger it at the current settings. It does kick in when playing files from the cloud, though, and I've been wondering whether it would be a good idea to set it even higher, as most of my files are a lot larger than 500MB.
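
    For what it's worth, here's my guess at how the three prefetch settings interact, written out as a toy model. This is purely an assumption about the behaviour, not anything from the docs or the actual implementation:

    # Toy model of how I *assume* prefetch works: if at least TRIGGER bytes are read
    # within WINDOW seconds, fetch the next FORWARD bytes. Not CloudDrive's real code.
    import time

    TRIGGER_BYTES  = 50 * 2**20    # "Prefetch trigger" (50MB)
    FORWARD_BYTES  = 500 * 2**20   # "Prefetch forward" (500MB)
    WINDOW_SECONDS = 30            # "Prefetch time window"

    class PrefetchModel:
        def __init__(self):
            self.window_start = time.monotonic()
            self.read_in_window = 0

        def on_read(self, nbytes):
            now = time.monotonic()
            if now - self.window_start > WINDOW_SECONDS:
                # the window expired before the trigger was reached: start over
                self.window_start, self.read_in_window = now, 0
            self.read_in_window += nbytes
            if self.read_in_window >= TRIGGER_BYTES:
                print(f"prefetch the next {FORWARD_BYTES // 2**20} MB")
                self.window_start, self.read_in_window = now, 0

    # A partial scan reading small chunks never reaches the trigger; playback of a
    # large file does, which matches what I'm seeing.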

  6. For anyone reading this: if you move an existing library file from path A to path B, Plex is smart enough to realize you actually moved it and that it's not a new duplicate. So while what @josefilion did worked, it really doesn't need to be done that elaborately, or in one go. I recently moved from one huge folder with subfolders to one with A-C, D-F, etc. subfolders, because it was taking Windows a long time to list the contents. I basically just grabbed a few hundred folders at a time with cut & paste into the new subfolders, and having "Detect changes" enabled in Plex made it pick up the moved files almost instantly.

    So basically: no need to turn off Plex. Just move your files however you want, as long as they aren't currently being played or scanned and you move them within the library root folder.
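
    If anyone wants to script that shuffle instead of cutting and pasting by hand, a rough sketch of the idea (the library path and the bucket ranges are just examples):

    # Sketch: move existing media folders into A-C / D-F / ... buckets under the same library root.
    # The library path and bucket ranges are examples -- adjust to taste.
    import os
    import shutil

    LIBRARY_ROOT = r"D:\Pool\Movies"  # example path
    BUCKETS = ["A-C", "D-F", "G-I", "J-L", "M-O", "P-R", "S-U", "V-Z"]

    def bucket_for(name):
        first = name[:1].upper()
        for b in BUCKETS:
            lo, hi = b.split("-")
            if lo <= first <= hi:
                return b
        return BUCKETS[-1]  # numbers, punctuation etc. land in the last bucket

    for entry in os.listdir(LIBRARY_ROOT):
        src = os.path.join(LIBRARY_ROOT, entry)
        if not os.path.isdir(src) or entry in BUCKETS:
            continue  # skip files and the bucket folders themselves
        dst = os.path.join(LIBRARY_ROOT, bucket_for(entry), entry)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.move(src, dst)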

  7. That's plausible, but I still don't understand why the reserved space overtakes "Other" on the drive. At least I figure a CloudDrive cache that's outside of pools should be seen by DrivePool as Other and not Unduplicated, right?

    At the moment it appears to be correct, 50GB Other and 9GB Unduplicated, as it has just automatically balanced. I attached the info you asked for, but will do so again once it's showing up oddly (Other getting eaten by Unduplicated).

    GEU0M0.png

    NTFS Volume Serial Number :        0x64decb92decb5b46
    NTFS Version   :                   3.1
    LFS Version    :                   2.0
    Number Sectors :                   0x000000000ee7b8ff
    Total Clusters :                   0x0000000001dcf71f
    Free Clusters  :                   0x0000000000dd0642
    Total Reserved :                   0x000000000000418d
    Bytes Per Sector  :                512
    Bytes Per Physical Sector :        512
    Bytes Per Cluster :                4096
    Bytes Per FileRecord Segment    :  1024
    Clusters Per FileRecord Segment :  0
    Mft Valid Data Length :            0x00000000057c0000
    Mft Start Lcn  :                   0x00000000000c0000
    Mft2 Start Lcn :                   0x0000000000000002
    Mft Zone Start :                   0x00000000000c57c0
    Mft Zone End   :                   0x00000000000cb500
    Max Device Trim Extent Count :     512
    Max Device Trim Byte Count :       0xffffffff
    Max Volume Trim Extent Count :     62
    Max Volume Trim Byte Count :       0x40000000
    Resource Manager Identifier :      F042DDAB-A150-11E5-A63E-ED2EA8DC0809

     

  8. I saw similar weirdness with my LocalPool, which consists of an SSD and two drives. The SSD also has 75% of its space dedicated to the CloudDrive (proportional) cache, and when DrivePool writes to the drive it almost looks as if it's eating space from the CloudDrive cache instead. The total space used by DrivePool/CloudDrive didn't go up at all even though files were being written to it. At the same time, the CloudDrive cache shrinks the more files are written to the SSD, until it's around the percentage I set for "write" in the cache settings.

    I figured maybe the programs were somehow working in tandem and DrivePool was using up the space I'd set aside for "write" in the proportional cache. I don't know; it didn't go away with balancing or re-measuring, but it finally sorted itself out when I changed the cache to fixed. I can see such a feature being useful in some cases, but it wasn't what I wanted.

  9. This is specifically for going around the default! For example, a user might want to route all traffic through a VPN except for traffic that's already fully encrypted both ways (such as CloudDrive or similar).

    Some apps bind to an adapter on launch, some rebind whenever the default changes, and some let you manually bind to a certain adapter.

    Also, a VPN provider can only give you so much bandwidth, so to get everything out of your connection, pinning parts of the traffic to a certain adapter can be really helpful (or, conversely, making sure some traffic never goes via a certain adapter).

    Thanks for adding the request. :)
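
    To show what I mean by binding to a certain adapter: an application can bind its outgoing socket to one adapter's local IP before connecting, and that connection then goes out through that adapter instead of the default (VPN) route. A minimal sketch; both addresses are placeholders:

    # Sketch: pin one outgoing connection to a specific adapter by binding to its local IP.
    # Both addresses are placeholders.
    import socket

    LOCAL_ADAPTER_IP = "192.168.1.50"   # the physical NIC's IP, not the VPN's virtual adapter
    REMOTE = ("example.com", 443)       # placeholder destination

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((LOCAL_ADAPTER_IP, 0))    # port 0 = let the OS pick the source port
    sock.connect(REMOTE)
    print("connected from", sock.getsockname())
    sock.close()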

  10. Some programs let me do this: 7UKg9I.png

     

    I would love it if this were possible with CloudDrive, because having a fast connection but CloudDrive following software VPNs isn't optimal most of the time (whether it's a work-related VPN or not). So I guess this is a feature suggestion (I know you guys are swamped, though). :)

  11. I'm temporarily having to endure a 10/10 line, and I'm experiencing constant CloudDrive unmounts due to prolonged read/write times stemming from the lack of bandwidth.

    The delays are fine and expected; what isn't is that the drive keeps disconnecting all the time due to these "errors". I literally need to spam the notification window's clear button every half a minute during Plex library scans so that the drive won't unmount. I can still watch content from the drive if I let Plex buffer for a while, but I really wish there were some way to turn off the automatic unmounting. Everything does work, just with a slight delay; it's infuriating that on top of that I have to babysit the drive. :/

    dUCBbW.png

    je6ZuB.png

    6l8Pli.png

  12. I see the screen below; I hit Retry, and after a few seconds the program returns to the same screen. This happened after my machine recovered from a BSOD caused by the graphics driver.

    8ArwMc.png

    Any advice on how to resolve the problem?

    log:

    0:10:35.7: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:10:37.3: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:10:38.6: Warning: 0 : [CloudDrives] Cannot mount cloud part f48fc9bb-1671-4bcc-96e2-e4ae3f0c550a. It was force unmounted.

     

    EDIT: The issue resolved itself right after I posted. Typical. :)

    0:12:11.0: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:12:11.0: Information: 0 : [CloudDrives] Drive f48fc9bb-1671-4bcc-96e2-e4ae3f0c550a was previously unmountable. User is not retrying mount. Skipping.
    0:12:11.0: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:12:11.0: Information: 0 : [CloudDrives] Drive f48fc9bb-1671-4bcc-96e2-e4ae3f0c550a was previously unmountable. User is not retrying mount. Skipping.
    0:12:11.0: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:12:11.0: Information: 0 : [CloudDrives] Drive f48fc9bb-1671-4bcc-96e2-e4ae3f0c550a was previously unmountable. User is not retrying mount. Skipping.
    0:12:12.6: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:12:12.6: Information: 0 : [CloudDrives] Drive f48fc9bb-1671-4bcc-96e2-e4ae3f0c550a was previously unmountable. User is not retrying mount. Skipping.
    0:12:12.6: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:12:12.6: Information: 0 : [CloudDrives] Drive f48fc9bb-1671-4bcc-96e2-e4ae3f0c550a was previously unmountable. User is not retrying mount. Skipping.
    0:12:12.6: Information: 0 : [Disks] Got Pack_Arrive (pack ID: c51ff8e7-f1ca-4e9e-a17e-dcf32852d714)...
    0:12:12.6: Information: 0 : [Disks] Got Disk_Arrive (disk ID: 3fd8f508-6fc6-494a-b2fe-22b52f437e55)...
    0:12:13.6: Information: 0 : [Disks] Updating disks / volumes...
    0:13:08.7: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:13:11.0: Information: 0 : [CloudDrive] [R] Size: 8,192 (x1, IsPrefetch=False)
    0:13:11.0: Information: 0 : [CloudDrive] [W] Ranges: 100 Size: 10,846,208
    0:13:12.9: Information: 0 : [CloudDrive] [R] Size: 3,112,960 (x1, IsPrefetch=False)

     

  13. I'm back with more questions, but first, here are my HybridPool balancing settings:

     

    4S9tiS.png

    COHSCj.pngR12mr3.png

    1) What I'm now trying to achieve is that when I copy new files to my HybridPool, they are written to the LocalPool first (which has an SSD, so it's fast) and then duplicated to the CloudDrive at night. What seems to be happening instead is that DrivePool writes some data directly to the CloudDrive and then balances it back to the LocalPool (no SSD, so it's slow). Is there any way to achieve the behaviour I described?

    2) Duplication is enabled, but I've set a handful of very large folders to be placed only on the CloudDrive and changed the duplication setting for those folders to 1x. Balancing doesn't seem to like this: for two nights in a row I've left the HybridPool balancing, but when I come to the PC in the morning, the percentage is dead on the same spot and the distribution across drives is unchanged -- any suggestions as to what might be causing this? Am I better off moving this "pure cloud content" out of the pool and just symlinking it into my HybridPool (I literally need it to be accessible from the same folder path, even if it's stored in the cloud)? See the sketch at the end of this post for what I mean by the symlink option.

     

    mEHlGr.png

     

    This is how I've set file placement for the data I want kept only on cloud storage, but still available in the same pool via CloudDrive streaming and local caching.
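
    For reference, the symlink fallback I have in mind would look roughly like this. Both paths are placeholders, and creating symlinks on Windows needs admin rights or Developer Mode; the old pool path keeps working while the data lives only on the CloudDrive:

    # Sketch of the symlink fallback: keep the familiar pool path, store the data only on the CloudDrive.
    # Both paths are placeholders.
    import os

    cloud_target = r"E:\CloudOnly\BigFolder"   # where the data would actually live (CloudDrive)
    pool_link    = r"D:\HybridPool\BigFolder"  # the path everything else expects

    if not os.path.exists(pool_link):
        os.symlink(cloud_target, pool_link, target_is_directory=True)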

  14. Thanks Drashna, I've done that now. Can I cut & paste files from my first DrivePool and put them under the PoolPart folder that was just created inside it (fast), or do I need to move them from drive to drive the usual way (slow)?

    Second question: if I now copy my data into this "hybridpool", it seems to spread partly into the localpool and partly into the cloudpool. How can I have the cloudpool side be there only for parity, with disk reads and initial writes going to the localpool side (so that basically the cloudpool holds just the copies and nothing more)?

  15. Hey,

    I've set up a small test with three physical drives in DrivePool: one SSD and two regular 4TB drives. I'd like to make a setup where these three drives can be filled to the brim and all contents are duplicated only onto a fourth drive, a CloudDrive. No regular writes or reads should hit the CloudDrive; it should function purely as parity for the three drives.

    Am I better off making a separate CloudDrive and scheduling an rsync to mirror the DrivePool contents to the CloudDrive, or can this be done with a DrivePool (or DrivePools) + CloudDrive combo? I'm running the latest beta of both. (The scheduled-mirror idea is sketched at the end of this post.)

    What I've tried so far didn't work too well: some files I was moving were immediately written to the parity drive, even though I set it to contain only duplicated content. I got that to stop by going into File Placement and unticking the parity drive for every folder (but that's an annoying thing to have to maintain whenever new folders are added).

    1) tCvCcS.png

    2) 7UjB8k.png
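
    If I end up going the scheduled-mirror route instead, the plan would be roughly the following: a nightly robocopy /MIR (the Windows stand-in for the rsync idea) from the pool drive to the CloudDrive, run from Task Scheduler rather than through DrivePool's own duplication. Drive letters are placeholders:

    # Sketch: nightly mirror of the pool to the CloudDrive used as parity.
    # Drive letters are placeholders; schedule this script with Task Scheduler.
    import subprocess

    POOL_ROOT  = "D:\\"   # the DrivePool drive
    CLOUD_ROOT = "G:\\"   # the CloudDrive

    subprocess.run([
        "robocopy", POOL_ROOT, CLOUD_ROOT,
        "/MIR",            # mirror: copy new/changed files, delete ones no longer in the pool
        "/R:2", "/W:10",   # don't retry locked files forever
        "/XD", "System Volume Information", "$RECYCLE.BIN",
    ], check=False)        # robocopy exit codes below 8 mean success, so don't raise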
