Posts posted by Edrock200

  1. Just to add to this, from the other thread: I started manually stopping the service instead of dismounting the drive prior to a PC reboot. Last night the service shut down cleanly after a few minutes without any intervention. Unfortunately, upon restart it began the chunk ID rescan/rebuild, which took several hours.

     

    I've also recently received "Access Denied" when attempting to detach my drive on random occasions. I've been able to get around this by offlining the drive in Windows Disk Management, then detaching. Interestingly, one time when remounting after doing this, Windows (not CloudDrive) mounted the drive as read-only. I was able to use the diskpart utility to make it writable again.
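
    If anyone hits the same read-only state, the diskpart sequence looks roughly like this (run from an elevated Command Prompt; the disk number below is an example, so check what "list disk" shows for your CloudDrive disk):

    ```shell
    rem Run from an elevated Command Prompt; these are typed at the DISKPART> prompt.
    rem "3" is an example -- use the number that "list disk" shows for the CloudDrive disk.
    diskpart
    list disk
    select disk 3
    attributes disk clear readonly
    exit
    ```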

  2. Not sure if this is related to your issue, but I had drive corruption as well with my 256TB cloud drive. Chkdsk would detect the errors but could never repair them. I then found an article in Microsoft's knowledge base stating that chkdsk repairs require the Volume Shadow Copy service, and that service does not work on partitions over 64TB. I couldn't shrink my partition size in CloudDrive, but I could shrink it in Windows Disk Management to 50TB, and after that chkdsk repaired the volume successfully.
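
    In case it saves someone a step, the repair half can be done from an elevated prompt once the volume has been shrunk below 64TB (the drive letter here is an example):

    ```shell
    rem X: is the CloudDrive volume (example letter).
    rem Shrink it below 64TB first (Disk Management, or diskpart's "shrink" command),
    rem then chkdsk can use Volume Shadow Copy and actually apply fixes:
    chkdsk X: /f
    ```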

  3. To the OP: if you clear your cache, does it "jump start" for a while? If so, assuming connectivity is good, it could be a cache-drive I/O issue, prefetch settings, or something doing aggressive random reads/writes against the drive. Open perfmon's Resource Monitor and check the Disk tab for details on which processes are reading and writing which files.
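
    If you'd rather query this from PowerShell than eyeball the GUI, here's a quick sketch of per-process I/O rates (the counter paths assume an English-language Windows):

    ```shell
    # Top 10 per-process I/O rates via performance counters (PowerShell).
    Get-Counter '\Process(*)\IO Read Bytes/sec','\Process(*)\IO Write Bytes/sec' |
      Select-Object -ExpandProperty CounterSamples |
      Sort-Object CookedValue -Descending |
      Select-Object -First 10 InstanceName, Path, CookedValue
    ```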

  4. FWIW, I ran into this problem too when seeding my drive with Google Drive sync. What I found was that Google Drive writes temp files to a hidden directory on the same drive you're seeding, then copies them over when complete. This causes constant requests to the same chunks, eventually causing throttling and then disconnects.

     

     

    If your seeding method works the same way, you can try my fix. I changed the cache mode from fixed to proportional and left it at 50/50 so the temp chunks could stay in the read cache. Before I did this, my download rate was constantly at least 50% of my upload rate, even though I don't have verify chunks on and wasn't doing any reads against the drive, and it eventually led to disconnects. Prefetch was unusable.

     

    After the change it barely reads during seeding and prefetch works. The only slowdowns I get are when the disk I/O on the cache drive can't keep up; those generally last only a few seconds and don't cause disconnects. Here are my settings:

     

    50TB CloudDrive, encrypted

    80GB proportional cache, 50/50

    10 download threads

    5 upload threads

    Prefetch trigger 5MB

    Prefetch forward 50MB (I've tested up to 200MB without issue, but found anything over 50MB didn't increase overall average read speed during burst transfers). I get a steady ~15-20MB/s (150-200Mbps) read speed on burst-transfer downloads with these settings. I have a gigabit connection but throttle upload to 350 and download to 250Mbps; unlocking these throttles did not increase download speeds with prefetch set higher, and uploads would surge to 550+ and then get throttled by Google regardless of thread count.

    Time window 30 seconds

    Upload chunk 20MB

    Download size 5MB (anything larger caused disconnects, throttling, and slow performance; anything smaller caused the same issues due to the CloudDrive thread count surging when it tries to catch up)

    Chunk cache size 100MB

    Provider: Google Drive

    Drive indexing disabled

    Antivirus scans disabled on drive

     

    Not sure if DrivePool will support this, but if you're using encryption on each drive, you may want to create the drives unencrypted, add them to DrivePool, then use BitLocker to encrypt the pooled drive so your CPU isn't doing multiple encryption/decryption cycles against the same data. I would also disable read striping until your seeding is complete, to prevent a download-thread surge.

  5. It appears to be normalized now. Fingers crossed for you! You might want to try editing the Program Files/Stablebit/Clouddrive/Clouddrive.service.default.config file to raise the timeout and retry threshold/tolerance counts, then restart the service.
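
    If you want to script the edit-and-restart cycle, it's just a stop/edit/start (the service name below is how it appears in services.msc on my install, so verify yours; the exact timeout/retry setting names inside the .config vary by version, so check your own file):

    ```shell
    net stop "StableBit CloudDrive Service"
    notepad "C:\Program Files\StableBit\CloudDrive\CloudDrive.Service.default.config"
    net start "StableBit CloudDrive Service"
    ```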

  6. The "StableBit CloudDrive Native Service" is EXPLICITLY there to halt the shutdown process so that the system does not just terminate the main service.   And it is handled this way, well.... to prevent the exact issues you're seeing.   

     

    So something is definitely going on here. 

     

    Though, it may be worth seeing if this happens only on pre-release drives or on newer drives, as well.

     

     

    That said, you can manually stop the service by opening up the Services management console (run "services.msc", or open Computer Management).  

     

    Thanks Christopher! Just to clarify, I see the native service is listed as "StableBit CloudDrive Shutdown Service"; this is in addition to the primary StableBit CloudDrive service. I'm assuming I should stop the primary "StableBit CloudDrive Service." If so, I can just create a batch file with elevated privileges to stop the service and then restart the machine whenever I need to reboot. Thanks again!
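
    For anyone else wanting the same thing, the batch file ends up being two lines (run elevated; the service display name is an assumption based on how it shows in services.msc on my install, so verify yours):

    ```shell
    rem restart.bat -- stop CloudDrive cleanly, then reboot
    net stop "StableBit CloudDrive Service"
    shutdown /r /t 0
    ```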

  7. Mystery solved. I had installed NetDrive and mounted my Google Drive, but didn't use it to copy anything to/from the drive. My A/V scanner is configured to scan local and network drives, and it began attempting to scan the drive repeatedly. Apparently Google didn't like this. I contacted them after they unlocked the account and figured it out. All is well. Sorry for the false alarm.

     

    I like the idea of sharing the drive, though, to prevent getting locked out. I own my domain and G Suite account. I bought it and have 6 accounts paid for monthly (6 family members; we each pay $10/mo). I have about 20TB of system backups, system images, photos, multimedia, and my iTunes library. On a side note, the latest version of CloudDrive allows me to use prefetch now without causing disconnects or throttling. :)

  8. Just a heads up: you can create drives up to 256TB, you just have to type the size in. You can also resize drives, but you need to ensure the cluster size you set on creation will support the larger drive size. I created one massive 256TB drive. I'm not saying this is the best setup, but from a performance perspective it works just as well as my 2TB test drive in terms of speed. CloudDrive doesn't actually allocate the full drive space on creation, so aside from the OS needing to mount the file allocation space, I don't think you will see performance differences based on drive size.
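
    On the cluster-size point: NTFS is limited to roughly 2^32 clusters per volume, so the cluster size caps the maximum volume size (4KB clusters top out at 16TB, 64KB at 256TB). If you're formatting the volume yourself, picking 64KB clusters up front looks like this (the drive letter is an example):

    ```shell
    rem Format with 64KB clusters so the volume can later grow to 256TB.
    format X: /FS:NTFS /A:64K /Q
    ```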

     

    It's also important to note that although some settings can be changed on the fly or when mounting the drive, some settings cannot be changed once the drive is created unless you destroy it and start over. Off the top of my head, the ones that cannot be changed, I believe, are: chunk size, cluster size, whether the drive is encrypted (and if so, the key itself), and I think a chunk-verification setting, but I could be wrong on that last one.

     

    Min download size and chunk cache size can be changed, but not on the fly. You need to dismount and remount the drive to change this setting.

     

    Prefetch, up/down thread count, bandwidth throttling and cache size/type can all be changed on the fly.

     

    The biggest performance gains I've seen are from the chunk size, min chunk download/upload, read/write I/O threads, and prefetch settings. These settings vary drastically depending on what type of media you are storing. If it's small data files, smaller chunks are better. If it's large multimedia, larger chunks are better. I'm doing primarily multimedia storage, so my settings are:

     

    5/5 read/write threads

    20MB chunks (max for Google Drive)

    100MB Chunk Cache

    20MB Min Download size

    50GB Cache, Expandable (I've changed this from 20GB all the way up to 80GB without much difference in streaming performance; however, because I frequently move large files around, I benefit from the increased cache)

     

    Prefetch:

    20MB Trigger

    100MB Prefetch

    120 Second Window

     

     

    250MB/250MB Up/Down Throttle

     

    You'll probably notice that most others doing multimedia have much more aggressive prefetch settings than mine. In my case, these settings allow 3 to 4 HD streams without issue, and more aggressive prefetch settings cause me throttling and disconnect issues.

  9. In case anyone else comes across this thread who is also using Plex: don't let Plex scan your drive until your initial upload is done. I have mine set not to generate thumbnails or do deep media analysis, but it still makes a lot of calls to the drive on the initial scan, which will potentially cause throttling. Lessons learned on my end:

     

    Create drive

    Turn prefetch off

    Turn off excessive indexers and Google sync clients if your drive is using Google storage*

    Preload drive with content

    Turn on media scanners and let initial scan complete

    Turn on prefetch

     

    If you try to do it all in parallel you will most likely get throttled and it will probably take longer than doing it in the order above. Since Google drive client doesn't offer any detailed stats, I didn't realize it had essentially slowed to zero.

     

    *If you are using Google drive client to preload your clouddrive, throttle drive client up/down to 99/99mbs (max it will allow) and cloud drive to 250/250 up down with 5/5 io threads up/down and no prefetch.

     

    Should you find that you need to move your drive to another PC and still want the Drive client linked to CloudDrive for syncing, do this:

     

    Old PC:

    Close Google drive client

    Backup folder %localappdata%\google\drive

    Detach cloud drive

     

    New pc:

    Install Google drive client and authenticate to account. Accept defaults but immediately close client after logon

    Mount clouddrive to SAME drive letter as old PC

    Restore %localappdata%\google\drive from backup to new pc

    Relaunch Google drive client

     

    This will restore the Drive client file hashes and prevent your new PC from needing to download and hash every single file again. Alternatively, you can open regedit, go to hkcu/software/Google/drive, and point the appdata directory to your cloud drive, restoring the backup to that directory so the hash data stays with the drive. Since it's frequently used, it should stay in cache when mounted.
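
    The backup and restore steps are one robocopy apiece (the backup destination is an example path):

    ```shell
    rem Old PC: back up the Drive client metadata (close the client first)
    robocopy "%LOCALAPPDATA%\Google\Drive" "D:\DriveClientBackup" /E
    rem New PC: restore it before relaunching the client
    robocopy "D:\DriveClientBackup" "%LOCALAPPDATA%\Google\Drive" /E
    ```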

    Ed

    *you can turn prefetch back on after initial load of files onto cloud drive

    *I did not mean to mark my own reply as best answer. I was just trying to mark the entire thread as solved. :)
