Everything posted by modplan

  1. Wow, a blast from the past. Apparently I'm still following this thread, since I got an email notification. I abandoned CloudDrive over a year ago, since this issue made it utterly useless for me and no progress was being made on a resolution. Sad to see this hasn't been fixed yet, as I would still find my purchase useful and could begin using CloudDrive again if it had been. Chris, if this is an issue with sparse files on NTFS, perhaps CloudDrive should not be using them and should use something else instead? I write software for a living, and if a dependency is broken in a way that I cannot fix and that negatively impacts my software, I find a workaround by using something else. Period.
  2. Yep I bet the driver is the bottleneck here. Your best bet would likely be 2 VMs that are on the same network. Share the destination drive over CIFS to the source VM and do the copy that way. You'll have some minor network overhead, but will no longer have the concurrent upload & download overhead.
  3. Bummer, since you thought he may have a fix way back in December. Unfortunate, but thanks for the update.
  4. Any updates since a month ago? (I check in monthly to see if I can use this software again)
  5. Thanks Chris, any updates since last month? Would like to get back to using CloudDrive, not seeing anything obviously relevant in the changelog.
  6. Thanks Christopher. While most of that was understood, I only find fault with categorizing an issue that fills up a 120GB SSD and knocks all of my drives offline in 6-12 hours as one that "generally resolves itself", unless by resolving itself you mean forcing the dismount of my drives so that the clusters are finally freed.... For me, and it looks like for others, it makes CloudDrive utterly useless.
  7. I very much doubt it based upon the changelog, but please do report back with your findings.
  8. Still no updates about this issue? It has been open several months.
  9. Does CloudDrive treat this like a system crash if an upload/download is in progress or if anything is in 'To Upload', or does it handle it gracefully? If graceful, I could set up a scheduled task to do this nightly, or even every few hours. I'm having this same problem; a separate thread was made about it OVER 2 months ago. I've basically abandoned CloudDrive because of this; if it is never fixed I'll probably ask for a refund (though I'm not sure if I'll get one, since I purchased all the way back in January). My cache drive fills all the way up and then forcibly dismounts my CloudDrive disks every day or two, sometimes in less than a day. Despite what has been said, the reserved clusters NEVER, EVER go down over time unless I reboot or detach the disks.
  10. Any updates on this? Sitting on 80GB of reserved clusters that are about to knock all of my drives offline (due to the SSD filling up). There has been no activity on the cloud drives since mount other than pinning data (which seems to run over and over an awful lot), and the reserved clusters have not decreased once this week; checking them every time I am on the server, they always increase, even after days of no activity on the cloud drive. I understand this is an NTFS issue, as per our previous conversation, but issue #27177 was created to investigate whether the issue could be avoided, I think. PS: I've tried every cache combination available, I think; currently using a 1GB fixed cache on each of 2 drives, yet still 80GB of sparse files clogging my drive :( I invested in CloudDrive and, after promising (but SLOWWW) results, invested in a dedicated 120GB SSD cache specifically for CloudDrive. Speeds increased DRAMATICALLY, and then I ran into this. Edit: currently running .725, btw
  11. Has metadata and directory structure been pinned? This is expected until that happens, once that info is pinned, browsing directories should be instant.
  12. But the drive is being forcefully dismounted because the cache drive is "Full" and new data cannot be downloaded from Google. After several of these errors, CloudDrive forcefully dismounts the drive. Isn't this expected behavior?
  13. That is very very unfortunate. Is there an issue open for Alex's investigation?
  14. Is there no workaround for this? Waiting seems to do nothing, the cache drive fills up before any clusters are released, at least, and I cannot reboot my server every 24 hours. Was buying a 120GB SSD to use as a cloud drive cache a complete waste of money?
  15. Microsoft Windows [Version 6.1.7601]
      Copyright © 2009 Microsoft Corporation. All rights reserved.

      C:\Users\Administrator>fsutil fsinfo ntfsinfo z:
      NTFS Volume Serial Number :       0x70dc1bd6dc1b9586
      Version :                         3.1
      Number Sectors :                  0x000000000df937ff
      Total Clusters :                  0x0000000001bf26ff
      Free Clusters  :                  0x0000000001becaca
      Total Reserved :                  0x0000000002002160
      Bytes Per Sector  :               512
      Bytes Per Physical Sector :       512
      Bytes Per Cluster :               4096
      Bytes Per FileRecord Segment    : 1024
      Clusters Per FileRecord Segment : 0
      Mft Valid Data Length :           0x0000000000200000
      Mft Start Lcn  :                  0x00000000000c0000
      Mft2 Start Lcn :                  0x0000000000000002
      Mft Zone Start :                  0x00000000000c0200
      Mft Zone End   :                  0x00000000000cca00
      RM Identifier:                    A2098C65-70BC-11E6-9CFE-005056C00008

      Looks like my cache drive was completely filled with reserved data and my drives were forcefully dismounted last night. It has been sitting like this for hours and the reserved amount has not gone down.
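For anyone who wants to watch this instead of eyeballing the output, here is a minimal sketch of parsing the `fsutil fsinfo ntfsinfo` output above. It assumes "Total Reserved" is a hex cluster count (so multiplying by "Bytes Per Cluster" gives the space the reserved clusters are holding); the function name is mine, not anything from CloudDrive or Windows.

```python
import re

def reserved_bytes(fsutil_output: str) -> int:
    """Return the space held by reserved clusters, in bytes, from the
    text output of `fsutil fsinfo ntfsinfo <drive>`.

    Assumes 'Total Reserved' is a cluster count in hex and
    'Bytes Per Cluster' is a decimal byte count, as in the output above.
    """
    reserved_clusters = int(
        re.search(r"Total Reserved\s*:\s*(0x[0-9a-fA-F]+)", fsutil_output).group(1), 16
    )
    bytes_per_cluster = int(
        re.search(r"Bytes Per Cluster\s*:\s*(\d+)", fsutil_output).group(1)
    )
    return reserved_clusters * bytes_per_cluster
```

Run `fsutil` from a scheduled task, feed its output through this, and you can log the reserved amount over time instead of checking by hand.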
  16. If we are talking about the same issue (sorry if not): my new SSD cache drive slowly fills up, with little activity, until it is full (though it isn't full at all if you check the folders on the drive, as mentioned), and then CloudDrive forcefully dismounts my 2 drives because the cache drive is out of space. I did not see this when the cache was on the RAID array, but again, that had several TBs free. I'm seeing it every few days with the cache moved to a 120GB SSD, however. It makes CloudDrive terribly unpredictable and unstable.
  17. Is this why the new SSD drive I bought specifically for a cache keeps filling up and dismounting my drives? Right now the drive only has 400MB on it if I select all folders (hidden folders included) -> right click -> Properties, but if I open "My Computer" it shows the drive as having only 4GB free (it is a 120GB SSD), and every morning when I wake up my cloud drives are dismounted (after a nightly backup). The nightly backup is not backing up more than 120GB or anywhere near it. If this is the cause, is there no way I can fix it? This did not happen when the cache was on my (slow) RAID array, but there was about 2TB free on that.
  18. You can get around this by importing the content locally, and then moving it to your CloudDrive drive. For example, I have 2 folders in my 'TV' library for Plex, X:\Video\TV and E:\Video\TV; X: is local, E: is CloudDrive. All content is originally added to X:, where it is indexed. Then, when I've watched a whole season or something and want to 'archive' a show, I move it to E:. Plex will not re-index, since it notices that it is just the same file moved; it updates the pointer to the file instantly, and everything works perfectly.
  19. Windows uses RAM as a cache when copying files between drives. My guess is that your large transfer eventually saturates your computer's RAM before that RAM can be destaged to the cache drive. Since your computer is now starving for RAM, other tasks begin to fail/take forever. My server has 16GB of RAM and I saw something similar. I would suggest using UltraCopier to throttle the copy. I set the throttle to the same as my upload speed and I had no more issues trying to copy a 4TB folder to my CloudDrive (other than it taking a few weeks)
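The throttling idea above doesn't depend on UltraCopier specifically; any pacing of the copy works. Here is a minimal sketch of a rate-limited file copy, assuming a simple "sleep until the average rate is back under the limit" scheme (function name and defaults are mine, purely illustrative):

```python
import time

def throttled_copy(src, dst, limit_bytes_per_sec, chunk_size=1 << 20):
    """Copy src to dst, pacing writes so the average transfer rate stays
    at or below limit_bytes_per_sec. This keeps a slow destination (or the
    CloudDrive cache behind it) from being flooded faster than it can drain."""
    start = time.monotonic()
    sent = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(chunk_size)
            if not chunk:
                break
            fout.write(chunk)
            sent += len(chunk)
            # If we are ahead of the allowed rate, sleep off the difference.
            expected_elapsed = sent / limit_bytes_per_sec
            actual_elapsed = time.monotonic() - start
            if expected_elapsed > actual_elapsed:
                time.sleep(expected_elapsed - actual_elapsed)
```

Setting `limit_bytes_per_sec` to roughly your upload speed mirrors what I did with UltraCopier: the cache never grows faster than it can be uploaded.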
  20. Just a PSA post here. I recently passed 1.26 MILLION files in my Google Drive account. Only ~400,000 of these files were from CloudDrive, but I use lots of other apps that write lots of files to my account, and thought I would pass on this info. 1) The Google Drive for Windows client will crash, even if you only have 1 folder synced to your desktop: I have only a few docs folders actually synced to my desktop client, but the app insists on downloading an entire index of all of the files on your account into memory before writing it to disk. When this crosses 2.1GB of RAM, as it did for me after 1.26 million files, the app crashes (due to Google Drive for Windows still stupidly being 32-bit). No workaround other than lowering the number of files on your account. 2) The Google Drive API documentation warns of API breakdowns as you cross 1 million files on your account: query sorting can cease to function, etc., and who knows which apps depend on API calls that could start to fail. I've spent the last 10 days running a python script I wrote to delete unwanted/unneeded files, one by one. 10 days, and I probably have 10 days left. I hope to get to ~600,000 total by the time I am done. Hope this helps someone in the future.
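The cleanup script boils down to a paginated list-and-delete loop. Here is a hedged sketch of that loop with the API client injected, so it can be exercised without credentials; `drive`, `list_files`, and `delete_file` are hypothetical wrapper names (with the real Google Drive v3 API they would wrap `files().list` with a `pageToken` and `files().delete`), and the predicate decides what counts as unwanted.

```python
def delete_matching(drive, predicate):
    """Walk a paginated file listing and delete every file the predicate
    matches. `drive` is any object exposing:
        list_files(page_token) -> (list_of_file_dicts, next_page_token_or_None)
        delete_file(file_id)
    Returns the number of files deleted."""
    deleted = 0
    page_token = None
    while True:
        files, page_token = drive.list_files(page_token)
        for f in files:
            if predicate(f):
                drive.delete_file(f["id"])
                deleted += 1
        if page_token is None:  # no more pages
            return deleted
```

Deleting one file per request is slow (hence the 10 days), but it keeps each call simple and easy to retry when the API gets flaky near the 1-million-file mark.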
  21. http://dl.covecube.com/CloudDriveWindows/beta/download/
  22. For #1 I use ultracopier when I am doing large copies to throttle the copy and keep the cache drive from getting overloaded. You could try that. For #3 I think I noted earlier when you mentioned it and Chris later confirmed that CloudDrive has no concept of files, just the raw data blocks, so I doubt per file prefetching is possible.
  23. This has always been the case for me. However, it is due to the HDD where my cache is hosted being absolutely slammed, with queue lengths super high. I even see the exact same error you describe; it is a daily occurrence for me. But it has nothing to do with CloudDrive, rather with hardware limitations. Is your HDD, or even SSD, too busy when this happens?
    Moving the Cache?

    Are they all stopping with close to the same 'Duration'? There is a timeout (it used to be 100 secs, not sure what it is now) where the Chunk will abort and retry if it is not fully uploaded in that amount of time. So if you have 20 threads for example, and your upload speed is not fast enough to upload all 20 chunks within the timeout window, you will see a lot of this.
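The back-of-the-envelope check described above can be written out. This is a sketch under stated assumptions: the upload link is shared evenly across threads, the timeout is the old 100-second value, and the 20 MB chunk size in the example is illustrative, not CloudDrive's actual setting.

```python
def chunk_fits_timeout(threads, chunk_mb, upload_mbps, timeout_s=100):
    """With `threads` chunks uploading concurrently over a shared link of
    upload_mbps (megabits/s), each chunk gets roughly 1/threads of the
    bandwidth. Returns True if a chunk_mb-sized chunk (megabytes) can
    finish before the timeout, False if it would abort and retry."""
    per_chunk_mbps = upload_mbps / threads
    seconds_per_chunk = (chunk_mb * 8) / per_chunk_mbps  # MB -> megabits
    return seconds_per_chunk <= timeout_s
```

For example, 20 threads of 20 MB chunks on a 20 Mbit/s uplink give each chunk about 1 Mbit/s, so a chunk needs roughly 160 seconds and every upload hits the timeout; dropping to 5 threads brings it to about 40 seconds and the aborts stop.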