About darkly

  • Rank: Advanced Member
  • Last won the day: September 30, 2017 (most liked content)

Recent posts:
  1. I can second this. I've been using beta .1316 on my Plex server for months now, and things have been running smoothly for the most part (other than some API issues whose exact cause we never pinned down, but those have mostly resolved themselves). The only reason I brought any of this up is that I'm finally getting around to setting up CloudDrive/DrivePool on my new build and wasn't sure which versions I should install for a fresh start. Looks like I'll just stick with beta .1316 for now. Thanks!
  2. Confusing, cuz v1356 is also suggested by Christopher in the first reply to this thread. Also, the stable changelog doesn't mention it at all (https://stablebit.com/CloudDrive/ChangeLog?Platform=win), and the version numbers for the stables are completely different from the betas right now (1.1.6 vs 1.2.0 for all betas, including the ones that introduced the fixes we're discussing).
  3. Are you using this for Plex or something similar as well? I've been uploading several hundred gigabytes per day over the last few days, and that's when I'm seeing the error come up. It doesn't seem to affect performance or anything; it usually continues just fine, but the error pops up 1-3 times, sometimes more. My settings are just about the same as yours, with some slight differences in the Prefetcher section; I've seen a handful of conflicting suggestions about that here. This is what I have right now: [settings screenshot] I don't think that should cause the issues I'm seeing, though... Are you also using your…
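For anyone comparing Prefetcher notes: those fields boil down to a simple rule: if enough sequential data is read within the time window, fetch the next stretch ahead of time. Here's a rough Python sketch of that logic; the parameter names mirror the UI fields, the values are purely illustrative (not recommendations), and this is not CloudDrive's actual code:

```python
import time

# Illustrative values only; the forum has conflicting recommendations.
PREFETCH_TRIGGER_MB = 1      # sequential reads needed to trigger a prefetch
PREFETCH_FORWARD_MB = 150    # how much to fetch ahead once triggered
PREFETCH_TIME_WINDOW_S = 30  # trigger reads must occur within this window

class PrefetchDecider:
    """Decide when sequential reads should kick off a read-ahead."""

    def __init__(self):
        self.window_start = time.monotonic()
        self.sequential_mb = 0.0

    def on_read(self, mb_read):
        now = time.monotonic()
        if now - self.window_start > PREFETCH_TIME_WINDOW_S:
            # Window expired: start counting fresh.
            self.window_start = now
            self.sequential_mb = 0.0
        self.sequential_mb += mb_read
        # Enough sequential traffic inside the window -> prefetch ahead.
        return self.sequential_mb >= PREFETCH_TRIGGER_MB

decider = PrefetchDecider()
if decider.on_read(2.0):
    print(f"prefetching next {PREFETCH_FORWARD_MB} MB")
```

In general, a larger forward amount suits big sequential streams (like Plex playback), while a very low trigger risks prefetching on reads that aren't actually sequential.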
  4. Might do that if it keeps happening. Out of curiosity, can you share your I/O Performance settings?
  5. Still going on. It's been happening several times a day, consistently, since I upgraded to 1316. I never had this error before unless I was actually having a connection issue with my internet.
  6. I should probably mention that I'm using my own API keys, though I don't see how that would cause this (I was using my own API keys before the upgrade too). I'm also on a gigabit fiber connection, and nothing about that has changed since the upgrade. As far as I can tell, this feels like an issue with CD.
  7. I'm noticing that since upgrading to 1316, I'm getting a lot more I/O errors saying there was trouble uploading to Google Drive. Is there some under-the-hood setting that changed which would cause this? It now happens a few times a day, where previously it'd hardly ever happen. Other than that, I'm not noticing any issues with performance. EDIT: Here's a screenshot of the error: [error screenshot]
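Worth noting for anyone else hitting these: transient upload errors like this are typical of how rate-limited APIs behave, and clients usually absorb them by retrying with exponential backoff, which would explain why the upload "usually continues just fine." A minimal sketch of that general pattern (not CloudDrive's actual internals; the upload callable here is hypothetical):

```python
import random
import time

class TransientUploadError(Exception):
    """Stand-in for a rate-limit response (e.g. HTTP 403/429) from the provider."""

def upload_with_backoff(upload_chunk, max_attempts=5):
    """Retry a chunk upload with exponential backoff plus jitter.

    `upload_chunk` is a hypothetical callable that raises
    TransientUploadError when the provider rejects the request.
    """
    for attempt in range(max_attempts):
        try:
            return upload_chunk()
        except TransientUploadError:
            if attempt == max_attempts - 1:
                raise  # surface the error to the UI after the last attempt
            # Wait 1s, 2s, 4s, ... plus up to 1s of random jitter.
            time.sleep(2 ** attempt + random.random())
```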
  8. I had the same experience the other night. I'm just worried about potential future issues with directories that are over the limit, as I mentioned in the comment above. Overall performance has become much better as well for my drives shared over the network (but keep in mind I was upgrading from a VERY old version, so that was probably a factor in my performance previously).
  9. Are there any other under-the-hood changes in 1314 vs. the current stable that we should be aware of? Someone mentioned a "concurrentrequestcount" setting on a previous beta; what does that affect? What else should we be aware of before upgrading? I'm still on quite an old version and I've been hesitant to upgrade, partly cuz losing access to my files for over 2 weeks was too costly. Apparently the new API limits are still not being applied to my API keys, so I've been fine so far, but I know I'll have to make the jump soon. Wondering if I should do it on 1314 or wait for the next stable. The…
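For what it's worth, a setting with a name like that would normally cap how many provider requests are in flight at once: more concurrency generally means more upload throughput but more rate-limit pressure. A generic sketch of the idea, assuming a hypothetical per-chunk upload function (this is not CloudDrive's code):

```python
from concurrent.futures import ThreadPoolExecutor

CONCURRENT_REQUEST_COUNT = 4  # assumed meaning: max simultaneous API calls

def upload_chunk(chunk_id):
    """Hypothetical stand-in for one Google Drive API upload request."""
    print(f"uploading chunk {chunk_id}")

# The pool size plays the role of a concurrent-request-count setting:
# raising it increases parallel uploads (and rate-limit pressure).
with ThreadPoolExecutor(max_workers=CONCURRENT_REQUEST_COUNT) as pool:
    pool.map(upload_chunk, range(16))
```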
  10. The Wayback Machine confirms the rule did not exist last year. Sorry, but at this point I've just given up on what he's saying. I've been uploading this whole time and I have yet to see this error, so . . . really not sure when it's going to hit me. As far as staged rollouts go, this is remarkably slow lol. Do the limits apply to duplicating files directly on Google Drive? Regardless, I was hoping for a built-in option for this rather than doing it manually, but I have no problem doing it manually. If I upgrade to the beta, does it automatically start trying to migrate my drives o…
  11. Welp, maybe it's just a matter of time then . . . Does anyone have a rough idea of when Google implemented this change? I hope we see a stable, tested update that resolves this soon. Any idea if what I suggested in my previous post is possible? I have plenty of (unlimited) space on my Gdrive. I don't see why it shouldn't be possible for CloudDrive to convert the drive into the new format by actually copying the data to a new, SEPARATE drive with the correct structure, rather than upgrading the existing drive in place and locking me out of all my data for what will most likely be a days-long process…
  12. I don't have most of these answers, but something did occur to me that might explain why I'm not seeing any issues using my personal API keys with a CloudDrive over 70TB. I partitioned my CloudDrive into multiple partitions and pooled them all using DrivePool. I noticed earlier that each of my partitions only has about 7-8TB on it (in line with the earlier estimate that problems would start somewhere between 8-10TB of data). Can anyone confirm whether or not a partitioned CloudDrive keeps each partition's data in a different directory on Google Drive?
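That 8-10TB estimate is consistent with a per-folder item cap. Assuming the commonly cited 500,000-items-per-folder limit and a 20MB chunk size (both figures are assumptions here, not confirmed), the math works out like this:

```python
# Back-of-the-envelope check of the 8-10TB estimate (all values assumed).
ITEMS_PER_FOLDER_LIMIT = 500_000   # Google Drive's reported per-folder cap
CHUNK_SIZE_MB = 20                 # assumed CloudDrive chunk size

max_bytes = ITEMS_PER_FOLDER_LIMIT * CHUNK_SIZE_MB * 1024**2
print(f"max data per folder: {max_bytes / 1024**4:.1f} TiB")
# -> roughly 9.5 TiB, so partitions holding 7-8TB would sit safely
#    under the cap *if* each partition's chunks live in their own folder.
```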
  13. Again, this makes no sense. Why would they conform to Google's limits, then release an update that DOESN'T CONFORM TO THOSE LIMITS, only to release a beta months later that forces an entire migration of data to a new format THAT CONFORMS TO THOSE LIMITS AGAIN?
  14. Is there no possibility of implementing a way to upgrade CloudDrives to the new format without making the data completely inaccessible? How about literally cloning the data onto a new drive that follows the new format while leaving the current drive available and untouched? Going days without access to the data is quite an issue for me...
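As far as I know there's no built-in option for this, but the manual version of the idea is straightforward once a new-format drive has been created and both drives are mounted: mirror the data across while the old drive stays readable the whole time. A rough sketch (the drive letters are purely hypothetical; requires Python 3.8+ for dirs_exist_ok):

```python
import shutil

# Hypothetical mount points: D: is the old-format CloudDrive (left
# untouched and still readable), E: is a freshly created drive that
# already uses the new chunk layout.
SOURCE = "D:\\"
DEST = "E:\\migrated"

# Copy everything across; the old drive stays mounted throughout, so
# there's no multi-day window without access to the data.
shutil.copytree(SOURCE, DEST, dirs_exist_ok=True)
```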