Covecube Inc.

darkly

Members
  • Content Count: 81
  • Joined
  • Last visited
  • Days Won: 2
Everything posted by darkly

  1. I can second this. Been using beta .1316 on my Plex server for months now and things have been running smoothly for the most part (other than some API issues that we never determined the exact reason for but those have mostly resolved themselves). The only reason I brought any of this up is I'm finally getting around to setting up CloudDrive/DrivePool on my new build and wasn't sure what versions I should install for a fresh start. Looks like I'll just stick with beta .1316 for now. Thanks!
  2. Confusing, cuz v1356 is also suggested by Christopher in the first reply to this thread. Also, the stable changelog doesn't mention it at all (https://stablebit.com/CloudDrive/ChangeLog?Platform=win), and the version numbers for the stables are completely different from the betas right now (1.1.6 vs 1.2.0 for all betas, including the ones that introduced the fixes we're discussing).
  3. Are you using this for Plex or something similar as well? I've been uploading several hundred gigabytes per day over the last few days and that's when I'm seeing the error come up. It doesn't seem to affect performance or anything. It usually continues just fine, but that error pops up 1-3 times, sometimes more. My settings are just about the same, with some slight differences in the Prefetcher section. I've seen a handful of conflicting suggestions when it comes to that here. This is what I have right now: Don't think that should cause the issues I'm seeing though... Are you also using your
  4. Might do that if it keeps happening. Out of curiosity, can you share your I/O Performance settings?
  5. still going on. Happens several times a day consistently since I upgraded to 1316. Never had the error before unless I was actually having a connection issue with my internet.
  6. I should probably mention I'm using my own API keys, though I don't see how that would affect this in this way (I was using my own API keys before the upgrade too). I'm also on a gigabit fiber connection and nothing about that has changed since the upgrade. As far as I can tell, this feels like an issue with CD.
  7. I'm noticing that since upgrading to 1316, I'm getting a lot more I/O errors saying there was trouble uploading to Google Drive. Is there some under-the-hood setting that changed which would cause this? I've noticed it happen a few times a day now, where previously it'd hardly ever happen. Other than that, I'm not noticing any issues with performance. EDIT: Here's a screenshot of the error:
  8. I had the same experience the other night. I'm just worried about potential issues in the future with directories that are over the limit, as I mentioned in the comment above. Overall performance has become much better as well for my drives shared over the network (but keep in mind I was upgrading from a VERY old version, so that was probably a factor in my performance previously).
  9. Are there any other under-the-hood changes in 1314 vs the current stable that we should be aware of? Someone mentioned a "concurrentrequestcount" setting on a previous beta? What does that affect? What else should we be aware of before upgrading? I'm still on quite an old version and I've been hesitant to upgrade, partly cuz losing access to my files for over 2 weeks was too costly. Apparently the new API limits are still not being applied to my API keys, so I've been fine so far, but I know I'll have to make the jump soon. Wondering if I should do it on 1314 or wait for the next stable. The
  10. The Wayback Machine confirms the rule did not exist last year. Sorry, but at this point, I've just given up on what he's saying. I've been uploading this whole time and I still have yet to see this error so . . . really not sure when it's going to hit me. As far as staged rollouts go, this is remarkably slow lol. Do the limits apply to duplicating files directly on Google Drive? Regardless, I was hoping more for a built-in option for this, not to do it manually, but I have no problem doing it manually. If I upgrade to the beta, does it automatically start trying to migrate my drives o
  11. welp, maybe it's just a matter of time then . . . Does anyone have a rough idea when google implemented this change? Hope we see a stable, tested update that resolves this soon. Any idea if what I suggested in my previous post is possible? I have plenty (unlimited) space on my gdrive. I don't see why it shouldn't be possible for CloudDrive to convert the drive into the new format by actually copying the data to a new, SEPARATE drive with the correct structure, rather than upgrading the existing drive in place and locking me out of all my data for what will most likely be a days long proc
  12. I don't have most of these answers, but something did occur to me that might explain why I'm not seeing any issues using my personal API keys with a CloudDrive over 70TB. I partitioned my CloudDrive into multiple partitions, and pooled them all using DrivePool. I noticed earlier that each of my partitions only has about 7-8TB on it (respecting that earlier estimate that problems would start somewhere between 8-10TB of data). Can anyone confirm whether or not a partitioned CloudDrive would keep each partition's data in a different directory on Google Drive? (A rough sketch for counting a folder's children against the limit is included after this list.)
  13. Again, this makes no sense. Why would they conform to Google's limits, then release an update that DOESN'T CONFORM TO THOSE LIMITS, only to release a beta months later that forces an entire migration of data to a new format THAT CONFORMS TO THOSE LIMITS AGAIN?
  14. Is there no possibility of implementing a method of upgrading CloudDrives to the new format without making the data completely inaccessible? How about literally cloning the data over onto a new drive that follows the new format while leaving the current drive available and untouched? Going days without access to the data is quite an issue for me...
  15. that doesn't change the fact that versions prior to the beta mentioned above don't respect the 500K child limit that is enforced directly by Google... unless one of the devs wants to chime in and drop the bombshell that earlier versions actually did respect this and only the later versions for some reason stopped respecting the limit, only to have to go back to respecting it in the betas . . . /s
  16. I mean, I'd imagine I'd have to sooner or later. The child directory/file limit still exists on Google Drive, and 1178 isn't equipped to handle that in any capacity. I'm just wondering why I haven't gotten any issues yet considering how much data I have stored.
  17. Soo..... I'm still on version 1.1.2.1178 (been reluctant to upgrade with the various reports I keep seeing), but my gdrive has over 100TB of data on it and I've never seen the error about the number of children being exceeded. I use my own API keys. Have I just been extremely lucky up until this point?
  18. Thanks. I've actually tested having parts of a pool absent before and I'm pretty sure you can still write to the remaining drives. My problem is that once the cloud drive comes online, I don't want to simply "balance" the pool. I don't want the local disk(s) in the pool to be used at all. Ideally, once the CloudDrive comes back online, all data is taken off the local disks and moved to the CloudDrive. I'm just not sure how I'd configure that behavior to begin with. If the first part of what you said is correct (that balancing would be paused), then that's great, as that's exactly the behavior
  19. I'm currently using a CloudDrive that is partitioned into many smaller parts and pooled together with DrivePool. The CloudDrive is encrypted and NOT set up to automatically mount when the OS loads. One downside of this is that if an application expects a directory to exist, it won't be able to find it until I unlock the CloudDrive and DrivePool picks up at least one partition (a rough launcher sketch that waits for the pool path is included after this list). I have an idea on how to resolve this, but I'm not sure exactly how to implement it. I'm thinking that if I either add a local drive to the existing pool, or create a new pool consisting of just the local drive +
  20. Most changelogs don't go in depth into the inner workings of added functionality. They only detail the fact that the functionality now exists, or reference some bug that has been fixed. Expecting to find detailed information on how duplication functions in the CloudDrive changelog feels about the same as expecting Plex server changelogs to detail how temporary files created during transcoding are handled. When your cache is low on space, you first get a warning that writes are being throttled. At some point you get warnings that CD is having difficulty reading data from t
  21. Oh THAT'S where it is. I rarely think of the changelog as a place for technical documentation of features . . . Thanks for pointing me to that anyway. This has definitely not been my experience, except in the short term. While writes are throttled for a short while, I've found that extended operation under this throttling can cause SERIOUS issues as CloudDrive attempts to read chunks and is unable to because the cache is full. I've lost entire drives due to this (a rough free-space monitoring sketch for the cache drive is included after this list).
  22. I fully understand that the two functions are technically different, but my point is that functionally, both can affect my data at a large scale. My question is about what the actual step-by-step result of enabling these features on an existing data set would be, not what the end result is intended to be (as it might never get there). An existing large dataset with a cache drive nowhere near the size of it is the key point to my scenarios. What (down to EVERY key step) happens from the MOMENT I enable duplication in CloudDrive? All of my prior experience with CloudDrive suggests to me that my
  23. Can you shine some more light on what exactly happens when these types of features are turned on or off on existing drives with large amounts of data already saved? I'm particularly interested in the details behind a CloudDrive+DrivePool nested setup (multiple partitions on CloudDrives pooled). Some examples: 40+TB of data on a single pool consisting of a single CloudDrive split into multiple partitions, mounted with a 50GB cache on a 1TB drive. What EXACTLY happens when duplication is turned on in CD, or if file duplication is turned on in DP? 40+TB on a pool, which itself is co
  24. I have two machines, both running clouddrive and drivepool. They're both configured in a similar way. One CloudDrive is split into multiple partitions formatted in NTFS, those partitions are joined by DrivePool, and the resulting pool is nested in yet another pool. Only this final pool is mounted via drive letter. One machine is running Windows Server 2012, the other is running Windows 10 (latest). I save a large video file (30GB+) onto the drive in the windows server machine. That drive is shared over the network and opened on the windows 10 machine. I then initiate a transfer on the Windows
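Following up on post 12 above: one rough way to sanity-check a single Google Drive folder against the 500,000-child limit is to page through its children with the Drive v3 API and count them. This is only a sketch, not anything built into CloudDrive; it assumes the google-api-python-client and google-auth libraries are installed, that OAuth credentials have already been saved to token.json, and FOLDER_ID is a placeholder for whichever chunk folder you want to inspect.

```python
# Rough sketch (not a CloudDrive feature): count the children of a Google
# Drive folder to see how close it is to the 500,000-item limit.
# Assumptions: google-api-python-client + google-auth are installed and an
# authorized token.json already exists; FOLDER_ID is a placeholder.
from googleapiclient.discovery import build
from google.oauth2.credentials import Credentials

FOLDER_ID = "YOUR_FOLDER_ID_HERE"  # placeholder folder ID

creds = Credentials.from_authorized_user_file(
    "token.json",
    ["https://www.googleapis.com/auth/drive.metadata.readonly"],
)
service = build("drive", "v3", credentials=creds)

count = 0
page_token = None
while True:
    # List direct, non-trashed children of the folder, 1000 at a time.
    resp = service.files().list(
        q=f"'{FOLDER_ID}' in parents and trashed = false",
        pageSize=1000,
        fields="nextPageToken, files(id)",
        pageToken=page_token,
    ).execute()
    count += len(resp.get("files", []))
    page_token = resp.get("nextPageToken")
    if not page_token:
        break

print(f"{count} children found under folder {FOLDER_ID}")
```

Running this against each partition's chunk folder (if they do turn out to be separate directories) would show how much headroom each one has before the limit.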
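For the setup described in post 19, one simple workaround for applications that expect the pool path before the CloudDrive is unlocked is a small launcher that waits for the directory to appear and only then starts the dependent program. This is a minimal sketch under those assumptions; the pool path and the application command are placeholders, not anything provided by DrivePool or CloudDrive.

```python
# Minimal launcher sketch: wait until the pooled CloudDrive path is visible
# (i.e. the drive has been unlocked and DrivePool has picked it up), then
# start the application that depends on it. Path and command are placeholders.
import os
import subprocess
import time

POOL_PATH = r"D:\MountedPool\Media"          # placeholder: exists only once the drive is unlocked
APP_CMD = [r"C:\Apps\SomeApp\SomeApp.exe"]   # placeholder: app that expects POOL_PATH


def wait_for_path(path, poll_seconds=30):
    """Block until the given directory is reachable, polling periodically."""
    while not os.path.isdir(path):
        print(f"{path} not available yet; checking again in {poll_seconds}s")
        time.sleep(poll_seconds)


if __name__ == "__main__":
    wait_for_path(POOL_PATH)
    subprocess.run(APP_CMD)
```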
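Related to the cache problems described in post 21, a basic way to catch a filling cache before it becomes critical is to watch free space on the drive that holds the cache and warn below a threshold. Again just a sketch; the drive letter, threshold, and interval are placeholders and would need to be adapted to the actual cache location.

```python
# Rough monitoring sketch: warn when free space on the CloudDrive cache
# drive drops below a threshold, so throttled writes can be noticed before
# the cache fills completely. Drive letter and thresholds are placeholders.
import shutil
import time

CACHE_DRIVE = "C:\\"            # placeholder: drive holding the CloudDrive cache
WARN_FREE_GB = 20               # placeholder warning threshold, in GB
CHECK_INTERVAL_SECONDS = 300    # how often to re-check


def free_gb(path):
    """Return free space on the given path in gigabytes."""
    usage = shutil.disk_usage(path)
    return usage.free / (1024 ** 3)


if __name__ == "__main__":
    while True:
        remaining = free_gb(CACHE_DRIVE)
        if remaining < WARN_FREE_GB:
            print(f"WARNING: only {remaining:.1f} GB free on {CACHE_DRIVE}")
        time.sleep(CHECK_INTERVAL_SECONDS)
```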