Covecube Inc.

srcrist

Members
  • Content Count: 373
  • Joined
  • Last visited
  • Days Won: 27

srcrist last won the day on June 6

srcrist had the most liked content!

About srcrist
  • Rank: Advanced Member

Recent Profile Visitors

690 profile views
  1. It sounds like we might be stumbling on some buggy cache code, between this and the previous notes from Chase about deleting the cache. Make sure you guys are submitting actual tickets and troubleshooter dumps as well, so Alex and Christopher can take a look at the related code and your logs.
  2. The cache also includes information that has been modified but not yet uploaded. Everyone should be *very* careful before simply deleting the local cache. Note that any modifications that have not yet been uploaded to the cloud provider will be permanently lost, and you could potentially corrupt your drive as a result. I believe that you are theoretically correct as long as everything in the cache has already been uploaded, but extreme caution should be used before following this example. Anyone who deletes their cache with data still in the "to upload" state will (a) definitely lose that data for good and (b) potentially corrupt their drive, depending on what that data is (file system data, for example).
  3. OK. My mistake, then. I haven't started this process, I just thought I remembered this fact from some rClone work I've done previously. I'll remove that comment.
  4. I'm sorry, but this seems incorrect. A Google search returns precisely zero results for that API error prior to this month. The earliest result that I can find is this one, from a Japanese site, 20 days ago. I don't know why you seem so resistant to the notion that this is a change in Google's API that will need to be addressed by all of us sooner or later, but the objective information available suggests that this is the case. I am also not seeing errors at the moment (which might be related to the fact that my account is a legitimate account at a large institution, and not a personal GSuite Business account), but I anticipate that we will all, eventually, need to migrate to a structure conformant with this new policy. Note that this obscurity and suddenness is consistent with Google's API changes in the past as well. Nothing about this should be particularly surprising; it's just the sloppy way in which Google typically does business.

    EDIT: Note also that this API error code (numChildrenInNonRootLimitExceeded) was added to Google's API reference sometime between April and June. See the sketch after this list for how the error surfaces through the Drive API.
  5. It should probably just be noted that StableBit is precisely two people. To my knowledge, they have never had anyone other than Alex doing software development.
  6. Note that the forums are not the official support channel. If you need support, submit a ticket to support at this link: https://stablebit.com/Contact
  7. That is not possible. It's contrary to the intended purpose of this particular tool. You want something more similar to rClone, NetDrive, or Google's own Backup and Sync software.
  8. Hey guys. So, to follow up after a day or two: the only person who says they have completed the migration is reporting that their drive is now non-functional. Is that accurate? To be clear, has nobody completed the process and ended up with a functional drive? I can't really tell whether anyone trying to help Chase has actually completed a successful migration, or whether everyone is just offering feedback based on hypothetical situations. I don't even want to think about starting this until a few people can confirm that they have completed the process successfully.
  9. Has anyone with a large drive completed this process at this point? If you don't mind, can you folks drop a few notes on your experience? Setting aside the fact that the data is inaccessible during the process and other quality-of-life issues like that: how long did the process take? How large was your drive? Did you have to use personal API keys to avoid rate-limit errors? Anything else you'd share with someone before they begin migrating data? I'm trying to get a feel for the downtime and prep work I'm looking at before starting the process.
  10. Beta 1.2.0.1305 = WTF

    The discussion at this link will explain what it is doing (and why). Google has implemented new API limits in their service, and the drive will eventually become unusable without the changes. I do agree that BETA releases should be marked as such in the upgrade notification, and I am waiting to upgrade to the new format myself. But this is what is going on, and why.
  11. I'm not really going to get bogged down with another technical discussion with you. I'm sorry. I can only tell you why this change was originally implemented and that the circumstances really had nothing to do with bandwidth. If you'd like to make a more formal feature request, the feedback submission form on the site is probably the best way to do so. They add feature requests to the tracker alongside bugs, as far as I know.
  12. No, I think this is sort of missing the point of the problem. If you have, say, 1 GB of data and you divide that data into 100MB chunks, each of those chunks will necessarily be accessed more often than each of a set of 10MB chunks would be, no matter how small or large the minimum download size, because the number of reads that land on any one chunk is proportional to how much of the data it holds. The problem was that CloudDrive was running up against Google's limits on the number of times any given file can be accessed, and the minimum download size doesn't change that, because the data still lives in the same chunk no matter what portion of it you download at a time. A larger minimum download will help in cases where a single contiguous read pass would otherwise have to hit the same file repeatedly, but it won't help where an arbitrary set of reads has to access the same chunk file more often--and my understanding was that the latter was the problem. File system data, in particular, is an area where I see this being an issue no matter how large the minimum download (see the sketch after this list for a rough illustration of the arithmetic). In any case, they obviously could add the ability for users to work around this. My point was just that it had nothing to do with bandwidth limitations, so an increase in available user-end bandwidth wouldn't be likely to impact the problem.
  13. I believe the 20MB limit was because larger chunks were causing problems with Google's per-file access limitations (as a result of successive reads), not a matter of bandwidth requirements. The larger chunk sizes were being accessed more frequently to retrieve any given set of data, and it was causing data to be locked on Google's end. I don't know if those API limitations have changed.
  14. Well, the first problem is that you only allow the CloudDrive volume to hold duplicated data while you only let the other volumes hold unduplicated data, which will prevent your setup from duplicating data to the CloudDrive volume at all (none of the other drives can hold data that is duplicated on the CloudDrive volume). So definitely make sure that all of the volumes that you want to participate in duplication are allowed to hold duplicated data.

    Second, before you even start this process, I would caution you against using a single 256TB NTFS volume for your CloudDrive, as any volume over 60TB exceeds the size limit for shadow copy and, therefore, for chkdsk as well. That is: a volume that large cannot be repaired by chkdsk in the event of file system corruption, and is effectively doomed to accumulate corruption over time. So you might consider a CloudDrive with multiple partitions of less than 60TB apiece (the partition math is sketched after this list).

    That being said, NEITHER of these things should have any impact on the write speed to the drive. The pool should effectively be ignoring the CloudDrive altogether, since it cannot duplicate data to the CloudDrive and only the other drives can accept new, unduplicated data. The SSD balancer means that all NEW data should be written to the SSD cache drive first, so I would look at the performance of that underlying drive. Maybe even try disabling the SSD balancer temporarily and see how performance is when that drive is bypassed; if it's better, start looking at why that drive might be causing a bottleneck.

    What sort of drive is your CloudDrive cache drive, and how is that cache configured? Also, what are the CloudDrive settings? Chunk size? Cluster size? Encryption? Etc.
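
A note on the numChildrenInNonRootLimitExceeded error discussed in item 4: the following is a minimal sketch, not CloudDrive's actual code, of how a client built on google-api-python-client might detect that error when creating a chunk file via the Drive v3 API. The upload_chunk function, the FolderChildLimitReached exception, and the retry-in-a-new-subfolder idea are illustrative assumptions.

    # Minimal sketch (assumed client code, not CloudDrive's): detect the
    # per-folder child limit error when uploading a chunk with the Drive v3 API.
    import json

    from googleapiclient.errors import HttpError
    from googleapiclient.http import MediaFileUpload

    class FolderChildLimitReached(Exception):
        """Raised when the parent folder has hit Google's per-folder item cap."""

    def upload_chunk(service, folder_id, local_path, name):
        media = MediaFileUpload(local_path, resumable=True)
        try:
            return service.files().create(
                body={"name": name, "parents": [folder_id]},
                media_body=media,
                fields="id",
            ).execute()
        except HttpError as err:
            details = json.loads(err.content.decode("utf-8"))
            errors = details.get("error", {}).get("errors", [])
            if any(e.get("reason") == "numChildrenInNonRootLimitExceeded"
                   for e in errors):
                # The caller would need to create a new subfolder and retry there,
                # which is essentially the reorganization the new format performs.
                raise FolderChildLimitReached(folder_id) from err
            raise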
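
To put rough numbers on the chunk-size argument in items 12 and 13: this is a back-of-the-envelope simulation, not a model of CloudDrive's actual I/O. It assumes reads land uniformly at random across the stored data; the drive size, read count, and chunk sizes are made up for illustration.

    # Expected accesses per chunk file when the same reads are served by
    # different chunk sizes. Figures are illustrative, not measured.
    import math
    import random

    DRIVE_SIZE = 1 * 1024**3          # 1 GB of stored data
    READS = 10_000                    # read requests reaching the provider
    CHUNK_SIZES_MB = [10, 20, 100]    # candidate chunk sizes

    for size_mb in CHUNK_SIZES_MB:
        chunk_size = size_mb * 1024**2
        chunk_count = math.ceil(DRIVE_SIZE / chunk_size)
        hits = [0] * chunk_count
        for _ in range(READS):
            offset = random.randrange(DRIVE_SIZE)   # where the read lands
            hits[offset // chunk_size] += 1         # which chunk file serves it
        print(f"{size_mb:>3} MB chunks: {chunk_count:>3} files, "
              f"~{READS / chunk_count:.0f} requests per file on average, "
              f"busiest file saw {max(hits)}")

The per-file request count scales with chunk size, so larger chunks run into any per-file access throttling sooner; shrinking the minimum download size never changes which file the data lives in.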
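
And for the volume-size caution in item 14, the partition math is simple enough to sketch. The 60TB ceiling is the figure used in that post, and the even-split helper below is just one illustrative way to lay out the volumes.

    # Split a large CloudDrive into volumes small enough for shadow copy
    # (and therefore chkdsk). The 60 TB ceiling comes from the post above.
    import math

    def plan_volumes(drive_tb: float, max_volume_tb: float = 60.0) -> list[float]:
        count = math.ceil(drive_tb / max_volume_tb)
        return [round(drive_tb / count, 1)] * count

    layout = plan_volumes(256)
    print(f"{len(layout)} volumes of ~{layout[0]} TB each")   # 5 volumes of ~51.2 TB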