Everything posted by srcrist

  1. Once you change the API key in the advanced settings, you will need to reauthorize the drive (or detach and reattach it). It will not switch to the new key until it recreates the API authorization. Did you do that? Also, how many upload and download threads are you using? 11 download threads is likely enough to hit Google's per-second API limit, and that is likely why you're getting throttled--especially since it looks like you're also allowing around 10 upload threads at the same time. In my experience, anything more than 10-15 threads, counting both directions, will lead to exponential backoff and API throttling on Google's part. Try something like 8-10 download threads and 3-5 upload threads and see if that reduces the throttling or makes it disappear.
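     A rough illustration of that thread budget, as a minimal Python sketch. The 15-thread ceiling is the figure from my own experience above, not a documented Google quota, and check_thread_budget() is just a hypothetical helper for the arithmetic--CloudDrive manages its own threads internally.

        def check_thread_budget(download_threads: int, upload_threads: int,
                                max_total: int = 15) -> bool:
            # Hypothetical helper, not a CloudDrive setting: return True if the
            # combined thread count stays within the suggested budget.
            total = download_threads + upload_threads
            if total > max_total:
                print(f"{total} total threads: likely to trigger Google API throttling")
                return False
            print(f"{total} total threads: within the suggested budget")
            return True

        check_thread_budget(download_threads=9, upload_threads=4)    # settings suggested above
        check_thread_budget(download_threads=11, upload_threads=10)  # the configuration being asked about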
  2. Yessir. This article on the wiki details the process: https://wiki.covecube.com/StableBit_CloudDrive_Q7941147 Note both that the steps on Google's end are a little outdated (though you can probably still navigate the process from that article), and that switching to a personal key isn't really necessary and carries new limitations relative to StableBit's developer keys (for example, people using their personal keys were already hitting these per-folder limitations back in June, while StableBit's key only had this limitation enforced this month). So, caveat emptor.
  3. If you're still having issues converting a drive with any release newer than .1314, you should absolutely open a ticket.
  4. User Rate Limit Exceeded is an API congestion error from Google. If you're being given that error and you're using your own key, you're hitting the usage limits on your key. If you're using StableBit's default key, then they are hitting their limits--which is possible, especially if many people are making a lot of API calls at once to convert drives. That's just speculation, though. If it's StableBit's key, you'll want to open a ticket and let them know so they can request a raise on their limit. You can also try reauthorizing the drive, as StableBit has several API keys already--or switch to your own.
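     For what it's worth, Google's general guidance for this error is to retry with exponential backoff, which is roughly what CloudDrive does behind the scenes anyway. A minimal sketch of the pattern, assuming `call` stands in for whatever request is being rate limited:

        import random
        import time

        class RateLimitError(Exception):
            """Stand-in for Google's 403 userRateLimitExceeded response."""

        def with_backoff(call, max_retries=5):
            # Retry the call, waiting 2^attempt seconds plus jitter between tries.
            for attempt in range(max_retries):
                try:
                    return call()
                except RateLimitError:
                    time.sleep((2 ** attempt) + random.random())
            raise RuntimeError("still rate limited after retries; the key's quota is exhausted")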
  5. The fix for these issues is several months old. It was fixed back in July. EDIT: THIS IS INCORRECT. .1318 is the latest stable release on the site, and it has the fix (which was in .1314). You don't need to use a beta version if you don't want to.
  6. Open a ticket using the link provided in the post you quoted. They'll get you sorted.
  7. It looks like Google's Enterprise Standard plan still offers unlimited storage for even a single user at $20 a month. So this change appears to be little more than a price increase for people who were previously using the unlimited business plan. Google remains, to my knowledge, the only provider offering anything comparable to its own previous plan.
  8. CloudDrive itself operates at the block level, so it isn't aware of which files in the file system are being written to or read by the applications on your computer--much like the firmware for an HDD or SSD is unaware of that information. So, no, it isn't possible via CloudDrive. Windows Resource Monitor, Process Explorer, or another tool that looks at Windows' file system usage would be required--as it sounds like you discovered.
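     If you'd rather script it than watch Resource Monitor, the same information is available programmatically. A small sketch using the third-party psutil package (pip install psutil); the drive letter is just an example, and some system processes will refuse inspection:

        import psutil

        def processes_using_drive(drive_letter: str = "E:"):
            # List processes that currently hold files open on the given drive letter.
            for proc in psutil.process_iter(["pid", "name"]):
                try:
                    open_paths = [f.path for f in proc.open_files()
                                  if f.path.upper().startswith(drive_letter.upper())]
                except (psutil.AccessDenied, psutil.NoSuchProcess):
                    continue  # not every process can be inspected
                if open_paths:
                    print(proc.info["pid"], proc.info["name"], open_paths)

        processes_using_drive("E:")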
  9. Neither of these errors/issues is likely related to the topic of this thread. If the drive is RAW, the file system is likely corrupted. The change this thread references only restructures the way the data is stored on the provider; it doesn't change any of the data itself. If it were corrupting data on the provider, we'd be seeing that manifest in other ways for other users--and that isn't the case. I would suggest opening a ticket with support here: https://stablebit.com/Contact in order to troubleshoot how your file system was actually corrupted. Note that even Windows Updates have corrupted ReFS volumes on a regular basis; it just isn't a stable file system, and that's why the functionality has been rolled back in Windows 10 for now. So if you blinked or breathed wrong, that might be the culprit.

     A User Rate Limit Exceeded error, on the other hand, is an API limit error. It means the API key you're using is hitting its limit of calls per time period, and Google won't allow it to access the API any more. Are you using the default API key, or your own?
  10. If you're following the instructions correctly, you are simply reshuffling data around on the same drives. They are file system level changes, and will not require any data to be reuploaded. It should complete in a matter of seconds. And the purpose of the multiple partitions is covered above: 1) to keep each volume below the 60TB limit for VSS and chkdsk, and 2) to avoid multiple caches thrashing the same drive and to make bandwidth management easier. If you have multiple cache drives then you can, of course, use multiple CloudDrive drives instead of volumes on the same drive, but make sure that you can actually support that overhead. I'd suggest an SSD cache drive per CloudDrive drive--the cache can be surprisingly resource heavy, depending on your usage. In any case, though, there isn't really a compelling need to use multiple CloudDrive drives over volumes--especially if the drives will all be stored on the same provider space. There just isn't really any advantage to doing so.
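     To put a number on the 60TB point, a quick sketch (plain arithmetic, nothing CloudDrive-specific; the 256TB drive size is just an example):

        import math

        def volumes_needed(total_tb: float, per_volume_limit_tb: float = 60) -> int:
            # How many volumes a drive needs so each stays under the chkdsk/VSS limit.
            return math.ceil(total_tb / per_volume_limit_tb)

        print(volumes_needed(256))  # a 256TB drive -> 5 volumes of ~51.2TB each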
  11. Change cluster size

    You cannot make any changes to the cluster size server-side. Changing the cluster size modifies the drive structure within the chunk files that are stored on your provider, so all of the data would have to be downloaded and reuploaded. You CAN, however, choose the cluster size per volume (partition), so if you leave your existing data in place, you can create a new volume with a different cluster size alongside it.
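    One reason the cluster size of a new volume matters, for anyone wondering: NTFS has historically been limited to roughly 2^32 clusters per volume, so the cluster size caps how large that volume can grow. A back-of-the-envelope sketch--the 2^32 figure is the commonly cited historical limit and may differ on newer Windows builds, so treat it as a rough guide rather than a spec:

        MAX_CLUSTERS = 2 ** 32  # approximate historical NTFS cluster-count limit

        def max_volume_size_tb(cluster_size_kb: int) -> float:
            # Largest volume (in TiB) that a given cluster size can address.
            return cluster_size_kb * 1024 * MAX_CLUSTERS / 1024 ** 4

        for kb in (4, 16, 64):
            print(f"{kb}KB clusters -> ~{max_volume_size_tb(kb):.0f}TB max volume")
        # 4KB -> ~16TB, 16KB -> ~64TB, 64KB -> ~256TB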
  12. Several people have confirmed that the "Enterprise Standard Plan" is $20 a month and retains unlimited storage even with a single user. So this isn't likely to translate to much more than a price increase. See, for an example: https://old.reddit.com/r/DataHoarder/comments/j61wcg/g_suite_becomes_google_workspace_12month/g7wfre3/
  13. There should be no reason to change any advanced settings, and I would discourage people from doing so unless they specifically need to. The per-folder limitation issue should be resolved by using a beta newer than .1314 with the default settings.
  14. Note: You guys should not have to do anything specific. The advanced options setting is to force CloudDrive to convert the entire drive--including existing data--to a hierarchical format. That should not be necessary to fix your problem, and will take many hours (often several days) for a large drive. Any release later than .1314 should automatically use a hierarchy for any new data added to the drive, and move chunks to the hierarchical structure when and if they are modified. If you are still using a .12XX release, you will have to upgrade to a beta. You can find beta releases here: http://dl.covecube.com/CloudDriveWindows/beta/download/ Using any beta newer than .1314 should immediately fix the per-folder limit issue. If it does not, I would definitely open a ticket before doing anything else: https://stablebit.com/Contact
  15. I don't have any info on this other than to say that I am not experiencing these issues, and that I haven't experienced any blue screens related to those settings. That user isn't suggesting a rollback; they're suggesting that you edit the advanced settings to force your drive to convert to the newer hierarchical format. I should also note that I do not work for Covecube--so aside from a lot of technical experience with the product, I'm probably not the person to consult about new issues. I think we might need to wait on Christopher here. My understanding, though, was that those errors were fixed with release .1314. That release presumes that existing data is fine as-is, and begins using a hierarchical structure for any NEW data that you add to the drive. That should solve the problem. So make sure that you're on .1314 or later for sure. Relevant changelog:

      .1314
      * [Issue #28415] Created a new chunk organization for Google Drive called Hierarchical Pure.
        - All new drives will be Hierarchical Pure.
        - Flat upgraded drives will be Hierarchical, which is now a hybrid Flat / Hierarchical mode.
        - Upgrading from Flat -> Hierarchical is very fast and involves no file moves.
      * Tweaked Web UI object synchronization throttling rules.

      .1312
      * Added the drive status bar to the Web UI.

      .1310
      * Tuned statistics reporting intervals to enable additional statistics in the StableBit Cloud.

      .1307
      * Added detailed logging to the Google Drive migration process that is enabled by default.
      * Redesigned the Google Drive migration process to be quicker in most cases:
        - For drives that have not run into the 500,000 files per folder limit, the upgrade will be nearly instantaneous.
        - Is able to resume from where the old migration left off.
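      To picture what "hierarchical" buys you here: with a flat layout every chunk lives in one provider folder, which eventually runs into Google's 500,000-items-per-folder limit, while a hierarchy spreads the chunks across nested subfolders so no single folder ever gets that large. This is purely illustrative--the fan-out and path scheme below are made up for the sketch, not CloudDrive's actual internal layout:

        FANOUT = 1_000  # hypothetical chunks per subfolder, well under Google's 500,000-item limit

        def chunk_path(chunk_index: int) -> str:
            # Map a chunk number to a nested folder path (illustrative scheme only).
            top = chunk_index // (FANOUT * FANOUT)
            mid = (chunk_index // FANOUT) % FANOUT
            return f"chunks/{top:03d}/{mid:03d}/chunk-{chunk_index}"

        print(chunk_path(0))          # chunks/000/000/chunk-0
        print(chunk_path(1_234_567))  # chunks/001/234/chunk-1234567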
  16. Note that deleting files off of the drive and deleting the drive itself are not the same thing. Simply removing files off of the file system of the drive will not remove the drive data from your provider, because a drive of X capacity still exists, just like removing files off of your physical drive does not cause it to physically shrink. If you want to remove the storage files located on the provider then you have to either shrink the drive or destroy it. Also note that, with CloudDrive, the "files" stored on the provider are not representative of the files stored on the drive in the first place. They are chunks of data that comprise the drive structure that CloudDrive uses to store information. Once they exist on the provider, there is no need to delete them unless the drive structure itself changes. CloudDrive will simply overwrite and modify them as data is added and removed.
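      A simplified picture of that last point: each "file" on the provider is a fixed-size chunk of the drive, and a read or write at some byte offset just touches whichever chunk covers that offset, so chunks are overwritten in place rather than deleted and recreated. The 20MB chunk size below is only an example value, not necessarily what your drive uses:

        CHUNK_SIZE = 20 * 1024 * 1024  # example chunk size in bytes

        def chunk_for_offset(byte_offset: int) -> tuple:
            # Return (chunk index, offset within that chunk) for a drive byte offset.
            return byte_offset // CHUNK_SIZE, byte_offset % CHUNK_SIZE

        # A write at the 1GiB mark of the drive modifies chunk 51 in place:
        print(chunk_for_offset(1 * 1024 ** 3))  # (51, 4194304)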
  17. At least there is a silver lining. That sucks that CenturyLink's service and hardware is so locked down and convoluted, though. I use a wired router and separate wireless access points, personally, and that would be a nightmare for my setup. Hope it works out.
  18. Generally speaking, even if other router functions are locked down by the ISP on their hardware, consumers typically have access to change the router to bridge mode themselves. Are you sure that you don't have access to that via the web configuration for the router? Is the fiber gateway also integrated into the router, or can you just replace it with your own and bypass their hardware entirely?
  19. There really isn't anything unique about CloudDrive's network usage that should impact the router that you use over and above any other high-use network application. So any general router recommendations should be fine. I personally rely on Small Net Builder for reviews and ratings for home network hardware. Might be worth a look for you.
  20. There will definitely be a performance hit, but it won't necessarily be prohibitive. The extent to which it matters will depend largely on your specific use case, and the issues revolve predominantly around the same areas where hard drives perform more poorly than their solid state cousins in general. For example: if you have a drive where you are constantly writing and reading simultaneously, that SSD is going to make a profound difference. If you have a drive where you're predominantly just reading contiguous larger files, the SSD shouldn't have as big an impact. As a general rule, just imagine that your CloudDrive *is* whatever sort of drive you use for the cache, and the limitations will be similar.
  21. Writes to your CloudDrive drive will be throttled when there is less than (I believe) 5GB remaining on the cache drive. This is normal and intended functionality. Just get used to throttled writes until all of the data is duplicated. Your local write speed is obviously faster than your upload. Completely expected. You can install whatever you want to a pool.
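     If you want to see how close your cache drive is to the point where CloudDrive starts throttling, the condition can be checked with nothing more than the standard library. The 5GB threshold is the approximate figure from memory above (CloudDrive's, not something you set here), and the drive letter is just an example:

        import shutil

        def cache_drive_is_tight(path: str = "D:\\", threshold_gb: float = 5) -> bool:
            # True when free space on the cache drive is below the throttling threshold.
            free_gb = shutil.disk_usage(path).free / 1024 ** 3
            return free_gb < threshold_gb

        if cache_drive_is_tight("D:\\"):
            print("Cache drive nearly full: expect throttled writes until uploads catch up")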
  22. A weird question

    If I am reading this correctly, you want to be able to upload the data to your cloud storage remotely, and then access it later locally. You could do this with CloudDrive using, as mentioned above, a (Windows) VPS to upload the data, and then detach the CloudDrive drive from the VPS and reattach it locally. But this is tedious. Since CloudDrive only allows your drive to be mounted on a single machine at any given time, it really isn't the best tool for what you're trying to accomplish. A tool like rClone, which simply accesses Google Drive's native file structure, would be of much more use, I think. You could use any cheap VPS (and several other services, for that matter) to add files to your drive, and then use rClone to access them locally as a folder. I suspect that your question is, overall, based on a misconception of how CloudDrive works. It will not access the files that exist on your Google Drive space--only its own storage files that are contained on your Drive.
  23. Now that I read this again: make sure that duplication is enabled on the master pool, not the local subpool. Duplication enabled on THAT pool will only duplicate to drives contained within that pool.
  24. I'm not 100% sure what the issue is, as I think it could be multiple things based on this information. One common mistake is to not enable duplication on both pools. Both pools need to be allowed to accept duplicated data, while only the local pool should be allowed to accept unduplicated data. If only one pool is allowed to contain duplicates, there will be no pool to duplicate to. So maybe check that setting in your balancers. Balancer logic can get pretty complicated, so I may not be able to be as helpful as I'd like with respect to your specific aims and configuration. But it should be relatively simple: you need two pools (local and cloud), both configured to accept duplicated data, the local pool configured to accept your new (unduplicated) data, and 2x duplication enabled on the master pool in order to duplicate between them.
  25. Yeah, sure thing. The only other thing that I would point out is that chkdsk has a hard limit of 60TB per volume. So you'll probably not want to exceed that limit. A single CloudDrive drive can, however, be partitioned into multiple volumes if you need more space, and those volumes can be recombined using DrivePool to create a unified filesystem that can still be error checked and repaired by chkdsk.