Posts posted by srcrist

1. Once you change the API key in the advanced settings, you will need to reauthorize the drive (or detach it and reattach it). It will not switch to the new key until it recreates the API authorization. Did you do that?

    Also, how many upload and download threads are you using? 11 download threads likely generates enough requests per second to hit Google's API limit, and that is probably why you're being throttled--especially since it looks like you're also allowing around 10 simultaneous upload threads. In my experience, anything more than 10-15 threads total, across both directions, will lead to exponential backoff and API throttling on Google's part. Try something like 8-10 download threads and 3-5 upload threads and see if the throttling is reduced or disappears.
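    Google handles sustained over-limit traffic by rejecting calls until the client backs off. A minimal sketch of the exponential-backoff pattern Google's APIs expect clients to use (`RateLimitError` and `call_with_backoff` are hypothetical names for illustration, not CloudDrive's actual retry code):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for Google's 'User Rate Limit Exceeded' response."""

def call_with_backoff(request, max_retries=5, base_delay=1.0):
    """Retry a throttled call, roughly doubling the wait each attempt."""
    for attempt in range(max_retries):
        try:
            return request()
        except RateLimitError:
            # Wait 1s, 2s, 4s, ... plus jitter so parallel threads
            # don't all retry in lockstep and re-trigger the limit.
            time.sleep(base_delay * 2 ** attempt + random.random() * base_delay)
    raise RuntimeError(f"still rate limited after {max_retries} retries")
```

    With 20+ threads each doing this independently, the waits compound quickly--which is why fewer threads can yield better effective throughput.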

  2. 4 minutes ago, TMGPhilip said:

    i just had to wait a few hours and it cleared itself. Is there any clear documentation on using our own api key? 

    Yessir. This article on the wiki details the process: https://wiki.covecube.com/StableBit_CloudDrive_Q7941147

    Note both that the process on Google's end is a little outdated (though you can still probably navigate it from that article), and that switching to a personal key isn't really necessary and carries new limitations relative to StableBit's developer keys (for example, people using their personal keys were already hitting these per-folder limitations back in June, while StableBit's key has only had this limitation enforced this month). So, caveat emptor.

  3. 2 hours ago, TMGPhilip said:

    I am also having the same issue after the upgrade. 

    User Rate Limit Exceeded is an API congestion error from Google. If you're getting that error and you're using your own key, you're hitting the usage limits on your key. If you're using StableBit's default key, then they are hitting their limits--which is possible, especially if many people are making a lot of API calls at once to convert drives. That's just speculation, though.

    If it's StableBit's key, you'll want to open a ticket and let them know so they can request an increase to their limit. You can also try reauthorizing the drive, as StableBit has several API keys already--or switching to your own.

  4. On 10/18/2020 at 5:55 PM, Krisp said:

    Is there any work pending on fixing GDrive issue? Any estimate?

    The fix for these issues is several months old. It was fixed back in July.

    On 10/18/2020 at 5:55 PM, Krisp said:

    I'm just not ready to risk a beta version of the software and lose TBs of data again as has happened in the past.


    .1318 is the latest stable release on the site, and it has the fix (which was in .1314). You don't need to use a beta version, if you don't want.

  5. It looks like Google's Enterprise Standard plan still has unlimited for even a single user for $20 a month. So this change appears to be little more than a price increase for people who were previously using their unlimited business plan. That remains, to my knowledge, the only provider comparable to Google's own previous plan.

6. CloudDrive itself operates at a block level, so it isn't aware of which files in the file system are being written to or read by the applications on your computer--much like the firmware for an HDD or SSD is unaware of that information. So, in short, it isn't possible via CloudDrive. You'd need Windows Resource Monitor, Process Explorer, or another tool that inspects Windows' file system usage--as it sounds like you discovered.
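    To illustrate why the block layer can't name files: a block driver only ever sees byte offsets and maps them onto storage chunks, with no notion of which file a given offset belongs to. A toy sketch (the 20 MB chunk size is an assumption for illustration; CloudDrive's actual chunk sizes depend on drive configuration):

```python
def chunk_for_offset(offset_bytes, chunk_size=20 * 1024 * 1024):
    """Return (chunk index, offset within chunk) for a byte offset on the drive.

    This is all the information a block-level driver has to work with:
    offsets and chunks, never file names.
    """
    return offset_bytes // chunk_size, offset_bytes % chunk_size
```

    The file-to-offset mapping lives one layer up, in the NTFS/ReFS file system--which is why a file-system-level tool is needed to answer "which file is being read?"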

  7. 9 hours ago, CMBC said:

    All my ReFS drive are now RAW, lost everything....


    6 hours ago, havok said:

    I did the following but i keep getting drive cannot be mounted please help.

    Neither of these errors/issues are likely related to the topic of this thread.

    If the drive is RAW, the file system is likely corrupted. The change this thread references only restructures the way the data is stored on the provider; it doesn't change any of the data itself. If it were corrupting data on the provider, we'd be seeing that manifest in other ways for other users--and that isn't the case. I would suggest opening a ticket with support here: https://stablebit.com/Contact in order to troubleshoot how your file system was actually corrupted. Note that even Windows Updates have corrupted ReFS volumes on a regular basis. It just isn't a stable file system, and that's why the functionality has been rolled back in Windows 10 for now. So if you blinked or breathed wrong, that might be the culprit.

    And a User Rate Limit Exceeded error is an API limit error. It means the API key you're using is hitting the limit of calls per time period, and Google won't allow it to access the API any more. Are you using the default API key, or your own?



  8. 12 hours ago, Niloc said:

    For the step where you're moving the data from the hidden PoolPart folder on your E: drive, are you moving the data to the new D: drive pool, or to the F: drive you created from the new 55TB of space? I have a slow upload speed, so I really don't want to wait months to reupload 7TB of data.

    And what is the purpose of creating the F: drive? Why create a new drive rather than just expand the size of the E: drive and add it to the D: pool all on its own? Is there an advantage to having 2 partitions rather than just 1?

    If you're following the instructions correctly, you are simply reshuffling data around on the same drives. They are file system level changes, and will not require any data to be reuploaded. It should complete in a matter of seconds.

    And the purpose of the multiple partitions is covered above: 1) to keep each volume below the 60TB limit for VSS and Chkdsk, and, 2) to avoid multiple caches thrashing the same drive and to make bandwidth management easier. If you have multiple cache drives then you can, of course, use multiple CloudDrive drives instead of volumes on the same drive, but make sure that you can actually support that overhead. I'd suggest an SSD cache drive per CloudDrive drive--the cache can be surprisingly resource heavy, depending on your usage. In any case, though, there isn't really a compelling need to use multiple CloudDrive drives over volumes--especially if the drives will all be stored on the same provider space. There just isn't really any advantage to doing so.

  9. 1 hour ago, sfg said:


    I know after creating a partition (ie google drive), it is not possible to change the cluster size of that partition. I'm wondering if it is possible to create a new partition with a larger cluster size, and migrate the data I currently have uploaded to the new partition, server side. I would like to have partitions bigger than 64 TB in windows, but I realized this too late, after I already uploaded almost 50TB.


    You cannot make any changes to the cluster size server-side. Changing the cluster size modifies the drive structure within the chunk files that are stored on your provider. All of the data would have to be downloaded and reuploaded. You CAN, however, change the cluster size per-volume (partition), so if you can just leave your existing data in place, you can create a new volume with a different cluster size.

  10. 3 hours ago, otravers said:

    I just got the email introducing the new plans, I'm guessing that we have a few months left before they force the transition, a year at best for those people who just signed up to an annual GSuite plan before they were retired. I just checked and it doesn't look like you can convert a flexible (i.e. monthly) plan to an annual Gsuite plan, now it's all about migrating us to Workspace.

    Several people have confirmed that the "Enterprise Standard Plan" is $20 a month and retains the unlimited with even a single user. So this isn't likely to translate to much more than a price increase.

    See, for an example: https://old.reddit.com/r/DataHoarder/comments/j61wcg/g_suite_becomes_google_workspace_12month/g7wfre3/

  11. 3 hours ago, Gijs said:

    After seeing the positive feedback from other people, I downloaded the beta and applied the settings by JoeyD. Can confirm this works! Thanks @JoeyD! :)

    There should be no reason to change any advanced settings, and I would discourage people from doing so unless you specifically need to change those settings for some reason. The per-folder limitation issue should be resolved by using a beta newer than .1314 with the default settings.

  12. Note: You guys should not have to do anything specific. The advanced options setting is to force CloudDrive to convert the entire drive--including existing data--to a hierarchical format. That should not be necessary to fix your problem, and will take many hours (often several days) for a large drive. Any release later than .1314 should automatically use a hierarchy for any new data added to the drive, and move chunks to the hierarchical structure when and if they are modified. If you were still using a .12XX release, you will have to migrate to a BETA. You can find BETA releases here: http://dl.covecube.com/CloudDriveWindows/beta/download/

    Using any beta newer than .1314 should immediately fix the per-folder limit issue. If it does not, I would definitely open a ticket before doing anything else: https://stablebit.com/Contact
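    As a rough illustration of why a hierarchical layout sidesteps Google's per-folder limit: chunks can be fanned out across subfolders by index, so no single folder ever approaches the 500,000-file ceiling. This is a hypothetical sketch, not CloudDrive's actual on-provider naming scheme:

```python
def chunk_path(chunk_id, fanout=100_000):
    """Map a chunk number to a subfolder so no folder exceeds `fanout` entries.

    A flat layout would put every chunk in one folder and hit Google's
    500,000 files-per-folder limit; fanning out by integer division
    keeps each subfolder bounded. Names here are purely illustrative.
    """
    folder = chunk_id // fanout
    return f"ChunkFolder-{folder}/chunk-{chunk_id}"
```

    This is also why only new and modified chunks need to move: existing flat chunks can stay put while fresh writes land in the hierarchy.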

  13. 20 hours ago, superka said:

    Last time i did a rollback my computer entered in a bluescreen reboot mode all the time.

    @srcrist can we have some info about this? i'm afraid

    I don't have any info on this other than to say that I am not experiencing these issues, and that I haven't experienced any blue screens related to those settings. That user isn't suggesting a rollback, they're suggesting that you edit the advanced settings to force your drive to convert to the newer hierarchical format.

    I should also note that I do not work for Covecube--so aside from a lot of technical experience with the product, I'm probably not the person to consult about new issues. I think we might need to wait on Christopher here. My understanding, though, was that those errors were fixed with release .1314. That fix presumes that existing data is fine as-is, and begins using a hierarchical structure for any NEW data that you add to the drive. That should solve the problem. So make sure that you're on .1314 or later for sure.

    Relevant changelog:

    * [Issue #28415] Created a new chunk organization for Google Drive called Hierarchical Pure.
        - All new drives will be Hierarchical Pure.
        - Flat upgraded drives will be Hierarchical, which is now a hybrid Flat / Hierarchical mode.
        - Upgrading from Flat -> Hierarchical is very fast and involves no file moves.
    * Tweaked Web UI object synchronization throttling rules.
    * Added the drive status bar to the Web UI.
    * Tuned statistics reporting intervals to enable additional statistics in the StableBit Cloud.
    * Added detailed logging to the Google Drive migration process that is enabled by default.
    * Redesigned the Google Drive migration process to be quicker in most cases:
        - For drives that have not run into the 500,000 files per folder limit, the upgrade will be nearly instantaneous.
        - Is able to resume from where the old migration left off.


  14. On 9/19/2020 at 6:16 PM, Salt said:

    I downloaded CloudDrive to demo it and see how it performs... I made a drive on my local disk, added ~10GB of files, waited for it to "upload", and then when I was done, I deleted the same files out of the cloud drive.  I waited several days and both the CloudDrive cache and the actual folder on disk were still 10GB.  Does deleting files work at all?  Is there an extra step I am missing?  My concern is if I were to make a Cloud Drive on some kind of bucket service and end up paying for storage that I'm not using because I thought the files were deleted... and you can't delete things manually when everything is chunked and encrypted.

    Note that deleting files off of the drive and deleting the drive itself are not the same thing. Simply removing files off of the file system of the drive will not remove the drive data from your provider, because a drive of X capacity still exists, just like removing files off of your physical drive does not cause it to physically shrink. If you want to remove the storage files located on the provider then you have to either shrink the drive or destroy it.

    Also note that, with CloudDrive, the "files" stored on the provider are not representative of the files stored on the drive in the first place. They are chunks of data that comprise the drive structure that CloudDrive uses to store information. Once they exist on the provider, there is no need to delete them unless the drive structure itself changes. CloudDrive will simply overwrite and modify them as data is added and removed. 

  15. 55 minutes ago, LazarusLong said:

    Fortunately there's no contracts with either provider so I can switch over anytime. Plus I'll save 10 bucks/month lol. 

    At least there is a silver lining. That sucks that CenturyLink's service and hardware is so locked down and convoluted, though. I use a wired router and separate wireless access points, personally, and that would be a nightmare for my setup. Hope it works out.

  16. Generally speaking, even if other router functions are locked down by the ISP on their hardware, consumers typically have access to change the router to bridge mode themselves. Are you sure that you don't have access to that via the web configuration for the router? Is the fiber gateway also integrated into the router, or can you just replace it with your own and bypass their hardware entirely?

  17. There really isn't anything unique about CloudDrive's network usage that should impact the router that you use over and above any other high-use network application. So any general router recommendations should be fine. I personally rely on Small Net Builder for reviews and ratings for home network hardware. Might be worth a look for you.

18. There will definitely be a performance hit, but it won't necessarily be prohibitive. The extent to which it matters will depend largely on your specific use case, and the issues revolve predominantly around the same areas where hard drives perform more poorly than their solid state cousins in general. For example: if you have a drive wherein you are constantly writing and reading simultaneously--that SSD is going to make a profound difference. If you have a drive where you're predominantly just reading larger contiguous files, the SSD shouldn't have as big an impact.

    As a general rule, just imagine that your CloudDrive *is* whatever sort of drive you use for the cache, and the limitations will be similar.

  19. 4 hours ago, ryan74 said:

    My cache drive is 100GB SSD and it's full

    Writes to your CloudDrive drive will be throttled when there is less than (I believe) 5GB remaining on the cache drive. This is normal and intended functionality. Just get used to throttled writes until all of the data is duplicated. Your local write speed is obviously faster than your upload. Completely expected.
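    The cache throttle described above amounts to a simple free-space check on the cache drive. A hedged sketch (the 5 GB threshold is my recollection, not a documented constant):

```python
def should_throttle_writes(free_bytes, threshold=5 * 1024**3):
    """Return True when cache free space is low enough to throttle new writes.

    Illustrative only: CloudDrive throttles incoming writes when the cache
    drive drops below roughly 5 GB free, so uploads can catch up before
    the cache fills completely.
    """
    return free_bytes < threshold
```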

    You can install whatever you want to a pool.

  20. On 8/22/2020 at 3:16 AM, Rpscv said:

    Hello guys, I have a question that you may be able to help me.

    The idea of Stablebit is amazing, I love it, my problem is that my upload speed is terrible.

    So I was thinking, it would be possible to pay for a service or program that downloads the file I want (a torrent or a direct link) and uploads it to my google drive? 

    Like a seedbox but for uploading the file directly so it doesn't have to use my upload speed.

    If I am reading this correctly, you want to be able to upload the data to your cloud storage remotely, and then access it later locally.

    You could do this with CloudDrive using, as mentioned above, a (Windows) VPS to upload the data, and then detach the CloudDrive drive from the VPS and reattach it locally. But this is tedious.

    Since CloudDrive only allows your drive to be mounted from a single machine at any given time, it really isn't the best tool for what you're trying to accomplish. A tool like rClone which simply accesses Google Drive's native file structure would be of much more use, I think. You could use any cheap VPS (and several other services, for that matter) to add files to your drive, and then use rClone to access them locally as a folder.

    I suspect that your question is, overall, based on a misconception of how CloudDrive works. It will not access the files that exist on your Google Drive space--only its own storage files that are contained on your Drive.

  21. 3 hours ago, ryan74 said:

    I selected Folder Duplication on MainDrivePool (P:) and choose the Folders I want duplicated.

    Now that i read this again: make sure that your duplication is enabled on the master pool. Not the local subpool. Duplication enabled on THAT pool will only duplicate to drives contained within that pool.

  22. I'm not 100% sure what the issue is, as I think it could be multiple things based on this information.

    One common mistake is to not enable duplication on both pools. Both pools need to be allowed to accept duplicated data, while only the local pool should be allowed to accept unduplicated data. If only one pool is allowed to contain duplicates, there will be no pool to duplicate to. So maybe check that setting in your balancers.

    Balancer logic can get pretty complicated, so I may not be able to be as helpful as I'd like with respect to your specific aims and configuration. But it should be relatively simple. You need two pools (local and cloud), both configured to accept duplicated data, the local pool configured to accept your new (unduplicated) data, and 2x duplicated enabled on the master pool in order to duplicate between them.

  23. 4 hours ago, ryan74 said:

    Thank you for your reply. @srcrist

    Yeah, sure thing. The only other thing that I would point out is that chkdsk has a hard limit of 60TB per volume. So you'll probably not want to exceed that limit. A single CloudDrive drive can, however, be partitioned into multiple volumes if you need more space, and those volumes can be recombined using DrivePool to create a unified filesystem that can still be error checked and repaired by chkdsk.
