Everything posted by srcrist

  1. On your H: and your V: you will have a PoolPart folder, if they have been added to the pool. Put everything ELSE on those drives into that PoolPart folder, including the CloudDrive folder, if that's where your content is. Then remeasure and the content will be accessible from your G:.
  2. All of the content on each drive that you want to access from the pool should be moved to the hidden PoolPart folder on that respective drive.
  3. This is backwards. You don't want to move the hidden directory; you want to move everything else TO the hidden directory. The contents of the hidden directory are the contents of your pool (see the first sketch after this list).
  4. Make sure that you both submit tickets to support here: https://stablebit.com/Contact
  5. Just including a quote in case you have email notification set up. Forgot to quote you in the previous post.
  6. Hmm... not sure. The good news is that if drive corruption were the issue, you should see the checksum errors in the log and the notification drop-down. I don't see any pending notifications in your UI, unless you cleared them. So it's probably not corruption. What does the cache look like for this drive? What sort of drive is it on, how large, and what type of cache? And, to be clear: you are showing the "default I/O settings" in your screenshot. Those apply when you create a new drive. Is your existing, mounted drive actually using those settings as well? Verify within the "Manage Drive" …
  7. Not to my knowledge, no. CloudDrive is just a tool to store a pseudo-native file system on another sort of storage protocol. CloudDrive takes non-native file storage like FTP, SMB, or a cloud storage host, and provides a native (NTFS or ReFS) file system for Windows. It's a middleman for the storage API/protocol so that Windows can access that storage as if it were a local drive. But the side effect is that the data must be accessed via the CloudDrive application. It is not intended to be a front-end for the non-native API. Tools like rClone or Netdrive are. Unless some future version of the …
  8. To be clear: encryption is (and has always been) optional, even on CloudDrive volumes hosted on cloud providers. You do not have to enable encryption for volumes stored on cloud storage or local storage. But what I suspect you are actually talking about is wanting the data on the provider to be accessible in the native, unobfuscated file format your applications use when encryption is disabled -- and that simply isn't possible with CloudDrive. It just isn't how this particular software works. As mentioned in the above post from June, there ARE tools you can use to basically access …
  9. If you have the thread sorted by votes, that post from Christopher is actually from Oct 12th. It's not the first post in the thread chronologically; that one was made back in June. This is correct, though, and something I had not noticed before. Also, the ReFS fix from beta .1373 and the ReFS fix listed in the changelog for stable .1318 are for different issues (28436 vs 28469). No versions between 1.1.2.X and 1.2.0.X are represented in the betas at all, but, for example, the Google Drive API keys that were added in BETA .1248 are represented …
  10. EDIT: THIS POST APPEARS TO BE INCORRECT. Both. We were both talking about different things. I was referring to the fix for the per-folder limitations; Christopher is talking about the fix for whatever issue was causing drives not to update, I believe. Though it is a little ambiguous. The fixes for the per-folder limitation issue were, in any case, in the .1314 build, as noted in the changelog: http://dl.covecube.com/CloudDriveWindows/beta/download/changes.txt EDIT: Christopher might also have been talking about the ReFS issue, which looks like it was fixed in .1373. …
  11. Yeah, I mean, I still wouldn't worry too much until you talk with Christopher and Alex via the contact form. The truth is that CloudDrive would look relatively scary to an engine that is looking for spooky behavior and isn't familiar with its specific signature. It has kernel-mode components, interacts with services, hooks network adapters, accesses the CPU clock, and does things to Windows' I/O subsystem based on network input. Take a second and think about how that must look to an algorithm looking for things that are harming your PC via the internet. By all means, exercise …
  12. You'd have to ask via the contact form to get some sort of confirmation: https://stablebit.com/Contact Though this is almost certainly just a false positive. They're not uncommon. I only see two detections on my version, in any case (https://www.virustotal.com/gui/file/1c12d59c11f5d362ed34d16a170646b1a5315d431634235da0a32befa5c5ec5c/detection). So Tell's rising number of detections may be indicative of another (scarier) problem. Or just overzealous engines throwing alarms about kernel-mode software.
  13. Open a ticket. You should be able to force the migration process, but they also need to know when it isn't working to troubleshoot a longer-term fix.
  14. I actually see multiple problems here, related to prompt uploads and data rate. The first is that, yes, I'm not sure that your prefetcher settings make much sense for what you've described your use case to be, and the second is that you've actually configured a delay on uploads, contrary to your objective of uploading data as fast as possible. But, maybe most importantly, you've also disabled the minimum download size, which is the most important setting for increasing throughput (see the throughput model after this list). So let's tackle that delay first. Your "upload threshold" setting should be lower. That setting says …
  15. 10MB chunks may also impact your maximum throughput. The more chunks your data is stored in, the more overhead those requests add to the data rate when you are accessing more than one chunk's worth of data (see the request-count sketch after this list). The API limits are for your account, not the API key, so they will all share your allotment of API calls per user. It does not matter that they use different API keys to access your account. That may be, but note that Google Drive is not an enterprise-grade cloud storage service, regardless of whether or not it's a business account. Google does offer such a service (Google …
  16. Is this a legacy drive created on the beta? I believe support for chunk sizes larger than 20MB was removed from the Google Drive provider in .450, unless that change was reverted and I am unaware. 20MB should be the maximum for any Google Drive drive created since. 50mbps might be a little slow if you are, in fact, using 100MB chunks--but you should double-check that. Gotcha. Yeah, if you set additional limitations on the API key on Google's end, then you'd have to create a key without those restrictions. And CloudDrive is the only application accessing that Google Drive space?
  17. Sometimes the threads-in-use count lags behind the actual total. As long as you aren't getting throttled anymore, you're fine. Netdrive and rClone are not appropriate comparisons. They are uploading/downloading several gigabytes of contiguous data at a time, and the throughput of that sort of data stream will always measure larger. CloudDrive pulls smaller chunks of data as needed, and no data at all if it's using data that is stored in the local cache. You're not going to get a measured 50MB/s per thread with CloudDrive from Google. It just isn't going to happen. 50mbps for a single thread …
  18. I mean, the short answer is that you are probably not using your API key. But using your own key also isn't the solution to your real problem, so I'm just sorta setting it aside. It's likely an issue with the way you've configured the key. If you don't see whatever custom name you created in the Google API dashboard when you go to authorize your drive, then you're still using the default StableBit keys. But figuring out exactly where the mistake is might require you to open a ticket with support. Again, though, the standard API keys really do not need to be changed. They are not, in any case, causing …
  19. Any setting that is only available by editing the JSON is considered "advanced." See: https://wiki.covecube.com/StableBit_CloudDrive_Advanced_Settings That is more, in one drive, than Google's entire API limits will allow per account. You'll have to make adjustments. I'm not 100% sure of the exact number, but the limit is somewhere around 10-15 simultaneous API connections at any given time. If you exceed that, Google will start requesting exponential back-off and CloudDrive will comply (see the back-off sketch after this list). This will significantly impact performance, and you will see the throttling errors that you are seeing …
  20. Once you change the API key in the advanced settings, you will need to reauthorize the drive (or detach and reattach it). It will not switch to the new key until it recreates the API authentication. Did you do that? Also, how many upload and download threads are you using? 11 download threads is likely hitting Google's API limit, and that is probably why you're getting throttled -- especially since it looks like you're also allowing around 10 upload threads at the same time. In my experience, any more than 10-15 threads, accounting for both directions, will lead to …
  21. Yessir. This article on the wiki details the process: https://wiki.covecube.com/StableBit_CloudDrive_Q7941147 Note both that the process on Google's end is a little outdated (though you can still probably navigate the process from that article), and that switching to a personal key isn't really necessary and carries new limitations relative to Stablebit's developer keys (for example, people using their personal keys were already getting these per-folder limitations back in June, while Stablebit's key has just had this limitation enforced this month). So, caveat emptor.
  22. If you're still having issues converting a drive with any release newer than .1314, you should absolutely open a ticket.
  23. "User Rate Limit Exceeded" is an API congestion error from Google. If you're being given that error and you're using your own key, you're hitting the usage limits on your key. If you're using StableBit's default key, then they are hitting their limits -- which is possible, especially if many people are making a lot of API calls at once to convert drives. That's just speculation, though. If it's StableBit's key, you'll want to open a ticket and let them know so they can request a raise on their limit. You can also try reauthorizing the drive, as StableBit has several API keys already …
  24. The fix for these issues is several months old. It was fixed back in July. EDIT: THIS IS INCORRECT. .1318 is the latest stable release on the site, and it has the fix (which was in .1314). You don't need to use a beta version if you don't want to.
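
A few of the posts above walk through steps that are easier to see spelled out. First, the PoolPart move from items 1-3: a minimal sketch, assuming a pooled drive at H: and the hidden PoolPart.* folder that DrivePool creates at the drive root. The drive letter and the excluded system folders are assumptions, so double-check your own drive before running anything like this.

```python
# Hedged illustration: move existing content on a pooled drive into its
# hidden PoolPart folder so DrivePool can see it after a remeasure.
import shutil
from pathlib import Path

drive_root = Path("H:/")  # assumption: one of the pooled drives from item 1
# DrivePool creates one hidden PoolPart.* folder at the root of each pooled drive.
poolpart = next(drive_root.glob("PoolPart.*"))

for entry in list(drive_root.iterdir()):
    if entry == poolpart:
        continue  # never move the pool folder into itself
    if entry.name in ("System Volume Information", "$RECYCLE.BIN"):
        continue  # leave Windows system folders at the drive root
    shutil.move(str(entry), str(poolpart / entry.name))
# Then remeasure the pool from the DrivePool UI, as item 1 says.
```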
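The minimum download point in item 14, as a toy model: every request pays a fixed round trip, so larger reads amortize that latency over more data. The latency and line-rate numbers here are made up for illustration; they are not CloudDrive internals.

```python
# Toy model: measured throughput for a single request of read_mb megabytes,
# given a fixed per-request latency and a raw line rate.
def effective_mbps(read_mb: float, latency_s: float = 0.25,
                   line_mbps: float = 400.0) -> float:
    transfer_s = (read_mb * 8) / line_mbps  # seconds spent on the wire
    return (read_mb * 8) / (latency_s + transfer_s)

for size_mb in (1, 10, 20, 50):
    print(f"{size_mb:>3} MB reads -> ~{effective_mbps(size_mb):.0f} mbps")
# Small reads spend most of their time waiting on latency; larger minimum
# downloads get closer to the line rate.
```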
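The chunk-size overhead in item 15, as a request-count sketch: the same read costs more API calls when the data is stored in smaller chunks. The read size is arbitrary, and nothing here is one of Google's published quota figures.

```python
# Back-of-the-envelope: smaller chunks mean more API calls for the same data.
import math

def api_calls_needed(data_mb: float, chunk_mb: float) -> int:
    """Requests required to read data_mb when stored in chunk_mb chunks."""
    return math.ceil(data_mb / chunk_mb)

for chunk_mb in (10, 20):
    print(f"1000 MB read in {chunk_mb} MB chunks: "
          f"{api_calls_needed(1000, chunk_mb)} requests")
```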
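And the throttling from items 19, 20, and 23: Google asks clients to back off exponentially when they exceed their rate limits, and CloudDrive complies internally. A sketch of that general pattern, with a hypothetical request() callable standing in for a real Drive API call; this is purely illustrative, not CloudDrive's implementation.

```python
# Illustrative exponential back-off in the style Google's APIs request.
# The `request` callable is hypothetical; a real Drive API call returns a
# 403 ("User Rate Limit Exceeded") when you are being throttled.
import random
import time

def call_with_backoff(request, max_retries: int = 6):
    for attempt in range(max_retries):
        response = request()
        if response.status_code != 403:  # not rate-limited
            return response
        # Wait 2^attempt seconds plus jitter before retrying.
        time.sleep(2 ** attempt + random.random())
    raise RuntimeError("still throttled after maximum retries")
```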