Covecube Inc.




Everything posted by srcrist

  1. I thought it was relatively clear that this point had already been reached, frankly.
  2. I am not going to sit here and have some argument with you about this. If you think your data is made safer by rclone, use rclone and let the issue be. I really don't care about your data, and this matters vastly more to you than it does to me. I'm sorry that you don't understand the actual point. Nobody is forcing you to either use CloudDrive or come here and ask for explanations that you don't actually want. You have a good day.
  3. A lot of that is simply repeating what I wrote. But nothing that I wrote is nonsense, and you ignore the inherent volatility issues of cloud storage at your peril. It is obviously true that in this one particular case and for a specific use case rclone did not have the same issues. But hand-waving away data loss simply because it's in the form of stale but complete files reveals that you are a bit myopic about different use scenarios--which is weird, because it's something that you are also acknowledging when you say "Saying Rclone data is stagnant is nonsense too, it entirely depends on
  4. Upload verification will not prevent provider-side data corruption of any form (absolutely nothing will), but that wasn't the concern you mentioned. It WILL prevent the write hole issue that you described, as CloudDrive will not regard the data as "written" until it can verify the integrity of the data on the provider by retrieving it. Either way, though, I would discourage you from looking at rClone as "safer" in any way. It is not. No cloud storage is a safe backup mechanism for data in general, and all cloud storage should be treated as inherently and equally volatile and only use
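To make the verify-after-write idea concrete, here is a minimal sketch. The `upload` and `download` callables are hypothetical stand-ins, not CloudDrive's actual API: the point is only that a chunk doesn't count as "written" until the provider echoes back bytes whose hash matches what was sent.

```python
import hashlib

def upload_with_verification(chunk, upload, download):
    """Sketch of a verify-after-write cycle (hypothetical callables):
    a chunk only counts as 'written' once the provider returns bytes
    whose hash matches what we sent."""
    local_hash = hashlib.sha256(chunk).hexdigest()
    upload(chunk)            # push the chunk to the provider
    echoed = download()      # read the same chunk back
    return hashlib.sha256(echoed).hexdigest() == local_hash

# Usage with an in-memory stand-in for a provider:
store = {}
ok = upload_with_verification(
    b"example data",
    upload=lambda c: store.__setitem__("chunk", c),
    download=lambda: store["chunk"],
)
print(ok)  # True: the provider echoed back exactly what was sent
```

Note this only closes the write hole; if the provider silently swaps the chunk for stale data later, no client-side verification at write time can catch that.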
  5. Nobody can really tell you what happened to prompt that cautionary warning about Google Drive other than Google themselves. What we know is that in March of 2019 Google had a major outage that affected multiple services including Drive and Gmail. After the service disruption was resolved, Drive started returning what appeared to be stale, outdated chunks rather than up-to-date data, and that corrupted CloudDrive volumes. I think the most widely held presumption is that Google had to do a rollback to a previous backup of some form, and thus replaced some chunks with older versions of themselves--but
  6. On your H: and your V: you will have a poolpart folder, if they have been added to the pool. Put everything ELSE on those drives, like the CloudDrive folder, if that's where your content is, into the poolpart folder. Then remeasure and it will be accessible from your G:.
  7. All of the content, on each drive, that you want to access from the pool, should be moved to the hidden pool folder on that respective drive.
  8. This is backwards. You don't want to move the hidden directory; you want to move everything else TO the hidden directory. The contents of the hidden directory are what is in your pool.
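The move described above can be sketched as a few lines of Python. The PoolPart folder name and paths here are placeholders (every pooled drive gets its own hidden `PoolPart.*` folder with a unique suffix); use the real folder name shown on your drive.

```python
import shutil
from pathlib import Path

def move_into_poolpart(drive_root, poolpart_name, folder_name):
    """Move an existing folder on a pooled drive into that drive's
    hidden PoolPart folder so DrivePool can see its contents.
    All names here are placeholders, not real DrivePool paths."""
    src = Path(drive_root) / folder_name
    dst = Path(drive_root) / poolpart_name / folder_name
    dst.parent.mkdir(exist_ok=True)   # PoolPart folder should already exist on a real pool
    shutil.move(str(src), str(dst))
    return dst

# e.g. move_into_poolpart("H:/", "PoolPart.xxxxxxxx", "CloudDrive"),
# then remeasure the pool from the DrivePool UI.
```

In practice you'd do this with Explorer or robocopy just as well; the code only illustrates that the content moves *into* the hidden folder, not the other way around.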
  9. Make sure that you both submit tickets to support here: https://stablebit.com/Contact
  10. Just including a quote in case you have email notification set up. Forgot to quote you in the previous post.
  11. Hmm...not sure. The good news is that if drive corruption were the issue, you should see the checksum errors in the log and the notification drop-down. I don't see any pending notifications in your UI, unless you cleared them. So it's probably not corruption. What does the cache look like for this drive? What sort of drive is it on, how large, and what type of cache? And, to be clear: You are showing the "default i/o settings" in your screenshot. Those apply when you create a new drive. Is your existing, mounted drive actually using those settings as well? Verify within the "Manage Drive" m
  12. Not to my knowledge, no. CloudDrive is just a tool to store a pseudo-native file system on another sort of storage protocol. CloudDrive takes non-native file storage like FTP, SMB, or a cloud storage host, and provides a native (NTFS or ReFS) file system for Windows. It's a middle man for a storage API/protocol so that Windows can access that storage as if it were a local drive. But the side effect is that the data must be accessed via the CloudDrive application. It is not intended to be a front-end for the non-native API. Tools like rClone or Netdrive are. Unless some future version of th
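The "middle man" idea is easiest to see as block-address arithmetic. This is an illustration only (the chunk size and layout are assumptions, not CloudDrive's actual on-provider format): a read at some byte offset on the virtual drive maps to one numbered chunk object on the provider, plus an offset within it.

```python
# Illustrative only: a fixed chunk size like newer Google Drive drives use.
CHUNK_SIZE = 20 * 1024 * 1024  # 20 MB

def locate(offset):
    """Map a byte offset on the virtual drive to
    (chunk index, offset within that chunk)."""
    return offset // CHUNK_SIZE, offset % CHUNK_SIZE

# Reading at the 50 MB mark touches chunk 2, 10 MB into it:
print(locate(50 * 1024 * 1024))  # (2, 10485760)
```

This is why the data is only meaningful through CloudDrive: the provider just holds opaque numbered chunks, not your files.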
  13. To be clear: encryption is (and has always been) optional even on CloudDrive volumes hosted on cloud providers. You do not have to enable encryption for volumes stored on cloud storage or local storage. But what I suspect that you are actually talking about is that you want the data to be accessible in whatever native, unobfuscated file format your applications are accessing when encryption is disabled, but that simply isn't possible with CloudDrive. It just isn't how this particular software works. As mentioned in the above post from June, there ARE tools you can use to basically access
  14. If you have the thread organized by votes, that post from Christopher is actually from Oct 12th. It's not the first post from the thread chronologically, which was made back in June. You just have your thread sorted by votes. This is correct though, and something I had not noticed before. Also the ReFS fix from beta .1373 and the ReFS fix listed in the stable changelog for stable .1318 are for different issues (28436 vs 28469). No versions between 1.1.2.X and 1.2.0.X are represented in the betas at all, but, for example, the Google Drive API keys which were added in BETA .1248 are repr
  15. EDIT: THIS POST APPEARS TO BE INCORRECT. Both. We were both talking about different things. I was referring to the fix for the per-folder limitations, Christopher is talking about the fix for whatever issue was causing drives not to update, I believe. Though it is a little ambiguous. The fixes for the per-folder limitation issue were, in any case, in the .1314 build, as noted in the changelog: http://dl.covecube.com/CloudDriveWindows/beta/download/changes.txt EDIT: Christopher might also have been talking about the ReFS issue, which looks like it was fixed in .1373. I'm
  16. Yeah, I mean, I still wouldn't worry too much until you talk with Christopher and Alex via the contact form. The truth is that CloudDrive would be relatively scary looking software for an engine that is looking for spoopy behavior and isn't familiar with its specific signature. It has kernel-mode components, interacts with services, hooks network adapters, accesses the cpu clock, and does things to Windows' I/O subsystem based on network input. Take a second and think about how that must look to an algorithm looking for things that are harming your PC via the internet. By all means, exerc
  17. You'd have to ask via the contact form to get some sort of confirmation: https://stablebit.com/Contact Though this is almost certainly just a false positive. They're not uncommon. I only see two detections on my version, in any case (https://www.virustotal.com/gui/file/1c12d59c11f5d362ed34d16a170646b1a5315d431634235da0a32befa5c5ec5c/detection). So Tell's rising number of detections may be indicative of another (scarier) problem. Or just overzealous engines throwing alarms about kernel-mode software.
  18. Open a ticket. You should be able to force the migration process, but they also need to know when it isn't working to troubleshoot a longer-term fix.
  19. I actually see multiple problems here, related to prompt uploads and data rate. The first is that, yes, I'm not sure that your prefetcher settings make much sense for what you've described your use case to be--and the second is that you've actually configured a delay here on uploads--contrary to your actual objective to upload data as fast as possible. But, maybe most importantly, you've also disabled the minimum download, which is the most important setting that increases throughput. So let's tackle that delay first. Your "upload threshold" setting should be lower. That setting says "sta
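As a rough sketch of how a threshold-plus-delay upload setting behaves (the parameter names are illustrative, not CloudDrive's actual configuration keys): uploading starts once enough data is pending OR the pending data has sat long enough, whichever comes first, so a high threshold combined with a long delay effectively postpones uploads.

```python
def should_start_upload(pending_bytes, seconds_idle, threshold_bytes, delay_seconds):
    """Start uploading once enough data is pending OR the data has
    been waiting long enough -- whichever condition is met first.
    Parameter names are illustrative, not real CloudDrive settings."""
    return pending_bytes >= threshold_bytes or seconds_idle >= delay_seconds

# With a 1 MB threshold, a 10 MB write starts uploading immediately:
print(should_start_upload(10_000_000, 0, 1_000_000, 300))  # True
```

Lowering the threshold means writes hit the provider sooner, which is what you want when the goal is getting data uploaded as fast as possible.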
  20. 10MB chunks may also impact your maximum throughput. The more chunks your data is stored in, the more per-request overhead those chunks add whenever you access more than a single chunk's worth of data. The API limits are for your account, not the API key. So they will all share your allotment of API calls per user. It does not matter that they use different API keys to access your account. That may be, but note that Google Drive is not, regardless of whether or not it's a business account, an enterprise grade cloud storage service. Google does offer such a service (Google
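The chunk-overhead point is simple arithmetic. Assuming (as an illustration) one API request per chunk fetched, the same contiguous read costs twice as many requests with 10 MB chunks as with 20 MB chunks:

```python
def requests_needed(read_bytes, chunk_bytes):
    """Worst-case number of chunk fetches to cover a contiguous read,
    assuming the read starts on a chunk boundary and one request per chunk."""
    return -(-read_bytes // chunk_bytes)  # ceiling division

MB = 1024 * 1024
read = 200 * MB
print(requests_needed(read, 10 * MB))  # 20 fetches with 10 MB chunks
print(requests_needed(read, 20 * MB))  # 10 fetches with 20 MB chunks
```

Each extra request carries its own latency and counts against the per-account API quota, so fewer, larger chunks generally mean better large-read throughput.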
  21. Is this a legacy drive created on the beta? I believe support for chunk sizes larger than 20MB was removed from the Google Drive provider in .450, unless that change was reverted and I am unaware. 20MB should be the maximum for any Google Drive drive created since. 50mbps might be a little slow if you are, in fact, using 100MB chunks--but you should double-check that. Gotcha. Yeah, if you set additional limitations on the API key on Google's end, then you'd have to create a key without those restrictions. And CloudDrive is the only application accessing that Google Drive space?
  22. Sometimes the threads-in-use lags behind the actual total. As long as you aren't getting throttled anymore, you're fine. Netdrive and rClone are not appropriate comparisons. They are uploading/downloading several gigabytes of contiguous data at a time. The throughput of that sort of data stream will always be measured as larger. CloudDrive pulls smaller chunks of data as needed and also no data at all if it's using data that is stored in the local cache. You're not going to get a measured 50MB/s per thread with CloudDrive from Google. It just isn't going to happen. 50mbps for a single t
  23. I mean, the short answer is probably not using your API key. But using your own key also isn't the solution to your real problem, so I'm just sorta setting it aside. It's likely an issue with the way you've configured the key. If you don't see whatever custom name you created in the Google API dashboard when you go to authorize your drive, then you're still using the default Stablebit keys. But figuring out exactly where the mistake is might require you to open a ticket with support. Again, though, the standard API keys really do not need to be changed. They are not, in any case, causing
  24. Any setting that is only available by editing the JSON is considered "advanced." See: https://wiki.covecube.com/StableBit_CloudDrive_Advanced_Settings That is more, in one drive, than Google's entire API limits will allow per account. You'll have to make adjustments. I'm not 100% sure of the exact number, but the limit is somewhere around 10-15 simultaneous API connections at any given time. If you exceed that, Google will start requesting exponential back-off and CloudDrive will comply. This will significantly impact performance, and you will see those throttling errors that you are seei
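When Google requests back-off, well-behaved clients retry with exponentially growing waits plus random jitter. This is a generic sketch of that pattern (not CloudDrive's internal implementation, and with a plain `ConnectionError` standing in for Google's actual throttling response):

```python
import random
import time

def with_backoff(call, max_retries=5, base=1.0, cap=64.0):
    """Retry `call` with exponential back-off and jitter -- the pattern
    Google APIs expect when they return throttling errors.
    ConnectionError is a stand-in for a real throttle response."""
    for attempt in range(max_retries):
        try:
            return call()
        except ConnectionError:
            # Wait up to base * 2^attempt seconds (capped), with jitter.
            delay = min(cap, base * 2 ** attempt) * random.random()
            time.sleep(delay)
    return call()  # final attempt; let any error propagate
```

The waits compound quickly, which is exactly why exceeding the connection limit tanks throughput: every throttled request spends most of its time sleeping instead of transferring data.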