Everything posted by Alex

  1. Thanks for signing up and testing! Just to clarify how centralized licensing works: if you don't enter any Activation IDs into the StableBit Cloud, then the existing licensing system will continue to work exactly as it did before. You are not required to enter your Activation IDs into the StableBit Cloud in order to use it. So you can absolutely keep using your existing activated apps as they were and simply link them to the cloud. Now, there was a bug on the Dashboard (which I've just corrected) that incorrectly told you that you didn't have a valid license when using legacy licensing. But
  2. The StableBit Cloud is a brand new online service developed by Covecube Inc. that enhances our existing StableBit products with cloud-enabled features and centralized management capabilities. The StableBit Cloud also serves as a foundational technology platform for the future development of new Covecube products and services. To get started, visit: https://stablebit.cloud Read more about it in our blog post: https://blog.covecube.com/2021/01/stablebit-cloud/
  3. Sorry about that, it's fixed now. Google changed some code on their end for the recaptcha that we're using, and that started conflicting with one of the javascript libraries that stablebit.com uses (mootools).
  4. In response to multiple reports of cloud drive corruption from the Google Drive outage of June 2, 2019, we are posting this warning on all Google Drive cloud drives, starting with StableBit CloudDrive 1.1.1.1128 BETA. News story: https://www.zdnet.com/article/google-heres-what-caused-sundays-big-outage/ Please use caution and always back up your data by keeping multiple copies of your important files.
  5. I just want to clarify this. We cannot guarantee that cloud drive data loss will never happen again in the future. We ensure that StableBit CloudDrive is as reliable as possible by running it through a number of very thorough tests before every release, but ultimately a number of issues are beyond our control. First and foremost, the data itself is stored somewhere else, and there is always the chance that it could get mangled. Hardware issues with your local cache drive, for example, could cause corruption as well. That's why it is always a good idea to keep multiple copies of your
  6. As per your issue, I've obtained a similar WD M.2 drive and did some testing with it. Starting with build 3193, StableBit Scanner should be able to get SMART data from your M.2 WD SATA drive. I've also added SMART interpretation rules to BitFlock for these drives. You can get the latest development BETAs here: http://dl.covecube.com/ScannerWindows/beta/download/ As for Windows Server 2012 R2 and NVMe, currently, NVMe support in the StableBit Scanner requires Windows 10 or Windows Server 2016.
  7. 1c. If a cloud drive fails the checksum / HMAC verification, the error is returned to the OS in the kernel (and eventually the app caller). It's treated exactly the same as a physical drive failing a checksum error. StableBit DrivePool won't automatically pull the data from a duplicate, but you have a few options here. First, you will see error notifications in the StableBit CloudDrive UI, and at this point you can either: Ignore the verification failure and pull in the corrupted data to continue using your drive. In StableBit CloudDrive, see Options -> Troubleshooting ->
  8. 1. By default, no. But you can easily enable checksumming by checking Verify chunk integrity in the Create a New Cloud Drive window (under Advanced Settings). 1a. For unencrypted drives, chunk verification works by applying a checksum at the end of every 1 MB block of data inside every chunk. When that data is downloaded, the checksum is verified to make sure that the data was not corrupted. If the drive is encrypted, an HMAC is used in order to ensure that the data has not been modified maliciously (the exact algorithm is described in the manual here https://stablebit.com/Support/CloudDr
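
    This isn't our actual on-disk layout, just a minimal sketch of the idea in Python: every 1 MB block gets a tag appended (a plain checksum for unencrypted drives, an HMAC for encrypted ones), and the tag is re-checked when the block comes back down. The CRC-32 choice, tag placement, and key handling here are illustrative assumptions.

        from typing import Optional
        import hashlib
        import hmac
        import zlib

        BLOCK_SIZE = 1024 * 1024  # the 1 MB verification unit from the post

        def seal_block(block: bytes, key: Optional[bytes] = None) -> bytes:
            """Append a verification tag to a block before upload."""
            if key is None:
                # Unencrypted drive: an integrity checksum (CRC-32 here, purely illustrative).
                tag = zlib.crc32(block).to_bytes(4, "big")
            else:
                # Encrypted drive: an HMAC, so malicious modification is detectable too.
                tag = hmac.new(key, block, hashlib.sha256).digest()
            return block + tag

        def open_block(sealed: bytes, key: Optional[bytes] = None) -> bytes:
            """Verify and strip the tag after download; fail the read if it doesn't match."""
            tag_len = 4 if key is None else 32
            block, tag = sealed[:-tag_len], sealed[-tag_len:]
            if key is None:
                ok = zlib.crc32(block).to_bytes(4, "big") == tag
            else:
                ok = hmac.compare_digest(hmac.new(key, block, hashlib.sha256).digest(), tag)
            if not ok:
                # In the real driver this surfaces as a read error to the OS (see #7 above).
                raise IOError("block failed checksum / HMAC verification")
            return block

        data = b"x" * BLOCK_SIZE
        assert open_block(seal_block(data, b"drive key"), b"drive key") == data
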
  9. My apologies. We've been having issues with the forum's database for the past couple of days, and as a result the forum has been up and down. I believe that the problem is resolved and there should be no further downtime.
  10. Just to reply to some of these concerns (without any of the drama of the original post): Scattered files By default, StableBit DrivePool places new files on the volume with the most free disk space (queried at the time of file creation). It's as simple as that. Grouping files "together" has been requested a bunch, but personally, I don't see a need for it. There will always be a need to split across folders, even with a best-effort "grouping" algorithm. The only foolproof way to protect yourself from file loss is file duplication. But I will be looking into the possibility of implementing
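
    For what it's worth, the default placement decision really is that small. A toy sketch (the Volume type and the sizes are made up for illustration):

        from dataclasses import dataclass

        @dataclass
        class Volume:
            name: str
            free_bytes: int  # queried at the time of file creation

        def pick_volume_for_new_file(volumes) -> Volume:
            # Default placement: whichever pooled volume has the most free space right now.
            return max(volumes, key=lambda v: v.free_bytes)

        pool = [Volume("D:", 120_000_000_000), Volume("E:", 800_000_000_000), Volume("F:", 350_000_000_000)]
        print(pick_volume_for_new_file(pool).name)  # -> E:
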
  11. There was a certificate issue on bitflock.com (cert expired, whoops). Fixing it now, should be back in a few minutes. Sorry about that. Additionally, there were some load / scalability issues and that's now resolved as well. BitFlock is running on Microsoft Azure, so everything scales quite nicely.
  12. Forum Downtime

    We've been having a few more database issues within the past 24 hours due to a database migration that took place recently. There were problems with login authentication and general database load, which I believe are now all ironed out. Once again, this was isolated to the forum / wiki / blog.
  13. Our forum, wiki and blog web sites experienced an issue with the database server that caused those sites to be down for the past 34 hours. The issue has been resolved and everything is back up and running. I'm sorry for the inconvenience. StableBit.com, the download server, and software activation services were not affected.
  14. StableBit CloudDrive's encryption takes place in our kernel virtual disk driver, right before the data hits any permanent storage medium (i.e. the local on-disk cache). This is before any data is split up into chunks for uploading, and the provider implementation is inconsequential at the point of encryption. The local on-disk cache itself is actually also split up into chunks, but those are much larger, at 100 GB per chunk, and don't affect the use of CBC. Our units of encryption for CBC are sector sized, and sectors are inherently read and written atomically on a disk, so the choice of using CBC
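
    To illustrate the sector-sized CBC units: each sector is its own independent CBC unit, so any sector can be re-encrypted and rewritten in place without touching its neighbors. This sketch uses the third-party pyca/cryptography package; the 512-byte sector size and the IV derivation from the sector number are assumptions for illustration, not our actual scheme.

        import hashlib
        import os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        SECTOR_SIZE = 512  # sectors are read/written atomically; the exact size here is an assumption

        def sector_iv(sector_number: int, iv_salt: bytes) -> bytes:
            # Illustrative IV derivation: unique per sector and reproducible for decryption.
            return hashlib.sha256(iv_salt + sector_number.to_bytes(8, "big")).digest()[:16]

        def encrypt_sector(plaintext: bytes, sector_number: int, key: bytes, iv_salt: bytes) -> bytes:
            assert len(plaintext) == SECTOR_SIZE  # AES's 16-byte block divides the sector, so no padding
            enc = Cipher(algorithms.AES(key), modes.CBC(sector_iv(sector_number, iv_salt))).encryptor()
            return enc.update(plaintext) + enc.finalize()

        def decrypt_sector(ciphertext: bytes, sector_number: int, key: bytes, iv_salt: bytes) -> bytes:
            dec = Cipher(algorithms.AES(key), modes.CBC(sector_iv(sector_number, iv_salt))).decryptor()
            return dec.update(ciphertext) + dec.finalize()

        # Each sector is its own CBC unit, so rewriting sector 7 never requires re-encrypting sector 8.
        key, salt = os.urandom(32), os.urandom(16)
        sector = os.urandom(SECTOR_SIZE)
        assert decrypt_sector(encrypt_sector(sector, 7, key, salt), 7, key, salt) == sector
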
  15. Sorry to hear that it's back. If we can catch it as it's happening I'm sure that we can get to the bottom of what's going on. Yes, I'm available, but it's better to go through the scheduling system so that we can get our timezones synchronized and my availability schedule is in that system as well. I've PMed the scheduling link to you, so please set up an appointment at your earliest convenience. Thanks,
  16. Oh, and regarding the partial reads / no partial reads issue, there is an Issue open for that in the form of a feature request. Right now, StableBit CloudDrive reads all checksummed / signed data in 1 MB units. So even if 1 byte needs to be read, 1 MB will actually be read and cached. Can we increase that 1 MB to, let's say, 10 MB? Yes we can, but I'm afraid that it will adversely affect the read performance of users with slower bandwidth. So that's something that needs to be tested, and that's why this is in the form of a feature request.
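
    In other words, every read of checksummed / signed data gets widened to whole 1 MB units before it goes out to the provider. A rough sketch of that rounding, with the unit size as the only knob (raising it to 10 MB widens every small read accordingly):

        READ_UNIT = 1024 * 1024  # current read / verification unit; the feature request would raise this

        def widen_read(offset: int, length: int, unit: int = READ_UNIT):
            """Round an (offset, length) request out to whole verification units."""
            start = (offset // unit) * unit
            end = ((offset + length + unit - 1) // unit) * unit
            return start, end - start

        # Reading a single byte at offset 5 still downloads (and caches) a full 1 MB unit.
        print(widen_read(5, 1))                         # -> (0, 1048576)
        print(widen_read(5, 1, unit=10 * 1024 * 1024))  # with a 10 MB unit the same read costs 10 MB
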
  17. Technically speaking, StableBit CloudDrive doesn't really deal with files directly. It actually emulates a virtual disk for the Operating System, and other applications or the Operating System can then use that disk for whatever purpose suits them. Normally, a file system kernel driver (such as NTFS) will be mounted on the disk and will provide file-based access. So when you say that you're accessing 2 files on the cloud drive, you're actually dealing with the file system (typically NTFS). The file system then translates your file-based requests (e.g. read 10248 bytes at offset 0 of f
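
    So the layering is roughly: applications ask the file system for bytes of a file, the file system asks the (virtual) disk for sectors, and only that last layer is StableBit CloudDrive. A very rough sketch of the translation the file system does (the extent layout and sector size are made up):

        SECTOR = 512  # assumed sector size

        def file_read_to_disk_reads(extents, offset: int, length: int):
            """Translate "read `length` bytes at `offset` of this file" into disk-level reads.

            `extents` is the file system's map of the file: (start_sector, sector_count) runs, in order.
            """
            reads, remaining, pos = [], length, 0
            for start_sector, count in extents:
                extent_bytes = count * SECTOR
                if remaining > 0 and offset < pos + extent_bytes:
                    skip = max(0, offset - pos)
                    take = min(extent_bytes - skip, remaining)
                    reads.append((start_sector * SECTOR + skip, take))  # absolute disk offset, byte count
                    remaining -= take
                pos += extent_bytes
            return reads

        # A file stored as two extents; a read spanning the boundary becomes two disk reads.
        print(file_read_to_disk_reads([(1000, 8), (5000, 8)], offset=4000, length=300))
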
  18. In this post I'm going to talk about the new large storage chunk support in StableBit CloudDrive 1.0.0.421 BETA, why that's important, and how StableBit CloudDrive manages provider I/O overall. Want to Download it? Currently, StableBit CloudDrive 1.0.0.421 BETA is an internal development build and like any of our internal builds, if you'd like, you can download it (or most likely a newer build) here: http://wiki.covecube.com/Downloads The I/O Manager Before I start talking about chunks and what the current change actually means, let's talk a bit about how StableBit CloudDrive
  19. Yeah, it's basically just a flip of a switch if this works. All providers use a common infrastructure for chunking.
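
    To give a feel for what that common infrastructure looks like, here's a sketch: the shared chunking layer only needs a tiny per-provider interface, so the chunk size becomes a per-provider setting rather than per-provider code. The class and method names are assumptions for illustration, not our actual code.

        from abc import ABC, abstractmethod

        class Provider(ABC):
            """What each storage provider has to supply; everything above this layer is shared."""
            chunk_size: int  # switching a provider to large chunks is just changing this value

            @abstractmethod
            def download_chunk(self, chunk_id: int) -> bytes: ...

            @abstractmethod
            def upload_chunk(self, chunk_id: int, data: bytes) -> None: ...

        class ChunkedDrive:
            """The shared chunking layer: translates drive offsets into provider chunk I/O."""

            def __init__(self, provider: Provider):
                self.provider = provider

            def read(self, offset: int, length: int) -> bytes:
                size, out = self.provider.chunk_size, bytearray()
                while length > 0:
                    chunk_id, within = divmod(offset, size)
                    take = min(size - within, length)
                    out += self.provider.download_chunk(chunk_id)[within:within + take]
                    offset, length = offset + take, length - take
                return bytes(out)
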
  20. Ok, I have some good news and more good news regarding the Google Drive provider. I've been playing around with the Google Drive provider and it had some issues which were not related to OAuth at all. In fact, the problem was that Google was returning 403 Forbidden HTTP status codes when there were too many requests sent. By default, StableBit CloudDrive treats all 403 codes as an OAuth token failure. But Google, being Google, has a very thorough documentation page explaining all of the various error codes, what they mean, and how to handle them. Pretty awesome: https://developers.google.com/d
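
    For the curious, the gist of the fix is to stop treating every 403 as a bad token and instead back off and retry the ones that just mean "too many requests". A sketch of that logic (the reason strings and backoff constants reflect my reading of Google's docs and are assumptions here, not our exact implementation):

        import random
        import time

        def is_rate_limit_403(status: int, reason: str) -> bool:
            # Per Google's error documentation, some 403s mean "slow down", not "bad token".
            return status == 403 and reason in {"rateLimitExceeded", "userRateLimitExceeded"}

        def send_with_backoff(send, max_retries: int = 5):
            """Retry rate-limit 403s with exponential backoff instead of failing the OAuth token.

            `send` is any callable returning (status, reason, body); it stands in for the real request.
            """
            for attempt in range(max_retries):
                status, reason, body = send()
                if not is_rate_limit_403(status, reason):
                    return status, reason, body
                # Exponential backoff with jitter: roughly 1s, 2s, 4s, ... plus a random fraction.
                time.sleep((2 ** attempt) + random.random())
            return send()  # final attempt; a persistent failure is left to the caller
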
  21. StableBit CloudDrive performs a checksum check after downloading every chunk, if and only if the Use checksum verification checkbox is checked when you first create the drive. It's always enabled by default for cloud providers, and not available as an option for the Local Disk and File Share providers. The difference between using upload verification vs. just checksum verification is that checksum verification is performed much later. That is, it's performed when you need to download that chunk from the provider as a result of a local read request from the local cloud drive.
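
    A toy timeline of that difference, with an in-memory stand-in for the provider (how and where the real checksum is stored isn't shown here; see #8 above for that):

        import zlib

        class MemoryProvider:
            """Stand-in for a real provider: just a dict of chunk_id -> bytes."""
            def __init__(self):
                self.chunks = {}
            def put(self, chunk_id, data):
                self.chunks[chunk_id] = data
            def get(self, chunk_id):
                return self.chunks[chunk_id]

        def upload_chunk(provider, chunk_id, data, verify_upload: bool):
            provider.put(chunk_id, data)
            if verify_upload:
                # Upload verification: pay the cost now, immediately after the upload.
                assert provider.get(chunk_id) == data, "upload verification failed"
            return zlib.crc32(data)  # checksum kept for the (possibly much later) download check

        def read_chunk(provider, chunk_id, expected_checksum):
            data = provider.get(chunk_id)
            # Checksum verification: the same integrity question, but asked only when a local
            # read actually needs this chunk back from the provider.
            if zlib.crc32(data) != expected_checksum:
                raise IOError("chunk failed checksum verification on download")
            return data

        p = MemoryProvider()
        crc = upload_chunk(p, 0, b"some chunk data", verify_upload=True)
        assert read_chunk(p, 0, crc) == b"some chunk data"
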
  22. I think the equivalent of a TrueCrypt volume header in StableBit CloudDrive is the drive metadata. The drive metadata is a file that's stored in the cloud (or locally for the Local Disk / File Share providers) that describes your drive. It contains the size of the drive, the block size, the provider type, etc... In essence it contains everything necessary to mount the cloud drive. In addition, if the drive is encrypted, it contains the necessary data to validate your encryption key. Currently StableBit CloudDrive doesn't have an automated way of backing up this file, but if your cloud prov
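
    To make that concrete, the metadata is conceptually just a small record like the one below. The field names and the JSON serialization are illustrative, not the actual on-provider format.

        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class DriveMetadata:
            """Roughly what the drive metadata describes (illustrative fields, not the real format)."""
            drive_size_bytes: int
            block_size_bytes: int
            provider_type: str
            encrypted: bool
            key_verification: str  # data used to validate your encryption key, never the key itself

        meta = DriveMetadata(
            drive_size_bytes=10 * 1024 ** 4,   # a 10 TB cloud drive
            block_size_bytes=1024 * 1024,
            provider_type="GoogleDrive",
            encrypted=True,
            key_verification="(key-verification blob goes here)",
        )

        # Backing it up by hand amounts to keeping a copy of this one small record.
        print(json.dumps(asdict(meta), indent=2))
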
  23. Local cache data

    I called this "Full Round Trip Encryption", for lack of a better term, and it was very much a core part of the original design. Specifically, what this means is that any byte that gets written to an encrypted cloud drive is never stored in its unencrypted form on any permanent storage medium (either locally or in the cloud).
  24. When I said unique, I didn't mean that we're unique at offering encryption for the cloud, or mounting your cloud data as a drive. Surely that's been done by other products. And by the way, I'm not trying to convince you to use StableBit CloudDrive; in fact, I'm saying the exact opposite. If you want to share your files using the tools that your cloud provider offers (such as web access), StableBit CloudDrive simply can't do what you're asking. I'm simply trying to explain the motivation behind the design. Think about TrueCrypt. When you create a TrueCrypt container and mount that a
  25. Me too, that's the way I read it initially. But the difference in allowed bandwidth would be huge if I'm wrong, so I really want to avoid any potential misunderstandings. As for larger chunks, it's possible to use them, but keep in mind that the size of the chunk controls the latency of read I/O. So if you try to read anything on the cloud drive that's not stored locally, that's an automatic download of at least the chunk size before the read can complete. And we have to keep read I/O responsive in order for the drive's performance to be acceptable. In my testing a 1
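
    To put rough numbers on that trade-off: an uncached read can't complete until at least one full chunk has been downloaded, so the minimum added latency is roughly the chunk size divided by your download bandwidth (ignoring request overhead and parallel downloads). For example, at an assumed 100 Mbps downlink:

        def min_read_latency_seconds(chunk_size_bytes: int, downlink_bits_per_sec: float) -> float:
            # Ignores request latency and parallelism: just the transfer time of one chunk.
            return chunk_size_bytes * 8 / downlink_bits_per_sec

        DOWNLINK = 100e6  # assumed 100 Mbps connection
        for mb in (1, 10, 20, 100):
            latency = min_read_latency_seconds(mb * 1024 ** 2, DOWNLINK)
            print(f"{mb:>4} MB chunk: ~{latency:.1f} s minimum per uncached read")
        # ->  1 MB: ~0.1 s, 10 MB: ~0.8 s, 20 MB: ~1.7 s, 100 MB: ~8.4 s
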