About Alex

Contact Methods
Website URL: https://covecube.com

Profile Information
Gender: Male
Location: New York, USA
-
This was a little tricky because the glitch actually had to do with the CSS prefers-reduced-motion feature and not the resolution. To improve the responsiveness of the website, the StableBit Cloud turns off most animations on the site if the browser requests reduced motion (RDP, for example, can trigger prefers-reduced-motion when a browser is launched from within a connected session). That particular offset issue only showed up when prefers-reduced-motion was requested. It should now be fixed as of 7788.4522.
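For reference, here is a minimal TypeScript sketch of how a web page can detect and honor this preference; the reduce-motion class name is a hypothetical example, not the StableBit Cloud's actual markup:

```typescript
// Match the user's (or remote session's) reduced-motion preference.
const query = window.matchMedia("(prefers-reduced-motion: reduce)");

function applyMotionPreference(reduce: boolean): void {
  // Toggling a root class lets CSS disable transitions/animations globally.
  document.documentElement.classList.toggle("reduce-motion", reduce);
}

applyMotionPreference(query.matches);

// Re-apply if the preference changes while the page is open
// (e.g. when an RDP session starts or ends).
query.addEventListener("change", (e) => applyMotionPreference(e.matches));
```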
-
Thanks for signing up and testing! Just to clarify how centralized licensing works: if you don't enter any Activation IDs into the StableBit Cloud, then the existing licensing system continues to work exactly as it did before. You are not required to enter your Activation IDs into the StableBit Cloud in order to use it, so you can absolutely keep using your existing activated apps as they were and simply link them to the cloud.

Now, there was a bug on the Dashboard (which I've just corrected) that incorrectly told you that you didn't have a valid license when using legacy licensing. Normally that warning should not be there; refresh the page and it should go away.

However, as soon as you enter at least 1 Activation ID into the StableBit Cloud, the old licensing system turns off and the new cloud licensing system goes into effect.

I cannot reproduce the HTTP 400 error that you're seeing when adding a license. Please open a support case with the Activation ID that you're using and we'll look into it: https://stablebit.com/Contact

Thank you,
-
The StableBit Cloud is a brand new online service developed by Covecube Inc. that enhances our existing StableBit products with cloud-enabled features and centralized management capabilities. The StableBit Cloud also serves as a foundational technology platform for the future development of new Covecube products and services.

To get started, visit: https://stablebit.cloud
Read more about it in our blog post: https://blog.covecube.com/2021/01/stablebit-cloud/
-
Sorry about that, it's fixed now. Google changed some code on their end for the reCAPTCHA that we're using, and that started conflicting with one of the JavaScript libraries that stablebit.com uses (MooTools).
-
Hello and good evening from Granada, Spain. I need to know whether a version of your StableBit CloudDrive product is possible for the Android operating system (NVIDIA SHIELD TV). I am willing to pay whatever price you indicate for a version of your application for the NVIDIA SHIELD TV (Android TV box). Thank you very much, and warm regards.
I would like to be able to use StableBit CloudDrive on my NVIDIA SHIELD TV (Android TV box) to mount, map, or virtualize my Google Drive on my NVIDIA SHIELD TV player and use Plex with that drive, but for that I need to be able to install StableBit CloudDrive on the SHIELD TV (Android).
Is an Android version possible? Please.
I can't find any Android application capable of emulating, mounting, mapping, or virtualizing my Google Drive on my Android system, and I need one in order to use Plex with that drive, because Plex can no longer read from Google Drive since the Plex Cloud service was discontinued. Thank you very much for your attention and time.
-
In response to multiple reports of cloud drive corruption from the Google Drive outage of June 2, 2019, we are posting this warning on all Google Drive cloud drives, starting with StableBit CloudDrive 1.1.1.1128 BETA. News story: https://www.zdnet.com/article/google-heres-what-caused-sundays-big-outage/

Please use caution and always back up your data by keeping multiple copies of your important files.
-
I just want to clarify this. We cannot guarantee that cloud drive data loss will never happen again in the future. We ensure that StableBit CloudDrive is as reliable as possible by running it through a number of very thorough tests before every release, but ultimately a number of issues are beyond our control. First and foremost, the data itself is stored somewhere else, and there is always the chance that it could get mangled. Hardware issues with your local cache drive, for example, could cause corruption as well. That's why it is always a good idea to keep multiple copies of your important files, using StableBit DrivePool or some other means.

With the latest release (1.1.1.1128, now in BETA) we're adding a "Caution" message to all Google Drive cloud drives that should remind everyone to use caution when storing data on Google Drive. The latest release also includes cloud data duplication (more info: http://blog.covecube.com/2019/06/stablebit-clouddrive-1-1-1-1128-beta/), which you can enable for extra protection against future data loss or corruption.
-
As per your issue, I've obtained a similar WD M.2 drive and did some testing with it. Starting with build 3193, StableBit Scanner should be able to get SMART data from your M.2 WD SATA drive. I've also added SMART interpretation rules to BitFlock for these drives. You can get the latest development BETAs here: http://dl.covecube.com/ScannerWindows/beta/download/

As for Windows Server 2012 R2 and NVMe: currently, NVMe support in the StableBit Scanner requires Windows 10 or Windows Server 2016.
-
1c. If a cloud drive fails the checksum / HMAC verification, the error is returned to the OS in the kernel (and eventually to the calling application). It's treated exactly the same as a physical drive failing a checksum verification. StableBit DrivePool won't automatically pull the data from a duplicate, but you have a few options here. First, you will see error notifications in the StableBit CloudDrive UI, and at this point you can either:
- Ignore the verification failure and pull in the corrupted data to continue using your drive. In StableBit CloudDrive, see Options -> Troubleshooting -> Chunks -> Ignore chunk verification. This may be a good idea if you don't care about that particular file; you can simply delete it and continue normally.
- Remove the affected drive from the pool, and StableBit DrivePool will automatically reduplicate the data onto another known good drive.

And thank you for your support, we really appreciate it. As far as Issue #27859 goes, I've got a potential fix that looks good. It's currently being qualified for the Microsoft Server 2016 certification. Normally this takes about 12 hours, so a download should be ready by Monday (it could even be as soon as tomorrow if all goes well).
-
1. By default, no. But you can easily enable checksumming by checking Verify chunk integrity in the Create a New Cloud Drive window (under Advanced Settings).

1a. For unencrypted drives, chunk verification works by applying a checksum at the end of every 1 MB block of data inside every chunk. When that data is downloaded, the checksum is verified to make sure that the data was not corrupted. If the drive is encrypted, an HMAC is used instead, in order to ensure that the data has not been modified maliciously (the exact algorithm is described in the manual here: https://stablebit.com/Support/CloudDrive/Manual?Section=Encryption#Chunk Integrity Verification). A rough sketch of this kind of per-unit verification follows after this post.

1b. No, ECC correction is not supported / applied by StableBit CloudDrive.

2. It's perfectly fine. The chunk data can be safely stored on a StableBit DrivePool pool.

3. Yep, you should absolutely be able to do that; however, it looks like you've found a bug here with your particular setup. But no worries, I'm already working on a fix. One issue has to do with how cloud drives are unmounted when you reboot or shut down your computer. Right now, there is no notion of drive unmount priority. But because of the way you set up your cloud drives (cloud drive -> pool -> cloud drive), we need to make sure that the root cloud drive gets unmounted first, before unmounting the cloud drive that is part of the pool. So essentially, each cloud drive needs to have an unmount priority, and that priority needs to be based on how those drives are organized in the overall storage hierarchy. Like I said, I'm already working on a fix, and that should be ready within a day or so. It'll take a few days after that to run the driver through the Microsoft approval process, so a fix should be ready sometime next week.

Bug report: https://stablebit.com/Admin/IssueAnalysis/27859

Watch the bug report for progress updates, and you can always download the latest development builds here:
http://wiki.covecube.com/Downloads
http://dl.covecube.com/CloudDriveWindows/beta/download/

As far as being ridiculous, I don't think so. I think what you want to do should be possible. In one setup, I use a cloud drive to power a Hyper-V VM server, storing the cache on a 500 GB SSD and backing it with a 30+ TB hybrid (cloud / local) pool of spinning drives using the Local Disk provider.
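As a rough illustration of the per-unit verification described in 1a, here is a hedged TypeScript sketch using Node's built-in crypto module. The on-disk layout (each 1 MB payload followed by its MAC) and the constants are assumptions for illustration only, not StableBit CloudDrive's actual chunk format:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const UNIT_SIZE = 1024 * 1024; // 1 MB of payload per signed unit (assumed)
const MAC_SIZE = 64;           // HMAC-SHA-512 digest length in bytes

// Verify each signed unit inside a downloaded chunk. For simplicity this
// sketch assumes the chunk is an exact multiple of (payload + MAC).
function verifyChunk(chunk: Buffer, hmacKey: Buffer): boolean {
  for (let offset = 0; offset < chunk.length; offset += UNIT_SIZE + MAC_SIZE) {
    const payload = chunk.subarray(offset, offset + UNIT_SIZE);
    const stored = chunk.subarray(offset + UNIT_SIZE, offset + UNIT_SIZE + MAC_SIZE);
    const computed = createHmac("sha512", hmacKey).update(payload).digest();
    // A mismatch means corruption (or tampering, for encrypted drives).
    if (stored.length !== MAC_SIZE || !timingSafeEqual(stored, computed)) {
      return false;
    }
  }
  return true;
}
```

On failure, the behavior described in 1c applies: the error surfaces to the OS, and you choose whether to ignore verification or recover from a duplicate.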
-
Just to reply to some of these concerns (without any of the drama of the original post):

Scattered files
By default, StableBit DrivePool places new files on the volume with the most free disk space (queried at the time of file creation). It's as simple as that (a sketch of this selection logic follows at the end of this post). Grouping files "together" has been requested a bunch, but personally, I don't see a need for it. There will always be a need to split a folder's files across drives, even with a best-effort "grouping" algorithm. The only foolproof way to protect yourself from file loss is file duplication. That said, I will be looking into the possibility of implementing this in the future.

Why is there no database that keeps track of your files? It would degrade performance, introduce complexity, and be another point of failure. The file system is already a database that keeps track of your files, and a very good one. What you're asking for is a backup database of your files (i.e. an indexing engine). StableBit DrivePool is a simple, high-performance disk pooling application, and it was specifically designed to not require a database.

As for empty folders in pool parts: yes, StableBit DrivePool doesn't delete empty folders if they don't affect the functionality of the pool. The downside of this is some extra file system metadata on your pool parts. The upside is that, when placing a new file on that pool part, the folder structure doesn't have to be recreated if it's already there. The cons don't justify the extra work, in my opinion, and that's why it's implemented like that.

Per-file balancing is excluded by design. The #1 requirement of the balancing engine is scalability, which implies no periodic balancing.

I could be wrong, but just an observation: it appears that you may want to use StableBit DrivePool exclusively as a means of organizing the data in your pool parts (and not as a pooling engine). That's perhaps why empty folders, the lack of exact per-file balancing rules, and databases of missing files bother you. The pool parts are designed to be "invisible" in normal use, and all file access is supposed to happen through the pool. The main scenario for accessing pool parts is to perform manual file recovery, not much more.

Duplication
x2 duplication protects your files from at most 1 drive failure at a time. x3 can handle 2 drive failures concurrently. You can have 2 groups of drives and duplicate everything between those 2 groups with hierarchical pooling: simply set up a pool of your "old" drives and a pool of your "new" drives, then add both pools to another pool and enable x2 pool duplication on the parent pool. That's really how StableBit DrivePool works. If you want something different, that's not what this is.

Balancing
The architecture of the balancing engine is not meant to enable "Apple-like" magic (although, sometimes, Apple-like magic is pretty cool). The architecture was based on answering the question: "How can you design a balancing engine that scales very well and that doesn't require periodic balancing?" First, you need to keep track of file sizes on the pool parts. That requires either an indexing engine (which was the first idea), or, better yet, if you have full control of the file system, you can measure file sizes in real time very quickly. You then take that concept and combine it with some kind of balancing rules. The result is what you see in StableBit DrivePool. The rest of the points here are either already there or don't make any sense.

One area that I do agree with you on is improving the transparency of what each balancer is doing. We already have the balancing markers on the horizontal UI's Disks list, but it would be nice to see what each balancer is trying to do and how they're interacting.

Settings
Setting filters by size is not possible. Filters are implemented as real-time (kernel-based) new file placement rules, and at the time of file creation, the file size is not known. Implementing a periodic balancing pass is not ideal due to performance (as we've seen in Drive Extender, IMO).

UI
UI is subjective. The UI is intended to be minimalistic. Performance is meant to give you a quick snapshot of what your pool is doing, not to be a comprehensive UI like the Resource Monitor.

There are 2 types of settings / actions: global application settings and per-pool settings / actions. That's why you see 2 menus. Per-pool settings are different for each pool.

"Check all" / "Uncheck all": yeah, good idea. It should probably be there; I'll add it. As for wrap panels for lists of drives in file placement, I think that would be harder to scan through in this case.

In some places in the UI, selecting a folder starts a size calculation of that folder in the background. The size calculation never interrupts another ongoing task, it's automatically abortable, and it's asynchronous; it doesn't block or affect the usage of the UI. Specifically in file placement, the chart at the bottom left is meant to give you a quick snapshot of how the files in the selected folder are organized. It's sorted by size to give you a quick at-a-glance view of the top 5 drives that contain those files, and of how duplicated vs. non-duplicated space is used. So yes, there's definitely a reason why it's like that.

The size of the buttons in the Balancing window reflects traditional WinForms (or Win32) design. It's a bit different from the rest of the program, and that's intentional. The Balancing window was meant to be a little more advanced, and I feel Win32 is a little better at expressing more complex UIs, while WPF is good at minimalistic design.

I would not want to build an application that only has a web UI. A desktop application has to first and foremost have a desktop user interface. As for an API, I feel that if we had a comprehensive API, then we could certainly make a StableBit DrivePool SDK available for integration into 3rd party solutions. Right now that's not on the roadmap, but it's certainly an interesting idea.

I think that the performance of the UIs for StableBit DrivePool and StableBit CloudDrive is very good right now. You keep referring to Telerik, but that's simply not true; neither of those products uses Telerik. They mostly use standard Microsoft WPF controls, along with some custom controls that were written from scratch. Very sparingly, they use some other 3rd party controls where it makes sense. All StableBit UIs are entirely asynchronous and multithreaded; that was one of the core design principles. The StableBit Scanner UI is a bit sluggish, yes, I do agree, especially when displaying many drives. I do plan on addressing that in the next release.

Updates
Yes, it has been a while since the last stable release of StableBit DrivePool (and Scanner). But as you can see, there has been a steady stream of updates since that time: http://dl.covecube.com/DrivePoolWindows/beta/download/

We work with an issue-oriented approach, in order to ensure a more stable final release:
1. A customer reports a potential issue to stablebit.com/Contact (or publicly here on the forum).
2. Technical support works with the customer and software development to determine whether the issue is a bug.
3. Bugs are entered into the system with a priority.
4. Once all bugs are resolved for a given product, a public BETA is pushed out to everyone.
5. Meanwhile, as bugs are resolved, internal builds are made available to the community for testing and further issue reports (see the dev wiki: http://wiki.covecube.com/Product_Information).

Simple, really. When StableBit CloudDrive was released, there were 0 open bugs for it. Not to say that it was perfect, but just to give everyone an idea of how this works. Up until recently, StableBit DrivePool had a number of open bugs; they are now all resolved (as of yesterday, actually). There are still some open issues (not bugs) though; when those are complete, we'll have our next public BETA. I don't want to add any more features to StableBit DrivePool at this point, but just concentrate on getting a stable release out.

I'm not going to comment on whether I think your post is offensive and egotistic. But I did skip most of the "Why the hell do you...", "Why the fuck would you...", "I have no clue what the hell is going on" parts.
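As mentioned under "Scattered files" above, new-file placement boils down to a free-space comparison at file creation time. Here is a minimal sketch of that selection in TypeScript (the types and field names are made up for illustration; this is not DrivePool's actual code):

```typescript
// A pool part is one drive's hidden folder that backs the pool.
interface PoolPart {
  volume: string;    // e.g. "D:"
  freeBytes: number; // free space queried at file-creation time
}

// New files go to the pool part whose volume has the most free space.
function pickTarget(parts: PoolPart[]): PoolPart {
  return parts.reduce((best, p) => (p.freeBytes > best.freeBytes ? p : best));
}

// Example: a three-drive pool; the new file lands on E:.
const parts: PoolPart[] = [
  { volume: "D:", freeBytes: 120e9 },
  { volume: "E:", freeBytes: 480e9 },
  { volume: "F:", freeBytes: 250e9 },
];
console.log(pickTarget(parts).volume); // "E:"
```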
-
There was a certificate issue on bitflock.com (cert expired, whoops). Fixing it now, should be back in a few minutes. Sorry about that. Additionally, there were some load / scalability issues and that's now resolved as well. BitFlock is running on Microsoft Azure, so everything scales quite nicely.
-
We've been having a few more database issues within the past 24 hours due to a database migration that took place recently. There was a problem with login authentication and general database load issues which I believe are now all ironed out. Once again, this was isolated to the forum / wiki / blog.
-
Our forum, wiki and blog web sites experienced an issue with the database server that caused those sites to be down for the past 34 hours. The issue has been resolved and everything is back up and running. I'm sorry for the inconvenience. StableBit.com, the download server, and software activation services were not affected.
-
StableBit CloudDrive's encryption takes place in our kernel virtual disk driver, right before the data hits any permanent storage medium (i.e. the local on-disk cache). This is before any data is split up into chunks for uploading, and the provider implementation is inconsequential at the point of encryption.

The local on-disk cache itself is actually also split up into chunks, but those are much larger at 100 GB per chunk and don't affect the use of CBC. Our units of encryption for CBC are sector sized, and sectors are inherently read and written atomically on a disk, so the choice of CBC does not cause any issues here. This is similar to how BitLocker and other full disk encryption software works.

Since the on-disk cache is a 1:1 representation of the raw data on the virtual disk, there is no place to store a MAC after the encrypted sectors on the disk. Instead, data uploaded to the provider (which is fully encrypted by this point) is signed upon uploading and verified upon downloading. Many other full drive encryption products (like BitLocker) use no MAC at all. The algorithm StableBit CloudDrive currently uses for encrypted chunk verification is HMAC SHA-512 over 1 MB sized units, or the chunk size, whichever is less. The HMAC key and the encryption key are derived from the master key using HKDF (https://en.wikipedia.org/wiki/HKDF).

In terms of how much data needs to be uploaded, that depends on a few factors. For example, if you modify one byte on the disk, that byte needs to be uploaded eventually to the provider. One byte modified on an encrypted drive translates to the entire sector being modified (due to the encryption). The bytes per sector of the virtual disk can be chosen at disk creation time. Additionally, if you have an Upload threshold set (under the I/O Performance window), then we will wait until a certain amount of data needs to be uploaded before beginning to upload anything. If not, then uploading will begin soon after the virtual disk is modified.

If the provider doesn't support partial chunk uploads (like most HTTP-based cloud providers), then we need to perform a read-modify-write for the chunk that needs to be altered in order to store that sector in the provider's data store: the entire chunk is downloaded and verified, the signed 1 MB unit is modified with the encrypted data, it is re-signed, and the whole chunk is re-uploaded. If the provider does support partial chunk uploads and partial chunk downloads (like the File Share provider), then only the 1 MB signed unit is downloaded, verified, modified, re-signed, and re-uploaded.
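To make the key handling described above concrete, here is a small TypeScript sketch using Node's built-in crypto: both keys are derived from the master key via HKDF, and each already-encrypted 1 MB unit is signed with HMAC SHA-512 before upload. The salt and info labels are invented for illustration; the real parameters are internal to StableBit CloudDrive:

```typescript
import { hkdfSync, createHmac, randomBytes } from "node:crypto";

// Stand-in for the drive's master key.
const masterKey = randomBytes(64);

// Derive independent keys from the master key via HKDF (labels are made up).
const encryptionKey = Buffer.from(
  hkdfSync("sha512", masterKey, Buffer.alloc(0), "encryption", 32)
);
const hmacKey = Buffer.from(
  hkdfSync("sha512", masterKey, Buffer.alloc(0), "integrity", 64)
);
// encryptionKey would feed the per-sector CBC encryption (not shown here).

// Sign one already-encrypted 1 MB unit before uploading it; the same
// HMAC is recomputed and compared when the unit is downloaded.
function signUnit(encryptedUnit: Buffer): Buffer {
  return createHmac("sha512", hmacKey).update(encryptedUnit).digest();
}
```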