Covecube Inc.

Alex

Administrators
  • Content Count
    248
  • Joined
  • Last visited
  • Days Won
    44

Alex last won the day on March 22

Alex had the most liked content!

1 Follower

About Alex

  • Rank
    Lead Programmer
  • Birthday 11/30/1979

Contact Methods

  • Website URL
    http://covecube.com

Profile Information

  • Gender
    Male
  • Location
    New York, USA

Recent Profile Visitors

1205 profile views
  1. In response to multiple reports of cloud drive corruption stemming from the Google Drive outage of June 2, 2019, we are posting this warning on all Google Drive cloud drives, starting with StableBit CloudDrive 1.1.1.1128 BETA. News story: https://www.zdnet.com/article/google-heres-what-caused-sundays-big-outage/ Please use caution and always back up your data by keeping multiple copies of your important files.
  2. I just want to clarify this. We cannot guarantee that cloud drive data loss will never happen again in the future. We ensure that StableBit CloudDrive is as reliable as possible by running it through a number of very thorough tests before every release, but ultimately a number of issues are beyond our control. First and foremost, the data itself is stored somewhere else, and there is always the chance that it could get mangled. Hardware issues with your local cache drive, for example, could cause corruption as well. That's why it is always a good idea to keep multiple copies of your important files, using StableBit DrivePool or some other means. With the latest release (1.1.1.1128, now in BETA) we're adding a "Caution" message to all Google Drive cloud drives that should remind everyone to use caution when storing data on Google Drive. The latest release also includes cloud data duplication (more info: http://blog.covecube.com/2019/06/stablebit-clouddrive-1-1-1-1128-beta/), which you can enable for extra protection against future data loss or corruption.
  3. Regarding your issue, I've obtained a similar WD M.2 drive and done some testing with it. Starting with build 3193, StableBit Scanner should be able to get SMART data from your M.2 WD SATA drive. I've also added SMART interpretation rules to BitFlock for these drives. You can get the latest development BETAs here: http://dl.covecube.com/ScannerWindows/beta/download/ As for Windows Server 2012 R2 and NVMe: currently, NVMe support in StableBit Scanner requires Windows 10 or Windows Server 2016.
  4. 1c. If a cloud drive fails the checksum / HMAC verification, the error is returned to the OS in the kernel (and eventually to the calling app). It's treated exactly the same as a physical drive failing a checksum. StableBit DrivePool won't automatically pull the data from a duplicate, but you have a few options here. First, you will see error notifications in the StableBit CloudDrive UI, and at that point you can either:
     • Ignore the verification failure and pull in the corrupted data to continue using your drive. In StableBit CloudDrive, see Options -> Troubleshooting -> Chunks -> Ignore chunk verification. This may be a good idea if you don't care about that particular file; you can simply delete it and continue normally.
     • Remove the affected drive from the pool, and StableBit DrivePool will automatically reduplicate the data onto another known good drive.
     And thank you for your support, we really appreciate it. As for Issue #27859, I've got a potential fix that looks good. It's currently being qualified for the Microsoft Server 2016 certification. Normally this takes about 12 hours, so a download should be ready by Monday (it could even be as soon as tomorrow if all goes well).
  5. 1. By default, no. But you can easily enable checksumming by checking Verify chunk integrity in the Create a New Cloud Drive window (under Advanced Settings).
     1a. For unencrypted drives, chunk verification works by applying a checksum at the end of every 1 MB block of data inside every chunk. When that data is downloaded, the checksum is verified to make sure that the data was not corrupted. If the drive is encrypted, an HMAC is used instead, in order to ensure that the data has not been modified maliciously (the exact algorithm is described in the manual here: https://stablebit.com/Support/CloudDrive/Manual?Section=Encryption#Chunk Integrity Verification).
     1b. No, ECC correction is not supported / applied by StableBit CloudDrive.
     2. It's perfectly fine. The chunk data can be safely stored on a StableBit DrivePool pool.
     3. Yep, you should absolutely be able to do that; however, it looks like you've found a bug here with your particular setup. But no worries, I'm already working on a fix. One issue has to do with how cloud drives are unmounted when you reboot or shut down your computer. Right now, there is no notion of drive unmount priority. But because of the way you set up your cloud drives (cloud drive -> pool -> cloud drive), we need to make sure that the root cloud drive gets unmounted before the cloud drive that is part of the pool. So essentially, each cloud drive needs to have an unmount priority, and that priority needs to be based on how those drives are organized in the overall storage hierarchy. Like I said, I'm already working on a fix, and that should be ready within a day or so. It'll take a few days after that to run the driver through the Microsoft approval process, so a fix should be ready sometime next week. Bug report: https://stablebit.com/Admin/IssueAnalysis/27859 Watch the bug report for progress updates, and you can always download the latest development builds here: http://wiki.covecube.com/Downloads http://dl.covecube.com/CloudDriveWindows/beta/download/
     As far as being ridiculous, I don't think so. I think what you want to do should be possible. In one setup, I use a cloud drive to power a Hyper-V VM server, storing the cache on a 500 GB SSD and backing that with a 30+ TB hybrid (cloud / local) pool of spinning drives using the Local Disk provider.
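     For a rough picture of how this kind of per-unit verification can work, here is a minimal Python sketch (an illustration only, not StableBit's actual implementation; the function names and the separate tag list are assumptions):

        import hashlib
        import hmac

        BLOCK = 1024 * 1024  # data is checksummed / signed in 1 MB units

        def sign_unit(key, unit):
            # Encrypted drives authenticate each unit with an HMAC (SHA-512 here,
            # per the manual); unencrypted drives use a plain checksum instead.
            return hmac.new(key, unit, hashlib.sha512).digest()

        def verify_chunk(key, chunk, tags):
            # Walk the chunk in 1 MB units, checking each unit's stored tag.
            for i in range(0, len(chunk), BLOCK):
                unit = chunk[i:i + BLOCK]
                if not hmac.compare_digest(sign_unit(key, unit), tags[i // BLOCK]):
                    return False  # corrupted / tampered unit -> surface an I/O error
            return True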
  6. My apologies. We've been having issues with the forum's database for the past couple of days, and as a result the forum has been up and down. I believe that the problem is resolved and there should be no further downtime.
  7. Thank you for your feedback. This should be fixed in 1.0.6592.33749, which I've just put up now (see the link in the original post to download it).
     I've tested with driver 390.65 and a GeForce 1070 Ti running on a Haswell platform, and I'm not seeing anything like that. The benchmark doesn't do anything to block other system calls while it's running, and it doesn't elevate its own priority in any way. Other processes in the system should continue to function normally while the benchmark is running. In fact, the OS Calls / Second that is measured is a combined count of what the benchmark is doing together with all of the other apps in the system. Maybe certain platforms are more sensitive to the high loads that it's generating? You can try to reduce the number of simultaneous threads in Options (little gear).
     I've seen different numbers quoted, but the truth is, it's workload dependent. If an app is not making many system calls, then it will be affected less, and if it's making a lot of system calls, it will be affected more. This benchmark measures the Operating System's maximum user / kernel transition rate. With the new patches, every time that transition happens, the TLB (translation lookaside buffer) is flushed on the CPU. The TLB is a super fast lookup table that translates virtual memory addresses to physical ones. Without it, the CPU has to go to RAM to perform that translation (which is much slower).
     Added some additional text in 1.0.6592.33749. Anecdotally, I've been playing around with this benchmark on my Hyper-V machine, and I'm seeing huge performance differences on the VMs when toggling the patches.
  8. Meltdown / Spectre Benchmark
     A free standalone tool to assess the performance impact of the recent Meltdown and Spectre security patches for Microsoft Windows.
     Download:
     X86: Meltdown.Spectre.Benchmark_X86.exe
     X64: Meltdown.Spectre.Benchmark_X64.exe
     System Requirements:
     • Windows 7 / Windows Server 2008 R2 or newer
     • .NET Framework 4.5 or newer
     • Visual C++ 2013 Redistributable (most systems should already have this)
     Feel free to share this tool; updates will be posted to the URLs above as necessary. (Both files are signed with the Covecube Inc. certificate.)
     FAQ
     Q: What is Meltdown / Spectre?
     A: Meltdown and Spectre are the names of 2 vulnerabilities that were recently discovered in a range of microprocessors from different manufacturers. In order to protect systems against exploitation, Operating System and hardware firmware patches are necessary. Unfortunately, these patches inflict an overall performance degradation, sometimes a severe one. In general: Meltdown is Intel specific and does not affect AMD processors; it's mitigated using a Windows patch from Microsoft, and non-elevated processes will be affected most by this patch. Spectre requires a Windows patch and a CPU microcode update from your motherboard manufacturer (in the form of a BIOS / firmware update). For more information on these exploits, see Wikipedia: Meltdown (security vulnerability), Spectre (security vulnerability).
     Q: What is this tool for?
     A: This tool allows you to: see if your Windows OS is patched for Meltdown / Spectre and if the patches are enabled; benchmark the OS in order to get a sense of how the patches are affecting your performance; enable or disable the Meltdown / Spectre patches individually in order to assess the performance difference of each; and observe the number of system calls that your OS is making in real time, as you use it normally.
     Q: How does the benchmark work?
     A: The benchmark measures the peak rate of user to kernel transitions that the OS is capable of, using all of your processor cores. This is not the same as measuring CPU utilization or memory bandwidth; here, we're actually measuring the efficiency of user to kernel mode transitions. While the patches may affect other aspects of your system performance, and the performance degradation is workload dependent, this is a good way to get a sense of what the performance impact may be.
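     For a rough illustration of what this kind of measurement looks like, here is a minimal Python sketch (the actual tool is a native Windows program; everything here, including the use of os.getppid() as the cheap system call, is an assumption for illustration). One worker per core issues a trivial system call in a tight loop, and the combined rate approximates the peak user / kernel transition rate:

        import multiprocessing as mp
        import os
        import time

        DURATION = 2.0  # seconds each worker runs

        def worker(counter):
            # Issue a cheap system call in a tight loop and count iterations;
            # each os.getppid() makes at least one user -> kernel transition.
            end = time.monotonic() + DURATION
            calls = 0
            while time.monotonic() < end:
                os.getppid()
                calls += 1
            counter.value = calls

        if __name__ == "__main__":
            cores = os.cpu_count() or 1
            counters = [mp.Value("q", 0) for _ in range(cores)]
            procs = [mp.Process(target=worker, args=(c,)) for c in counters]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            total = sum(c.value for c in counters)
            print(f"~{total / DURATION:,.0f} OS calls / second across {cores} cores")

     Running this once with the patches enabled and once with them disabled gives a before / after comparison of the transition rate.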
  9. Just to reply to some of these concerns (without any of the drama of the original post):
     Scattered files
     By default, StableBit DrivePool places new files on the volume with the most free disk space (queried at the time of file creation). It's as simple as that (see the sketch at the end of this post). Grouping files "together" has been requested a bunch, but I personally don't see a need for it. There will always be a need to split across folders, even with a best-effort "grouping" algorithm. The only foolproof way to protect yourself from file loss is file duplication. But I will be looking into the possibility of implementing this in the future.
     Why is there no database that keeps track of your files? It would degrade performance, introduce complexity, and be another point of failure. The file system is already a database that keeps track of your files, and a very good one. What you're asking for is a backup database of your files (i.e. an indexing engine). StableBit DrivePool is a simple, high performance disk pooling application, and it was specifically designed to not require a database.
     As for empty folders in pool parts: yes, StableBit DrivePool doesn't delete empty folders if they don't affect the functionality of the pool. The downside of this is some extra file system metadata on your pool parts. The upside is that, when placing a new file on that pool part, the folder structure doesn't have to be recreated if it's already there. The cons don't justify the extra work, in my opinion, and that's why it's implemented like that.
     Per-file balancing is excluded by design. The #1 requirement of the balancing engine is scalability, which implies no periodic balancing.
     I could be wrong, but just an observation: it appears that you may want to use StableBit DrivePool exclusively as a means of organizing the data in your pool parts (and not as a pooling engine). That's perhaps why empty folders, no exact per-file balancing rules, and databases of missing files bother you. The pool parts are designed to be "invisible" in normal use, and all file access is supposed to happen through the pool. The main scenario for accessing pool parts is to perform manual file recovery, not much more.
     Duplication
     x2 duplication protects your files from at most 1 drive failure at a time; x3 can handle 2 concurrent drive failures. You can have 2 groups of drives and duplicate everything between those 2 groups with hierarchical pooling. Simply set up a pool of your "old" drives, and then a pool of your "new" drives. Then add both pools to another pool, and enable x2 pool duplication on the parent pool. That's really how StableBit DrivePool works. If you want something different, then that's not what this is.
     Balancing
     The architecture of the balancing engine is not meant to enable "Apple-like" magic (although, sometimes, Apple-like magic is pretty cool). The architecture was based on answering the question: "How can you design a balancing engine that scales very well and that doesn't require periodic balancing?" First, you would need to keep track of file sizes on the pool parts. That requires an indexing engine (which was the first idea), but better yet, if you have full control of the file system, you can measure file sizes in real time very quickly. You then need to take that concept and combine it with some kind of balancing rules. The result is what you see in StableBit DrivePool. The rest of the points here are either already addressed or don't make any sense.
     One area that I do agree with you on is improving the transparency of what each balancer is doing. We do already have the balancing markers on the horizontal UI's Disks list, but it would be nice to see what each balancer is trying to do and how they're interacting.
     Settings
     Setting filters by size is not possible. Filters are implemented as real-time (kernel-based) new file placement rules, and at the time of file creation, the file size is not known. Implementing a periodic balancing pass is not ideal due to performance (as we've seen in Drive Extender, IMO).
     UI
     UI is subjective. The UI is intended to be minimalistic. Performance is meant to give you a quick snapshot into what your pool is doing, not to be a comprehensive UI like the Resource Monitor.
     There are 2 types of settings / actions: global application settings and per-pool settings / actions. That's why you see 2 menus. Per-pool settings are different for each pool.
     "Check all" / "Uncheck all": yeah, good idea. It should probably be there. Will add it. As for wrap panels for the lists of drives in file placement, I think that would be harder to scan through in this case.
     In some places in the UI, selecting a folder will start a size calculation of that folder in the background. The size calculation will never interrupt another ongoing task, it's automatically abortable, and it's asynchronous. It doesn't block or affect the usage of the UI. Specifically in file placement, the chart at the bottom left is meant to give you a quick snapshot of how the files in the selected folder are organized. It's sorted by size to give you a quick at-a-glance view of the top 5 drives that contain those files, and of how duplicated vs. non-duplicated space is used. So yes, there's definitely a reason why it's like that.
     The size of the buttons in the Balancing window reflects traditional WinForms (or Win32) design. It's a bit different from the rest of the program, and that's intentional. The Balancing window was meant to be a little more advanced, and I feel Win32 is a little better at expressing more complex UIs, while WPF is good at minimalistic design.
     I would not want to build an application that only has a web UI. A desktop application has to first and foremost have a desktop user interface. As for an API, I feel that if we had a comprehensive API, then we could certainly make a StableBit DrivePool SDK available for integration into 3rd party solutions. Right now, that's not on the roadmap, but it's certainly an interesting idea.
     I think that the performance of the UIs for StableBit DrivePool and CloudDrive is very good right now. You keep referring to Telerik, but that's simply not true; neither of those products uses Telerik. By far, they mostly use standard Microsoft WPF controls, along with some custom controls that were written from scratch. Very sparingly, they use some other 3rd party controls where it makes sense. All StableBit UIs are entirely asynchronous and multithreaded; that was one of the core design principles. The StableBit Scanner UI is a bit sluggish, yes, I do agree, especially when displaying many drives. I do plan on addressing that in the next release.
     Updates
     Yes, it has been a while since the last stable release of StableBit DrivePool (and Scanner). But as you can see, there has been a steady stream of updates since that time: http://dl.covecube.com/DrivePoolWindows/beta/download/
     We are working with an issue-oriented approach, in order to ensure a more stable final release: a customer reports a potential issue to stablebit.com/Contact (or publicly here on the forum); technical support works with the customer and software development to determine if the issue is a bug; bugs are entered into the system with a priority; and once all bugs are resolved for a given product, a public BETA is pushed out to everyone. Meanwhile, as bugs are resolved, internal builds are made available to the community members who want to test them and report more issues (see the dev wiki: http://wiki.covecube.com/Product_Information). Simple, really.
     When StableBit CloudDrive was released, there were 0 open bugs for it. Not to say that it was perfect, but just to give everyone an idea of how this works. Up until recently, StableBit DrivePool had a number of open bugs; they are now all resolved (as of yesterday, actually). There are still some open issues (not bugs) though; when those are complete, we'll have our next public BETA. I don't want to add any more features to StableBit DrivePool at this point, but just concentrate on getting a stable release out.
     I'm not going to comment on whether I think your post is offensive and egotistic. But I did skip most of the "Why the hell do you...", "Why the fuck would you...", "I have no clue what the hell is going on" parts.
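     As promised under "Scattered files" above, here is that new file placement rule sketched in Python (a simplification for illustration; the real logic runs in our kernel driver at file creation time, and pick_volume is a made-up name):

        import shutil

        def pick_volume(volumes):
            # New files go to the volume with the most free space,
            # measured at the time of file creation.
            return max(volumes, key=lambda v: shutil.disk_usage(v).free)

        # Example: pick_volume(["D:\\", "E:\\", "F:\\"])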
  10. There was a certificate issue on bitflock.com (cert expired, whoops). Fixing it now; it should be back in a few minutes. Sorry about that. Additionally, there were some load / scalability issues, and those are now resolved as well. BitFlock is running on Microsoft Azure, so everything scales quite nicely.
  11. Forum Downtime
      We've been having a few more database issues within the past 24 hours due to a database migration that took place recently. There were problems with login authentication and general database load, which I believe are now all ironed out. Once again, this was isolated to the forum / wiki / blog.
  12. Our forum, wiki and blog web sites experienced an issue with the database server that caused those sites to be down for the past 34 hours. The issue has been resolved and everything is back up and running. I'm sorry for the inconvenience. StableBit.com, the download server, and software activation services were not affected.
  13. StableBit CloudDrive's encryption takes place in our kernel virtual disk driver, right before the data hits any permanent storage medium (i.e. the local on-disk cache). This is before any data is split up into chunks for uploading, and the provider implementation is inconsequential at the encryption point. The local on-disk cache itself is actually also split up into chunks, but those are much larger, at 100 GB per chunk, and don't affect the use of CBC.
      Our units of encryption for CBC are sector sized, and sectors are inherently read and written atomically on a disk, so the choice of CBC does not cause any issues here. This is similar to how BitLocker and other full disk encryption software works. Since the on-disk cache is a 1:1 representation of the raw data on the virtual disk, there is no place to store a MAC after the encrypted sectors on the disk. Instead, data uploaded to the provider (which is fully encrypted by this point) is signed upon uploading and verified upon downloading. Many other full drive encryption products (like BitLocker) use no MAC at all.
      The algorithm StableBit CloudDrive currently uses for encrypted chunk verification is HMAC SHA-512 over 1 MB sized units, or the chunk size, whichever is less. The HMAC key and the encryption key are derived from the master key using HKDF (https://en.wikipedia.org/wiki/HKDF).
      In terms of how much data needs to be uploaded, that depends on a few factors. For example, if you modify one byte on the disk, that byte needs to be uploaded eventually to the provider. One byte modified on an encrypted drive translates to the entire sector being modified (due to the encryption). The bytes per sector of the virtual disk can be chosen at disk creation time. Additionally, if you have an Upload threshold set (under the I/O Performance window), then we will wait until a certain amount of data needs to be uploaded before beginning to upload anything. If not, then the uploading will begin soon after the virtual disk is modified.
      If the provider doesn't support partial chunk uploads (like most HTTP based cloud providers), then we need to perform a read-modify-write for the chunk that needs to be altered in order to store that sector in the provider's data store. The entire chunk is downloaded and verified, the signed 1 MB unit is modified with the encrypted data and re-signed, and the whole chunk is re-uploaded. If the provider does support partial chunk uploads and partial chunk downloads (like the File Share provider), then only the 1 MB signed unit is downloaded, verified, modified, re-signed, and re-uploaded.
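      As an aside, the HKDF extract-and-expand construction mentioned above can be sketched in a few lines of Python (a generic RFC 5869 illustration using SHA-512, not StableBit's actual code; the "encryption" / "hmac" labels and key lengths are assumptions):

         import hashlib
         import hmac

         def hkdf(master_key, info, length, salt=b""):
             # RFC 5869 extract-and-expand with SHA-512 (HashLen = 64).
             prk = hmac.new(salt or b"\x00" * 64, master_key, hashlib.sha512).digest()
             okm, block, counter = b"", b"", 1
             while len(okm) < length:
                 block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha512).digest()
                 okm += block
                 counter += 1
             return okm[:length]

         # Derive two independent keys from one master key (labels are hypothetical):
         master = b"\x00" * 64  # stand-in for the real master key
         encryption_key = hkdf(master, b"encryption", 32)
         hmac_key = hkdf(master, b"hmac", 64)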
  14. Sorry to hear that it's back. If we can catch it as it's happening, I'm sure that we can get to the bottom of what's going on. Yes, I'm available, but it's better to go through the scheduling system so that we can get our time zones synchronized; my availability schedule is in that system as well. I've PMed the scheduling link to you, so please set up an appointment at your earliest convenience. Thanks,
  15. Oh, and regarding the partial reads / no partial reads issue, there is an Issue open for that in the form of a feature request. Right now, StableBit CloudDrive reads all checksummed / signed data in 1 MB units. So even if 1 byte needs to be read, 1 MB will actually be read and cached. Can we increase that 1 MB to, let's say, 10 MB? Yes we can, but I'm afraid that it would adversely affect read performance for lower bandwidth users. So that's something that needs to be tested, and that's why this is in the form of a feature request.
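      To illustrate the read expansion described here, a small hypothetical helper (the 1 MB unit size is from this post; the function itself is made up for illustration):

         UNIT = 1024 * 1024  # checksummed / signed data is read in 1 MB units

         def expand_read(offset, length):
             # Round the requested byte range out to whole 1 MB units,
             # since verification only works on complete units.
             start = (offset // UNIT) * UNIT
             end = -(-(offset + length) // UNIT) * UNIT  # ceiling to the next unit
             return start, end - start

         # Reading 1 byte at offset 5 still pulls (and caches) a full 1 MB unit:
         # expand_read(5, 1) -> (0, 1048576)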