Posts: 254
Days Won: 50
Reputation Activity
-
Alex got a reaction from gringott in StableBit DrivePool Per-Folder Duplication
Yep, that's what I was thinking as well when I made the initial post. But many people seem to love the feature, so it's not going away.
-
Alex got a reaction from Tardas-Zib in The Roadmap
I've been thinking about how I can better communicate what's in store for each of our products; there are three now, and another one is in the works. Starting today, I'll be setting up topics in each forum that I'll update on a regular basis. Each post will track what the future holds for its product.
I try to keep a lot of the development driven by user feedback, but most of that feedback doesn't happen in the public forum (it usually happens in tech support tickets). I'd just like to take this opportunity to highlight the direction that each product is heading in: a kind of roadmap.
I'll be setting up those posts today, so look for them popping up soon in each respective forum.
-
Alex got a reaction from imxjihsk in The Roadmap
-
Alex got a reaction from Tardas-Zib in StableBit DrivePool - Controlling Folder Placement
I like writing these posts because they give me feedback on what the community is really interested in. I can see that my last post, about the Scanner, was not very interesting; it was probably too technical, and there's probably not much to add to what I've already said.
Well, this time let's talk about StableBit DrivePool. In particular, I'd like to talk about DrivePool beyond 2.0.
Controlling Folder Placement
I think that I have a few great ideas for DrivePool 2.1+, but some of them depend on the ability to control folder (or file) placement per pool part. I've hinted at this capability in the thread about taking out per-folder duplication, and I think that I've figured out how we can make this work.
What I would like to do in future versions is give you guys the ability to associate folders with one or more disks that are part of the pool, so that any files in those folders would be stored only on those pool parts (unless they're full).
This should be trivial to implement at the file system level; the balancing framework would need to be enhanced to support it, and I believe I've worked out how to make that happen.
Theoretically, you should even be able to use wildcard patterns such as /Virtual Machines/Windows* to associate all of those files with a group of pooled disks.
What do you guys think, is this worthwhile doing?
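As a rough illustration of the idea (my own sketch, not DrivePool's implementation; the rule format and disk names are invented), wildcard placement rules could map a pooled file to the set of disks allowed to hold it:

```python
import fnmatch

# Hypothetical placement rules: each maps a pool-relative wildcard pattern
# to the set of pool-part disks allowed to hold matching files.
# Rule order matters: the first matching pattern wins.
PLACEMENT_RULES = [
    ("/Virtual Machines/Windows*", {"Disk1", "Disk2"}),
    ("/Media/*",                   {"Disk3", "Disk4"}),
]

def allowed_disks(pool_path, rules=PLACEMENT_RULES, all_disks=None):
    """Return the disks a pooled file may be placed on.

    Files matching no rule may go on any pool part, mirroring the
    default balancing behavior described in the post.
    """
    for pattern, disks in rules:
        if fnmatch.fnmatch(pool_path, pattern):
            return set(disks)
    return set(all_disks or [])
```

A balancer built on this would still need an overflow policy for when the associated disks fill up, which the post notes ("unless they're full").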
-
Alex got a reaction from meeldilla in Welcome to the new Forum
Welcome everyone to the new Forum!
In an effort to support our growing community, the old Vanilla-based forum has been retired. It will remain available at its old URL, http://forum.covecube.com, but you won't be able to post anything new there.
This new forum can be accessed at http://community.covecube.com.
Login Credentials
If you've created an account on the old Vanilla forum, you can actually use the same username and password to log into this forum.
In addition, this new forum now supports logging in with your Windows Live ID.
Community
With this new forum, my hope is to give our community a place to talk about technology as it surrounds our products, and not just focus on the products themselves.
To this end, I've set up a few sub-forums:
StableBit Scanner - Compatibility
With all the different hardware out there (disk controllers, hard drives, etc.) it's tough to know what works best, especially when dealing with SMART passthrough.
I'd like this to be the place to discuss what works and what doesn't.
StableBit DrivePool - Hardware
StableBit DrivePool is a very fast pooling solution that is capable of creating very large pooling arrays.
Share your setup, brag about your pool size, and get hardware advice here.
Nuts & Bolts
Recently I've been overhauling the manuals section over at stablebit.com/support (yes, it's way overdue), and I found it amazing how many intricacies there are to these little programs.
I realized that it might be fun to talk about these things so I've set up a new forum called Nuts & Bolts where I will be posting a topic every now and again about some technical aspect of our software. The topics will be open for discussion, so we can go back and forth about what people think about that particular topic.
Moderators
Our "Resident Gurus" Shane and saitoh183 will be joining us in the new forum as moderators, and we have Drashna (technical support) and myself as the Admins.
-
Alex got a reaction from Christopher (Drashna) in Software I use and recommend
I just have to mention that, personally, I've used SyncBack SE (not Pro) for years. It's a great piece of software and reasonably priced too.
-
Alex got a reaction from gringott in Going to the Cloud
Secure cloud storage has been foremost on my mind. I've been thinking about different options that would offer practical and affordable cloud storage. There will be more coming from Covecube regarding cloud storage, so stay tuned; the wheels are already in motion.
-
Alex reacted to daveyboy37 in StableBit DrivePool - Controlling Folder Placement
This is something that I have hoped for, for a very long time. I have never really been keen on having things scattered around the pool, especially with music: 10 album tracks scattered over 5 or 6 drives. But then I'm probably a bit OCD.
I'm sure for people who have various devices streaming to different rooms this must be a good thing: knowing that all the Disney films are on one hard drive for the rug rats, and that the teenage daughter can watch her Twilight knowing it's on a separate drive, so there's no risk of intensive I/O, and so on. And yes, I know that this could be achieved by organising multiple pools. However, when you get to 13 drives and around 22TB of data, creating new pools seems like a hassle.
First thought is that this would eliminate the need.
Second thought is that, once implemented, folder placement would to my mind simplify the operation of creating separate pools, and may actually lead me to do it instead of just thinking about it.
I'm all for it!!!
-
Alex reacted to Rychek in StableBit DrivePool - Controlling Folder Placement
At first I wasn't sure I would have any use for such a feature, but the more I think about it, the more I like the idea of having more control over what goes where. It could be very useful as my children get older and skilled enough to use the server. Bring on the progress!
Oh, yeah, and thanks so much for all your hard work Alex! Drivepool and Scanner are awesome and the WS 2012 E integration in the last update felt like an early Christmas present.
-
Alex got a reaction from Christopher (Drashna) in StableBit DrivePool - Controlling Folder Placement
I can be a bit wordy by my very nature. But yes, that's exactly what I'm talking about, controlling which files go onto which pool part.
And, in the future, a pool part may not necessarily represent a single local physical disk, which would make this even more interesting.
-
Alex got a reaction from Christopher (Drashna) in Going to the Cloud
Oh, and thank you!
I'm no expert at public speaking, but I do like to talk about the products that I've built, which I believe in very strongly.
-
Alex got a reaction from Christopher (Drashna) in WS2012E Dashboard Responsiveness with Scanner Scanning
Just to follow up on this,
I've done extensive troubleshooting on this issue with Kihim in a support ticket. It seems like it's caused by the system getting bogged down at certain times, leading to a periodic slowdown in the UI. At least, that's what I can see from the Scanner's built-in UI performance profiler (accessed by pressing P in a BETA build).
I don't see any Scanner UI task running away with the CPU. I've also set up a local test rig with similar specs, 26 disks and a 2 GHz 8-core AMD server, and have run the Scanner UI for hours looking for any slowdowns, and I couldn't find any.
But I think the bottom line here is that the Scanner uses WPF to render its UI, and WPF requires more CPU resources than your typical Win32 application. I think that's really the core issue. So in the future I would like to offer a "lite" version of the UI for situations where CPU resources are scarce. I imagine a simple drop-down that will let you switch between the "standard" and "lite" versions of the UI, letting you run whichever one you want.
-
Alex reacted to gringott in Going to the Cloud
I like my "cloud" local and private. Therefore, no external access for me. Feel free to develop for others, however.
I do the math every year or so. Any serious online storage [TBs] costs more than buying and rotating drives every three years, which of course I haven't had to do that frequently.
You can check for yourself, how much does it cost for 55 plus TBs online 24/7?
Alex, I "saw" [heard] you being interviewed on YouTube in "StableBit on Home Server Show 219".
Very good representation of the product. Yes, I know it was from April, but I just found it this morning.
-
Alex got a reaction from nouxuntainutt in Going to the Cloud
It seems like just about every application today is going to the cloud.
What do you guys think of us adding tighter integration with the cloud (so to speak)?
For instance:
How about saving your StableBit Scanner disk scan history online?
This would mean that if you plug the same hard drive into a different computer, it will instantly know when that disk was last scanned.
Perhaps we can keep track of your disk's temperature history as well, and synchronize that with the cloud?
You would be able to query for temperature history for any disk, from any point in time.
We could do the same with disk performance and disk uptime (when it goes to sleep and when it's running).
For DrivePool, we can augment remote control peer discovery with a centralized server. So DrivePool will automatically know about every machine running DrivePool with the same Activation ID.
Whenever you add or remove a disk, DrivePool would save that event to the cloud. You would be able to see pool participation history and disk space utilization over time.
We could build some mobile apps around this data to let you query and access it. Would you find this service valuable, and would you be willing to pay a small yearly fee for it (say, $4.99 / yr)?
-
Alex got a reaction from gringott in StableBit Scanner - Identifying Disks Uniquely
Keeping this board on topic, I'd like to talk about something pretty technical that is central to how the StableBit Scanner operates, and perhaps get some feedback on the topic.
One of the benefits of running the StableBit Scanner is not just to predict drive failure, but to prevent it. The technical term for what the StableBit Scanner performs on your drives to prevent data loss is called Data Scrubbing (see: http://en.wikipedia.org/wiki/Data_scrubbing). By periodically scanning the entire surface of the drive you are actually causing the drive to inspect its own surface for defects and to recognize those defects before they turn into what is technically called a latent sector error (i.e. a sector that can't be read).
In order to do the periodic surface scan of a disk, the StableBit Scanner needs to know when it scanned a disk last, which means that it needs to identify a disk uniquely and remember which sectors it has scanned last and when. The StableBit Scanner uses sector ranges to remember exactly which parts of which disk were scanned when, but that's a whole other discussion.
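As a toy illustration of that "whole other discussion" (the data structure below is invented, not the Scanner's actual format), timestamped sector ranges could be tracked like this, with newer scans trimming older overlapping entries:

```python
# Illustrative sketch of tracking which sector ranges of a disk were
# last scanned, and when. Not StableBit's actual implementation.
class ScanHistory:
    def __init__(self, total_sectors):
        self.total_sectors = total_sectors
        self.ranges = []  # list of (start, end_exclusive, scanned_at)

    def record(self, start, end, scanned_at):
        """Record that sectors [start, end) were scanned at `scanned_at`,
        trimming any older overlapping entries."""
        trimmed = []
        for s, e, t in self.ranges:
            if e <= start or s >= end:      # no overlap: keep as-is
                trimmed.append((s, e, t))
            else:                           # keep only non-overlapping parts
                if s < start:
                    trimmed.append((s, start, t))
                if e > end:
                    trimmed.append((end, e, t))
        trimmed.append((start, end, scanned_at))
        self.ranges = sorted(trimmed)

    def unscanned_since(self, cutoff):
        """Return ranges never scanned, or last scanned before `cutoff`;
        these are the regions a periodic scrub would revisit first."""
        fresh = sorted((s, e) for s, e, t in self.ranges if t >= cutoff)
        gaps, pos = [], 0
        for s, e in fresh:
            if s > pos:
                gaps.append((pos, s))
            pos = max(pos, e)
        if pos < self.total_sectors:
            gaps.append((pos, self.total_sectors))
        return gaps
```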
I would like to focus this post on the issue of identifying a disk uniquely, which is absolutely required for data scrubbing to function properly, and this was overhauled in the latest BETA (2.5).
The original StableBit Scanner (1.0) used a very simple method to identify disks: it used the MBR signature to differentiate disks from one another.
For those who don't know what an MBR signature is, I'll explain it briefly here. When you buy a new disk from the store, it's probably completely blank. In other words, the disk contains all 0's written to it throughout. There is absolutely nothing written to it to differentiate it from any other disks (other than the serial number, which may not be accessible from Windows).
When you first connect such a blank disk to a Windows machine it will ask you whether you want to "initialize" it. This "initialization" is actually the writing of the MBR (master boot record, see: https://en.wikipedia.org/wiki/Master_boot_record), or GPT (GUID Partition Table) if you so choose. The MBR and GPT define the header (and perhaps footer) of the disk, kind of like when you write a letter to someone and you have a standard header and footer that always follows the same format.
One of the things that initializing a disk does is write a unique "signature" to it in the MBR or GPT. It's simply a long random number that identifies the disk uniquely. The problem with an MBR signature is that the random number is not large enough, so it is only meant to be unique within a single system. If you connect a disk from a different computer, the signature on the foreign disk has a small chance of being the same as that of a disk already on the system it's being connected to.
Well, for StableBit Scanner 1.0 this was a problem. It would recognize the new disk as being the old disk, which would cause all sorts of issues. For one, you can't have the same disk connected to the same computer twice; that's simply not possible, so we would write out an error report and crash.
StableBit Scanner 2.0 improved things a bit by utilizing the GPT signature, which is guaranteed to be unique across multiple systems. The only problem with using the GPT disk signature to identify disks uniquely is that disk cloning software can place the same signature on two different physical disks, which ends up causing the same problem. In addition, many disks still use MBR, so we can't rely solely on GPT to resolve this issue.
As you can see, this has not been an easy problem to solve.
In the latest StableBit Scanner 2.5 BETA I've completely overhauled how we associate disk scan history (and other persistent settings) with each disk in the system. This is a major change from how things used to work before.
In 2.5 we now have a brand new Disk ID system. The Disk ID system is heuristic-based and picks the best disk scan history that it knows of based on the available information. We no longer rely on a single factor such as an MBR or GPT signature. Instead, we survey a combination of disk identifiers and pick the disk scan history that best fits the available data.
Here is the list of factors that we use, starting from the highest priority:
1. Direct I/O disk serial number
2. GPT signature + WMI serial number + disk size
3. GPT signature + WMI model + disk size
4. GPT signature + disk size
5. MBR signature + WMI serial number + disk size
6. MBR signature + WMI model + disk size
7. MBR signature + disk size
See the change log for more info on what this change entails. I hope that you give the new build a try.
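To make the priority idea concrete, here is a simplified sketch of how such a matcher could work (identifier names and the dict format are my own illustration, not the Scanner's internals): walk the factor list from highest to lowest priority and return the first stored history whose identifiers agree with the disk's on every factor.

```python
# Hypothetical priority list along the lines of the factors above.
PRIORITY = [
    ("direct_serial",),
    ("gpt_signature", "wmi_serial", "size"),
    ("gpt_signature", "wmi_model", "size"),
    ("gpt_signature", "size"),
    ("mbr_signature", "wmi_serial", "size"),
    ("mbr_signature", "wmi_model", "size"),
    ("mbr_signature", "size"),
]

def match_history(disk, histories):
    """Return the stored scan history that best fits `disk`, or None.

    `disk` and each history are dicts of identifiers; a factor set only
    applies when the disk actually reports every field it names.
    """
    for factors in PRIORITY:
        if not all(disk.get(f) for f in factors):
            continue  # disk lacks a field; fall through to a weaker factor
        for hist in histories:
            if all(hist.get(f) == disk[f] for f in factors):
                return hist
    return None
```

The point of the ordering is that a direct-I/O serial number, when available, beats any signature-based match, so a cloned signature alone can no longer attach the wrong history to a disk.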
This post has definitely been on the technical side, but that's what this forum is all about.
Let me know if you have any comments or suggestions (or can find a better way to identify disks uniquely).
-
Alex reacted to mrbiggles in StableBit DrivePool Per-Folder Duplication
I agree with DrParis: per-folder duplication is a must-have feature of DP, along with >2 duplication counts.
For me, the simplicity of managing one pool, with variable duplication counts depending on the importance and volume of my data, is the whole attraction of DP and the thing that makes it stand head and shoulders above the others. I never have to worry about (manually) juggling data between individual disks or backup schemes or complex raid / parity schemes or any of that tedium again. For me it's the perfect balance between efficient storage and reliable resiliency to disk failures (and I've had a few). And I don't have to worry about my future needs, I can just adjust a duplication count here and there, add some storage and grow my pool reliably and smoothly.
To explain my rationale...
I have lots of disks, large volumes (90%+) of low priority data (TV recordings etc), and small volumes of very high priority data (family pictures etc) - and I can't imagine I'm alone in this balance. I love the fact that I don't have to duplicate the low priority data (wasting precious and expensive space), yet can keep lots of copies of my important docs and photos and never worry about another hard drive failure again. I can just throw in another disk when I run out of space and add it to the pool. A marvellous, almost maintenance free, reliable and efficient system - with one big simple pool.
On parity: parity wouldn't be any good to me, as I'd waste a large amount of space adding parity data for data I don't care much about, and it would waste my biggest (and generally newest) hard drive, as that's the one required for parity. It assumes all your data is equally important. So in my PVR machine, for example, where I have lots of odd disk sizes, it becomes complicated and inefficient. I'd much rather just pool together the mismatched disks into one lovely simple space for my unduplicated recordings, and have some other folders duplicated 3 or more times for important files on that computer (so I can use the PVR as a network backup for important stuff). And whilst I have the space, I can duplicate my low priority stuff also, and then just remove the duplication as I start to run out of space, or just add another disk or two to the pool, change a duplication setting and voila, it all just gets rebalanced in the background. So perfect and simple! Not to mention wonderfully scalable and future proof.
On using multiple pools for differing redundancy: definitely not. DP doesn't allow me to add multiple pools on the same set of disks, and even if it did, this approach would be a real pain for me. I'd end up having to set up a different pool for each type of data for which I might conceivably want to vary the duplication (photos, TV, docs), so it would end up being a cumbersome mess. Otherwise I'd have to start manually shovelling data between pools whenever I change a duplication count, and that would be so messy.
PS. I acknowledge that a per-folder parity system, with variable parity, would architecturally be perhaps the perfect solution, but I'm more than happy to waste a bit of space for the simplicity and reliability of the DP per-folder file duplication approach. If I could trust a parity implementation, and all my disks were the same size, and all my data was the same priority, and I knew exactly what my future redundancy requirements were, and I knew that they'd never change, I'd consider parity. But this is not the case!
In short - please don't remove these two fabulous features!
-
Alex got a reaction from Christopher (Drashna) in Beta Updater
I've been debating how to handle automatic updates while we're in BETA.
There are two issues, I think:
1. Too many updates are annoying (e.g. Flash).
2. Because this is a BETA, there is the potential for a build to have some issues, and pushing it out to everyone might not be such a good idea.
So I've come up with a compromise. The BETA automatic updates will not push out every build, only builds that have accumulated some major changes since the last update. Also, I don't want to push an automatic update as soon as a build is out, because I want to give our users the chance to submit feedback in case there are problems with that build.
Once we go to final release, every final build will be pushed out at the same time that it's published.
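The gating described above can be sketched in a few lines (thresholds and names are invented for illustration; this is not StableBit's actual updater logic):

```python
# Illustrative update-gating policy: final builds ship immediately;
# BETA builds ship only after accumulating major changes and a soak
# period that leaves room for early feedback on problems.
def should_push(channel, major_changes_since_last_push, days_public, soak_days=7):
    if channel == "final":
        return True  # every final build is pushed as soon as it's published
    return major_changes_since_last_push > 0 and days_public >= soak_days
```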
-
Alex got a reaction from Christopher (Drashna) in Beta 320 BSOD
I've looked at the dump that referenced this thread; this has already been fixed, and the fix will be available in the next public build.
-
Alex got a reaction from Henrik in Server Backup and duplication question
If you are looking for a way to back up only the "master" copy of a file and not the duplicated part, that is not possible.
DrivePool has no notion of a master copy vs. a backup copy. Each copy of the same file is exactly identical, and the copies can reside on any pool part.
If you want to back up your duplicated pooled files without backing them up twice, you will need to use a backup solution that does deduplication (e.g. the client computer backup engine that is part of Windows Home Server 2011 / Windows Server 2012 Essentials). Alternatively, you can use a file-based backup system to back up directly from the pool (such as SyncBackSE).
-
Alex got a reaction from Christopher (Drashna) in Symbolic Link Support
Thanks for testing it out.
My initial implementation in build 281 above was based on the premise that we can reuse the reparse functionality that's already present in NTFS.
I've been reading up some more on exactly how this is supposed to work and playing around with some different approaches, and it looks like the entire concept of reusing NTFS for this is not going to work.
So don't use build 281.
I'm going to take the current reparse implementation out and rewrite it from scratch using a different approach.
Using this new approach, reparse points (or symbolic links) will appear as regular files or directories on the underlying NTFS volume, but will work like reparse points on the pool. This will also eliminate the burden of accounting for reparse points when duplicating or rebalancing, since they will be regular files on the underlying NTFS volume.
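The described design, where a pool symbolic link is just a regular file on the underlying volume that only the pool layer interprets, can be illustrated with a toy sketch (the stub format, marker, and function names are all invented; the real implementation lives in a file system driver, not user-mode code):

```python
import json

# A pool "symlink" stored as an ordinary file holding only the reparse
# data (here, the link target). Duplication and rebalancing see nothing
# but a small regular file; only the pool layer resolves it.
MARKER = "POOL_SYMLINK_V1"

def write_pool_symlink(stub_path, target):
    """Create a stub file representing a pool symlink to `target`."""
    with open(stub_path, "w") as f:
        json.dump({"marker": MARKER, "target": target}, f)

def resolve_pool_path(path):
    """Return the link target if `path` is a pool symlink stub,
    otherwise return `path` unchanged (it's a regular file)."""
    try:
        with open(path) as f:
            data = json.load(f)
        if isinstance(data, dict) and data.get("marker") == MARKER:
            return data["target"]
    except (OSError, ValueError):
        pass  # unreadable or not a stub: treat as a regular file
    return path
```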
Look for that in a future build. I don't think that it will make it into the next build because that one is concentrating on updating the locking and caching model, which is a big change as it is.
-
Alex got a reaction from Shane in Questions regarding hard drive spindown/standby
This is actually a fairly complicated topic.
Let's start by talking about how normal standby works without the StableBit Scanner getting involved.
Windows "Put the Disk to Sleep" Feature
Normally, Windows will monitor the disk for activity and if there is no disk activity for some preset amount of time it will put the disk to "sleep" by flushing all of the data in the cache to the disk and sending a special standby command to it. At the same time, it will remember that the disk is asleep in case any other application asks.
Shortly after the disk goes to sleep, the StableBit Scanner will indicate the fact that the disk is asleep in the Power column. Normally, the Scanner gets the power status of the disk by querying Windows and not the disk.
It does not query the disk directly for the power state because Windows considers this power query disk activity and wakes up the disk as a result.
Now, things get a bit more complicated if you want to include the on-disk power management in this picture.
Disks can optionally support these features, which can put them to sleep without Windows knowing it:
1. Advanced Power Management
2. Standby Timer
Advanced Power Management
This is a technology that implements power consumption profiles. For instance, if you don't care about performance but want maximum power savings, then you can tell your disk just that. Simply set the Advanced Power Management to Minimum Power Consumption. Or you can do the exact opposite by setting it to Maximum Performance (which guarantees no standby).
With Advanced Power Management you don't concern yourself with "sleep timeouts", like in Windows. You simply state your intent and the disk will adjust various parameters, including the standby time, according to your setting.
The implementation of Advanced Power Management is completely up to the manufacturer of the drive, and there are no specifications that explicitly state what each power mode does. This entire feature may not even be supported, depending on the disk model.
Standby Timer
The Standby Timer is more widely supported because it is an older feature. You simply specify how much disk inactivity should pass before the disk is put to sleep. This is similar to how things work in Windows, except that the low power mode is initiated by the disk firmware itself.
Again, the implementation of this is up to the manufacturer of the drive.
StableBit Scanner "Put into Standby"
In the StableBit Scanner, you can right click on a disk and put it into standby mode. What this does is send a power down command to the disk. This type of power down is equivalent to what Advanced Power Management or the Standby Timer would do.
More importantly, when a disk is powered down in this way, Windows will not be aware that the disk is in a low power state, and will continue to report that the disk is still powered up. This is not an issue because the disk will simply spin up the next time that Windows tries to access it.
But this leaves the StableBit Scanner with a dilemma. If we can't query the disk for the power state directly, how do we report the true power state of the disk? What the StableBit Scanner implements is a power state in which it's not sure whether the disk is in standby or active, and this is what you were seeing.
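The three-way reporting that results can be modeled with a small sketch (a simplified model of the dilemma described above, not the Scanner's code; the names are invented):

```python
from enum import Enum

class PowerStatus(Enum):
    ACTIVE = "active"
    STANDBY = "standby"
    UNKNOWN = "standby or active"   # the ambiguous state described above

def report_power(windows_says_active, put_down_behind_windows,
                 query_disk_directly, disk_state=None):
    """Decide what power state to display for a disk."""
    if query_disk_directly:
        # Asking the drive itself is authoritative, but the query counts
        # as activity, so Windows' own sleep timer never fires.
        return disk_state
    if not windows_says_active:
        return PowerStatus.STANDBY          # Windows put it to sleep
    if put_down_behind_windows:
        # Windows still thinks the disk is awake; without touching the
        # disk we can't know, so report the ambiguous state.
        return PowerStatus.UNKNOWN
    return PowerStatus.ACTIVE
```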
Forcing the StableBit Scanner to Query the Power Mode from the Disk
If you want to use on-disk power management exclusively, and you don't care about Windows putting your disks to sleep, you can instruct the StableBit Scanner to query the power mode directly from the disk.
When this is enabled, you will no longer see the standby or active message, but Windows will never try to put that disk to sleep. That's why this is off by default.
SMART
And just to make things even more complicated, sometimes a disk will wake up when it's queried for SMART data.
To this end the StableBit Scanner implements some more settings to deal with this:
I hope that this clears things up.
-
Alex reacted to Shane in First OFF TOPIC! New competition is coming to Town!
Thanks for the explanation, Alex. It's a shame that shell extensions are so inefficient; if I have to choose between "works well" and "looks slick" I'll pick the former every time.
-
Alex reacted to saitoh183 in First OFF TOPIC! New competition is coming to Town!
Nothing can stop you from adding it as an option down the line.
-
Alex reacted to Doug in RocketRAID 2760 PCI-Express 2.0 SATA III (6.0Gb/s)
Specifications:
Speed: SATA 3 (6.0 Gb/s)
Ports: 6x internal Mini SAS SFF-8087
Slot: PCI-Express 2.0 x16
Chipset: Marvell 9485 SAS/SATA controller chip
Firmware
Firmware: 1.3
AHCI compatible: No (proprietary driver required)
Link: http://www.highpoint-tech.com/USA_new/CS-PCI-E_2_0_x16_Configuration.html
Driver: 1.2.12.1023 (10/23/2012), rr276x.sys
Link: http://www.highpoint-tech.com/USA_new/CS-PCI-E_2_0_x16_Configuration.html
Performance
SATA III SSD: Burst: 481 MB/s, Drive: Intel SSDSC2CT180A4, OS tested: Windows Home Server 2011
SATA II HDD: Burst: 171 MB/s, Drive: Seagate ST3200045AS, OS tested: Windows Home Server 2011
SATA III HDD: Burst: 459 MB/s, Drive: Seagate ST2000DM001, OS tested: Windows Home Server 2011
-
Alex got a reaction from Shane in 2.x BETA - "Duplicate" default is off?
To be honest, I wasn't 100% comfortable with the arrow not having any text next to it, but I doubt that any designer is ever 100% satisfied with their design. You always want to keep tweaking it to make it perfect, but there are time constraints (plus, we can't exactly afford Jony Ive here).
I decided to ship it and listen for feedback, and based on that feedback I've slightly modified the pool options menu.
It looks like this now:
Let me know what you think.