Covecube Inc.




Posts posted by Alex

  1. First off... great start! I've been waiting for this for some time since posting a while back, and I'm so happy to see it. Now to get more drives to make room for this to work ;)


    So to my questions and suggestions.


    In the Balancing > File Placement > Folders section I see the list of folders and then the list of available drives on the right for creation of that relationship...

    1.  Would it be possible to add the actual drive model/size to the list of drives?  

    2.  If I remember correctly you said you do some level of performance measuring in DP (not sure to what extent).   Does this info lend itself to applying a "good/better/best" color flagging to the drives in this list to gauge relative actual performance?

    3.  If not (or maybe as an enhancement to 2 above) have you thought of adding the ability to, on a schedule or manually, "test" all drives for performance indexing that can be applied to my suggestion 2 or some other way?

    4.  If you are selecting folders/drives in the folders tab and you hit "save", the entire dialog closes. Please add an "apply" button to create the actual rule, rather than having every click create a rule. The problem here is you can easily end up with rules that don't automatically remove themselves if you "uncheck" what you changed.

    5.  When navigating down folders in the tree, child folders should always display the drive selection of their parent folder unless explicitly changed.

    6.  Along the lines of the apply/save comment, maybe it could be "apply to rules" in the folders tab for the current selection(s) and then a ". Of course I know buttons should usually have a single word, but I'm just stating it for example's sake.




    Have you thought of creating more traditional rules that can be grown/expanded as needed? For example I might have a rule called "HD movies" and within that rule are all the folders and drives that apply to that rule.   So I can have one rule for a given set of folders and assigned drives.  If I delete that rule I remove all those custom relationships.  If I ever want to assign a new folder to the rule I can just add the folder to the rule without having to create an entire new set of folder/drive assignments per folder.   Assuming this makes sense.    So in your Rules pane you'd have:




    -- Folder 1

    -- Folder 2

    -- Folder 3


    -- Drive 1

    -- Drive 2

    -- Drive 3




    -- Folder 5

    -- Folder 6


    -- Drive 2

    -- Drive 4

    -- Drive 5



    Questions/comments aside, thanks for such a great product. I don't know what I'd do without it... maybe use RAID again /shudder.


    1. Yep, we can add that.
    2. We don't measure actual physical disk performance as of right now in a way that we can use like that.
    3. I would hate to make the user run through a manual test, it really should be automatic.
    4. Ah, if I understand you correctly, they actually do remove themselves. For folder based rules, "meaningless" rules are not saved. So a rule with all drives checked will never be saved, even though it is temporarily created while you are in the Folders interface.
    5. I'll think about changing the code to respect this.
    6. IMO, apply causes confusion; I try to avoid that paradigm.






    If I understand you correctly, you mean setting up "rule groups", i.e. a set of rules that place files on the same pool parts. Yes, I have thought about it. I've even imagined a UI for it: we could have color coded rectangles around sets of rules representing the fact that they are in the same group. I think that this is too complex for where we are today. We need to take this one step at a time: first make sure that the new system works reliably, and then we can add on to it.

  2. Multiselect works brilliantly. Saved a bunch of time setting up my rules again.


    Thank you, it was a bit of work but worth it.


    Re-ordering rules still doesn't seem to work for me. I can drag and drop and sort them exactly how I want, but when I hit save and re-open the rules list they have completely reverted. I reset all settings, de-activated my license, uninstalled, rebooted, deleted C:\ProgramData\StableBit DrivePool, reinstalled, reactivated, and set back up my rules. Same behavior. Not sure what's going on?


    It could be a bug. Can you take some screen shots of the rules before you click save and of what you see after?


    Needless to say, I don't see that over here.

  3. Alright guys, starting with build 511 file placement rules will no longer include added drives by default. There is now a new option to enable that on a rule-by-rule basis.


    I've also added the ability to rearrange folder based rules, as long as you don't break folder depth rules. See attached image to illustrate what this looks like now.


    Download: http://dl.covecube.com/DrivePoolWindows/beta/download/


    Edit: Multiselect implemented in build 512.


  4. I just updated to build 510. The order of the rules now looks to be a pretty random jumble and when I manually position them the order reverts to the jumble after I hit save and re-open the list of placement rules. Any ideas?


    By default, the rules are now ordered like this (top being highest priority):

    • Manually entered rules are inserted at the top.
    • Folder based rules look for a place to insert themselves starting at the bottom and moving towards the top. Once a folder placement rule encounters another folder placement rule with the same or a higher path depth, it inserts itself right under that rule. If it doesn't find such a rule, it inserts itself above the topmost folder placement rule.
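The insertion order described above can be sketched roughly like this. This is just my reading of the description, not DrivePool's actual code, and the rule representation is made up for illustration:

```python
def insert_folder_rule(rules, new_rule):
    """Insert a folder-based rule into the ordered list (index 0 = highest
    priority). Depth is the number of path separators in the rule's path."""
    def depth(rule):
        return rule["path"].count("\\")

    # Scan from the bottom towards the top for a folder rule whose path
    # depth is the same or higher, and slot in right under it.
    for i in range(len(rules) - 1, -1, -1):
        r = rules[i]
        if r.get("folder_based") and depth(r) >= depth(new_rule):
            rules.insert(i + 1, new_rule)
            return

    # Otherwise go above the topmost folder-based rule (manually entered
    # rules at the very top keep their position; simplified here).
    for i, r in enumerate(rules):
        if r.get("folder_based"):
            rules.insert(i, new_rule)
            return
    rules.append(new_rule)
```

So a rule for \Media\Movies ends up above a rule for \Media, because deeper paths carry higher priority.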

    The system tries to put any existing rules (prior to priority being implemented) into an order that fits the rules above, but I haven't tested that part thoroughly yet.


    If you're having trouble with your existing rules in build 510, remove all of them, hit save and redefine them.


    Another question on File Placement Rules!!


    I set all my folders to the specified drives with File Placement Rules. All went well and everything is on the drives that I want it to be on.

    However I added a new drive earlier and DrivePool is set to allow anything and everything to go on it. So if my Music is set to be only on Mount Point 4, adding another drive (Mount Point 8) will automatically allow music to be placed on Mount Point 8, after going to all the trouble of making sure that the music was only on Mount Point 4.

    Multiply this by about 20 server folders and it's a pain.


    Can I suggest that the default for new drives is that all the Folder Placement checkboxes are unticked rather than ticked. I know adding a drive is not an everyday event, but I still think the default should be to allow nothing on the drive until you allow it. Had I not realised in advance and then hit balance, all my neat and tidy music would again have been scattered over 2 drives. :lol:


    Ok, I'll add a checkbox for each rule that will say something like "Place files on new drives added to the pool." It will be unchecked by default.


    I noticed this as well. It would be nice to be able to select multiple placement rules and select/deselect drives for all of those rules at the same time. Something a little like setting access permissions on a group of folders. Drives that differ within the group would be in a semi-selected state unless you tick/untick them. Drives that are common between the rules would appear normally.


    That way adding a new drive to the pool would be a fairly trivial matter to add to a group of placement rules.


    Multiselect, hmm... Perhaps. I'll see how difficult it would be to adjust the existing code.

  5. That sounds like an excellent way to handle this on all counts


    I've just finished implementing priorities for file placement rules in an internal BETA (build 510). Available here: http://dl.covecube.com/DrivePoolWindows/beta/download/


    I've done some preliminary testing and the new priority based rules seem to be working pretty well. I'm going to run this build through the official rounds of testing and release it as a public BETA, if no issues arise.

  6. consider it a "feature" request... for me at least that would be the best way to do it. I suppose, though, that in a way it is already built in: if you just physically remove the drive, it would then be "missing". If you were to then remove that "missing" drive from the pool, then DrivePool would re-duplicate what was needed at that point - is that correct?


    Yep, that's another way to remove a drive and it's also a "very quick" way. BUT, you have to be absolutely sure that duplication is consistent when you do that. If you physically unplug a drive before every file on the pool is duplicated, some files on the drive that you pulled might not be on the pool. So overall, that's a more "manual" way to do it. When using "Duplicate later" you don't run that risk.

  7. I wish it were possible to specify MB instead of GB, because I have many small <1 GB files that are left sitting around on the feeder disk and I only have my archive disks configured for backup. So I have opened a ticket with the developer to give us a better option to flush the feeder disk completely.


    Balancing can be complicated. Because of all the different balancing scenarios that are possible, it's difficult to understand what settings to use for each particular scenario.


    Christopher has suggested this, and I agree, we need some kind of guides for each particular setup.


    As for flushing the disk completely, I've implemented that and it's available in the latest BETA (CCR: https://stablebit.com/Admin/IssueAnalysis/2166).


    Dane, we can do remote support if you'd like, because I'm not entirely sure what's going on here.

  8. Just wondering, if one were to check the "Balance immediately" option under "Automatic balancing", would that not do what you want? I never did actually understand why anyone would want anything else, but I am a rather limited user of DP.


    Balance immediately is now the default on new installations of 2.X. Overnight balancing started being the default in 1.X because that version was exclusive to WHS 2011 and those machines, I'd imagine for the most part, tend to be on 24 hrs. a day.

  9. Yes, this has come up before, but it would be impractical for me to implement new features in both versions. Even as it is now, with the existing bug reports and feature requests, it's taking much longer than I would like to get final release builds out.


    But I can offer you my assistance in helping you upgrade to v2 on WHS 2011. If you'd like, and this is entirely up to you, we can set up a remote support appointment and I'll take a look at any issues that you're encountering with the upgrade. Just open up a contact request @ http://stablebit.com/Contact and mention this thread.

  10. The way that it's implemented right now is that the quick removal process checks whether the file is on the pool by comparing the file size and the last write time of the file on the pool to the file being removed. If those match the file on the drive being removed is deleted. Obviously hashing the file on the pool would not be so quick and so that would negate the quick removal aspect.
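As a rough sketch, the size/last-write-time comparison described above amounts to something like the following. This is illustrative Python, not DrivePool's actual code or API, and the function name and arguments are made up:

```python
import os

def already_on_pool(pool_copy, removed_copy):
    """Return True when the file on the drive being removed appears to
    already exist on the pool: same size and same last write time.
    (A sketch of the described check; both arguments are plain paths.)"""
    try:
        pool_stat = os.stat(pool_copy)
        removed_stat = os.stat(removed_copy)
    except FileNotFoundError:
        return False
    return (pool_stat.st_size == removed_stat.st_size
            and pool_stat.st_mtime == removed_stat.st_mtime)
```

If the check passes, quick removal can delete the copy on the outgoing drive; hashing instead would catch corrupt-but-same-size copies, but at the cost of reading every file, which would negate the "quick" part.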


    I could easily change it to the way that you suggest, but in the past people have been very confused as to what to do with the files left behind by the removal process, so I've tried to minimize that whenever possible.


    I can definitely see your point though, leave the files there, just in case.

  11. If your entire pool is duplicated, then the quickest way to remove the first drive is with the Duplicate files later option. This will remove the drive but leave the duplication process for later.


    But if you remove the second drive before the background duplication pass completes, with the same option, then it may take a bit longer because some files that may not have been re-duplicated yet will have to be migrated to the pool.


    Short answer: Use Duplicate files later. But it does carry some risk if, at the time of drive removal, the single copy that is still left on the pool is corrupt.

  12. Hi,


    Forgive the question, but I'm a little confused by how DrivePool add-ins work.


    I have a system drive (SYSHD) and a second drive (HD2)


    The second drive (HD2) has no drive letter and has been made part of a DrivePool to become drive D:

    The system drive (SYSHD) C: has also been joined so that it can use some of the spare space on the drive.


    The idea is I want to ensure that the second drive (HD2) D: has all the files placed on it and then the system drive (SYSHD) is only used for duplicates.


    I have downloaded the ordered file placement plug-in 




    Thanks in advance


    I think that you should remove the ordered file placement plug-in (or disable it by unchecking it) because it's not really doing what you want and will confuse you.


    From what I understand, to achieve what you want, all you have to do is uncheck Unduplicated under the File Placement Limiter for your C:\ drive. That way D:\ will store all of your unduplicated files and C:\ will store only the duplicated file parts.


    The other plug-ins can be left at their defaults.

  13. Wildcards are definitely useful in building a rule. Are there any other matching parameters? Could I do a more robust pattern match like:


    \Media\TV\[0-9]* (Limit to Drive 1)

    \Media\TV\[A-L]* (Limit to Drive 2)

    \Media\TV\[M-Z]* (Limit to Drive 3)


    If not, I'd love to see that added


    As far as regular expression matching, well, first of all it's going to be slower, which is ok for the background balancing pass because that's only done when you change the rules. But we also pass these rules down to the file system driver, and it has to evaluate them every time a file is created on the pool. Right now we use a built-in function in the Windows kernel that can very quickly evaluate whether a path matches a pattern. Moving to regular expressions would mean putting a regular expression parser in the file system driver code. Not an easy task, and it would be slower.


    It "could" be done, but let's start with these simple ones and see where that goes. This is the first version to support file placement rules, and I'd like to flesh out any issues and get them working well.
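For what it's worth, the character-range patterns requested above can be expressed with Python's fnmatch-style matching, which is close to simple * / ? wildcard matching but also happens to support [a-z] ranges. This is only an illustration of the requested semantics, not the driver's matcher; the rule list is hypothetical:

```python
from fnmatch import fnmatchcase

# Hypothetical rules in the shape the request above describes.
rules = [
    (r"\Media\TV\[0-9]*", "Drive 1"),
    (r"\Media\TV\[A-L]*", "Drive 2"),
    (r"\Media\TV\[M-Z]*", "Drive 3"),
]

def match_drive(path):
    """Return the drive for the first rule whose pattern matches the path."""
    for pattern, drive in rules:
        if fnmatchcase(path, pattern):
            return drive
    return None
```

So a show starting with a digit lands on Drive 1, A-L on Drive 2, and M-Z on Drive 3.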

  14. The new file placement rules are fantastic and a feature I've been awaiting with much anticipation. I have a few questions on their behavior.


    Will the movies directory be placed on Drive 1 because it is matched by the first rule? Or will it end up on Drive 2 because the second rule is more specific? If a movie ends in mkv will it be placed on Drive 3? Is there any way to set the priority of the file matching rules?


    No, actually the rules are combined right now. So if you have a file \Media\Movies\MyMovie.MKV, that will match all the rules. The system now combines the rules in a restrictive way. In other words you've just told the system that you don't want that file on Drives 1, 2, 3. If you have no other drives in the pool it will continue on as if there are no restrictions.


    And you've kind of hit on something that I've been thinking of since I published build 503. I've talked to Christopher about this over here and I think that we have a nice solution to this.


    It will work like this:

    • Rules defined in a folders tab cannot be combined with any other rules. So one folder based rule will apply at all times. In the case of when multiple folder based rules match a path, the rule with the highest priority wins.
    • Folder based rule priority is automatic. Rules with more "\" characters in them get higher priority. In other words, rules on deeper directory structures win.
    • Pattern based rules will have an explicit priority that you define by moving the rule up or down in the list, kind of like we already have with the balancers. So if you place your *.MKV rule above the other folder rules then it will win, otherwise the folder rules will win.
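My reading of those three bullets, sketched as a single ordered rule list where the first match wins (folder rules assumed pre-sorted so deeper paths sit higher; the names and structure are made up for illustration, this is not DrivePool's implementation):

```python
from fnmatch import fnmatchcase

def winning_rule(path, rules):
    """Return the first rule that matches the path; rules are ordered
    top = highest priority. A folder rule matches anything under its
    folder; a pattern rule matches with simple wildcards."""
    for rule in rules:
        if rule["folder_based"]:
            if path.upper().startswith(rule["spec"].upper() + "\\"):
                return rule
        elif fnmatchcase(path.upper(), rule["spec"].upper()):
            return rule
    return None

# The *.MKV pattern rule placed above the folder rules wins for MKV files;
# otherwise the deepest matching folder rule wins.
rules = [
    {"spec": r"*.MKV", "folder_based": False, "drive": "Drive 3"},
    {"spec": r"\Media\Movies", "folder_based": True, "drive": "Drive 2"},
    {"spec": r"\Media", "folder_based": True, "drive": "Drive 1"},
]
```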

    I think this makes sense and I'm still thinking about whether there should be a way to combine pattern based rules. Right now I'm leaning towards a no.


    Expect to see these changes implemented soon. I've already caught a file pattern balancing bug and fixed it in the latest internal BETA. Sometimes it was violating the rules even though the settings told it not to.

  15. As far as performance, I did some informal testing today with a RAM disk, comparing file copying speeds of the pool vs. direct to the RAM disk. The numbers didn't show much of a difference between pool I/O and non-pool I/O (about 700 MB/s). I tested both local file access and UNC file access. UNC access was slower both on the pool and direct to disk (~500 MB/s). But this was with unduplicated files.


    I'll try to do some more formal performance testing with duplicated files and see what I can come up with. Perhaps we can optimize duplicated files further.

  16. Not to bother and I know you're very busy but how would reading _another_ file from the faster drive cause file corruption?


    Basically you might get into a race condition, where the state of all the files is not the same, and another operation might get data that is inconsistent. So you want to make sure that reads/writes are synchronized properly, and you also need to make sure that the cache is consistent. These are probably some of the most complicated things to code when writing a file system.

  17. But... while the slowest drive is writing, new commands may be issued? So you could subsequently read another file and it would be read from the fastest drive?


    Very astute. You've just struck a very important aspect of disk I/O, it's called synchronization (by us programmers). Obviously, if such a thing were allowed then you would end up with file corruption.


    StableBit DrivePool has a well defined locking model to prevent such things from happening (as does the rest of the Windows kernel), and frankly this is most of the work of writing a reliable pooling solution.
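As a toy model of why that locking matters (my own illustration, nothing like the actual kernel locking model): if every read and write of a duplicated file's copies happens under one lock, a reader can never observe the copies half-updated.

```python
import threading

class DuplicatedFile:
    """All copies are updated under a single lock, so readers always see
    a consistent state. (A toy model, not DrivePool's locking.)"""

    def __init__(self, drives):
        self._lock = threading.Lock()
        self._copies = {drive: b"" for drive in drives}

    def write(self, data):
        with self._lock:
            for drive in self._copies:
                self._copies[drive] = data

    def read(self):
        with self._lock:
            values = set(self._copies.values())
            # Without the lock, a read racing a write could see this fail.
            assert len(values) == 1, "copies diverged"
            return values.pop()
```

Remove the lock and let a reader run between the two copy updates inside write(), and you get exactly the inconsistency described above.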

  18. Is there a way to force a consistency check?  If not, feature request please.  :-)


    There is currently no way to force a full duplication consistency check in StableBit DrivePool 2.X (as there is in 1.X). Is this what you mean?

  19. As far as duplicated file performance, here's how it works:


    The Windows NT kernel is inherently asynchronous. In other words, performing an operation does not block the caller. You may have experienced the opposite in some Windows applications: you click a button and then the whole window freezes until some operation completes. That is not how the Windows kernel works (where our pooling is done). The original designers had the foresight to make the whole thing asynchronous, which essentially means that nothing waits for something else to complete.

    For example, if I need to read a file I issue a READ IRP (a command) to some driver (some code that someone else wrote). If the driver can't service the read request immediately it tells us "Ok, great! I'll get back to you when the read is complete. Do you have any other requests?". This is how everything works under the hood.

    We take advantage of this and basically for duplicated files we issue multiple write requests in parallel. This means that the total time that it takes to complete the request is the turnaround time of the slowest drive.
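A user-mode analogy of those parallel write IRPs, with threads standing in for asynchronous kernel I/O (purely illustrative; the drive names and delays are made up):

```python
import threading
import time

def write_duplicated(drive_delays):
    """Issue the same 'write' to every drive in parallel and wait for all
    of them to complete; total time tracks the slowest drive, not the sum."""
    def write(delay):
        time.sleep(delay)  # stand-in for a drive servicing its write IRP

    threads = [threading.Thread(target=write, args=(delay,))
               for delay in drive_delays.values()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

With a 0.1 s drive and a 0.4 s drive, the whole operation takes about 0.4 s rather than 0.5 s, which is the behavior described above.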

  20. Ok, so I have just started experimenting with this and it's got me wondering!

    If, for example, a folder on the pool is set to duplicate (in my case, all my music files) and I set my music to wholly reside on "mount point 1" (i.e. uncheck the boxes for all the other drives), will it still duplicate to another single drive?


    Also, having selected some actions to limit placement, the Dashboard shows icons on the very far right against each drive in the disks section.

    Mousing over it says "File placement balancing rules need to be applied." So have I missed a step somewhere that is stopping the actions from taking place?

    • Yes, folder duplication fundamentally overrides balancing rules. StableBit DrivePool's architecture demands it. In your example, enabling folder duplication will try to respect file placement rules, but ultimately if the rules have to be broken it will break them in order to duplicate your files. This was one of the many changes necessary to implement file placement. The background duplication module has to be aware of file placement rules and has to try to follow them.
    • The little file icon simply represents that a full file pattern balancing pass needs to take place on those drives as a result of your balancing rule changes. This is a completely new balancing algorithm and is only run once after altering file placement rules. I'm going to be posting a detailed blog post on all of this in the next few days (well, it's already written actually, just working on testing the release). I've also started a new topic in this forum explaining a bit about how this all works.

    Expect a new public BETA in the next few days and a comprehensive blog post about the new file placement rules. We are at build 502 as of today.
