Covecube Inc.

Alex

Administrators
Everything posted by Alex

  1. Thank you, it was a bit of work but worth it. It could be a bug. Can you take some screenshots of what you see before you click Save and after? Needless to say, I don't see that over here.
  2. Alright guys, starting with build 511, file placement rules will no longer include newly added drives by default. There is now a new option to enable that on a rule-by-rule basis. I've also added the ability to rearrange folder-based rules, as long as you don't break folder depth rules. See the attached image to illustrate what this looks like now. Download: http://dl.covecube.com/DrivePoolWindows/beta/download/ Edit: Multiselect implemented in build 512.
  3. By default, the rules are now ordered like this (top being highest priority): manually entered rules are inserted at the top. Folder-based rules look for a place to insert themselves starting at the bottom and moving towards the top. Once a folder placement rule encounters another folder placement rule with the same path depth or a higher path depth, it inserts itself right under that rule. If it doesn't find such a rule, then it inserts itself above the topmost folder placement rule. The system tries to put any existing rules (defined prior to priority being implemented) into an order that fits the rules above, but I haven't tested that part thoroughly yet. If you're having trouble with your existing rules in build 510, remove all of them, hit Save and redefine them. Ok, I'll add a checkbox for each rule that will say something like "Place files on new drives added to the pool." It will be unchecked by default. Multiselect, hmm... perhaps. I'll see how difficult it would be to adjust the existing code.
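The insertion logic described above can be sketched roughly like this (an illustrative Python sketch, not DrivePool's actual code; here "path depth" is simply the number of backslash separators in the rule's path):

```python
def path_depth(rule_path):
    # Depth of a folder rule, measured by the number of '\' separators.
    return rule_path.count("\\")

def insert_folder_rule(folder_rules, new_rule):
    """Scan the rule list from the bottom up; insert the new rule just
    under the first rule with the same or a higher path depth. If no
    such rule exists, insert above the topmost folder rule."""
    new_depth = path_depth(new_rule)
    for i in range(len(folder_rules) - 1, -1, -1):
        if path_depth(folder_rules[i]) >= new_depth:
            folder_rules.insert(i + 1, new_rule)
            return folder_rules
    # No rule at the same or greater depth: place at the very top.
    folder_rules.insert(0, new_rule)
    return folder_rules
```

A consequence of this ordering is that deeper folder rules end up above (at higher priority than) shallower ones, which matches the automatic priority scheme discussed elsewhere in this thread.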
  4. I've just finished implementing priorities for file placement rules in an internal BETA (build 510). Available here: http://dl.covecube.com/DrivePoolWindows/beta/download/ I've done some preliminary testing and the new priority based rules seem to be working pretty well. I'm going to run this build through the official rounds of testing and release it as a public BETA, if no issues arise.
  5. Yep, that's another way to remove a drive and it's also a "very quick" way. BUT, you have to be absolutely sure that duplication is consistent when you do that. If you physically unplug a drive before every file on the pool is duplicated, some files on the drive that you pulled might not be on the pool. So overall, that's a more "manual" way to do it. When using "Duplicate later" you don't run that risk.
  6. Balancing can be complicated. Because of all the different balancing scenarios that are possible, it's difficult to understand what settings to use for each particular scenario. Christopher has suggested this, and I agree: we need some kind of guide for each particular setup. As for flushing the disk completely, I've implemented that and it's available in the latest BETA (CCR: https://stablebit.com/Admin/IssueAnalysis/2166). Dane, we can do remote support if you'd like, because I'm not entirely sure what's going on here.
  7. Balance immediately is now the default on new installations of 2.X. Overnight balancing started being the default in 1.X because that version was exclusive to WHS 2011 and those machines, I'd imagine for the most part, tend to be on 24 hrs. a day.
  8. Yes, this has come up before, but it would be impractical for me to implement new features in both versions. Even as it is now, with the existing bug reports and feature requests, it's taking much longer than I would like to get final release builds out. But I can offer you my assistance in helping you upgrade to v2 on WHS 2011. If you'd like, and this is entirely up to you, we can set up a remote support appointment and I'll take a look at any issues that you're encountering with the upgrade. Just open up a contact request @ http://stablebit.com/Contact and mention this thread.
  9. The way that it's implemented right now is that the quick removal process checks whether the file is on the pool by comparing the file size and the last write time of the file on the pool to the file being removed. If those match, the file on the drive being removed is deleted. Obviously, hashing the file on the pool would not be so quick, and that would negate the quick removal aspect. I could easily change it to the way that you suggest, but in the past people have been very confused about what to do with the files left behind by the removal process, so I've tried to minimize that whenever possible. I can definitely see your point though: leave the files there, just in case.
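As a sketch, the quick check described above amounts to comparing size and last write time (illustrative Python only, not the actual removal code):

```python
import os

def matches_pool_copy(removed_path, pool_path):
    """Quick-removal check (illustrative): consider the file already on
    the pool if its size and last write time match the pool's copy.
    Hashing the contents would be far slower and would defeat the
    'quick' in quick removal."""
    a = os.stat(removed_path)
    b = os.stat(pool_path)
    return a.st_size == b.st_size and a.st_mtime == b.st_mtime
```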
  10. If your entire pool is duplicated, then the quickest way to remove the first drive is with the Duplicate files later option. This will remove the drive but leave the re-duplication process for later. But if you remove the second drive before the background duplication pass completes, using the same option, then it may take a bit longer, because some files that may not have been re-duplicated yet will have to be migrated to the pool. Short answer: use Duplicate files later. But it does carry some risk if, at the time of drive removal, the single copy that is still left on the pool is corrupt.
  11. I think that you should remove the ordered file placement plug-in (or disable it by unchecking it) because it's not really doing what you want and will confuse you. From what I understand, to achieve what you want, all you have to do is uncheck Unduplicated under the File Placement Limiter for your C:\ drive. That way D:\ will store all of your unduplicated files and C:\ will store only the duplicated file parts. The other plug-ins can be left at their defaults.
  12. The rules that we're talking about here are a brand new feature of StableBit DrivePool 2.1.0.503 BETA. They are not available in any previous version. See my blog post for full details: http://blog.covecube.com/2014/04/stablebit-drivepool-2-1-0-503-beta-per-folder-balancing/ Download the latest BETA here: http://stablebit.com/DrivePool/Download
  13. As far as regular expression matching goes, well, first of all it's going to be slower, which is OK for the background balancing pass because that's only done when you change the rules. But we also pass these rules down to the file system driver, and it has to evaluate them every time a file is created on the pool. Right now we use a built-in function in the Windows kernel that is able to very quickly evaluate whether a path matches a pattern. Moving to regular expressions would mean putting a regular expression parser in the file system driver code. Not an easy task, and it will be slower. It "could" be done, but let's start with these simple ones and see where that goes. This is the first version to support file placement rules, and I'd like to flesh out any issues and get them working well.
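For illustration, the simple patterns in question are plain wildcard patterns ('*' and '?'), which are far cheaper to evaluate than full regular expressions. A hypothetical Python sketch of that kind of matching (the actual kernel routine and its semantics may differ):

```python
import fnmatch

def rule_matches(path, pattern):
    """Case-insensitive wildcard matching with '*' and '?' only --
    cheap enough, in principle, to evaluate on every file creation,
    unlike a full regular expression engine."""
    return fnmatch.fnmatchcase(path.lower(), pattern.lower())
```

Note that in this sketch `*` also crosses directory separators, so a pattern like `*.mkv` matches a file at any depth in the pool.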
  14. No, actually the rules are combined right now. So if you have a file \Media\Movies\MyMovie.MKV, it will match all the rules. The system currently combines the rules in a restrictive way. In other words, you've just told the system that you don't want that file on Drives 1, 2 and 3. If you have no other drives in the pool, it will continue on as if there are no restrictions. And you've kind of hit on something that I've been thinking of since I published build 503. I've talked to Christopher about this over here and I think that we have a nice solution. It will work like this: Rules defined in a folder's tab cannot be combined with any other rules, so one folder-based rule will apply at all times. When multiple folder-based rules match a path, the rule with the highest priority wins. Folder-based rule priority is automatic: rules with more "\" characters in them get higher priority. In other words, rules on deeper directory structures win. Pattern-based rules will have an explicit priority that you define by moving the rule up or down in the list, kind of like we already have with the balancers. So if you place your *.MKV rule above the other folder rules then it will win; otherwise the folder rules will win. I think this makes sense, and I'm still thinking about whether there should be a way to combine pattern-based rules. Right now I'm leaning towards no. Expect to see these changes implemented soon. I've already caught a file pattern balancing bug and fixed it in the latest internal BETA: sometimes it was violating the rules even though the settings told it not to.
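The automatic folder-rule priority described above (deeper folders win) could be sketched like this in Python; the function name and data shapes are made up for illustration:

```python
def winning_folder_rule(file_path, folder_rules):
    """Among the folder rules that match the file's path, the rule
    with the most '\\' separators (the deepest folder) wins -- that
    is the automatic priority scheme described above."""
    p = file_path.lower()
    matches = [r for r in folder_rules if p.startswith(r.lower() + "\\")]
    if not matches:
        return None
    return max(matches, key=lambda r: r.count("\\"))
```

So for \Media\Movies\MyMovie.MKV, a rule on \Media\Movies beats a rule on \Media, because the former sits deeper in the directory tree.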
  15. As far as performance goes, I did some informal testing today with a RAM disk, comparing file copying speeds of the pool vs. direct to the RAM disk. The numbers didn't show much of a difference between pool I/O and non-pool I/O (about 700 MB/s). I tested both local file access and UNC file access. UNC access was slower both on the pool and direct to disk (~500 MB/s). But this was with unduplicated files. I'll try to do some more formal performance testing with duplicated files and see what I can come up with. Perhaps we can optimize duplicated files further.
  16. Basically you might get into a race condition, where the state of all the files is not the same, and another operation might get data that is inconsistent. So you want to make sure that reads and writes are synchronized properly, and you also need to make sure that the cache is consistent. These are probably some of the most complicated things to code when writing a file system.
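As a toy illustration of the synchronization problem (nothing to do with DrivePool's actual locking code, which lives in a kernel driver): without a lock, a reader could observe a half-updated block, where the data and its recorded size disagree.

```python
import threading

class CachedBlock:
    """Toy example: the lock guarantees that readers always see a
    consistent (data, size) pair, even while a writer is updating."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = b""
        self._size = 0

    def write(self, data):
        with self._lock:  # data and size are updated atomically
            self._data = data
            self._size = len(data)

    def read(self):
        with self._lock:  # never returns a half-updated pair
            return self._data, self._size
```

Remove the `with self._lock:` blocks and a concurrent reader can, in principle, see `data` from one write paired with `size` from another -- the kind of inconsistency the post is describing.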
  17. Sounds like we can start testing with ReFS. We can surely add support, provided that there are no showstoppers. You would not be able to mix ReFS / NTFS on the same pool though.
  18. Very astute. You've just struck a very important aspect of disk I/O, it's called synchronization (by us programmers). Obviously, if such a thing were allowed then you would end up with file corruption. StableBit DrivePool has a well defined locking model to prevent such things from happening (as does the rest of the Windows kernel), and frankly this is most of the work of writing a reliable pooling solution.
  19. Force consistency check? There is currently no way to force a full duplication consistency check in StableBit DrivePool 2.X (as there is in 1.X). Is this what you mean?
  20. Build 2.1.0.503 with this feature is now live on stablebit.com: http://stablebit.com/DrivePool/Download
  21. As far as duplicated file performance, here's how it works: the Windows NT kernel is inherently asynchronous. In other words, performing an operation does not block the caller. You may have experienced blocking in other Windows applications: you click a button and then the whole window freezes until some operation completes. This is not how the Windows kernel works (where our pooling is done). The original designers had the foresight to make the whole thing asynchronous, which essentially means that nothing waits for something else to complete. For example, if I need to read a file, I issue a READ IRP (a command) to some driver (some code that someone else wrote). If the driver can't service the read request immediately, it tells us: "Ok, great! I'll get back to you when the read is complete. Do you have any other requests?" This is how everything works under the hood. We take advantage of this, and basically for duplicated files we issue multiple write requests in parallel. This means that the total time that it takes to complete the request is the turnaround time of the slowest drive.
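In user-mode Python terms (purely an analogy; the real code issues asynchronous IRPs inside the kernel, not threads), parallel duplicated writes look like this, and the elapsed time tracks the slowest drive rather than the sum of all drives:

```python
from concurrent.futures import ThreadPoolExecutor

def write_duplicated(data, drives, write_fn):
    """Issue the same write to every drive in parallel. The call
    completes when the slowest drive finishes, so total latency is
    the slowest drive's turnaround time, not the sum of all of them."""
    with ThreadPoolExecutor(max_workers=len(drives)) as pool:
        futures = [pool.submit(write_fn, drive, data) for drive in drives]
        return [f.result() for f in futures]
```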
  22. Yes, folder duplication fundamentally overrides balancing rules; StableBit DrivePool's architecture demands it. In your example, enabling folder duplication will try to respect file placement rules, but ultimately, if the rules have to be broken, it will break them in order to duplicate your files. This was one of the many changes necessary to implement file placement: the background duplication module has to be aware of file placement rules and has to try to follow them. The little file icon simply represents that a full file pattern balancing pass needs to take place on those drives as a result of your balancing rule changes. This is a completely new balancing algorithm and is only run once after altering file placement rules. I'm going to be posting a detailed blog post on all of this in the next few days (well, it's already written actually, just working on testing the release). I've also started a new topic in this forum explaining a bit about how this all works. Expect a new public BETA in the next few days and a comprehensive blog post about the new file placement rules. We are at build 502 as of today.
  23. How it All Works

    Since File Placement is now implemented in the latest BETAs of StableBit DrivePool 2.1.0.X, I'm going to give you a short overview of how the whole system works and what the impacts on performance are.

    When you create a new rule, the pool that hosts that rule gets marked with a special flag indicating that a full background pattern-based balancing pass is required. On the next balancing pass, StableBit DrivePool goes over every file on the drives that have any pattern rules defined and moves files matching those rules to their destination. Conflict resolution occurs at this time, and any conflicts are saved for display to the user. At the same time, your file placement rules are sent down to the file system, telling it to respect those rules on any new file creation.

    Real-Time Rule Violations

    Sometimes a file placement rule is violated because a new file had to be created on disks other than the ones that you've designated for that file. In those cases, the file system notifies the service that file pattern rules were broken. At this point a balancing pass is scheduled to fully assess the situation.

    Performance

    This is an important point to make: how do the new file placement rules affect performance? First of all, when you have no file placement rules defined, performance is not affected at all. Having one or more file placement rules enabled has these performance impacts. For the StableBit DrivePool service: after making any changes to file placement rules, a full file placement balancing pass is scheduled for the next automatic run. For automatic balancing, the regular balancing pass may not run if there is no need to do so. Manually initiating a balancing pass using the "rebalance" command will always run the regular balancing pass, and will perform a full file placement balancing pass if there are any conflicts or you've made changes to the file placement rules. Overall, the balancing pass will take longer if you've made changes to the file placement rules or if there are file placement conflicts. Real-time performance impacts on the file system: whenever a new file is created on a pool with one or more file placement limits, the decision of where to place that file has to take those limits into account. For performance reasons, that decision is made by the kernel file system driver with no involvement from the user-mode system service. There is no impact on read/write I/O performance as a result of having file placement limits defined.
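The real-time placement decision described above could be sketched like this (an illustrative Python sketch of the decision logic only; the real implementation is a kernel file system driver, and the names here are made up):

```python
def eligible_drives(file_path, drives, rules):
    """Restrict candidate drives for a new file to the ones allowed by
    every matching placement rule. If that leaves no drives at all,
    fall back to the full set: the rule is broken rather than failing
    the file creation, and (per the post above) the service would then
    be notified so a balancing pass can fix things up later."""
    allowed = set(drives)
    for rule in rules:
        if file_path.lower().startswith(rule["folder"].lower() + "\\"):
            allowed &= set(rule["drives"])
    return sorted(allowed) if allowed else sorted(drives)
```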
  24. Setting the slider to 100% should accomplish exactly that. It basically means "balance if any data needs to be moved". Also, the latest internal BETAs do a better job of moving every single file off of a disk when it's being emptied. In particular, here is info on that code change: https://stablebit.com/Admin/IssueAnalysis/2166