Posts posted by Alex

  1. Lee,

     

    In StableBit DrivePool there is no notion of "main" file parts and "duplicated" file parts. We just have some number of identical file parts for each file. Unduplicated files have one file part, and duplicated files have 2 or more. Where those file parts are placed can be controlled with file placement.

     

    If you tell the balancing system to place a 2x duplicated folder on one drive, it will show you a warning that some file parts could not be placed on that drive. For a 2x duplicated folder, you need to select at least 2 drives in file placement in order to be able to satisfy those rules, or else the system will have to violate the rules.
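To make that constraint concrete, here's a minimal sketch (hypothetical helper, not DrivePool's actual code) of the check involved:

```python
# Hypothetical helper (not DrivePool code): a placement rule can only be
# satisfied if it allows at least as many drives as there are file parts,
# because each duplicated part must live on a different drive.
def can_satisfy_placement(duplication_level, allowed_drives):
    return len(allowed_drives) >= duplication_level

print(can_satisfy_placement(2, {"D:"}))        # False -> warning is shown
print(can_satisfy_placement(2, {"D:", "E:"}))  # True  -> rules can be met
```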

     

    But I still think that you can accomplish what you want.

     

    If I understand you correctly, you want all of your music to be duplicated and to play from only the "first" file part. You can achieve this by using file placement to restrict the music folder to at least 2 drives and then to turn off Read Striping (under Pool Options -> Performance -> Read striping).

     

    I've mentioned above that all file parts are the same, so how could there be a "first" file part?

     

For the purposes of read striping, when dealing with files that have more than one file part, StableBit DrivePool's file system (CoveFS) does maintain a deterministic pool part order. In other words, the disks that make up the pool are kept in an internal, invisible order, and this order persists even after a reboot.

     

The point being: when read striping is off and you open a duplicated file, the "first" file part is always opened.
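To illustrate (invented names, not CoveFS internals), the selection logic amounts to something like this:

```python
# Illustration only: the pool keeps a persistent ordering of its disks; with
# read striping off, opening a duplicated file always resolves to the part on
# the earliest disk in that order, which makes the "first" part deterministic.
POOL_PART_ORDER = ["disk-A", "disk-B", "disk-C"]  # persists across reboots

def pick_part_to_open(disks_holding_a_part, read_striping):
    if not read_striping:
        return min(disks_holding_a_part, key=POOL_PART_ORDER.index)
    return disks_holding_a_part[0]  # with striping, any part may be chosen

print(pick_part_to_open(["disk-C", "disk-A"], read_striping=False))  # disk-A
```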

  2. This has been an open known issue for a while (https://stablebit.com/Admin/IssueAnalysis/82), but I haven't been able to address it yet. I tried to tackle it in my current lab and it proved to be very difficult, so I've opted to wait until my new testing lab, which I'm setting up right now with all new hardware just for difficult cases like this, is ready. The lab is coming along nicely, and I'll get to resolving this once it's up and running.

     

    TL;DR: It may take a few months.

  3. Yeah, that's the problem with Windows 8.1's .NET implementation: it's not entirely clear what exactly it reinstalls. Based on my research, it doesn't look like it reinstalls .NET 4.0 / 4.5, which is what the StableBit Scanner uses, and the only way to get that repaired is with a refresh.

  4. I've just put up a new blog post that introduces some of the functionality of the next StableBit product, dubbed "Product 3".

     

    Blog post: http://blog.covecube.com/2014/05/stablebit-drivepool-2-1-0-553-rc-file-placement-and-product-3/

     

    I'll quote the text that is the most relevant to Product 3:

     

    Right now, when you add a disk to the pool, a hidden “PoolPart” folder is created on that disk. Any pooled files that need to be stored on that disk are simply stored in that hidden Pool Part folder. So in reality, when you add a disk to the pool, you’re actually adding a Pool Part to the pool, and that Pool Part happens to be stored on a local disk.
     

    I hope that you see where I'm going with this. Product 3 will allow you to add Pool Parts to the pool that are not necessarily stored on physical disks. This is going to open up a whole range of very exciting possibilities.

     

It’ll be possible to store Pool Parts on virtually anything you can imagine that can store persistent data. Email servers, FTP, UNC shares, and cloud storage are some examples. All you’ll need is a plugin, for which there will be an open API.

     

In this context, StableBit DrivePool’s file placement will gain a whole new use. It will allow you to define which folders on the pool are stored on which mediums. Moreover, with per-folder duplication, you will be able to specify which specific mediums will store the duplicated file parts of each folder.

     

    As we approach the first public BETA I'll be posting some more technical detail here about how Product 3 will work and what unique features it will offer.
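To make the Pool Part idea concrete, here's a purely speculative sketch of what a storage-backend plugin might look like. None of these names come from the actual API (which hasn't been published yet); they're invented for illustration only.

```python
from abc import ABC, abstractmethod

class PoolPartProvider(ABC):
    """Hypothetical: anything that can persist named data could host a Pool Part."""

    @abstractmethod
    def read(self, path: str) -> bytes: ...

    @abstractmethod
    def write(self, path: str, data: bytes) -> None: ...

    @abstractmethod
    def list_folder(self, folder: str) -> list: ...

class InMemoryPoolPart(PoolPartProvider):
    """Toy backend standing in for an FTP server, UNC share, or cloud bucket."""

    def __init__(self):
        self._files = {}

    def read(self, path):
        return self._files[path]

    def write(self, path, data):
        self._files[path] = data

    def list_folder(self, folder):
        return [p for p in self._files if p.startswith(folder)]
```

The point of the abstraction is that the pool only ever needs to read, write, and enumerate, so any medium that can do those three things could, in principle, host a Pool Part.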

    1. Unfortunately not yet, but it's on the to-do list and we'll add it eventually.
    2. As soon as that file is opened, the balancer aborts and rolls back whatever it was doing, and the application proceeds to open the file as if nothing else had the file open. The application never gets an error.
    3. Background duplication / rebalancing cannot move files that are in use. However, background duplication typically only runs when changing the duplication level of existing files.

       

      Once the duplication level change is complete, any further real-time changes to those files are simply written to all file parts at the same time. This means that once background duplication completes you don't have to worry about files that are in use.

  5. I can think of 2 possible explanations:

    • Build 420 has an issue where, if you kept the performance pane open for an extended amount of time and the CPU was on the lower end, it would erroneously turn off performance gathering in order to save CPU cycles. This has since been fixed in the latest builds.
    • Your Windows performance counters are corrupt. We've seen this before, and it does cause the performance UI to stop working. There's not much we can do about it, except to instruct you to repair those performance counters (for example, running lodctr /R from an elevated command prompt rebuilds them).
  6. Do I need to enable duplication on the respective DrivePool folder first, before moving the files from the old WHS DE directory (I think this is the case if I'm reading it correctly)? Or should I disable duplication on WHS first, make sure there's only one copy of the files, and then move them?

     

    Thanks,

     

    Before moving anything, set up the folder structure in DrivePool as it is in WHS v1 (including folder duplication). Then just move your files into the folders that you've created.

     

    Having multiple file parts in the same path is not a problem for DrivePool, as long as the folder is set to be duplicated.

  7. I run VMware / VirtualBox, and have run Hyper-V on and off, from the pool. I test and develop DrivePool on the pool.

     

    There have been issues in the past that were causing problems with virtualization products, but those have since been resolved. The latest 2.1 builds work beautifully for me.

     

For writes there is no "read striping"; writes go to all the disks at the same time (I think you may be confusing reads and writes).
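As a rough illustration (a simplified user-mode sketch; in reality CoveFS does this in the kernel driver), a write to a duplicated file is a fan-out to every file part:

```python
# Sketch only: a write to a duplicated file is applied to every file part.
# (CoveFS issues these from the kernel driver, not serially like this.)
def write_duplicated(part_paths, offset, data):
    for path in part_paths:        # one file part per disk
        with open(path, "r+b") as f:
            f.seek(offset)
            f.write(data)          # identical bytes go to every part
```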

  8. Thanks. Which version will have the fix?

    2.1.0.541 from today?

     

Yes. And for future reference, the link to the IAR always contains the version number under "CCR Implemented in"; that's why I post that link as well. For every code change request that I complete, I make a build and put the build number in the IAR.

  9. I'm trying to clearly define the problem here. As far as I understand, when you access any share on the pool (from a remote client computer?), all of the drives spin up?

     

As for a server-side cache, we already have one, and there is also a client-side cache in the form of oplocks (this means that the server isn't even contacted over the network for some I/O operations).

  10. I've just finished coding a new balancing plugin for StableBit DrivePool; it's called the SSD Optimizer. This was actually a feature request, so here you go.

     

I know that a lot of people use the Archive Optimizer plugin with SSD drives, and I would like this plugin to replace the Archive Optimizer for that kind of balancing scenario.

     

The idea behind the SSD Optimizer (as it was with the Archive Optimizer) is that if you have one or more SSDs in your pool, they can serve as "landing zones" for new files, and those files will later be automatically migrated to your slower spinning drives. Thus your SSDs would serve as a kind of super-fast write buffer for the pool.

     

    The new functionality of the SSD Optimizer is that now it's able to fill your Archive disks one at a time, like the Ordered File Placement plugin, but with support for SSD "Feeder" disks.
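Conceptually, the placement logic amounts to something like the sketch below (simplified, with invented names; not the plugin's actual code):

```python
# Simplified model of the SSD Optimizer idea (not actual plugin code).
def place_new_file(ssd_feeders, free_space, size):
    """New files land on an SSD feeder disk with room."""
    for ssd in ssd_feeders:
        if free_space[ssd] >= size:
            return ssd
    raise OSError("no feeder space available")

def migrate_target(archive_disks_in_order, free_space, size):
    """Later, files are migrated off the feeders onto archive disks,
    filling them one at a time (the Ordered File Placement behavior)."""
    for disk in archive_disks_in_order:
        if free_space[disk] >= size:
            return disk
    raise OSError("pool is full")
```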

     

Check out the attached screenshot of what it looks like.

     

    Notes: http://dl.covecube.com/DrivePoolBalancingPlugins/SsdOptimizer/Notes.txt

    Download: http://stablebit.com/DrivePool/Plugins

     

    Edit: Now up on stablebit.com, link updated.

[Attached screenshot: ssd_optimizer.png]

  11. ...

     

However, if the MUSIC folder was "assigned" to, say, D7 and D8, then the OFP wouldn't start to place non-MUSIC files on D7 and D8 until the pool was nearly full. So the question then is: would the OFP operate in this circumstance, when you are attempting to place files partway along the priority chain rather than starting at the beginning?

     

    Thanks - Nigel

     

I see what you're saying; I was thinking about the first case specifically. You're right, this would not work.

     

With this setup you would end up with conflicting rules that cancel each other out. I'll think about how we can solve this.

  12. I think the Ordered File Placement plugin could be used to do this. Set up your folder rule to place all music on drives 1 & 2. Then use the OFP plugin to give drive 1 a higher priority than drive 2.

     

    Yeah, that should work. The Ordered File Placement plugin uses real-time file placement limits to determine which disks get new files and the File Placement balancer will respect those limits by default (unless you change that).

     

    The file system should respect both rules as well.

  13. From a technical point of view:

    • For reads and writes, CoveFS just forwards the I/O down to NTFS and the request continues down the driver stack as normal. There should be close to zero overhead.
    • The only long-running background task that CoveFS performs is a "measuring" pass. It will enumerate all the files on the pool and compute size statistics (see the sketch after this list). When this is occurring, you will see it in the UI under the "Pool Organization" bar as "Measuring...".
    • The most expensive operation that CoveFS performs, in terms of CPU cycles, is a file open / create. So theoretically, if you had a few processes enumerating and opening every file on the pool then that could lead to some significant CPU load (just enumerating files without opening each one is fast).
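For intuition, a measuring pass boils down to a recursive walk that accumulates size statistics. A minimal sketch (illustration only, not CoveFS code):

```python
import os

# Minimal sketch of a "measuring" pass: enumerate every file under the
# pool and accumulate sizes per top-level folder (illustration only).
def measure(pool_root):
    sizes = {}
    for dirpath, _dirs, filenames in os.walk(pool_root):
        top = os.path.relpath(dirpath, pool_root).split(os.sep)[0]
        for name in filenames:
            try:
                size = os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                continue  # file vanished or is unreadable; skip it
            sizes[top] = sizes.get(top, 0) + size
    return sizes
```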

I used to run StableBit DrivePool 1.X on an Atom D525 (1.8 GHz) and it was very usable.

  14. The .MSI installers are there for managed deployment scenarios. They shouldn't normally be needed.

     

    The .EXE is really a wrapper over the .MSI (using WiX Burn, which is used by Microsoft as well).

  15. Good to know that a Refresh worked.

     

The problem was technically with .NET, and unfortunately, on Windows 8 and later, .NET is part of the OS and you can't repair it separately. If the same issue occurred on Windows 7, you would simply run the .NET installer in repair mode. But because .NET is integrated into the OS on Windows 8, you have to repair the whole OS.

  16. Oh, and I should add that every update requires me to first uninstall SB, reboot, and reinstall the new version. I have been unable to simply install over top of an existing version, nor have I been able to uninstall and avoid a reboot.

     

    That's the problem.

     

When you boot without StableBit DrivePool installed, the pool drive is not there and NFS can't find its shares.

     

    We do support upgrading by simply installing over an existing version (except when upgrading from 1.X to 2.X). Can you elaborate as to why you can't upgrade by simply running the new installer? Is there an error that comes up? What do you see?

     

    Thanks,

  17. Will 2.0 get the same level of integration?

     

We're trying to get there. My goal is to get 2.X to a similar level of Dashboard integration as 1.X. It's a lot more difficult, though, because 2.X supports many more operating systems.

     

    But yes, that's the goal. For now, 1.X is still being made available and supported.

  18. The pool is simply a combined view of all the hidden "PoolPart..." folders on each drive that is part of the pool.

     

If you go into each hidden PoolPart... folder and delete the folder from there, StableBit DrivePool will have no issue with that.

     

    However, if the folder contains files then you will want to perform a "Re-measure" task afterwards. That's available under pool options, and it will update the statistics.

     

But I'm curious: how did you manage to create a "phantom folder" in the first place?

    1. The first time that you turn on duplication on a folder StableBit DrivePool will go through each file in that folder and make a duplicated copy, in the background. It cannot make background duplicated copies of files that are in use, so I suggest that you don't run your torrent client until this is complete.

       

      Once the initial background duplication pass is complete, any further changes to those files and any new files created in that folder will be duplicated in real-time. From this point on you don't have to worry about in-use files, as that has no effect on real-time duplication.

    2. No, to StableBit DrivePool the size of the file doesn't matter; it duplicates exactly what is changed, byte by byte. In this case, StableBit DrivePool takes on the characteristics of NTFS, so you might ask: does NTFS handle small files better? My answer would be no; at least, I can't think of any reason why smaller files would be handled better.
    3. Typically the pooling kernel driver uses just kilobytes of memory, so it really doesn't require lots of RAM. The service will use maybe up to 50 MB. As far as getting ECC, I personally don't for my home server, but I probably would if I were setting up a business server with mission-critical data.
    4. Once a file becomes corrupt on a disk, a portion of that file will become unreadable; that is, you will get an error reading that file. StableBit DrivePool doesn't do any kind of propagation, so in the worst case, where you don't do anything about that error, the file will simply remain unreadable.

       

When used together with the StableBit Scanner, you will not only get the benefit of the StableBit Scanner "refreshing" the drive, but it will also detect unreadable sectors and notify you about them. At the same time, it will notify StableBit DrivePool, which will begin an immediate file evacuation process. Any pooled files still readable on that disk will be evacuated to a known good drive.

       

Once you remove a damaged drive from the system, a background duplication pass is run on the pool, and any files that should be duplicated but are not will be reduplicated. This process runs over the live pool, so there is no downtime for a "rebuild" (like you see with some RAID setups).

       

In addition, StableBit DrivePool stores all of your files as plain NTFS files, so if anything at all goes wrong with the process above, you can simply plug your pooled disks into any Windows machine, whether it has StableBit DrivePool installed or not, and gain access to your files.
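To summarize that evacuation-and-reduplication flow as a sketch (a toy model with invented names, not the actual DrivePool / Scanner interface):

```python
from dataclasses import dataclass, field

@dataclass
class PooledFile:
    name: str
    duplication: int                             # target number of file parts
    parts_on: set = field(default_factory=set)   # disks holding a part

def evacuate(files, damaged_disk, healthy_disks):
    """Move parts off a damaged disk onto a known good drive."""
    for f in files:
        if damaged_disk in f.parts_on:
            f.parts_on.discard(damaged_disk)
            f.parts_on.add(next(d for d in healthy_disks
                                if d not in f.parts_on))

def reduplicate(files, healthy_disks):
    """Background pass: bring every file back up to its duplication level."""
    for f in files:
        for d in healthy_disks:
            if len(f.parts_on) >= f.duplication:
                break
            f.parts_on.add(d)

files = [PooledFile("movie.mkv", duplication=2, parts_on={"disk-A", "disk-B"})]
evacuate(files, "disk-B", ["disk-A", "disk-C"])
print(files[0].parts_on)   # {'disk-A', 'disk-C'}
```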
