Posts posted by Shane

  1. Hi revnull, you can remove drive letters from existing drives in a pool without affecting access to the pool itself.

     

    You can also use Windows's Disk Management to mount drive volumes as if they were folders in another drive (e.g. c:\mounts\g -> g:\).

  2. What should happen is this:

    • Files written to the pool land on the feeder disk(s); at least two feeder disks are required if duplication is enabled.
    • The files remain on the feeder disk(s) and the pool condition is reduced below 100% until the next scheduled balancing run. Incidentally, this does indeed mean the feeder disk(s) act like a cache for recent files.
    • The balancing run moves any files it finds on the feeder disk(s) to the archive disk(s) and the pool condition should return to 100%.

    If it's not happening like this, it's not working as intended, and like Salty says hopefully we can fix it.

     

    For whatever it may be worth, the "notes" for the Archive Optimizer plugin do suggest it should be used by itself (i.e., try turning off any other balancers you might be using, in case they are conflicting).
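    As a rough illustration, the feeder-to-archive flow described above can be sketched in Python. This is a minimal simulation with hypothetical folder paths, not DrivePool's actual code:

```python
import shutil
from pathlib import Path

def run_balancing_pass(feeder: Path, archive: Path) -> int:
    """Move every file from the feeder disk to the archive disk,
    preserving relative paths; returns the number of files moved."""
    # Snapshot the file list first so we don't move files out from
    # under the directory walk.
    files = [p for p in feeder.rglob("*") if p.is_file()]
    for src in files:
        dst = archive / src.relative_to(feeder)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src), str(dst))
    return len(files)
```

    After a pass like this, the feeder is empty again and the pool condition should return to 100%.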

  3. While I don't understand why you'd want to do things that way - I can only presume your work involves creating lots of read-only data that never gets renamed, edited or deleted, and that you don't have the budget for both backups and duplication - I think you would need to at least implement the following:

    • select "do not balance automatically" (so that DrivePool does not move files around the disks without your knowledge)
    • untick "allow balancing plug-ins to force immediate balancing" (ditto)
    • untick all balancers except "ordered file placement"
    • arrange the list of drives and fill limits in "ordered file placement" to your liking

    And yes, if a disk in the pool corrupted/died and you replaced it with your robocopied backup disk, DrivePool should recognise the hidden poolpart folder. However, you might have to be careful to prevent DrivePool from seeing both the original and the backup at the same time, in case it gets confused. Perhaps robocopy the original into a subfolder, e.g. X:\poolpart to Y:\backup\poolpart, where X: is the original disk and Y: is the backup disk; then, when replacing a bad original, move the poolpart out of the backup folder into the root folder as the last step. I don't know if this is necessary, but prevention is better than cure here.
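    The backup-into-a-subfolder scheme and the final restore step could be sketched in Python like this (hypothetical paths and folder names; a portable stand-in for the actual robocopy/move commands, not a tested procedure):

```python
import shutil
from pathlib import Path

def backup_poolpart(original_poolpart: Path, backup_drive: Path) -> Path:
    """Copy the hidden PoolPart folder into a 'backup' subfolder so the
    backup drive never presents a root-level PoolPart to DrivePool."""
    dest = backup_drive / "backup" / original_poolpart.name
    shutil.copytree(original_poolpart, dest, dirs_exist_ok=True)
    return dest

def restore_poolpart(backup_copy: Path, replacement_drive: Path) -> Path:
    """Last step when replacing a failed disk: move the copy out of the
    backup subfolder into the root of the replacement drive."""
    dest = replacement_drive / backup_copy.name
    shutil.move(str(backup_copy), str(dest))
    return dest
```

    The point of the subfolder is simply that DrivePool only treats root-level PoolPart folders as part of a pool.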

     

    Another thing to be aware of would be in case of moving the pool to a new machine, making sure the anti-balancing configuration was in place. Hmm. Might need to ask Alex/Drashna about where the balancing configuration is stored.

  4. Note that DrivePool must be told to "remove" the drives - which means it moves the pool content off those drives onto the remaining drives - before you actually physically remove the drives.

     

    If you add your external drives to the pool (a good idea), don't worry about letting it rebalance before removing the faulty drives, let DrivePool take care of it during/afterward. The priority should be getting rid of the bad drives.

     

    I can't see any attachment to your last post; I'd guess that means the pool has either run out of space to keep everything duplicated or it's detected it's lost/losing duplication integrity (i.e. files) on the faulty drives.

  5. Silly question time: after moving the slider, did you click the "Save" button?

     

    Excerpting from Alex's response to my ticket on another issue with the OFP balancer, with emphasis mine, "So if you add more than one new drive to the pool, all of those new drives will be added to the bottom of the existing list that you've Saved, but sorted by percent used. And more importantly, if this is a new pool and you've never clicked Save to commit your drive order, then there really is no order that you've defined for the balancer and the list is just built by percent used."

     

    Alex went on to say that the drive order issue will be fixed in a new build. When I tested Save-ing the order to see the difference, I also tested your slider issue and noticed that, while the main status screen did not alter its markers to anything other than 0% or 90%, clicking Save did change those markers from 90% to 0% (or vice versa), corresponding to whether the changes in the OFP list should affect the balancing for those drives!

  6. It's third-party, but you can use "Everything" by VoidTools to quickly see which disks a duplicated file resides on.

     

    When duplication is set to real-time, it is supposed to be immediate, though you won't see the pool condition change to "Duplicating" when copying a file into the pool since it's happening simultaneously with the copying.

     

    If it's not actually duplicating in real-time when it's set to do so, and you can confirm this independently (e.g. with "Everything"), then that's a critical bug and please report it to Stablebit.

     

    For example, when I have duplication on and real-time, and I copy a 10GB project file to P:\work (where P is the pool drive), "Everything" should (and does, for me) show something like:

     

    G:\PoolPart.string\work\project.dat

    K:\PoolPart.string\work\project.dat

     

    where G and K are drives in the pool.
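    If you'd rather script that check than use "Everything", the same lookup can be sketched in Python (hypothetical drive roots; assumes only the standard hidden PoolPart.* folder naming):

```python
from pathlib import Path

def find_copies(drive_roots, relative_path):
    """Return every on-disk copy of a pooled file by checking each
    member drive's hidden PoolPart.* folder for the same relative path."""
    copies = []
    for root in drive_roots:
        for poolpart in Path(root).glob("PoolPart.*"):
            candidate = poolpart / relative_path
            if candidate.is_file():
                copies.append(candidate)
    return copies
```

    With x2 real-time duplication you'd expect this to return exactly two hits for any file in the pool.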

  7. The default is that all the data gets read and written (move-via-copy) rather than a straight move operation, as Windows sees the pool drive as being a different disk.

     

    Like drashna says though, you can take advantage of the hidden PoolPart folder on the same drive to do a straight move operation.
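    A sketch of that straight-move trick in Python (hypothetical paths; the key point is that a rename on the same volume moves no file data):

```python
from pathlib import Path

def fast_move_into_pool(file_path: Path, poolpart: Path) -> Path:
    """Move a file into the same drive's hidden PoolPart folder via a
    rename; on one volume this touches metadata only, no data copy."""
    poolpart.mkdir(parents=True, exist_ok=True)
    dest = poolpart / file_path.name
    file_path.rename(dest)  # instant on the same filesystem
    return dest
```

    Presumably you'd then want DrivePool to remeasure so it notices the manually placed file.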

    Under 1.3, the Dashboard addon currently does not provide a way to set/view which sub-folders have custom duplication settings; instead, you use the shell command to set/view the count on sub-folders.

     

    Under 2.0, the GUI allows you to set/view which sub-folders have custom duplication settings (Pool Options, File Protection, Folder Duplication), and in that menu you can expand the entire tree by highlighting the root of the drive and tapping the asterisk key.

  9. Protip: installing unofficial betas without the developer giving you the go-ahead is a quick way to end up in the scary end of "big red button" territory.  :blink:

     

    Protip: you can get every version, even the unofficial releases, here:

    http://dl.covecube.com/DrivePoolWindows/beta/download/

     

    Try this link for DrivePool removal instructions: http://wiki.covecube.com/StableBit_DrivePool_Q8964978

     

    Note that it is for the 1.x version, but it can be applied to the 2.x version as well (just ignore the Remote Desktop and Dashboard references).

  10. Hi Blake :)

     

    Dane's beaten me to most of your questions, so I'll tackle the migration part. Yes, the safe way is to simply cut and paste. Due to the way Windows operates, it doesn't automagically realise your V drivepool is physically located on your G,H,I,J etc drives, so this can take quite some time if you've got a lot of data.

     

    If you want it done in a hurry, it can be worth knowing that DrivePool pools are formed from the collective contents of each pooled physical drive's hidden root PoolPart.guid folder - so you could manually cut and paste each drive's content into the corresponding hidden folder and then tell DrivePool to do a remeasure so it detected the "new" folders/files. However, if you've got different files on the drives that share the same folder/filename structure, or if some of those folders are shared or special or carrying unusual access control metadata, you risk DrivePool and/or Windows getting confused, so this is not recommended unless you know what you're doing.
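    The manual migration described above, with a collision check so nothing in the pool gets overwritten, might look like this in Python (hypothetical paths; a minimal sketch of the idea, not a recommendation):

```python
import shutil
from pathlib import Path

def migrate_into_poolpart(drive_root: Path, poolpart: Path):
    """Move a drive's existing files into its hidden PoolPart folder,
    skipping (not overwriting) any path already present in the pool.
    Returns the list of files skipped due to name collisions."""
    # Snapshot first, excluding anything already inside the PoolPart folder.
    files = [p for p in sorted(drive_root.rglob("*"))
             if p.is_file() and poolpart not in p.parents]
    skipped = []
    for src in files:
        dst = poolpart / src.relative_to(drive_root)
        if dst.exists():
            skipped.append(src)  # leave it for a human to resolve
            continue
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src), str(dst))
    return skipped
```

    Anything returned in the skipped list is exactly the "same folder/filename structure" conflict warned about above, and needs to be resolved by hand before a remeasure.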

  11. As a mainly 1.3.x user, based on past experience and posts I expect Alex to continue to port bugfixes - and features - from 2.x to 1.x, as parts of the codebase are shared between them. He is however only one person, and I imagine debugging kernel/filesystem code ain't easy.

     

    Also, build 320 of the 2.x tree has just been released which fixes a problem introduced in 312, so if you install 320 and it works (or doesn't work), that would presumably help further narrow down the cause.

  12. Hi hzz, TrueCrypt encrypts at the physical block level, not at the file system level (let alone a virtual pool file system).

     

    So: if you want it to have any chance of working, you need to FIRST encrypt your physical drives, THEN mount those encrypted drives, THEN pool the mounted encrypted drives, in that order. Does that make sense?

     

    Caveat #1: I've not actually tried to pool TrueCrypt volumes (it's been on my "get around to it someday" list for a while), but the above is the only way it's going to work - assuming it works at all.

     

    Caveat #2: DrivePool will want to see those drives when you next boot, so until you re-mount the TrueCrypt volumes DrivePool will be complaining about them being missing (if it works in the first place).

     

    Sidenote: last I looked, dynamic volumes were not compatible with DrivePool anyway.

  13. I had something similar last week, except that my problem went away once I replaced the drive. Scanner emailed me that one of my server's drives had bad sectors. When I went to check on it, the server was frozen. I had to reboot it, and it worked for a while then froze up again, until it was at the point that even just trying to open DrivePool's UI (since that queries all drives in the pool) would freeze the machine again. Since I had everything duped, I physically pulled the drive. No more freezing. The drive's SMART had noted two bad sectors but felt the drive was otherwise fine. Obviously, SMART isn't perfect.

     

    But as to your situation... what happens if you disconnect the new replacement drive? It could be the controller, port or cable being used that has a problem?

  14. Hmm. I do like the idea of giving DrivePool the ability to cooperate with natively fault-tolerant hardware to avoid "double-duplicating" data. However, fault-tolerant hardware often obtains that tolerance via closed or poorly documented formats.

     

    XAlpha, a couple of questions:

     

    Q1. What is the entry for your Drobo array in the Fault Tolerance column in Windows's Disk Management? 

     

    Q2. If a Drobo dies, can the disks inside be mounted as standard readable NTFS volumes by a non-Drobo device?

    You've neatly summed up the "pros" of aggressive equalizing - improved I/O, read striping, etc. However, if all of your drives are equally accessed and suffering equal wear, then (at least in theory) you have an increased chance of all of your drives failing at the same time. That's obviously not desirable.

     

    Another "con" is that any single given file cannot be split across disks, so if your aggressively balanced pool is close to full you might not have enough space to write a particularly large file even though your total free space is many multiples larger than what you need.
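    The arithmetic behind this "con" is simple enough to sketch (hypothetical sizes in GB):

```python
def can_store(file_size_gb, free_space_per_drive_gb):
    """A pooled file must fit entirely on one member disk, so the pool
    can reject a file even when total free space is much larger."""
    return any(free >= file_size_gb for free in free_space_per_drive_gb)

# Three drives with 30 GB free each: 90 GB free in total,
# yet a single 50 GB file has nowhere to go.
can_store(50, [30, 30, 30])  # False
can_store(25, [30, 30, 30])  # True
```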

     

    While both of these "cons" are unlikely (but as I can personally attest, not impossible), having the default settings be conservative is good protocol. Let the user decide what is acceptable risk. :)
