
Shane (Moderator): 756 posts, 70 days won

Posts posted by Shane

  1. You might want to try new SATA cables anyway; they're cheap and while it's very rare they can still go bad.

     

    I'm under the impression that DrivePool communicates with the physical drives strictly via the NTFS subsystem of the OS, so it's not going to cause SMART errors other than via ordinary wear (like any other app), because SMART operates at the hardware level. So your problems starting with a drive throwing up SMART errors is a bit of a sign.

     

    If you have spare SATA ports, try avoiding the port you had the faulty drive plugged into; if you have two SATA controllers on your board, try avoiding the controller you had the faulty drive plugged into. See if those make any difference.

     

    Has anything else updated recently - the VM software or the board drivers? And if you're running Windows 10 check if it's updated a driver behind your back, because that's something I've had to deal with repeatedly.

  2. "edit2 (newest): If I can trust that my 2x Duplication works, is it fair to assume that I can simply pull out the drive, let DrivePool re-balance everything with the remaining drives in the pool (there is enough space) - implying that I am temporarily in a state of no duplication for all the files on the drive I just pulled out?"

     

    Yes, it is fair to assume that with global duplication enabled you can just pull the drive. If you want DrivePool to double-check, tell it to Remove the drive (tick both "Duplicate files later" and "Force damaged drive removal") first.

  3. 1&2) When you remove a drive from a pool, DrivePool attempts to move all the files on that drive to the remaining drives in the pool. If there are no other drives left in the pool, it leaves the files intact, closes the pool and makes the normally hidden PoolPart folder (which contains all your files) visible.

    3) not 100% sure what you're asking here.

    4) You can use the Balancers and Placement rules (e.g. "keep anything in the Videos folder on local drive 1, local drive 2 and GSuite"). Otherwise, DrivePool will decide for you according to the Balancer rules: the default setting is to put new files on the drive(s) with the most free space.

     

    Note that if you want to have 4 local drives with x2 local duplication plus 1 cloud drive with everything duplicated to it too, you would need to set up as follows:

     

    * Pool Options > File Protection... > Pool File Duplication... > 3x Duplication (since you want two local copies plus one cloud copy)

    * Pool Options > Balancing... > Balancers > Drive Usage Limiter > Drive limit options > local drives only "Unduplicated" ticked and cloud drive only "Duplicated" ticked.

     

    This should result in DrivePool keeping two duplication instances locally and one duplication instance on the cloud drive; because the File Protection rule is more important than the Drive Usage Limiter rule, the cloud drive gets a copy but DrivePool will still put the remaining two copies on the local drives. I have tested this myself, but you should test it yourself too.

     

    Also note that if you add another drive to the pool, you will have to update the Drive Usage Limiter rule to account for the new drive.

  4. It almost sounds as if you're saying you're relying on the Balancer to remove the files from the old drive rather than using the Remove button?

     

    If you mean that the Remove process is still using the engine and thus being slowed that way - have you tried turning off Automatic balancing before starting the Removal process, ticking "Duplicate files later", and/or setting a global file placement rule to prioritise the new drive above all others?

  5. Your description reminds me of a file server where the fault was eventually traced to the hotswap rack the drives were kept in. Nothing showed up in event viewer (other than SMART reporting that yet another drive was dying) because the cause was below the OS level, and it killed several drives before everything else (including motherboard) was finally ruled out and the Rack Of Death was consigned to the trash.

     

    So if there's anything - and I mean anything - between your drives and the motherboard, consider it a possible suspect.

  6. Indeed, the scenario you describe is one of the reasons why the mantra "RAID is not backup" exists.

     

    Q. Does DrivePool have a way of keeping track of what files are on what drive, so in the event of a failure, I could figure out which sets of files (or albums) were affected?  Or do I need to be keeping track of that on my own?

     

    A. No - it does not maintain such a list. Yes - you would need your own method of keeping track.

  7. Q: Can I copy a bunch of files directly to a poolpart folder on one drive and would DP do balancing _and_ duplication (if set)?

     

    A: Yes, although the balancing and duplication would only occur when DrivePool next performs its balancing and duplication consistency checks.
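    As a sketch of the above: copying directly into a drive's hidden PoolPart folder first requires locating it. The "PoolPart.<id>" name is DrivePool's folder-naming convention; everything else here (the helper name, the drive path) is a hypothetical illustration, not DrivePool's own API.

```python
# Sketch: locate the hidden PoolPart folder at the root of a pool drive
# before copying files into it directly. "PoolPart.<id>" is DrivePool's
# naming convention; the drive path passed in is an assumption.
from pathlib import Path
from typing import Optional

def find_poolpart(drive_root: str) -> Optional[Path]:
    """Return the first PoolPart.* folder at the root of a drive, if any."""
    matches = sorted(Path(drive_root).glob("PoolPart.*"))
    return matches[0] if matches else None

# Hypothetical usage on Windows:
#   part = find_poolpart("D:\\")
#   if part is not None:
#       shutil.copy2("bigfile.mkv", part / "Videos")
```

    Anything copied this way only shows up as balanced/duplicated after DrivePool's next consistency pass, as noted above.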

  8. Just for reference:

     

    1. There's a privilege level a step beyond Administrator: the SYSTEM account. It is possible to launch applications (including Explorer windows) as SYSTEM, but it involves a bit of shenanigans.

     

    2. Even if you are logged in as an Administrator, there is a feature/bug of Windows (at least as of v7, haven't tested v8) that your Explorer windows default to running with only Standard user privileges. I don't recall the specifics, since I'm quite sick at the moment.

     

    EDIT: for a workaround to #2, see http://superuser.com/questions/58933/how-do-i-run-the-windows-7-explorer-shell-with-administrator-privileges-by-defau

  9. Long story short: folder placement should default to "off", and it should be completely up to you whether you want the extra administrative responsibility of using it to get that "extra mile" out of your pools.

     

    Long story long: folder placement is primarily a tool for (1) pools with large number of drives and/or (2) pools that are not using full duplication.

     

    1. As the number of drives grows, so does the impact of physically scattered files, from negligible to significant: on power consumption (inefficiently waking multiple drives from standby), on latency (waiting while those drives come out of standby), and on bandwidth (if multiple users are accessing different areas of the pool, physical scattering increases the odds of any two users competing for the same physical disks).

     

    2. In dealing with storage, there is a mantra ignored at peril: "RAID Is Not Backup". Since drives are not free, sometimes we have to choose between having enough drives for a fully duplicated pool and enough drives for the backup(s) - and for vital data you should always choose the latter. However, some backup software may lack certain features, such that performing a partial restore of physically scattered files is awkward (or to be more blunt, a right pain in the butt).

     

    3. Also, properly implemented, it provides an efficient alternative to the current awkward workaround of splitting files across multiple pools (which has its own drawbacks).

  10. Hi James. Yes, if a trial version of DrivePool expires, it does become read-only (so your files are still intact and accessible) until you re-activate it with a paid licence.

     

    If you open the user interface for DrivePool, it should display how many days you have left in the trial or that it has expired.

  11. I have both read reports of, and experienced myself, drives "fixing" their own SMART records re bad sectors. Whether this is because the drive is correcting a false positive, or because the drive firmware thinks you don't need to know that it used up some undocumented "slack space" to cover for the bad sectors, or some other reason, I've no idea and can only speculate.

  12. #1 - For future reference re MatBailie's query: if you physically remove a drive without removing it from the pool first, DrivePool will complain about the missing drive and switch to read-only recovery mode until you either plug the drive back in or tell DrivePool that it should drop the missing drive.

     

    DrivePool, like RAID, is not designed as a backup solution. It is designed as a redundancy solution. As Drashna suggests, for backing up a pool of drives you would use a second pool of drives and a sync tool (my current personal favourite is FreeFileSync, but that's just personal preference and your mileage may vary).

     

    But what would the behaviour of DrivePool be if we tried this?  For example...

     

    Initial Setup

    > I have 3 drives (1.5TB, 1TB, 1TB)

    > I create a pool with the 1.5TB drive and 1TB drive

    > I move my Photos folder to that pool and set it to duplicate

    > After duplication has finished I remove the 1TB drive from the server and take it to work

    > I add the other 1TB to that pool

     

    At this point I should have three copies of my family photos, one copy being at work for disaster recovery.

     

    #2 - Hi Dunedon, just a caution, there are certain conditions (involving the pool being near-full or being moved to a new OS) where disk content might NOT remain static.

     

    Drashna: this gives me an idea for a DrivePool suggestion - is it possible, or could it be made possible, for a user to see when DrivePool last wrote (including as part of a balancing operation) to a particular drive in the pool?

     

    I'm not expecting to have the hot-swap drive as part of the Pool ... and I agree that SyncToy isn't going to work for what I'm contemplating.

     

    I think I'll set up the system to fill one drive at a time to 95%, and then when it flips over to the new drive just copy that first one (copying the hidden folder that contains all the files in the pool) to the hot-swap for storage.

     

    Since the drives won't have any R/W data (it's only media) I shouldn't have any update issues and drives that have been archived will always stay "current".
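    The archiving step described above (copying a filled drive's hidden folder to a hot-swap drive) could be sketched like this; the paths and helper name are assumptions for illustration only.

```python
# Sketch: archive a filled pool drive by copying its hidden PoolPart folder
# (which holds that drive's share of the pool's files) to an archive drive.
import shutil
from pathlib import Path

def archive_poolpart(source_part: Path, archive_root: Path) -> Path:
    """Copy a PoolPart folder tree to the archive drive, preserving timestamps."""
    dest = Path(archive_root) / source_part.name
    shutil.copytree(source_part, dest, dirs_exist_ok=True)
    return dest
```

    Since the archived drive's content is static media, a one-off copy like this stays "current" without needing a sync tool.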

  13. Hi, Vortex12.

     

    Because under normal conditions a file system will try to avoid using damaged parts of a drive, and the whole point of File Recovery is to deliberately access those damaged parts of the drive to attempt the recovery of broken files, it's unavoidable that the Read Error rate is going to go up fast - and any decent OS that monitors SMART is going to flag this as a Bad Thing.

     

    All that SMART knows is that there's a sizable number of pending dubious sectors and the read error rate is going up fast. Frankly I'd be a LOT more upset if it wasn't warning me about probable imminent drive failure!

     

    (like, coincidentally, last night, when the OS drive on my home server died without any warning at all... :angry::wacko: )

  14. Q: Hi, Is it possible to set DrivePool to duplicate a folder to all disks available in the pool regardless of their number?

     

    A: You would have to set that folder's duplication level so that it was equal to the number of disks in the pool. For example, if your pool consisted of ten disks, you would set that folder's duplication level (number of copies) to 10.
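    The rule above is trivially mechanical, but worth spelling out because it must be re-applied whenever the pool changes size; a minimal sketch (function name is illustrative, not a DrivePool API):

```python
# Sketch: to keep a copy of a folder on every disk in the pool, the
# folder's duplication level must equal the pool's disk count, and must
# be updated whenever a disk is added to or removed from the pool.
def duplication_for_all_disks(pool_disks):
    """Copies needed so that every pool disk holds one instance."""
    return len(pool_disks)
```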

     

    Q: And another question - is it possible to set DrivePool so that it keeps one copy of a folder on one specific disk? For example, I have 3 disks and I want one folder to be duplicated twice, with one copy on one specific disk and the other on either of the two remaining disks.

     

    A: Not yet. It's a planned feature. http://community.covecube.com/index.php?/topic/208-stablebit-drivepool-controlling-folder-placement/

  15. Yes, if you don't use the pool late at night, that would be a good time to have CrashPlan do the backups.

     

    As drashna notes, if automatic balancing is on but not immediate, it defaults to 1am - but since in our scenario we're not using automatic balancing that's not a concern (the idea is that we want the pool to just do the bare minimum of balancing, and do so immediately as required, so that CrashPlan is not wasting bandwidth on files that have been moved between drives - though things might get tricky if we get close to filling all of the drives).

  16. If our intent is to backup the entire pool via CrashPlan, and if we are NOT using duplication, then I might go with something like:

     

    1. Set DrivePool to "do not balance automatically" and TICK "allow balancing plug-ins to force immediate balancing".

     

    2. Turn off the "Volume Equalization" and "Duplication Space Optimizer" balancers.

     

    3. Turn on the "Ordered File Placement" balancer.

     

    4. If feasible, set CrashPlan to do its backups when you are not using the pool, so that it does not read files while DP is moving them around.

     

    5. Then tell CrashPlan to back up the hidden poolpart folders on each drive that forms the pool.

     

    This should minimise the amount of wasted traffic. There might be more optimization we can do, too.
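    Step 5 above amounts to building a backup set out of the hidden PoolPart folders; a minimal sketch, assuming the drive roots are known (the helper name is hypothetical, not part of DrivePool or CrashPlan):

```python
# Sketch: collect the hidden PoolPart.* folder from the root of each pool
# drive, producing the list of paths to hand to the backup tool.
from pathlib import Path

def poolpart_backup_set(drive_roots):
    """Return every PoolPart.* folder found at the root of the given drives."""
    found = []
    for root in drive_roots:
        found.extend(sorted(Path(root).glob("PoolPart.*")))
    return found

# Hypothetical usage: poolpart_backup_set(["D:\\", "E:\\", "F:\\"])
```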

     
    If we ARE using duplication, then things get messy.
     
    Frankly, it would be a lot easier if CrashPlan just implemented a "skip existing files" option.  :rolleyes:
  17. For what it's worth, I also use FreeFileSync to back up to/from pools without issue (if you are using NTFS permissions to secure parts of the pool on a per-user basis, remember to tick the relevant box in FFS's global options), and Safely Remove Hardware releases the drives pretty much exactly as you describe (though minus the fireproof data safe, something I should remedy).

     

    My bugbear is Microsoft's search indexing service, which does not seem to respect Safely Remove (or anything else, really, besides a reboot).

     

    My personal experience is that DrivePool itself only holds pool drives/files/folders open when: you are actively writing to the pool; you are actively reading from the pool; the pool is being balanced, duplicated or remeasured; or (sometimes) you have the DrivePool GUI open.
