Posts posted by Umfriend

  1. OK, the Drive Usage Limiter suggestion was in relation to your 2nd post, the one on removing two HDDs; it is totally unrelated to the SSDs. In the removal case, you would uncheck both Duplicated and Unduplicated and DP will basically move all files from those disks to the remaining disks.

    On SSD, I am not sure I understand. I don't think it'll work the way you suggested. You cannot have a single (set of) SSDs act as the SSD cache for two Pools. In your proposal, where you would have Pool C = 2xSSD + Pool A + Pool B, the SSDs would not work as a write cache for any I/O you direct at Pool A or Pool B directly. You *could* of course move all the contents of Pool A and Pool B into the new Pool C-related PoolPart.* folders, but then all that data is in Pool C and it would be Pool C's duplication/placement rules that apply, so files might move from Pool A to Pool B and vice versa.

    Important to note: when you add Pool A and Pool B to a different Pool C (Hierarchical Pooling), the files in Pool A and Pool B are not in Pool C. Just like when you add an HDD to a Pool, any files that were on that HDD prior to adding are not in the Pool (until you move/seed them).

    Hope this helps.
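    That hierarchical-pooling point can be sketched as a toy model (the class and method names here are made up for illustration; this is not DrivePool's actual API):

```python
# Toy model of hierarchical pooling: files already on a member
# (a drive or a sub-pool) are NOT part of the parent pool until
# they are moved into the parent's hidden PoolPart.* folder
# ("seeding"). Illustrative only, not DrivePool internals.
class Pool:
    def __init__(self, name):
        self.name = name
        self.members = []          # drives or sub-pools
        self.pooled_files = set()  # files inside this pool's PoolPart.* folders

    def add_member(self, member):
        # Adding a member does NOT pull its existing files into this pool.
        self.members.append(member)

    def seed(self, filename):
        # Moving a file into a PoolPart.* folder makes it part of this pool.
        self.pooled_files.add(filename)

pool_a = Pool("Pool A")
pool_a.seed("movie.mkv")       # file lives in Pool A

pool_c = Pool("Pool C")
pool_c.add_member(pool_a)      # hierarchical: Pool A is now a member of Pool C

print("movie.mkv" in pool_c.pooled_files)  # False: still only in Pool A
pool_c.seed("movie.mkv")                   # only seeding puts it in Pool C
print("movie.mkv" in pool_c.pooled_files)  # True
```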

     

  2. Yes, for a multiple-drive evacuation, using the Drive Usage Limiter balancer is the better way.

    I don't think you can add a single Pool as an SSD to two Pools.

    Rather, you could partition the two SSDs, say into 1TB partitions, pool them together as two 2x1TB SSD pools (so Pool C and Pool D) and then add Pool C as the SSD cache for Pool A, Pool D for Pool B, etc.
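    A rough sketch of that layout with hypothetical names (assuming two physical SSDs, each split into two 1TB partitions):

```python
# Hypothetical layout: each physical SSD is split into two 1TB
# partitions, and each cache pool takes ONE partition from EACH
# SSD, so x2 duplicated writes always land on two devices.
pool_c = [("ssd1", "partition1"), ("ssd2", "partition1")]  # cache for Pool A
pool_d = [("ssd1", "partition2"), ("ssd2", "partition2")]  # cache for Pool B

for name, pool in (("Pool C", pool_c), ("Pool D", pool_d)):
    devices = sorted({dev for dev, _ in pool})
    print(name, "spans", devices)  # each pool spans both physical SSDs
```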

  3. No. DP does not make copies, it manages "duplicates". Delete a file and DP deletes all duplicates.

    Windows File History should work on a Pool, however, so if you just use File History on a Pool like you would use it on a drive, you should have what File History offers anyway.

  4. Yeah, so the D drive has files that are on F as well, right? Move everything over from D and you would have two copies of the same file on one drive (which won't actually work; it'll either overwrite or not move). So this idea leaves you with partial x1 duplication that DP then has to solve for you.

    How much slower is E:? I would seriously consider just using DP. It is solid SW. Maybe it takes a bit of time, but it avoids headaches.

  5. AFAIK, copying, or even cloning, does not work.

    The easiest way is:
    1. Install/connect new HDD
    2. Add HDD to the Pool
    3. Now you can either click Remove for the 6TB HDD, or use Manage Pool -> Balancing -> Balancers -> Drive Usage Limiter -> uncheck Duplicated and Unduplicated -> Save -> Re-measure and Re-balance.
    4. Wait. Until. It. Is. Done. (Though you can reboot normally if you want/need to and it'll continue after boot; not entirely sure about that if you go the Remove route.)

  6. AFAIK, but others may confirm, using a 250GB cache drive does indeed not work when writing a 300GB file. I think the issue is that Windows/NTFS/DrivePool does not know in advance what the file size will be.

    I would recommend using 2 cache SSDs. However, there is a workaround to use one physical SSD as 2 SSDs for caching purposes. It would involve setting up 2 unduplicated Pools, each using half of the SSD (which you would have to partition into two volumes, one for each Pool), and then combining the two x1 Pools into one x2 Pool.

    It has its advantages (e.g., backing up using VSS becomes possible) but you lose a little flexibility/efficiency and you run the risk of that single SSD actually failing, taking both halves of the cache with it. I have read here, though, that there is at least one other user who does exactly this.
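    The 250GB-cache-vs-300GB-file point can be illustrated with a toy placement check (the function name and logic are illustrative only, not DrivePool's actual placement code):

```python
# Toy illustration of why a 250GB cache drive can fail on a 300GB
# file: if the file's final size is unknown up front, or known to
# exceed the cache volume's free space, the write cannot complete
# on the cache and must go straight to the archive drives.
GB = 1  # work in whole gigabytes for clarity

def can_land_on_cache(file_size_gb, cache_free_gb):
    # An NTFS volume cannot hold a file larger than its free space,
    # regardless of how the pool would later rebalance it.
    return file_size_gb is not None and file_size_gb <= cache_free_gb

print(can_land_on_cache(300 * GB, 250 * GB))  # False: must bypass the cache
print(can_land_on_cache(200 * GB, 250 * GB))  # True: fits on the cache
print(can_land_on_cache(None, 250 * GB))      # False: size unknown in advance
```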

  7. I wouldn't know for certain, but I highly doubt DrivePool pins MFT data, so even a simple directory listing would require access to the USB drives.

    I would not want that to change either. In my experience, DP is very fast at dealing with changes to the Pool even when made directly to the underlying disk.

  8. Why not simply power down -> disconnect from the SATA ports -> connect to SAS? I am assuming it is an HBA in IT mode.

    Otherwise, if you must, it is easier to use the Drive Usage Limiter. Uncheck both Duplicated and Unduplicated for the drives you want to empty, then hit re-balance. The big advantage is that no files will be placed on the other HDDs that you want to empty. When using Remove, some of the 1st HDD's data may be written to the other 3 HDDs you want to empty, and that is a waste of time.

  9. Yes, that is a tad annoying. I would consider connecting them to another PC (via USB even) and then trying to salvage what I can over the network. Also because I'd prefer to run HDD recovery software on a client.

    How do you know they have been broken for a while, and why hasn't Scanner tried to evacuate them?

  10. 19 hours ago, shrydvd said:

    Thank you Umfriend.   I actually do have everything duplicated, but I believe your answer explains either scenario.  If there is no difference in performance, and the larger pool offering better chance of success if scanner finds issue, I'll stick to what I'm doing with the one large pool.

    Thank you again!

    David

    Yes, duplication does not matter, unless you want to use a backup solution that uses VSS (like Windows Server Backup) or some sort of imaging and want to avoid backing up both duplicates. But if not, then it does not matter.

  11. I am assuming no duplication, no backups and stable connections to the hard drives. In that case, I would think a single large Pool makes the most sense. The reason is that, should Scanner find an issue and trigger DP to evacuate a drive, you have the best chance that the other drives will be able to take on the load. With smaller pools, that is a question for each pool separately.

    Performance-wise? No issue or noticeable difference.

  12. What have you tried? I note that you now only have "Other" on 2 drives.

    You could look at the PoolPart.* folder on G:\ with Properties through Explorer and see whether the used space agrees with the non-"Other" data.
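    If you prefer a script over Explorer's Properties dialog, a short sketch that sums the on-disk bytes under a PoolPart folder (the path is an example; your PoolPart.* folder name will differ):

```python
# Sums file sizes under a folder tree so you can compare the result
# with what DrivePool reports as non-"Other" data for that drive.
import os

def folder_size_bytes(root):
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                total += os.path.getsize(path)
            except OSError:
                pass  # skip files we cannot stat (in use, no permission)
    return total

if __name__ == "__main__":
    # Example path only; substitute your actual PoolPart.* folder.
    size = folder_size_bytes(r"G:\PoolPart.example")
    print(f"{size / 1024**3:.2f} GiB")
```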

  13. I don't really know. I speculate that MB-SATA may not be designed to be optimal; for instance, I do not know whether the motherboard can actually deliver 600MB/s on all ports simultaneously. I guess it can be found through Yahoo, but I am not sure. As for PCIe SATA cards, I have read that they can deliver lackluster performance, as in bandwidth shared across ports and dependent on the actual number of PCIe lanes the card uses, but never that they'd interfere with MB-SATA. Again, I don't actually know, and I am sure there are different qualities out there.

    But really, I think your PCIe SATA card will be fine and give no issues. It should work for your transition. I'd leave the card in the PC once done so that, should you need it, you have two ports readily available. The SAS HBA route is one I would recommend if you expect storage to grow in number of drives. For me, it works like a charm, and as these are, AFAIK, all made with enterprise servers in mind, I am pretty comfortable about performance, compatibility and endurance.
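    Some back-of-envelope arithmetic for the shared-bandwidth point (per-lane figures are approximate usable rates, not exact spec values):

```python
# Rough per-port bandwidth for a PCIe SATA card: all SATA ports on
# the card share the card's single PCIe link, so per-port throughput
# under full load is (lane rate * lanes) / ports.
PCIE_LANE_MBPS = {"2.0": 500, "3.0": 985}  # approx. usable MB/s per lane
SATA3_MBPS = 600                           # SATA III ceiling per port

def per_port_bandwidth(gen, lanes, ports):
    return PCIE_LANE_MBPS[gen] * lanes / ports

# A cheap 4-port PCIe 2.0 x1 card: ~125 MB/s per port when all four
# ports stream at once, well below a single SATA III link.
print(per_port_bandwidth("2.0", 1, 4))  # 125.0
# A 4-port PCIe 3.0 x2 card keeps ~492 MB/s per port under the same load.
print(per_port_bandwidth("3.0", 2, 4))  # 492.5
```

    For spinning HDDs (typically well under 300 MB/s sequential) even the shared figures are usually not the bottleneck, which is why such a card is fine for a transition.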

  14. I am sure it is possible to do it that way but it is, IMHO, messy and prone to issues. If you physically remove a drive, the Pool would become read-only, and I am not sure you could then add another drive and transfer, even if you did it outside of DrivePool.

    I would certainly recommend going the PCI SATA card (or better yet, install a SAS HBA like an IBM M1015/Dell Perc H310 and enjoy an abundance of SATA ports) route.

    Personally, I always ensure I have at least one SATA port available for upgrading/troubleshooting.

    Having said this, for this upgrade, you could also use a USB3 drive adapter (although I do not recommend USB for long-term use).
