Everything posted by Ned

  1. Balancing hardlinks

    Not an answer to the hardlink issue, but just a suggestion for another (possibly far simpler) way to fix some of these duplicate-file issues such as with torrents - enable data deduplication (requires Server 2012 or newer). Then you can just copy the files - no need for hardlinks, and DrivePool can keep treating all your files as normal files correctly - while Windows handles the deduplication for you. Note that this will result in a highly fragmented drive, which may cause performance issues depending on your use case. It can also produce the counterintuitive situation where your physical drive is storing more data than can fit on it (possibly double, in the case of something like the torrent scenario described above), so it can no longer be duplicated by DrivePool to another same-sized drive.
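    To illustrate the difference, here's a rough Python sketch (nothing DrivePool-specific, temp files only): a hardlink is the same NTFS file record under a second name - presumably what makes hardlinks awkward for pooling and duplication - while a plain copy is an independent file whose redundant blocks the OS dedup job can later reclaim:

      import os
      import shutil
      import tempfile

      # Work in a throwaway temp folder so this is safe to run anywhere.
      base = tempfile.mkdtemp()
      src = os.path.join(base, "original.bin")
      link = os.path.join(base, "hardlink.bin")
      copy = os.path.join(base, "copy.bin")

      with open(src, "wb") as f:
          f.write(b"x" * 1024)

      os.link(src, link)       # hardlink: same file record, second name
      shutil.copy2(src, copy)  # plain copy: an independent file

      print(os.path.samefile(src, link))   # True  - one file, two names
      print(os.path.samefile(src, copy))   # False - separate file; on Server
                                           # 2012+ the dedup job can reclaim
                                           # the duplicate data blocks later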
  2. Note that the number of parity disks is a statistical risk thing - 1 parity disk can recover from 1 other disk failure, 2 from 2, etc. The risk of multiple disks failing increases the more disks you have, but if you're happy with the risk, you don't have to have multiple parity disks. The parity disk will always need to be at least equal in size to the largest data disk you're protecting - so while it doesn't have to be the biggest disk you have, it can't be smaller. Note that you can have multiple different SnapRAID sets configured at once with different numbers of parity disks, etc., if you want to get complicated about it - a similar concept to having some more valuable things duplicated more than other, less important things in DrivePool, I suppose.
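    As a toy illustration of those two rules (made-up disk sizes, Python just for the arithmetic):

      # Hypothetical layout: sizes in TB.
      data_disks_tb = [8, 4, 3, 3]   # disks being protected
      parity_disks = 2               # number of parity disks you choose

      # Each parity disk must be at least as large as the largest data disk.
      min_parity_size_tb = max(data_disks_tb)

      # N parity disks let the set survive N simultaneous disk failures.
      tolerated_failures = parity_disks

      print(f"Each parity disk needs to be >= {min_parity_size_tb} TB")
      print(f"Set survives up to {tolerated_failures} simultaneous failures")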
  3. Unfortunately, at this stage dynamic disks are the only software RAID in Windows that supports redundancy for the boot volume - a scenario Microsoft continues to support and recommend when a redundant boot volume is required. This is something that can't be (officially) done using Storage Spaces, and as far as I'm aware, Microsoft has no plans to change that. Probably not an issue for those with large numbers of drives, but for those of us with smaller systems and limited numbers of drives, having a small redundant OS volume on larger dynamic disks, with the rest used for various levels of data - some mirrored, some not - is a nice, efficient storage solution. It would be really beneficial to be able to use something like DrivePool in conjunction with this.
  4. Thanks. One of the main reasons I was considering switching to DrivePool was the greater flexibility - currently, as it's quite a small system, I have multiple VMs and the OS all on a single mirror, which is obviously bad for performance. I figured that with DrivePool I might be able to put the busier VMs on different disks from each other and from the OS, which should improve performance. However, I've just found out that you can't use dynamic disks in DrivePool (why? the explanation that the disks are more complex makes no sense - surely DrivePool operates at the volume level, so what's below that shouldn't matter), and that sort of shoots my plans in the foot... My mirrors are dynamic disks (the only way to mirror a boot volume with Windows software RAID, since Storage Spaces doesn't support booting), which apparently means they won't work in DrivePool, and it also makes it nearly impossible to convert from a dynamic-disk-based system to a DrivePool-based system without a huge amount of work, copying and temporary spare storage space. Is there any way to get DrivePool to work on volumes on dynamic disks?
  5. WizTree reads the MFT directly rather than scanning the filesystem, which is why it's so fast. I'm guessing that as the DrivePool drive isn't an actual on-disk NTFS filesystem, it doesn't have an MFT to read. For WizTree to work, it would need to recognise that the DrivePool drive is not actually NTFS and fall back to a normal scan. WinDirStat and other similar products that scan the filesystem rather than reading the MFT should work, I'd assume.
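    For what it's worth, here's a rough Python sketch of the slower "walk the filesystem" approach, which only needs ordinary file APIs and so should work against a pool drive; MFT-based tools instead open the raw volume and parse NTFS structures directly, which a virtual pool drive can't provide. The drive letter is hypothetical:

      import os

      def directory_size(root):
          # Recursively total file sizes using normal directory listings,
          # the same way WinDirStat-style scanners do.
          total = 0
          for entry in os.scandir(root):
              if entry.is_dir(follow_symlinks=False):
                  total += directory_size(entry.path)
              elif entry.is_file(follow_symlinks=False):
                  total += entry.stat(follow_symlinks=False).st_size
          return total

      print(directory_size("P:\\"))   # hypothetical pool drive letter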
  6. Hello all, I have a Windows 2012 R2 server with a 4TB RAID1 mirror (2 x 4TB HDD, using basic Windows software RAID, not Storage Spaces - you can't boot from a Storage Spaces pool) and some other non-mirrored data disks (1 x 8TB, 2 x 3TB, etc.). This is configured with the OS and "Important Data" on the 4TB mirror, and "Less Important Data" on the other non-redundant disks. The 4TB mirror is now nearly full, so I intend to either replace it with a larger mirror, or replace it with two larger non-mirrored drives and use DrivePool to duplicate some data across them. I'm evaluating whether the greater flexibility of DrivePool is worth moving to, instead of the basic mirror I use currently.

    Current config is this:
      • C: - redundant bootable partition mirrored on Disk 1 and Disk 2
      • D: - redundant "Important Data" partition mirrored on Disk 1 and Disk 2, and backed up by CrashPlan
      • E:, F:, etc. - non-redundant data partitions for "Less Important Data" on Disks 3, 4, etc.

    If I moved to DrivePool I guess the configuration would be this:
      • C: - non-redundant bootable partition on Disk 1
      • D: - DrivePool "Important Data" pool duplicated on Disk 1 and Disk 2, and backed up by CrashPlan
      • E: - DrivePool "Less Important Data" pool spread across Disks 3, 4, etc.

    - or -

      • C: - non-redundant bootable partition on Disk 1
      • D: - DrivePool pool spread across all disks, with "Important Data" folders set to be stored only (and duplicated) on Disk 1 and Disk 2, and backed up by CrashPlan (or something similar using hierarchical pools)

    I have a few questions about this:
      • Does it make sense to use DrivePool in this scenario, or should I just stick to using a normal RAID mirror?
      • Will DrivePool handle real-time duplication of large in-use files such as 127GB VM VHDs, and if so, is there a write-performance decrease compared to a software mirror?
      • All volumes use Windows data deduplication. Will this continue to work with DrivePool? I understand the pool drive itself cannot have deduplication enabled, but will the drive(s) storing the pool continue to deduplicate the pool data correctly? Related to this, can DrivePool be configured so that if I downloaded a file and then copied it to multiple different folders, it would most likely store those copies all on one physical disk so they can be deduplicated by the OS?
      • CrashPlan is used to back up some data. Can it be used to back up the pool drive itself (it needs to receive update notifications from the filesystem)? I believe it also uses VSS to back up in-use files, but as this is mostly static data storage I think files should rarely be in use, so I may be able to live without that. Alternatively, could I back up the data on the underlying disks themselves?
      • Are there any issues or caveats I haven't thought of here?
      • How does DrivePool handle long-path issues? Server 2012 R2 doesn't have long-path support and I do occasionally run into path-length issues. I can only assume this is even worse when pooled data is stored in a folder on the underlying disks, effectively increasing the path length of every file? (A rough way to check for paths close to the limit is sketched below.)

    Thanks!
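    Regarding the path-length question above, a rough Python sketch (hypothetical drive letter, assuming the classic 260-character MAX_PATH limit) for spotting files that are already close to the limit:

      import os

      LIMIT = 260   # classic MAX_PATH without long-path support

      def long_paths(root, margin=20):
          # Yield (length, path) for files within `margin` characters of the limit.
          for dirpath, _dirnames, filenames in os.walk(root):
              for name in filenames:
                  full = os.path.join(dirpath, name)
                  if len(full) >= LIMIT - margin:
                      yield len(full), full

      for length, path in long_paths("D:\\"):   # assumed pool drive letter
          print(length, path)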