Posts posted by Umfriend

  1. But that is my point, unless I am missing the obvious: one file should never be stored on G:\ _and_ on H:\. A file _may_ be stored on E:\ and F:\. So, the total duplicated data on E:\ and F:\ should be greater than or equal to that on G:\ and H:\ (because if one duplicate of a file is stored on either G or H, then the other duplicate MUST be stored on E or F). So, I am looking at E+F (2 HDDs) compared to G+H (2 partitions on ONE HDD); see the sketch below.
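
    To make the check concrete, here is a minimal Python sketch of the invariant. The GB figures are the ones reported in the post below; everything else is just illustration:

    ```python
    # With x2 duplication, the two copies of a file must live on different
    # physical disks. G:\ and H:\ are partitions of the SAME 4TB HDD, so at
    # most one copy of any file can sit on G or H; the other copy must be on
    # E or F. Hence: duplicated(E) + duplicated(F) >= duplicated(G) + duplicated(H).
    duplicated_gb = {"E": 985, "F": 1004, "G": 1016, "H": 1018}  # figures from the next post

    two_hdds = duplicated_gb["E"] + duplicated_gb["F"]   # two separate physical disks
    one_hdd = duplicated_gb["G"] + duplicated_gb["H"]    # two partitions of one disk

    print(f"E+F = {two_hdds}GB, G+H = {one_hdd}GB")
    if two_hdds < one_hdd:
        print("Invariant violated: some data seems duplicated within the single 4TB HDD")
    ```

    With the reported numbers (1,989GB vs 2,034GB) the check fails, which is exactly why this looks wrong to me.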

  2. But then I thought **** it! These guys don't put out betas that easily, and for now, yep, unduplicated is gone. But now something else seems weird to me:

    1. The two 2TB HDDs (E:\ and F:\) show 985GB and 1,004GB duplicated (1,989GB total).

    2. The two 2TB volumes (G:\ and H:\) on the 4TB HDD show 1,016GB and 1,018GB duplicated (2,034GB total).

     

    This seems odd to me, as the 4TB HDD should hold, at most, one duplicate/copy of any file. It could be zero, as the two duplicates could both be on E:\ and F:\. So I would expect the sum of duplicated data for E:\ and F:\ to be equal to or greater than that for G:\ and H:\.

     

    Ran dpcmd (nice tool, although the output structure might be improved somewhat, as others have indicated; for instance, to be able to import it into Excel or a database) and it showed no errors, so that is nice, but still... (A rough conversion sketch follows below.)
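
    On the Excel point, a stopgap could be to redirect dpcmd's output to a text file and flatten it to CSV. A rough sketch, assuming the output is a tree of indented key/value lines; I have not verified the exact format, so treat this as a starting point only:

    ```python
    import csv
    import re
    import sys

    # Illustrative only: assumes dpcmd's text output consists of indented
    # "Key: Value" lines. The real format may well differ; adjust the regex.
    LINE = re.compile(r"^(\s*)([^:]+):\s*(.*)$")

    def dpcmd_to_csv(infile: str, outfile: str) -> None:
        """Flatten indented key/value lines into depth,key,value rows for Excel."""
        with open(infile) as src, open(outfile, "w", newline="") as dst:
            writer = csv.writer(dst)
            writer.writerow(["depth", "key", "value"])
            for line in src:
                m = LINE.match(line.rstrip("\n"))
                if m:
                    indent, key, value = m.groups()
                    writer.writerow([len(indent), key.strip(), value])

    if __name__ == "__main__":
        dpcmd_to_csv(sys.argv[1], sys.argv[2])  # python flatten.py dpcmd_output.txt out.csv
    ```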

  3. Hi,

     

    So I had to rearrange storage a bit and currently it is like this:

    2 x 2TB HDDs

    1 x 4TB HDD, partitioned as 2 x 2TB volumes.

    Pool file duplication x2 (no folder duplication)

    DrivePool version 2.1.1.561, only the default balancers and all at default values.

    OS is WHS 2011

     

    Statistics show 17.8MB as Unduplicated. I do not understand why any data would be left unduplicated.

     

    I would have thought all of it would be duplicated, and that I could be assured that any file would be stored on at least one of the two 2TB HDDs (as it should not be duplicated on the 2 x 2TB volumes of the single 4TB HDD only).

     

    I have tried re-measuring.

  4. Is your Pool organization at 100%? If not, then you can instruct it to re-balance. Perhaps you experience this behaviour because you do not have automatic balancing and, though I am by no means certain, I think you need balancing for the Balancer Plug-Ins to actually work. I have 'Balance immediately' selected, and 'Not more often than every...' is unchecked.

     

    Edit: Never mind, that won't be it. Christopher will come and help you out but, well, he may be enjoying some time off given the season.

  5. You have triple duplication? Wow...

     

    Anyway: "general purpose like storing family pictures/movies that can be sometimes deleted, created."... but pictures will, I assume, be "small"? In the MBs, not tens of them? Movies, say a decent .mkv rip, might be what, 4GB at most? Writing those occasionally would not cause the write penalty. You would need to write at least 20GB in one go before you might experience it, and I *think* that the PMR cache gets written to the SMR area pretty quickly once there is no I/O.

     

    Wrt. power consumption, those are rated numbers, and given how close they are, an actual measurement would be nice. But yes, if you have a lot of writes then you'd need a more nuanced statistic, as the Archive will indeed suffer from a kind of, I guess we can call it, write amplification. A back-of-envelope model of the write cliff follows below.
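
    To illustrate the cliff I mean, a rough model: the ~20GB cache size is the figure mentioned elsewhere in this thread, but both speeds below are assumptions of mine, not manufacturer figures:

    ```python
    # Back-of-envelope model: writes up to the persistent (PMR) cache size land
    # at full speed; anything beyond it drops to the slow SMR rewrite rate.
    # Cache size and both rates are assumptions for illustration, not specs.
    PMR_CACHE_GB = 20
    FAST_MB_S = 150   # assumed PMR/cache write speed
    SLOW_MB_S = 30    # assumed sustained SMR speed once the cache is full

    def write_time_minutes(burst_gb: float) -> float:
        """Rough time to absorb one continuous write burst of burst_gb gigabytes."""
        fast_gb = min(burst_gb, PMR_CACHE_GB)
        slow_gb = max(burst_gb - PMR_CACHE_GB, 0)
        seconds = fast_gb * 1000 / FAST_MB_S + slow_gb * 1000 / SLOW_MB_S
        return seconds / 60

    for gb in (4, 20, 50):  # a movie rip, the cache size, a big burst
        print(f"{gb:>3}GB burst: ~{write_time_minutes(gb):.1f} min")
    # ~0.4, ~2.2 and ~18.9 min: note the jump once the burst exceeds the cache.
    ```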

  6. I would say yes. I cannot find a "normal" 6TB HDD that is cheaper than these 8TB HDDs, and they read like crazy (for spinners, certainly at 5,900rpm). And if you write about 1 movie a day (assuming they are less than, say, 25GB each) then you won't even suffer a write penalty. Or per hour. Or per ten minutes, I would think. And if you have a number of them in a Pool and don't use file-placement rules, it gets even better still, I would speculate.

  7. As an example, a recent Server Backup wrote 1.53TB in 3:34 hrs. That is, OTOH, about 120MB/s.

    I just realised this is simply false. The backup "transferred" 1.53TB, but that does not mean that amount of data was written to the HDD; that would have been far, far (FAR!) less. Sorry. (The ~120MB/s figure itself is just arithmetic on the transfer; see the sketch below.)

     

    I still agree with everything else I wrote here.

     

    The SSD Optimizer plug-in would indeed help should you encounter write-performance issues. Of course, only until writes exceed the size of the SSD (or HDD) cache plus the PMR cache. But I am pretty sure you could use a 4TB HDD you already own as a cache as well with that plug-in. That might actually be the best setup of all.
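
    For what it is worth, the ~120MB/s in the quote is easy to reproduce and still holds as a transfer rate, whatever was physically written (decimal units assumed):

    ```python
    # Sanity check on the quoted rate: 1.53TB transferred in 3h34m.
    seconds = 3 * 3600 + 34 * 60        # 12,840 s
    megabytes = 1.53 * 1_000_000        # 1.53TB in MB (decimal)
    print(f"{megabytes / seconds:.0f} MB/s")  # ~119 MB/s, i.e. the ~120MB/s I quoted
    ```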

  8. Pretty much every time I wrote more than 50GB. I repeated this multiple times on 2 different drives with the same results.

     

    This is exactly what I have seen, which averaged to a little lower than 30MB/s, if I recall correctly.

    Oh, I know that. I was more wondering about when you, as a user, actually have I/O in excess of 50GB that you are waiting on to complete. I mean, I use them as Server Backup HDDs and I am certain I get that write penalty over and over, but I don't care, because the backup process is automated and does not affect my user experience (restoring is different, of course; a rough sense of the waiting-time cost is sketched below). Strangely enough, these backups run way faster on the 8TB HDD than on a 4TB WD Red...
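
    Just to put a number on why I don't care in the backup case. The 30MB/s penalized rate is the figure reported above; the 150MB/s baseline is an assumption of mine for a healthy sequential write on a drive of this class:

    ```python
    # What does the penalty cost in waiting time for a single 50GB write?
    # 30MB/s is the penalized rate reported above; 150MB/s is an assumed
    # unpenalized sequential rate, not a measured one.
    GB = 50
    for label, mb_s in (("penalized (post-cache SMR)", 30), ("assumed unpenalized", 150)):
        print(f"{label}: ~{GB * 1000 / mb_s / 60:.0f} min")
    # ~28 min vs ~6 min: painful if you are sitting there waiting, invisible
    # in an automated overnight backup.
    ```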

  9. Actually, there is a non-SMR cache of 20GB, so writing more might incur a write penalty. However, that only applies when areas of the HDD are re-written. If it is just write-once with no deletions, then it should be fine AFAICS.

     

    As an example, a recent Server Backup wrote 1.53TB in 3:34 hrs. That is, OTOH, about 120MB/s. I have yet to come across a use-case where it would hit you noticeably, let alone "hard". In what circumstances have you experienced that?

  10. The only issue with WHS2011 Server Backup and AF drives I am aware of is in cases where the _backup_ HDD is AF AND connected through a SATA-USB hub that lies about the HDD's properties (many do, to help compatibility of 2.2TB+ HDDs with Windows XP). I am backing up to AF HDDs (4TB WD and 8TB Seagate Archives) connected directly to SATA, and my 2TB server HDDs are AF.

  11. I can only think of a few drawbacks to partitioning a 4TB HDD as 2 x 2TB volumes, and I can hardly imagine these being relevant IRL:

    1. You cannot store single files > 2TB.

    2. Assuming you would actually write to both partitions (instead of filling up one first), one partition will perform better than the other, because one of them will use the inner cylinders, where performance is lower.

     

    I do remember a time when the latter issue was relevant. You'd get better performance out of a 1TB HDD partitioned as a 200GB HDD than from a plain 200GB HDD, simply because the heads never had to move as much. But nowadays?

     

    Personally, as a WHS2011 user, I would always partition in 2TB volumes (at least until DP gets the grouping/string functionality), but then I do a full Server Backup. In your case, you could simply do 4TB and shuffle things around a bit as, if and when you do want to use Server Backup.
