Posts posted by Umfriend

  1. Hi,

     

     So I started to play around with BitFlock on my lappy. This one has a Plextor M5-PRO 512GB SSD. Two statistics I am worried about:

    1. Wear Leveling Count: 81702

    2. Total LBAs Written: 210,728,448 B (201 MB)

     

    After running an I/O intensive SQL script, these changed to:

    1. Wear Leveling Count: 81898 (+196)

    2. Total LBAs Written: 211,411,968 B (202 MB)

     

     Could it be that the interpretation rules are not fully correct ("Using Plextor / Marvell SSD specific rules.")? Should I be worried? I doubt the SSD is supposed to last for 80K P/E cycles, and the idea that only 1MB was written during this operation, let alone that only 202 MB was written over 2 years, is absurd.

     

     In a grey-ish font it says, for all statistics, "Value: 100 (Was 100)"...

     

     

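     For what it's worth, a quick sanity check on the "Total LBAs Written" figure: a common convention is that the raw value counts 512-byte LBAs rather than bytes, although vendors differ and I don't know what rules BitFlock actually applies to Plextor/Marvell drives. Under that assumption the numbers look a lot less absurd:

     ```python
     # Sanity check (assumption: the raw counter is in 512-byte LBAs, not bytes).
     RAW_BEFORE = 210_728_448   # raw "Total LBAs Written" before the SQL script
     RAW_AFTER = 211_411_968    # raw value after the SQL script
     SECTOR_BYTES = 512

     total_gb = RAW_AFTER * SECTOR_BYTES / 1e9
     delta_mb = (RAW_AFTER - RAW_BEFORE) * SECTOR_BYTES / 1e6
     print(f"implied total host writes:   {total_gb:.1f} GB")   # ~108 GB
     print(f"implied writes by the script: {delta_mb:.0f} MB")  # ~350 MB
     ```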

  2. I should probably stress it with some of my SQL stuff, but I don't feel like spending the time right now. On my lappy I have an SSD for the DBs, but I had the tempdb on the spinner. When I moved it to the SSD as well, it sped up a script by 40%. I hate to think what would happen on the Seagate Archive HDDs; I imagine queue depths well into the hundreds.

     

     Anyway, I love wbadmin.msc; is there something similar for client backups in WHS2011?

  3. So I had a terrible backup with 183.6GB written in 4:22 hrs, that is 11.7MB/s. But then I had one of 228.1GB written to my 4TB WD Red in 4:08, so 15.3MB/s.

     

    All in all, for now, my conclusion is those Seagate Archive HDDs may be well suited as backup drives.
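     For reference, the arithmetic behind those MB/s figures, as a quick sketch (decimal units, 1 GB = 1000 MB):

     ```python
     def throughput_mb_s(gigabytes, hours, minutes, seconds=0):
         """Average write rate in MB/s for `gigabytes` written in hours:minutes:seconds."""
         duration_s = hours * 3600 + minutes * 60 + seconds
         return gigabytes * 1000 / duration_s

     print(round(throughput_mb_s(183.6, 4, 22), 1))  # -> 11.7
     print(round(throughput_mb_s(228.1, 4, 8), 1))   # -> 15.3
     ```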

  4. Roy, how much data do you plan to have backed up, and how exactly? If you're going to have a Pool of 30TB, no connected backup device I know of will be able to cope.

     

    What OS are you running again?

  5. WHS2011, so .vhd files, not .vhdx unfortunately. I'll keep checking for a while. In two weeks my WD Red backup HDD is up in the roster; I'll take a look at that as well and compare. So far, I am happy. They may not be the best write-performers due to SMR (still unsure), but they seem to get the backup job done, I can grow my server to about 8TB of unduplicated backupable data at low cost, and they read like crazy.

     

    It is a ST8000AS0002-1NA17Z and the ID number is J1PO37PU

  6. Never been there before, thanks! 30 mins for 41GB, that is 22.8MB/s. Of course, this includes preparing/comparing/finalising, so it's a bad benchmark. For comparison, an older backup wrote 25GB in 37 mins, but the target was a ST2000DM001 (Seagate Barracuda 7200?).

     

    Edit: So I am looking at Scanner & Resource Monitor while a Server Backup is being made and, grab something to hold on to, I get 'high' scores like between 130 and 460KB/s. Not MB, KB! Response time, 3,621ms! Disk Queue runs as high as 87.16. Oh wait, all of a sudden numbers start to improve, 3MB/s, 170ms, rising as I write to 5, 6, 8, 12MB/s. I'm fine with that as long as backups succeed and take less than 12 hours (before the next one starts) but oh boy, does it seem SMR can BITE! Now it's at 30MB/s.

     

     Anyway, it finished in 1:22 hrs and wrote 127.83GB according to wbadmin.msc, a 26MB/s average. That's _write_; total I/O is higher as the backup reads a lot too. I don't know how to do a better benchmark/stress test (aside from running some SQL Server DB on it). Once my WD Red is up for Server Backup duty, I'll have a look at that one too.

  7. Not sure about the data loss argument. That would go for defrag as well.

     

     Anyway, first results are in: a 1.6TB backup in 5:52:58, which is about 75MB/s. Mind you, this was a virgin backup HDD. But client backups did run in the meantime and skewed the results (client backups are part of the Server Backup). Also, I am including time for preparing/finalising the backup, which is not actual I/O time.

     

    A clean Server Backup on the other 8TB Seagate ran in 4:40:40, getting to 94MB/s, no client backups in this timeframe.

     

     A last clean backup (I had re-organised the servers by including the two old 2TB Server Backup HDDs and reformatting one of the two 8TB drives), however, took 6:11:02, 71MB/s, and there was no client backup running... I wonder whether this already ran into SMR.

     

     Does Scanner start on the inside of the drive? It starts reading at about 85MB/s but, at 30% done, it mostly does 130MB/s with some short drops to 90MB/s. And the throughput keeps on rising.

     

     It also runs warmer than all the other drives; it can get to 40C!

     

     Edit/Update: With Scanner at 95%, it indicates a performance of mostly above 180MB/s. Of course, these are all _reads_. On occasion it does drop to as low as 40MB/s, which I find a bit strange. Seek Error Rate is at 7861198. Not a rate at all. I wonder whether that is because Scanner cannot yet interpret the SMART data correctly for this drive? Any questions on this drive, feel free to ask and I'll see what I can do.
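     On that Seek Error Rate: the raw value on Seagate drives is commonly read as a packed 48-bit field, with the upper 16 bits counting seek errors and the lower 32 bits counting total seeks; whether the Archive drives follow the same convention is an assumption on my part. Read that way, 7861198 would simply mean roughly 7.9 million seeks and zero errors:

     ```python
     # Assumed Seagate convention: upper 16 bits = seek errors, lower 32 bits = total seeks.
     raw = 7_861_198
     seek_errors = raw >> 32          # 0
     total_seeks = raw & 0xFFFFFFFF   # 7861198
     print(seek_errors, total_seeks)
     ```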

  8. If you are a programmer, then you could perhaps write a bit of code yourself in, say, VBA (what I would do, as I only know VBA and SQL) that would check the folder/file structure of all underlying drives; there is a rough sketch of the idea at the end of this post. Come to think of it, if there were demand for it I might give it a go myself (although it would require Excel as the VBA shell, and Excel typically will not be installed on servers, I would think...).

     

     FileSafe, AFAIK, is a possible new product that will check whether duplicates actually contain the exact same data (to spot possible corruption of one or more of the duplicates).

     

     They make a point of ensuring duplicates are not stored on the same physical HDD, not even if a HDD has more than one partition.
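     As for the folder/file check, here is a rough Python sketch of the idea (Python rather than VBA, just to keep it short); the PoolPart paths are made-up placeholders and would have to point at the hidden pool folders on the underlying drives:

     ```python
     # Walk each underlying drive's pool folder and, per relative path, count
     # on how many drives a copy exists and whether the file sizes agree.
     import os
     from collections import defaultdict

     POOL_PARTS = [
         r"D:\PoolPart.aaaa",   # placeholder: hidden pool folder on drive 1
         r"E:\PoolPart.bbbb",   # placeholder: hidden pool folder on drive 2
     ]

     copies = defaultdict(list)   # relative path -> [(pool folder, size), ...]
     for root in POOL_PARTS:
         for dirpath, _, filenames in os.walk(root):
             for name in filenames:
                 full = os.path.join(dirpath, name)
                 rel = os.path.relpath(full, root)
                 copies[rel].append((root, os.path.getsize(full)))

     for rel, found in sorted(copies.items()):
         if len(found) < 2:
             print("only one copy :", rel)
         elif len({size for _, size in found}) > 1:
             print("size mismatch :", rel, found)
     ```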

  9. Ah yes, as an OS drive it may well do horribly. So that leaves defragmentation and temp storage. But defrag is something that should be doable in the background, suspending in case of real I/O; I guess the FW may also do something while idle, a bit like TRIM on SSDs, e.g. writing new data to clean bundles of tracks and then re-organising when idle. Anyway, once I finally get them I'll see (and tell) how well they do as backup drives.

  10. I remember on another forum people were asking how the heck one would back up 8TB of data. The obvious answer is, of course, with another 8TB HDD LOL. But yeah, that will take quite some time. I'm actually going to use these as backup HDDs for 2 2TB HDDs and perhaps later even 3 2TB HDDs.

  11. Not sure it'll be that soon. WD Greens seemed to do OK, but many give out (anecdotally) after about a year when used in a NAS/server. Endurance is very hard to estimate based on a few months' testing (unless they crap out within that period). But they are cheap per TB, so you might take two and duplicate. They may fail close to one another, but possibly not *that* close. Anyway, for reasonably static data, assuming endurance, these are a good deal ATM IMHO.

  12. duelistjp, those 8TB Seagates are not called "Archive" for nothing. The thing with these is that re-writing a sector may cause more sectors to have to be re-written; that's due to the SMR tech in them. I will get 2 soon, but I intend to use those as backup drives, not actual server storage. On the other hand, the data on my server is mostly static anyway, so it could well do the job. I'd just hate to have, say, a larger database on those.
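      As a toy illustration of that write amplification (a deliberately simplified model, not how any particular firmware actually behaves): tracks in a shingled band overlap, so overwriting one track in place forces every track shingled on top of it, through the end of the band, to be rewritten too.

      ```python
      # Toy SMR model: a band of overlapping tracks; updating track k in place
      # means rewriting tracks k..end of the band. The band size is made up.
      BAND_TRACKS = 20

      def tracks_rewritten(k):
          """Tracks that must be rewritten to update track k within its band."""
          return BAND_TRACKS - k

      for k in (0, 10, 19):
          print(f"overwrite track {k:2d} -> rewrite {tracks_rewritten(k):2d} tracks")
      ```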

  13. That is correct. But... if these are important and unrecoverable in case of disaster, consider making offsite backups. After x2 duplication, that is by far the best security IMHO.

  14. OK, I run my server 24/7/366 for shares/client backups/torrent/streaming/Rosetta@HOME. I run all this on about the cheapest MB I could find, an ASRock B75-Pro3M. It's been going steady for 1 year and 2 weeks now.

     

      The _only_ thing I can imagine it would possibly bring is stability/longevity. As an anecdotal example, my first server build used an ASRock H67M-GE. After slightly over 18 months it started crashing, and it turned out it suffered from a cascading USB port malfunction (they'd go kaput one after the other), which was an issue as I used the USB ports for Server Backup (and I still suspect all that I/O is what killed the USB ports). But hey, that's the risk you have with a cheapo MB (on the other hand, you run that risk with any MB; it's just that I expect Supermicro to have a smaller chance of such things happening).

     

      In short, I do not think you're better off with a more expensive MB. IMHO, you need to look at the functionality you'll use: SATA ports, PCIe slots, memory slots, and that's basically it. Other than that, any cheapo should do.

     

      And I am not even sure you'd need 16GB. I have 8GB, but I also run 8 Rosetta threads which take between 3 and 4.5GB. Never an issue.
