Covecube Inc.
Yekul

Seagate Archive drives - Any issues?

Question

Hi all,

 

Just wondering if anyone is using the new Seagate Archive drives yet?

 

The 8TB options are extremely competitively priced here in Aus (were down to 35c/GB the other day). The drives certainly have their limitations, however, mainly that their sustained write speeds are horrendous. They have a cache that works very well up to ~20GB of writes, then performance slowly degrades until around 80GB, where it becomes atrociously slow; this is just the nature of the SMR technology being used in a consumer-grade product. I personally have no qualms with this. Yes, it will take forever to copy to initially, and it would be quite frustrating for RAID due to the rebuild times.
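Out of curiosity, here's a rough back-of-envelope model of that behaviour. It uses only the figures quoted in this post (the 20GB cache, the ~80GB taper) plus made-up throughput numbers, so it's purely illustrative, not manufacturer data:

```python
# Rough model of drive-managed SMR write speed vs. burst size, using the
# figures quoted in this post. The MB/s numbers are invented for
# illustration, not manufacturer specs.

def effective_write_speed(burst_gb, cache_gb=20, degrade_end_gb=80,
                          fast_mbps=150, slow_mbps=30):
    """Average MB/s for a sustained write of burst_gb: full speed inside
    the cache, a linear taper until degrade_end_gb, then slow SMR
    rewrite speed beyond that."""
    fast = min(burst_gb, cache_gb)
    taper = min(max(burst_gb - cache_gb, 0), degrade_end_gb - cache_gb)
    slow = max(burst_gb - degrade_end_gb, 0)
    # Time spent in each phase; the taper is averaged between fast and slow.
    t = (fast / fast_mbps
         + taper / ((fast_mbps + slow_mbps) / 2)
         + slow / slow_mbps)
    return burst_gb / t

# A 10-20GB batch (a typical DSLR dump) stays inside the cache:
print(round(effective_write_speed(15), 1))   # -> 150.0 (full speed)
print(round(effective_write_speed(200), 1))  # -> 41.7 (large copy crawls)
```

The point being: for the small-batch workload described here, the cache hides the SMR penalty entirely; it only bites on large sustained copies like the initial population.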

 

However, for someone who typically does a one-time write and multiple reads (i.e. WORM), they are fantastic value drives, and aside from the initial population this issue would likely rarely occur. I usually only copy over batches of 10-20GB of files at a time, normally batches off the DSLR.

 

Looking at grabbing 2-3, but currently only have the currency for 1, so looking to combine with my existing WD greens for the time being.

 

What I am wondering, though, is would any of this interact strangely with DrivePool? Are there any limitations I should be aware of or need to address? In particular, mixing the Archive and Green drives in a pool? I presume the Green drives will be limited somewhat by the Archive drives if I chew through the Archive drives' cache.

 

Anyways, any opinions welcome, just works out a crapload cheaper than purchasing WD red drives here in Aus.

 

 

Cheers,

Luke


17 answers to this question


I have two in my pool, and they work fantastically!

 

 

As for the write issues, I've not seen it that bad, but I've definitely seen issues.

However, if you use the "SSD Optimizer" balancer, you can circumvent that issue for the most part. 

Use an SSD or otherwise fast drive (like a WD Black, or any 7200RPM or higher drive) for the "SSD" drive. This way, data is written to those drives first, and then moved off to the Seagate Archive drives.

https://stablebit.com/DrivePool/Plugins
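To illustrate the idea (this is not DrivePool's actual implementation — the class and method names here are invented, and the real plugin is configured through the UI, not code), the flow is essentially a fast "landing zone" plus a periodic migration pass:

```python
# Conceptual sketch of a two-tier write scheme like the SSD Optimizer
# balancer: new files land on a fast drive, and a later balancing pass
# migrates them to the slow archive drives. All names are hypothetical.

class TieredPool:
    def __init__(self, ssd_drives, archive_drives):
        self.ssd = {d: [] for d in ssd_drives}
        self.archive = {d: [] for d in archive_drives}

    def write(self, filename):
        # New writes always hit the fast tier first (emptiest drive).
        target = min(self.ssd, key=lambda d: len(self.ssd[d]))
        self.ssd[target].append(filename)

    def balance_pass(self, locked=()):
        # A scheduled pass drains the fast tier, skipping open files
        # (mirroring the "we respect file locks" behaviour).
        for files in self.ssd.values():
            for f in [f for f in files if f not in locked]:
                files.remove(f)
                dest = min(self.archive, key=lambda d: len(self.archive[d]))
                self.archive[dest].append(f)

pool = TieredPool(["WDBlack1", "WDBlack2"], ["Archive8TB"])
pool.write("IMG_0001.CR2")
pool.balance_pass(locked=["IMG_0001.CR2"])  # locked file stays on the SSD
pool.balance_pass()                          # moved once unlocked
print(pool.archive["Archive8TB"])            # -> ['IMG_0001.CR2']
```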

 

And no, no issues with "mixing" drives, other than occasional bad write performance.  I'm also mixing Seagate NAS, and WD Reds with my Archive drives. 


For home use I'm not sold on SMR.  For those not familiar with it, take a read here: http://www.extremetech.com/extreme/207868-hgst-launches-new-10tb-helium-drives-for-enterprise-cold-storage

Forget the brand but just read it to understand what SMR is and when it causes write problems.

 

I have a slight personal bias and don't care for Seagate too much.  For about the same money I'd rather buy an HGST or WD drive.

Don't know how much faith you put in this but have a read here: http://www.extremetech.com/computing/198154-2014-hard-drive-failure-rates-point-to-clear-winners-and-losers-but-is-the-data-good

 

For me it comes down to cost per TB and how much more I'm willing to spend to get the technology I'd prefer for flexibility.

As an example these prices are from NewEgg (rounded up to nearest dollar)

 

$280 Seagate Archive HDD v2 ST8000AS0002 8TB 5900 RPM

$250 WD Red WD60EFRX 6TB IntelliPower 64MB Cache SATA 6.0Gb/s 3.5" NAS Hard Drive

 

Seagate at $35 per TB vs $41.66 for WD RED NAS drive per TB.

 

So the WD RED NAS drives will cost $6.66 more per TB or 19% more per TB.

The Seagate Archive drives hold 33% more data per drive, but that's the only advantage.
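For what it's worth, the arithmetic above checks out (give or take a cent of rounding):

```python
# Cost-per-TB comparison using the NewEgg prices above.
seagate_price, seagate_tb = 280, 8   # Seagate Archive ST8000AS0002
wd_price, wd_tb = 250, 6             # WD Red WD60EFRX

seagate_per_tb = seagate_price / seagate_tb   # 35.0
wd_per_tb = wd_price / wd_tb                  # ~41.67
premium = wd_per_tb - seagate_per_tb
print(f"WD Red premium: ${premium:.2f}/TB ({premium / seagate_per_tb:.0%})")
# -> WD Red premium: $6.67/TB (19%)
```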

 

The Archive drives aren't going to behave the same way as conventional drives.  They really aren't suitable for anything other than JBOD-type use.  Forget using them in any type of RAID or similar situation.

You also don't want to use the Archive drives for storing info that changes often as they aren't designed for that.

 

So while I certainly don't want to dissuade anyone from trying at least one of them, just make sure you understand the technology first and know how to properly use them in your setup.

 

For me, personally, at this point in time I'll pass on them and will continue to order standard NAS-type drives like the WD Reds.  For a few bucks more per TB of space I have drives I can repurpose for just about any other use, including standard RAID, without worry, issues or compatibility problems.

 

Carlo


Actually, I do use 8TB Seagate Archive HDDs, but for my Server Backups, and that data does change. Never a problem, and I did track average backup performance for a while; rarely did it do worse than my 4TB WD Red Server Backup HDD. I guess OLTP databases may not be a good idea, though. For home use, I can hardly imagine the 20GB buffer would not absorb 99.9% of the write I/O.

 

Having said that, I have had one of the two Archive HDDs fail. I still need to look into this, but it got reallocation errors (both the pending and the permanent type), and Scanner in fact cannot scan the whole HDD but locks up the Server completely. In its defense though, I had a sudden power-out while a backup was being written to it, and that may have been a bit out of spec for the HDD ;)


 

For me it comes down to cost per TB and how much more I'm willing to spend to get the technology I'd prefer for flexibility.

As an example these prices are from NewEgg (rounded up to nearest dollar)

 

$280 Seagate Archive HDD v2 ST8000AS0002 8TB 5900 RPM

$250 WD Red WD60EFRX 6TB IntelliPower 64MB Cache SATA 6.0Gb/s 3.5" NAS Hard Drive

 

OK, but just to give you an idea, I'm after 8TB of usable space. I prefer to duplicate my data "just in case", as large chunks of it are photos/videos and I wouldn't want to lose them. So I'm looking at 2x8TB Archive drives.

 

Cost:

2x8TB Archive drives $630

4x4TB WD Red drives $908

(Cheapest prices here in Aus right now, but still ~44% more expensive for the Reds)
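(Checking that figure against the two totals above:)

```python
# Same 8TB of duplicated space, AUD totals quoted above.
archive_total = 630  # 2 x 8TB Seagate Archive
red_total = 908      # 4 x 4TB WD Red
print(f"Reds cost {red_total / archive_total - 1:.0%} more")  # -> 44% more
```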

 

Then you have the extra power consumption (albeit minor, but this is a HTPC/content server specifically made for low power usage).

 

 

The Archive drives aren't going to behave the same way as conventional drives.  They really aren't suitable for anything other than JBOD-type use.  Forget using them in any type of RAID or similar situation.

You also don't want to use the Archive drives for storing info that changes often, as they aren't designed for that.

 

Without being rude... are you sure? What would be wrong with using them for data that changes often? Seagate themselves advertise a 180TB/year workload for them, which implies you are writing data more than just once.

 

 

I have two in my pool, and they work fantastically!

 

 

As for the write issues, I've not seen it that bad, but I've definitely seen issues.

However, if you use the "SSD Optimizer" balancer, you can circumvent that issue for the most part. 

Use an SSD or otherwise fast drive (like a WD Black, or any 7200RPM or higher drive) for the "SSD" drive. This way, data is written to those drives first, and then moved off to the Seagate Archive drives.

https://stablebit.com/DrivePool/Plugins

 

And no, no issues with "mixing" drives, other than occasional bad write performance.  I'm also mixing Seagate NAS, and WD Reds with my Archive drives. 

 

Oh, this is super cool. I actually have two 2.5" Western Digital Blacks (750GB) that would probably work perfectly for this. Would it make it overly complicated, needing to RAID them, or do I simply attach them both and specify in the add-on that these are the drives, and it will treat them like an SSD pool within the main DrivePool? And if one fails, I'd just swap it out and replace it like the others, etc.? This would essentially give me maxed-out 7200RPM write speeds, I presume, which would align perfectly with the Archive drives... Is it possible to use the files whilst this process is occurring?

 

Sorry for all the questions!


I'm in the USA so I know nothing of AUS pricing.  Difference in pricing could surely weigh in BIG TIME of course.

Have you compared pricing on 6TB WD RED drives?

 

Yes, I'm sure.  They are not general purpose drives but are meant for "cold storage" or archiving where the data is basically written once and not modified often.  You can surely add/modify data any time you want but you can take a big write penalty in doing so.  Read #2 here: http://www.extremetech.com/extreme/207868-hgst-launches-new-10tb-helium-drives-for-enterprise-cold-storage to get a better idea how SMR drives work.

 

Here are a few of the highlights from that article:

The disadvantage is that rewriting tracks will damage the data on subsequent tracks. The only way to prevent this is to read the entire data block, modify the necessary section, and then re-write the rest of the track. This can lead to huge write amplification — if a 4KB update needs to be performed to a 64MB area of track, then the entire 64MB has to be read into RAM and laid back down with the modified 4K section.

 

It’s important to avoid random reads and writes as much as possible when using SMR, however, which is why HGST is marketing these drives as “cool” to “cold” storage — meaning data that’s written very few times and accessed only on occasion. If the drive is being regularly written and re-written, the performance penalties will quickly become severe.

 

The HGST drives also require extensive modification and drive software in order to operate properly. Hitachi has an open-source project, libzbc, that can be integrated into Linux to implement support for its new Ha10 drives. Absent such support, the drives can’t function — these aren’t your typical plug-and-play hard drives.

 

So, in a nutshell, these drives use SMR to pack more data onto the platters than conventional drives do.  These drives work similarly to SSDs in that they try to use all the free space first before going back and re-writing existing data.  Once you have to start re-writing data, it can't just change a sector but has to re-write the whole section, which can cause delays.
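The read-modify-write penalty quoted above is easy to put a number on; this is the worst case from the article's own example:

```python
# Worst-case SMR write amplification from the article's example:
# a 4KB update forcing a rewrite of a whole 64MB region.
region = 64 * 1024 * 1024   # bytes actually read and re-written
update = 4 * 1024           # bytes the host asked to change
print(f"{region // update}x write amplification")   # -> 16384x
```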

 

Also of note, and most people miss this info, is that these SMR drives aren't drop-in replacements for conventional HDDs.  The operating system has to know how to use them.  This will be less of an issue as time goes on, but for now you couldn't just take these drives and drop them into any old NAS, or put them in a USB enclosure and plug them into your router's USB port, for example, and expect it to work.

 

Due to the way they work, they aren't suitable for random data, RAID use, or typical NAS use, which should be obvious by now.  BTW, if the workload is 180TB a year and the drives are 8TB, then that's 22.5 rewrites a year, which is nothing!

 

Now, with all that said, these drives could be very good to use if done properly.  For example, don't use balancing in DrivePool with these drives. I wouldn't even let DrivePool write to the drives (ideally).  In the most IDEAL circumstance you would treat these drives as a write-once-read-many READ-ONLY device, not unlike DVD or Blu-ray discs.  So, for example, you use conventional disks to build up your data until you have 8TB of media ready to "archive", then transfer this info to the Archive drive.  You can then add this drive to your pool.  If you were to do this, then the Archive drives are a good solution for storage at a great price.

 

Again, read up on them a bit to understand the hows and whys of SMR and you'll get a better feel for their intended use.  My example above was a bit over-the-top, but I was trying to make a point about the intended/ideal use of these drives.  That is not to say they have to be used this way!

 

Carlo


That may all be true for the HGST drives, but it does not apply to the Seagates. The Seagates actually come with firmware that optimises write behaviour. It's "the other approach" that the article refers to. There are no compatibility issues with the Seagates.

 

And yes, due to SMR you may suffer write penalties, but if we are realistic, you would have to have quite heavy I/O to actually suffer/notice this. Opening a Word doc, changing it and saving it? No issue. Movie editing? Not sure, perhaps. OLTP databases? You typically simply do not want to risk degraded performance, so no. But I really believe the use cases where write performance would be an issue are very limited.

 

 

 

BTW, if the workload is 180TB a year and the drives are 8TB then that's 22.5 rewrites a year which is nothing!
OK, but who writes 180TB a year (and wants to keep only 8TB of it)?

 

A good review was done here: http://www.storagereview.com/seagate_archive_hdd_review_8tb

 

In the OP's use case, writing in batches of 20 to 30GB at a time, mostly to retain, these are a great deal IMHO. Oh, and read speeds are crazy.


Ugh, I hate when people use the Backblaze data as anything but anecdotal.  It's not a good statistic: there are no controls, and there are way too many variables (including sourcing... they "shucked" a lot of externals), etc.

 

 

As for the Seagate drives, most were the ST3000DM001 line. IIRC, if you remove that line of drives from the statistics, you get something much closer to the other two manufacturers.

 

 

As for SMR, I believe that a lot of the same concerns were brought up when PMR (perpendicular magnetic recording) was introduced in drives.  However, only time can really tell.

But for mostly static content, the drive should be good for home usage. 

 

 

As for usage, a RAID is a very different beast. For StableBit DrivePool, everything is file based; with RAID, everything is block based.  Just that difference alone changes how the drives are used.  Block-based solutions will change data on all drives frequently, whereas on DrivePool that is not necessarily the case at all.

 

 

 

Oh, this is super cool. I actually have two 2.5" Western Digital Blacks (750GB) that would probably work perfectly for this. Would it make it overly complicated, needing to RAID them, or do I simply attach them both and specify in the add-on that these are the drives, and it will treat them like an SSD pool within the main DrivePool? And if one fails, I'd just swap it out and replace it like the others, etc.? This would essentially give me maxed-out 7200RPM write speeds, I presume, which would align perfectly with the Archive drives... Is it possible to use the files whilst this process is occurring?

Sorry for all the questions!

No problem with the questions!

 

You'd want to NOT RAID them actually. If a folder you write to is duplicated, it will write to two disks.  So you'd want two "SSD" drives in this case. Otherwise, it will default to writing to the "archive" drives.

So just adding the two disks as is, and setting them as "SSD" drives in the balancer should be perfect. 

And yeah, if one fails, you can just add a replacement. And yes, should get much better write speeds (sustained even).

 

And yes, you can definitely use the files. Just note that we won't be able to move the files off of the drive if they're in use. But that should be fine.  (we respect file locks)

 

 

 

Yes, I'm sure.  They are not general purpose drives but are meant for "cold storage" or archiving where the data is basically written once and not modified often.  You can surely add/modify data any time you want but you can take a big write penalty in doing so.  Read #2 here: http://www.extremetech.com/extreme/207868-hgst-launches-new-10tb-helium-drives-for-enterprise-cold-storage to get a better idea how SMR drives work.

 

Since I wanted to keep it short, I cut down the quoted text.

 

However, a lot of people use StableBit DrivePool just to store data. While not "cold" per se, for a lot of people the data just sits there, not being modified or rewritten.
Clearly, this isn't the case for everyone, but it is for a lot of people.

 

Coupled with the "File Placement Rules" feature, you can keep frequently modified data off of these drives (such as WHS or WSE's client backup database, or any other database, for that matter).

 

 

Additionally, I suspect that a larger NTFS cluster size (allocation unit size) should also help prevent some of the write amplification issue (purely based on how it's described in the articles that talk about it, though I could be wrong).
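Taking that hunch at face value: if every in-place modification dirties at least one full cluster, and the drive rewrites a fixed-size region around it, then the worst-case amplification ratio shrinks in proportion to cluster size. Whether real drive firmware behaves this way is speculation, as noted above; the 64MB region size is borrowed from the earlier article quote.

```python
# Speculative arithmetic only: worst-case amplification ratio if the
# drive must rewrite a fixed 64MB region per modified cluster.
region = 64 * 1024 * 1024
for cluster in (4 * 1024, 64 * 1024):
    print(f"{cluster // 1024}K cluster: {region // cluster}x")
# -> 4K cluster: 16384x
# -> 64K cluster: 1024x
```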

 

 

 

However, there should be absolutely no issue with frequent reads from these drives (any more than any other drive).  They're "Write Once, Read Many" type drives.  Hence the "cold storage" phrase that keeps on getting tossed around.

 

 

Is the performance writing to these drives great? No, it absolutely suffers from issues.  Is the performance reading from these drives good? Yes, exceptionally so, in my experience with them.  

 

 

 

Are they good for use with StableBit DrivePool?  Most likely, yes.  Depending on your specific usage of the pool and its contents, they're probably fantastic, with some "tweaking" depending on the OS you're using.


OK, so ignoring the potential slow-write issues, which as I've said likely will not affect me... and the solution above of using the "SSD Optimizer" should negate that if it happens anyway.

 

So, in a nutshell, these drives use SMR to pack more data onto the platters than conventional drives do.  These drives work similarly to SSDs in that they try to use all the free space first before going back and re-writing existing data.  Once you have to start re-writing data, it can't just change a sector but has to re-write the whole section, which can cause delays.

 

Also of note, and most people miss this info, is that these SMR drives aren't drop-in replacements for conventional HDDs.  The operating system has to know how to use them.  This will be less of an issue as time goes on, but for now you couldn't just take these drives and drop them into any old NAS, or put them in a USB enclosure and plug them into your router's USB port, for example, and expect it to work.

 

Again, fine, yep slow write speeds once the buffer/cache/whatever is used up. No problem at all, it's not being used in the middle of the night so it can sit there and re-write whatever it pleases in the background. My data usage will still not go near the rated 180TB in a year. I'd be lucky to do 20% of that. Read speeds are unchanged here, so no problems so long as I have something like SSD Optimizer to ensure the initial speeds are good (for example if I dragged 100GB off a USB drive, I wouldn't want it to slow down to crawling speed whilst USB3 still has tonnes of bandwidth available).

 

As for the OS issue, I am not sure you're correct here. From what I had read, it sounded like the major reason the Seagate Archive 8TB has such slow write speeds is because it DOESN'T use the OS for processing. They are drive-managed SMR, meaning you can use them with any OS, and similarly any enclosure. After all, that's what the Seagate 8TB external drive is: the Archive drive just plonked inside an enclosure with a power/USB connector.

 

I am not trying to discredit everything you say, I am considering it. It's just, you come across very paranoid in the way you present your information. It makes it difficult for me to put much faith into it, just because it has an air of "arghhhh the world is ending if we use SMR".

 

Western Digital Reds in the 6TB variant are even more expensive here, the 3TB being the cheapest (about $150-160 per drive). But with only 8x3.5" bays I don't want to use anything less than 4TB drives, to ensure I have enough room down the line should I ever get a half-decent internet connection.

 

Can appreciate the major con though, that these are early drives. And quite often they can be a bit "meh". Mmm lots to think about, unless a really nice sale happens...


Ugh, I hate when people use the Backblaze data as anything but anecdotal.  It's not a good statistic: there are no controls, and there are way too many variables (including sourcing... they "shucked" a lot of externals), etc.

 

<trimmed>

 

No problem with the questions!

 

You'd want to NOT RAID them actually. If a folder you write to is duplicated, it will write to two disks.  So you'd want two "SSD" drives in this case. Otherwise, it will default to writing to the "archive" drives.

So just adding the two disks as is, and setting them as "SSD" drives in the balancer should be perfect. 

And yeah, if one fails, you can just add a replacement. And yes, should get much better write speeds (sustained even).

 

And yes, you can definitely use the files. Just note that we won't be able to move the files off of the drive if they're in use. But that should be fine.  (we respect file locks)

 

Since I wanted to keep it short, I cut down the quoted text.

 

However, a lot of people use StableBit DrivePool just to store data. While not "cold" per se, for a lot of people the data just sits there, not being modified or rewritten.

Clearly, this isn't the case for everyone, but it is for a lot of people.

 

Coupled with the "File Placement Rules" feature, you can keep frequently modified data off of these drives (such as WHS or WSE's client backup database, or any other database, for that matter).

<trimmed>

 

Is the performance writing to these drives great? No, it absolutely suffers from issues.  Is the performance reading from these drives good? Yes, exceptionally so, in my experience with them.  

 

Agree entirely about the Backblaze data; I read it and laughed. I get what they were trying to do, but it was always going to be fairly average data from the outset.

 

OK, too easy, the 2x750GB WD Blacks sound like the go then; I might actually implement this regardless. Might be reading into this too much, but... say I were to download something and want to watch it immediately, before it had a chance to copy over: I'd add it to a library to watch, and after watching it would be moved onto the proper section of DrivePool storage ("archive"? not sure on the exact term, sorry). I wouldn't end up with invalid path info etc., right? This is all just magically done behind the scenes, and the OS still thinks it is residing in the same place; it's just being shuffled around the drives by DrivePool in the background? I am 99.9% certain that's the case and feel silly for asking, but I just want to verify this would not cause issues, as it may happen every so often (working on/editing video files before they have been fully transferred to the archive drives, for example).

 

The file placement rules sound perfect for my local computer backups (only ~300GB total); I will most likely store those onto a separate drive which is regularly cloned.

 

Thank you for providing some feedback on the drives in actual use; that's primarily what I was looking for. I understand the first iteration of these had fairly horrible error rates, but the ST8000AS0002 version of the drives fixed these issues, correct?


Can appreciate the major con though, that these are early drives. And quite often they can be a bit "meh". Mmm lots to think about, unless a really nice sale happens...

I think that's honestly a lot of the hesitance about these drives.  IIRC, the same thing happened with the PMR technology, and that's ... well standard now.

 

 

Agree entirely about the Backblaze data; I read it and laughed. I get what they were trying to do, but it was always going to be fairly average data from the outset.

OK, too easy, the 2x750GB WD Blacks sound like the go then; I might actually implement this regardless. Might be reading into this too much, but... say I were to download something and want to watch it immediately, before it had a chance to copy over: I'd add it to a library to watch, and after watching it would be moved onto the proper section of DrivePool storage ("archive"? not sure on the exact term, sorry). I wouldn't end up with invalid path info etc., right? This is all just magically done behind the scenes, and the OS still thinks it is residing in the same place; it's just being shuffled around the drives by DrivePool in the background? I am 99.9% certain that's the case and feel silly for asking, but I just want to verify this would not cause issues, as it may happen every so often (working on/editing video files before they have been fully transferred to the archive drives, for example).

The file placement rules sound perfect for my local computer backups (only ~300GB total); I will most likely store those onto a separate drive which is regularly cloned.

Thank you for providing some feedback on the drives in actual use; that's primarily what I was looking for. I understand the first iteration of these had fairly horrible error rates, but the ST8000AS0002 version of the drives fixed these issues, correct?

I get what Backblaze is trying to do (more transparency about drive health and failure), but... they should have included a big disclaimer first, emphasizing that their data is anecdotal.  I've seen people use it to justify their arbitrary hatred of Seagate.

 

While I've had a bunch of ST3000DM001 drives and they've all died... my Seagate NAS drives actually run cooler and outperform the WD Red drives that I have.

 

 

 

As for the SSD Optimizer, "SSD" and "Archive" are the drive types. You need one "SSD" for each "duplication" level you plan on using (normal duplication being 2x, so 2x "SSD" drives).  

Since this is a balancer, it uses the balancer settings, so it depends on how that is configured.
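The "one SSD per duplication level" rule above reduces to a simple check (a hypothetical helper, just to make the rule concrete; this is not part of any DrivePool API):

```python
# Hypothetical helper restating the rule above: the SSD Optimizer needs
# at least one "SSD"-marked drive per duplication level, otherwise
# duplicated writes fall through to the "Archive" drives.
def ssd_tier_covers(ssd_count, duplication_level):
    return ssd_count >= duplication_level

print(ssd_tier_covers(2, 2))  # two WD Blacks, 2x duplication -> True
print(ssd_tier_covers(1, 2))  # one fast drive can't hold both copies -> False
```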

 

However, if you write a file, and it's on the cache "ssd" drive, and you immediately access it to watch it, it will not be moved until the next balancing pass occurs where the file is not locked (open for reading). It will sit on those drives until it can be moved during a balancing pass.

 

 

The default settings basically run a pass every 12 hours, as long as the ratio has been exceeded (setting it to 100% may help ensure it always runs when needed).

 

 

 

 

As for the Archive Drives, yup.  These are what I have in use, specifically. I've not had any issues with them (aside from previously mentioned write performance).


I suspect the ST3000DM001 might be a particularly bad drive; I seem to have seen a few threads on Reddit (/r/techsupportgore) lately showing piles of dead ones.

 

I'm interested to know how people handle redundancy, though, as I was tempted to get some of these 8TB drives. At the moment I'm trying SnapRAID, but I'm worried a small edit to a file may corrupt a lot of parity data until it re-syncs, so it's not ideal.

 

Then again, neither is using DrivePool's duplication feature, as that wastes a lot of space.

 

I'm also hoping developments in SSDs cause them to get much larger and cheaper.


Yes, these drives are especially terrible!  I had 20 of them in my Norco 4220 at one point.  After repeatedly having to RMA them for SMART warnings, errors, and dead drives, I started to replace the out-of-warranty ones with Toshiba 3TB drives and, most recently, with HGST 6TB drives.  If a problem Seagate drive is still under warranty, then I do the warranty replacement.

 

At any rate, I am now down to 8 of these Seagate drives left in my system (only one of which is over 2 years old, and some of which are RMA'd drives), plus 9 Toshibas and 3 HGSTs.  I have not yet had to RMA any Toshiba or HGST drives.  They say the warranty on the Seagates is 2 years, but they count this from the time of manufacture, not the time you purchased it.

 

I suspect the ST3000DM001 might be a particularly bad drive; I seem to have seen a few threads on Reddit (/r/techsupportgore) lately showing piles of dead ones.

I'm interested to know how people handle redundancy, though, as I was tempted to get some of these 8TB drives. At the moment I'm trying SnapRAID, but I'm worried a small edit to a file may corrupt a lot of parity data until it re-syncs, so it's not ideal.

Then again, neither is using DrivePool's duplication feature, as that wastes a lot of space.

I'm also hoping developments in SSDs cause them to get much larger and cheaper.


I had 12 of the ST3000DM001 drives.  Aside from the couple that I sold, literally all of them started to fail: uncorrectable sector counts, and an equal number of unreadable sectors in StableBit Scanner's surface scan.

This was also AFTER they passed the warranty expiration .... :(

 

 

However, these are the ONLY Seagate drives that I've EVER had a problem with. There is definitely an issue with that line of drives.  Something new they tried or something. But yeah...


I've been burned too many times on Seagate drives to feel comfortable buying anything new from them without it being on the market long enough to get a good read on reliability. Seagate used to be my favorite drive manufacturer until 1TB+ drives became common. I bought five 1TB Seagate drives over the course of a couple of months, only to have all five of them fail within a year and a half. All were RMA'd, and half the refurbs died within six months, so I stopped using the 1TB drives altogether. I had a couple of 1.5TB Seagates that got bit by the firmware bug, which required me to wire up a JTAG-like tool to unbrick them long enough to recover my data. The 3TB drives I skipped because I heard quality control after the flooding was particularly shoddy. Then 4TB drives started appearing, and the Seagate drives were dramatically cheaper than any other brand, so I rolled the dice again and had a 4TB start throwing SMART errors within a month. A few months ago a coworker bought several 5TB Seagate externals so we could archive some backups at work. They work fine, except that for some strange reason the last 500-750GB I write on any of the drives copies at USB 1.1 speed.

Obviously, all of this is anecdotal and doesn't necessarily reflect everyone's experience with Seagate, but it's made me swear off Seagate, at least until I see them put out a generation or two of drives that are widely regarded as solid and reliable. I just feel like their quality control is weak at best. I'm sticking with HGST: I have at least a dozen 1TB, 2TB, and 4TB HGST drives in service, and the only failure I've had was one drive that overheated when a fan in my external enclosure died. Again, just anecdotal experience, but so far HGST has earned my trust.


Again, just anecdotal experience, but so far HGST has earned my trust.

 

I'm definitely not discounting your experience, but every manufacturer has had their share of issues.  Aside from the WD Reds I have, I've had issues with just about EVERY WD drive I've owned.  

But everyone has had their own experiences.

 

 

Also, HGST drives are now WD drives; Hitachi sold its HDD division to Western Digital.


Also, HGST drives are now WD drives; Hitachi sold its HDD division to Western Digital.

 

A fact which made me extremely wary of the newer Hitachi/HGST drives. WD was my least favorite until this run of luck on Seagate, but so far I haven't gotten burnt on the newer HGST/WD drives. As you say, all the manufacturers have had their share of issues. I just try to keep a mixture of drives and make sure to scatter my purchases across multiple manufacturing lots to hedge my bets. That and make liberal use of data duplication on my pool to save me when something goes sideways!


A fact which made me extremely wary of the newer Hitachi/HGST drives. WD was my least favorite until this run of luck on Seagate, but so far I haven't gotten burnt on the newer HGST/WD drives. As you say, all the manufacturers have had their share of issues. I just try to keep a mixture of drives and make sure to scatter my purchases across multiple manufacturing lots to hedge my bets. That and make liberal use of data duplication on my pool to save me when something goes sideways!

Exactly. My entire pool is duplicated, just because it's such a pain to deal with a bad drive.

And unfortunately, it's a matter of "when", not "if". 

 

 

Though, one thing I've found is that the warranty period is probably the most important spec on a drive.  Not its size, price, estimated lifetime, etc.  Because if the drive is under warranty, you can get it RMA'd.  That... makes a huge difference, IMO.
