
The Largest StableBit DrivePool In The World!!


RFOneWatt

Question

Someone has to have it, right?

 

How about we start with the largest pool of the members that participate here?

 

I'm taking an uneducated guess that, theoretically, DrivePool should scale indefinitely (or up to some insane limit imposed by the OS, hardware, or something else), but we all know the real world is where it's at, yes?

 

I'm sure I'm nowhere near the largest but I've maxed out my Norco 4220, and then some.

 

It's NOW time to start building the successor! 

 

dp.6-2015.JPG

 

 

 

Would love to see what everybody else has going on!

 

~RF

 

 


Recommended Posts


How did you configure Scanner to have them sorted by Rail? I assume that's just what you call each backplane in the case, right? So did you give each drive a location as "Rail #" and "Bay #"? Now that I've got that case too, I would like a similar organization. Currently I'm sorting by controller and have each drive's label in Windows identify the drive.


 

 

...The airflow isn't optimal though (the PSU is sucking hot air back into the system and spewing it at the CPU....)

 

Drashna:  JUST DO IT! You know you wanna:  http://goo.gl/gaVWb8

 

I doubt there is anything else that would work better than two of those in the back of your Norco.  Didn't you say you had one around the house?

 

and some SCREENSHOTS

 

This is WOPR.

 

WOPR.Sysinfo.JPG

 

WOPR_DrivePool.JPG

 

 

Ambient Temp 75-80F Norco 4224

 

WOPR_Scanner1.JPG

 

I still say that, after 30 years of doing this stuff, DrivePool is one of the very few pieces of software that has thoroughly impressed me. I remember the first night I found DrivePool I wrote a HUGE first-impressions review, and I can't remember where I saved the darn thing. Derp!!!

 

Anyways, when I put WOPR online I raided my existing server (AP1, dual Xeon 5630, 64GB RAM) for most of its 4TB drives so that WOPR was fully populated.

 

It is now time to start repopulating AP1. 

 

It makes no sense to buy drives just to sit and spin, but AP1 has a couple more years of usefulness to warrant the electricity it uses, so I just ordered three of those Seagate Archive 8TB drives to slap in WOPR.

 

I'll put three of the 4TB drives from WOPR back in AP1 and call it a day. 

 

These archive drives seem to be the perfect fit for DrivePool when it comes to semi-cold storage (if that's a thing).

 

~RF


I'm assuming those prices are for the external drives? :)

 

And as for desktop retail drives, I avoid them for my pool. I will only buy NAS or enterprise drives anymore (e.g., stuff that's rated for 24/7 runtime and that has an actual warranty, in case the drive does fail).

 

As for the drives, 34-38C, for the most part.

StableBit Scanner.jpg

 

The airflow isn't optimal though (the PSU is sucking hot air back into the system and spewing it at the CPU....)

 

Yes, externals. Seagate externals have a one-year warranty; the Samsung [same Seagate drive inside] has three years. Why the difference? I suspect the Seagate cases have a higher return rate due to poor cooling, and they aren't willing to take the hit on returns. When the drives are removed from the case, the cooling is on you. Given proper cooling they should last as long as the desktop versions; it is exactly the same drive.

 

Is a desktop drive warranty not real? Really?

 

A NAS drive warranty from the same drive manufacturer is more real?

 

All your drives listed are desktop drives, with a "NAS" label stuck on them.

 

I don't want to get into a flame war here but I have problems following your financial and warranty logic, after looking at your temp charts.

 

Rail 1: The 4TB Seagates at the top of your temp chart are "NAS" drives, and they have the same 3-year warranty as the desktop drives. Certainly not Enterprise-class drives; no real enterprise would use them for anything mission critical.

Rail 2, 3: The 4TB WDs are also called "NAS", and they have the same 3-year warranty as the desktop drives. Certainly not Enterprise-class drives; no real enterprise would use them for anything mission critical.

Rail 4: The 8TB Seagates also have a "NAS" label stuck on them, and exactly the same warranty as a desktop drive. Certainly not Enterprise-class drives.

 

No real enterprise would use these drives for anything mission critical, if at all. Plus, the known issues with the technology [8TB] would prevent any enterprise from implementing these drives for the near future, until it is sorted out down the road. Keep in mind, you might get a pat on the back for buying cheap "prosumer" drives, but you will be fired for data lost because of them, or for performance hits that affect production. I know, I have seen it. That's "reality" and what "Enterprise" really means. Jobs are at stake. Nobody wants to hear that you saved $500 when a plant shuts down for an hour or two. They want you gone. These drives you have are aimed directly at the home market for what we are using them for. I refuse to pay big money for the same warranty just because they put a NAS label on it. I am not saying they are BAD drives; I am saying they created this classification to squeeze more money out of consumers.

 

I won't dig any deeper; maybe you have an "Enterprise" drive not in the chart. But I do know what actual Enterprise drives are and what they cost [I'm sure you do too], and I certainly wouldn't buy one and stick it in a notoriously badly cooled NORCO case with the power supply blowing air into the case. I would buy drives with the same performance [purchase an extended warranty if you want 5 years] and use the savings to get a professional case and a proper power supply setup. Or correct the issues with the NORCO and the power supply.

 

What does it all boil down to? With Enterprise drives, the hope is that they will be more robust than plain old ordinary desktop drives. My experience with servers is that they are, meaning they last longer, but how much of that is due to good case engineering and cooling, based on the years of engineering experience that the major manufacturers have? I suspect that makes the difference. We will see with these so-called NAS drives aimed directly at consumers two or three years down the road. I suspect the failure rate will be the same as desktop drives. The drive manufacturers obviously know this, hence the same warranty as ordinary desktop drives. As for the price premium, what did you gain by paying more for a "NAS" drive when the warranty is the same? If you do have real Enterprise-rated drives not on your chart, well, they are again another step up in price from the consumer "NAS" drives, and you get an extra two years of warranty. You also get a performance increase in general over these NAS drives. So you pay a hundred or more extra for speed you don't need to stream and store video, plus an extended warranty. Five years down the road the drive will be outdated and practically worthless, so if it fails at year 4.5, you get a replacement drive [with Seagate it may be a "refurbished replacement drive"] that has outlived its usefulness in size and performance.

 

As for data integrity, you are the same as me: you use DrivePool and Scanner, both great tools for monitoring and finding problems, and duplicate everything. Hence I can save money on drives using these tools and still feel safe.

 

Suspenders with a belt. Very expensive suspenders. When thousands to millions of dollars ride on it, I get it. Suspenders and a belt with a rope backup. Go Enterprise, and go real Enterprise casing / cooling. When it is movies and TV shows, I don't get it. It does not compute, as they say.

 

Or maybe I'm full of "it". But my brain does not allow me to see it your way, using logic. As they say, different strokes for different folks. Please point out where I am going wrong so I can make sense out of this.


How did you configure Scanner to have them sorted by Rail? I assume that's just what you call each backplane in the case, right? So did you give each drive a location as "Rail #" and "Bay #"? Now that I've got that case too, I would like a similar organization. Currently I'm sorting by controller and have each drive's label in Windows identify the drive.

Right-click on a disk and select "Disk Settings".

At the bottom is a "Location" section. The first entry is "Case"; this is where I set the "Rail #". The second entry is "Bay"; this is where I put the "Bay #". I did this for each and every one of the disks in the system (you can use the "Burst Test" option to light up the activity light if you need to figure out which disk is which).

 

After doing that, right-click on the column header in the main UI and select "By Case". This sorts the drives by case, and is how I got it nice and organized.

 

(You can also do this by controller, and you can enable the other columns.)

 

 

Additionally, you can see that I've mounted each disk to a folder. This isn't random either; each one corresponds to a specific slot. So "i16" is the 4th rail, 4th bay (the 16th slot). This is for easy reference later on, or if I need to run chkdsk.
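For anyone who wants to script that naming, here is a minimal sketch of the rail/bay-to-slot arithmetic in Python. The 4-bays-per-rail layout and the "i16 = rail 4, bay 4" mapping come from the post above; the C:\Mounts root, the folder-naming scheme, and the use of mountvol are illustrative assumptions, not necessarily how the setup was actually done.

```python
# Minimal sketch (illustrative, not the actual setup): map a rail/bay position
# on a 4-bays-per-rail backplane to its slot number and a mount-folder path,
# then print the mountvol command you could run from an elevated prompt.
# The C:\Mounts root and the "i<slot>" naming are assumptions.

BAYS_PER_RAIL = 4  # Norco 4220/4224 backplanes hold 4 drives each


def slot_number(rail: int, bay: int) -> int:
    """Rail 4, bay 4 -> slot 16, matching the 'i16' example above."""
    return (rail - 1) * BAYS_PER_RAIL + bay


def mount_folder(rail: int, bay: int, root: str = r"C:\Mounts") -> str:
    return rf"{root}\i{slot_number(rail, bay)}"


def mountvol_command(rail: int, bay: int, volume_name: str) -> str:
    # volume_name is the \\?\Volume{...}\ path that `mountvol` lists for the disk.
    return f'mountvol "{mount_folder(rail, bay)}" {volume_name}'


if __name__ == "__main__":
    print(slot_number(4, 4))                                    # 16
    print(mount_folder(4, 4))                                   # C:\Mounts\i16
    print(mountvol_command(4, 4, "<volume-name-from-mountvol>"))
```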

 

 

Also, StableBit DrivePool actually uses this location information as well.

 

Drashna:  JUST DO IT! You know you wanna:  http://goo.gl/gaVWb8

I'm seriously considering getting 6.... one of the fans at the back (exhaust at the rear) has recently died. It spins but.... just barely, and half the time I have to manually start it up...

 

And the middle fans are having some issues. I've had one completely fail, and I think a couple are starting to go.

 

And since the 1U server I just got (for hyperV) is really loud, I don't think there will be much of a difference in sound level .... :P

I doubt there is anything else that would work better than two of those in the back of your Norco.  Didn't you say you had one around the house?

 

and some SCREENSHOTS

 

This is WOPR.

  

 

Ambient Temp 75-80F Norco 4224

 

 

 

I still say that, after 30 years of doing this stuff, DrivePool is one of the very few pieces of software that has thoroughly impressed me. I remember the first night I found DrivePool I wrote a HUGE first-impressions review, and I can't remember where I saved the darn thing.  Derp!!!

 

Anyways, when I put WOPR online I raided my existing server (AP1, dual Xeon 5630, 64GB RAM) for most of its 4TB drives so that WOPR was fully populated.

 

It is now time to start repopulating AP1. 

 

It makes no sense to buy drives just to sit and spin, but AP1 has a couple more years of usefulness to warrant the electricity it uses, so I just ordered three of those Seagate Archive 8TB drives to slap in WOPR.

 

I'll put three of the 4TB drives from WOPR back in AP1 and call it a day. 

 

These archive drives seem to be the perfect fit for DrivePool when it comes to semi-cold storage (if that's a thing).

 

~RF

And very nice!

And glad to hear that you're very happy and impressed with StableBit DrivePool. And if you can find that review, let me know, and we'll see about getting it posted in the testimonial section. :)

I prefer the 4TB drives, but ... the price per TB on the 8TBs and my drastically shrinking number of available bays .... :)

 

And yeah, they work very well with StableBit DrivePool.  Though... it looks like you've already got that set up... feeder drives are very useful if you're using them in the pool.  The main issue with them is poor write performance. But by using feeder disks.... problem solved. 


Yes, externals. Seagate externals have a one-year warranty; the Samsung [same Seagate drive inside] has three years. Why the difference? I suspect the Seagate cases have a higher return rate due to poor cooling, and they aren't willing to take the hit on returns. When the drives are removed from the case, the cooling is on you. Given proper cooling they should last as long as the desktop versions; it is exactly the same drive.

 

 

For the Seagate Backup Plus drives that I shelled.... Seagate won't honor the warranty on the bare drives. They have to be the whole unit. 

 

And removing the drives from the enclosure (which is fairly obviously done for these drives) voids that warranty.

 

What does that mean? That these drives could not be RMAed, because they weren't under warranty. That's the difference, and that's my point here. 

 

By shelling the drives, I basically screwed myself over. So, for me, the warranty on the drives is much more important than saving a bit of money... because by trying to save that money, it cost me a lot more in the long run.


I'm the same... I previously shelled drives when the externals were inexplicably cheaper than the internals, despite it being the same disk with some additional hardware, but then you lose the warranty.

 

I've got 12 enterprise 4TB (I got a good deal)

 

about 14 Red 6TB - but I've got 4-year warranties on those

 

and a few Seagate 8TB ones


No real enterprise would use these drives for anything mission critical, if at all. Plus, the known issues with the technology [8TB] would prevent any enterprise from implementing these drives for the near future, until it is sorted out down the road. Keep in mind, you might get a pat on the back for buying cheap "prosumer" drives, but you will be fired for data lost because of them, or for performance hits that affect production. I know, I have seen it. That's "reality" and what "Enterprise" really means. Jobs are at stake. Nobody wants to hear that you saved $500 when a plant shuts down for an hour or two. They want you gone. These drives you have are aimed directly at the home market for what we are using them for. I refuse to pay big money for the same warranty just because they put a NAS label on it. I am not saying they are BAD drives; I am saying they created this classification to squeeze more money out of consumers.

 

I won't dig any deeper; maybe you have an "Enterprise" drive not in the chart. But I do know what actual Enterprise drives are and what they cost [I'm sure you do too], and I certainly wouldn't buy one and stick it in a notoriously badly cooled NORCO case with the power supply blowing air into the case. I would buy drives with the same performance [purchase an extended warranty if you want 5 years] and use the savings to get a professional case and a proper power supply setup. Or correct the issues with the NORCO and the power supply.

 

What does it all boil down to? With Enterprise drives, the hope is that they will be more robust than plain old ordinary desktop drives. My experience with servers is that they are, meaning they last longer, but how much of that is due to good case engineering and cooling, based on the years of engineering experience that the major manufacturers have? I suspect that makes the difference. We will see with these so-called NAS drives aimed directly at consumers two or three years down the road. I suspect the failure rate will be the same as desktop drives. The drive manufacturers obviously know this, hence the same warranty as ordinary desktop drives. As for the price premium, what did you gain by paying more for a "NAS" drive when the warranty is the same? If you do have real Enterprise-rated drives not on your chart, well, they are again another step up in price from the consumer "NAS" drives, and you get an extra two years of warranty. You also get a performance increase in general over these NAS drives. So you pay a hundred or more extra for speed you don't need to stream and store video, plus an extended warranty. Five years down the road the drive will be outdated and practically worthless, so if it fails at year 4.5, you get a replacement drive [with Seagate it may be a "refurbished replacement drive"] that has outlived its usefulness in size and performance.

 

As for data integrity, you are the same as me: you use DrivePool and Scanner, both great tools for monitoring and finding problems, and duplicate everything. Hence I can save money on drives using these tools and still feel safe.

 

Suspenders with a belt. Very expensive suspenders. When thousands to millions of dollars ride on it, I get it. Suspenders and a belt with a rope backup. Go Enterprise, and go real Enterprise casing / cooling. When it is movies and TV shows, I don't get it. It does not compute, as they say.

 

Or maybe I'm full of "it". But my brain does not allow me to see it your way, using logic. As they say, different strokes for different folks. Please point out where I am going wrong so I can make sense out of this.

 

 

No, none of these drives are enterprise grade drives. The cost is just too high for me, and no, none of the data I store is "mission critical". 

 

As for NAS vs Desktop drives, I suspect that we'll have to agree to disagree here. 

 

In my experience, the NAS drives run significantly cooler in the same setup. The ST3000DM001 drives that I had (and some ST4000DM001s) ran 45-50C in the same system. Additionally, I see better performance out of the NAS drives than I did with those drives (~10-20MB/s more on large files).

So yes, I do think there is more than just a higher price tag slapped onto these drives. 

 

 

 

As for the Norco comment, what case would you recommend then?  

I've looked at the Supermicro 4U cases, Chenbro cases, iStar cases, and a few others. The designs are about the same (if not nearly identical).

However, I will say that I like the SuperMicro drive trays a lot better than the Norco ones.

 

Though, I won't be able to beat the price. In case you're curious, I got the Norco for $55 from a good friend. Say what you will about it, but it was a very good deal.

 

 

As for enterprise drives, from everything I've seen/heard/talked to others about, the main advantages to them are a) the firmware features (including on-disk encryption, I believe it was), and b) the longer warranty periods (which get used for frequent batch RMAs of "good" drives).


I was trying to find it... but can't:

 

When (recently) WD released the new 6TB Red Pro, one of the review sites made a great table using the (sometimes unpublished) information about the various WD drives and comparing the Blue/Black/Green/Red/Red Pro/Re+/Re/Se models and what you got extra at each step

 

Aside from TLER and the NAS-oriented firmware, the Reds do have better/more efficient motors than the Green/Blue (which are designed to run 8 hours a day, not 24).

 

While you need Red Pro or higher to get actual hardware vibration sensors on the circuit board, the Reds are able to measure and adjust for the kind of vibrations you get in an array, which the desktop models can't do.

 

there is more to it than just the warranty


For the Seagate Backup Plus drives that I shelled.... Seagate won't honor the warranty on the bare drives. They have to be the whole unit. 

 

And removing the drives from the enclosure (which is fairly obviously done for these drives) voids that warranty.

 

What does that mean? That these drives could not be RMAed, because they weren't under warranty. That's the difference, and that's my point here. 

 

By shelling the drives, I basically screwed myself over. So, for me, the warranty on the drives is much more important than saving a bit of money... because by trying to save that money, it cost me a lot more in the long run.

 

 

You didn't put it back in the case? Why not? There is no seal of any kind. The Samsungs are even better, they are more flexible plastic and go right back like new.

 

Well, to each his own. I haven't had any drive failures at all with clamshells, so if I do I'll report back and let you know what happens.


I've got 30 disks

 

I'm not about to start storing 30 empty cases..., or trying to keep a record of which disk belongs in which case

 

I'd imagine he simply threw away the case


And he says in his post that it's obvious that the drive has been removed... so I suspect that there was some damage done to the case.


Whatever.

 

All I know is I do not have drive failures and I save a lot of money. I toss the boxes in the spare room. I'll fleabay them or toss them when the warranty is up.

 

It just seems to me that you guys are paying an awful lot of money for drives. Since you've got money to burn, more power to you. Me, I'm cheap. I do a return-on-investment calculation and decide what's good for me.

 

I know from experience that cooling is key. These drives [clamshells] work, and all my other non-NAS desktop drives work great and don't fail, running 24/7 since WHS came out, and before that running 24/7 in workstations. I'm calling BS on that one: every drive manufacturer will honor a warranty no matter how many hours you use the drive. That's a fact.

 

NORCO is a poor design, especially for cooling. I don't care if it's free; there is too much wrong to discuss here. Do a web search. Also look at the HP equivalents and see the difference. They have to honor warranties on failed drives, so they are engineered to keep the drives at optimal temperatures. NORCO doesn't sell drives; they couldn't care less. I personally have never seen a Supermicro server in an Enterprise shop; they may be out there, but I didn't see them. We had hundreds of servers, IBM, HP, and some Dells, but every time we tried a new generation of Dell, they still turned out to be sub-standard.

 

We can agree to disagree. I think you guys are caught up in hype and reading too much marketing.


I'm not sure what your actual argument is?

 

That NAS drives are overpriced?

 

I've had maybe 20 drives fail on me over the last 10 years,

 

they are always about 40C...

 

I'd rather pay slightly more for the longer warranty - most consumer NAS drives have 3 years, most desktop drives 1 or 2

 

for the last year I was mainly using WD 6TB Reds (+£5 for an extra year warranty) - what cheaper "desktop" alternative would you suggest?


NORCO is a poor design, especially for cooling. I don't care if it's free; there is too much wrong to discuss here. Do a web search. Also look at the HP equivalents and see the difference. They have to honor warranties on failed drives, so they are engineered to keep the drives at optimal temperatures. NORCO doesn't sell drives; they couldn't care less. I personally have never seen a Supermicro server in an Enterprise shop; they may be out there, but I didn't see them. We had hundreds of servers, IBM, HP, and some Dells, but every time we tried a new generation of Dell, they still turned out to be sub-standard.

 

We can agree to disagree. I think you guys are caught up in hype and reading too much marketing.

 

You, my friend, have obviously never been in an extremely large datacenter (think Google, Bing, Backblaze, etc.), where you'll find hundreds if not thousands of Supermicro cases and mobos.

 

It's apparent your experience is limited to small/medium/large businesses, where HP, Dell, etc. are the norm.

 

What was sub-standard about your high end Dell servers? I've been using them exclusively in enterprise applications for 20 years so I'm a bit curious what you found that was sub-standard.

 

The biggest and most important issue with consumer drives in an enterprise is quite simply simultaneous access, not cooling. Keep your drives cool and they will last a long, long time. ALL of the drives discussed in this thread will choke with anything more than a couple of streams. 

 

This reminds me of a common flame war in the truck world:  Diesel vs Gas trucks. 

 

Bottom line is, if you need a diesel -- YOU'LL KNOW.  There isn't a question or debate. 

 

Same thing with drives:  Enterprise vs. Non.  You'll absolutely know (or find out the hard way) when you need enterprise drives.

 

As for the Norco, I own two, a 4224 and a 4220, and I just purchased another 4224 for my offsite "cold" storage duties at another location.

 

Norco has to cut corners somewhere to make it affordable. I'm glad they did it where they did (drive trays, inadequate fans).

 

The drive trays work perfectly fine and I don't remove/replace drives often so that's not a big deal.

 

As for the cooling, it was a simple matter to replace the stock rear 80mm fans. You'd be hard pressed to find an environment where this does not solve ALL cooling problems, so that's really a non-issue. BTW, Drashna: I can absolutely guarantee you don't need to replace the fans on the fan board. Just two of those Tornado 80s are more than enough to cool even the hottest enclosure.

 

The other place they cut corners that might be annoying is retaining the ability to use a standard power supply. On the flip side, if you really need redundant power supplies (and their expense), you're not running a Norco. However, having the ability to stick in a standard power supply isn't a bad thing for their target market, IMO. If you do find that you need power-supply redundancy, you can find a couple of different models that will work in the Norco. (But at that point, because of the expense, you're better off just buying a Supermicro off the shelf.)

 

So, yeah, What's the argument?

 

~RF


 

This reminds me of a common flame war in the truck world:  Diesel vs Gas trucks. 

 

Or WD vs Seagate....  :)

 

 

And I'll second this. The 1U SuperMicro I picked up from eBay (I don't remember the model, but it's an X8DTU, with dual 5650s I think, and 16GB of ECC (unbuffered)) actually came from SoftLayer, as it still has an asset tag on it (no HDDs, obviously, but everything else).

 

As for the Tornados, I'm sure that I'd only need to replace a couple. :P

However, the issue is that I'd like to power them via 3-pin connectors (my PSU doesn't have a lot of Molex connectors, I lost the bag of cables for it, and OCZ sold the line to FirePower... and neither company will respond to my emails about this...).

 

NORCO is a poor design, especially for cooling. I don't care if it's free; there is too much wrong to discuss here. Do a web search. Also look at the HP equivalents and see the difference. They have to honor warranties on failed drives, so they are engineered to keep the drives at optimal temperatures. NORCO doesn't sell drives; they couldn't care less. I personally have never seen a Supermicro server in an Enterprise shop; they may be out there, but I didn't see them. We had hundreds of servers, IBM, HP, and some Dells, but every time we tried a new generation of Dell, they still turned out to be sub-standard.

My biggest issue with your statement here is that you refuse to list any reasons why the Norcos are crap: "too much to discuss here." Then post a link from HardForums or another place where this topic has been beaten to death.

I'm not trying to attack you, at all. But if you're going to state your opinion and refuse to back it up (which IS what you're doing), then please understand that it looks like you're getting on a soapbox because you had a bad experience.

 

I would love to debate the issue with you (or even read the proof... which I can't seem to find, so I can change my opinion on the topic), but you're providing absolutely nothing to debate over or review. Again, it comes off as just an opinion piece (and a somewhat hostile one, at that).

 

 

 

 

@RFOneWatt: It's been my opinion for a while that Norco cases are prosumer cases, and for SOHO/small businesses. In large companies, not so much, as buying complete systems (such as from SuperMicro, Dell, HP, IBM, etc.) is much more of a medium-to-large-company solution.

 

But then again, large companies are very fond of cutting costs (corners) whenever possible. Especially when it comes to IT costs.


Interesting, the ST3000DM001 drives I shelled were exchanged by Seagate when they failed, outside of the case. I guess YMMV.

 

Being someone that's in very large data centers as part of my job, and someone that works with these very large companies, there is one more thing about enterprise drives that was left out. Now, I'm talking the drives you get when you purchase servers or SANs, not some company buying a bunch of bare drives. When these drives fail, the common practice is for them to be replaced and the failed drive NOT RETURNED. That way the customer may dispose of the failed drive as their policies require, and they have the ability to track the location of the drive all the way through destruction. 


As for the Tornados, I'm sure that I'd only need to replace a couple. :P

However, the issue is that I'd like to power them via 3-pin connectors (my PSU doesn't have a lot of Molex connectors, I lost the bag of cables for it, and OCZ sold the line to FirePower... and neither company will respond to my emails about this...).

 

 

These work well:  http://goo.gl/PqRJ2r   :D

 

Well, WOPR has been upgraded!

 

I swapped out two of my oldest 4TB Seagates and popped them in my backup server (AP1) and installed two new Seagate 8TB archive drives in WOPR. 

 

So far so good, I've got two more on the way.

 

Does anybody have any suggestions on the best way to break in / test new drives?  So far I've been damn lucky but it's only a matter of time. 

 

I was thinking of just letting the BURST TEST in Scanner run for a few days but I'm sure there are better ideas.  :D

 

~RF

 

 

 

WOPR_DrivePool.Upgrade.JPG

 

8TB.Seagate.Archive.Drive.jpg


Interesting, the ST3000DM001 drives I shelled were exchanged by Seagate when they failed, outside of the case. I guess YMMV.

 

Being someone that's in very large data centers as part of my job, and someone that works with these very large companies, there is one more thing about enterprise drives that was left out. Now, I'm talking the drives you get when you purchase servers or SANs, not some company buying a bunch of bare drives. When these drives fail, the common practice is for them to be replaced and the failed drive NOT RETURNED. That way the customer may dispose of the failed drive as their policies require, and they have the ability to track the location of the drive all the way through destruction. 

Did you call in or use the web portal?

That, and how long ago?

 

These work well:  http://goo.gl/PqRJ2r   :D

 

Well, WOPR has been upgraded!

 

I swapped out two of my oldest 4TB Seagates and popped them in my backup server (AP1) and installed two new Seagate 8TB archive drives in WOPR. 

 

So far so good, I've got two more on the way.

 

Does anybody have any suggestions on the best way to break in / test new drives?  So far I've been damn lucky but it's only a matter of time. 

 

I was thinking of just letting the BURST TEST in Scanner run for a few days but I'm sure there are better ideas.  :D

 

~RF

Let StableBit Scanner run a full surface scan. :)

 

Aside from that, there are a lot of anecdotal suggestions, but a full format (not quick) would be a good write test, as it would write to the entire drive.  And then let a full surface scan occur. 
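If you want to script a rough read pass on top of that, here is a minimal, read-only sketch in Python. It is not a substitute for a full (non-quick) format or for Scanner's surface scan; it just streams sequentially through the whole raw disk so every LBA gets read at least once. The PhysicalDrive number is a hypothetical placeholder, and the script has to be run from an elevated prompt.

```python
# Minimal sketch: sequentially read an entire raw disk so every LBA is touched
# at least once. Read-only, so it won't destroy data, but it needs an elevated
# (Administrator) prompt. "PhysicalDrive3" is a hypothetical disk number --
# double-check which disk is which in Disk Management before running this.
import time

DISK = r"\\.\PhysicalDrive3"   # hypothetical: substitute the new drive's number
CHUNK = 4 * 1024 * 1024        # 4 MiB reads keep the drive streaming sequentially


def read_entire_disk(path: str) -> None:
    total = 0
    start = time.time()
    with open(path, "rb", buffering=0) as disk:
        while True:
            try:
                data = disk.read(CHUNK)
            except OSError as exc:
                # Raw disks reject reads that run past the last sector, so this
                # is usually just the end of the device; a failure early or
                # mid-disk is worth investigating (check SMART / Scanner after).
                print(f"Stopped at {total:,} bytes: {exc}")
                break
            if not data:
                break
            total += len(data)
            if total % (100 * CHUNK) == 0:   # progress roughly every 400 MiB
                rate = total / (1024 * 1024) / (time.time() - start)
                print(f"{total / 1e12:.2f} TB read, {rate:.0f} MiB/s")
    print(f"Finished after reading {total:,} bytes.")


if __name__ == "__main__":
    read_entire_disk(DISK)
```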


hey Chris,

 

Since he's bought the Archive drives, and we are talking about surface scans... how does Scanner handle the address virtualization of archive drives (or doesn't it)?

 

i.e., when you scan a certain block... how do you know if it's the same block as last time, or if it's been virtualized to the non-shingled area by the disk?


Did you call in or use the web portal?

That, and how long ago?

 

August 26, 2014, through the web portal, was the last one. I found the receipt in my email. :)

 

I helped a friend with his returns too; his were in a NAS, and Seagate said "no" on those, but the NAS manufacturer took care of it. Same time period.


hey Chris,

 

Since he's bought the Archive drives, and we are talking about surface scans... how does Scanner handle the address virtualization of archive drives (or doesn't it)?

 

i.e., when you scan a certain block... how do you know if it's the same block as last time, or if it's been virtualized to the non-shingled area by the disk?

I'm not entirely sure, actually.

 

I'll have to ask Alex, and have him look into it. But it should work regardless (as it scans the entire disk). 

 

Though, technically, anything that's using LBA (vs CHS) is virtualizing the location on the drives, IIRC. 

 

August 26, 2014, through the web portal, was the last one. I found the receipt in my email. :)

 

I helped a friend with his returns too; his were in a NAS, and Seagate said "no" on those, but the NAS manufacturer took care of it. Same time period.

 

Then it may be that my drives were more than a year old and outside the warranty period anyways.  So double screwed, I guess.

 

And yeah, any drives included with hardware are ALWAYS taken care of by the company that sold the hardware. It's a PITA, but that's how both WD and Seagate handle them.


True... but the 20GB of space which isn't shingled doesn't actually count as part of the disk, in terms of capacity.

 

So if it's already moved everything out of the 20GB and onto the rest of the disk, then if you scan EVERYTHING, it won't scan that 20GB. And vice versa: if some stuff hasn't been moved to the shingled area, then when you scan those blocks you will get the non-shingled ones and it won't scan the shingled area.


True... but the 20GB of space which isn't shingled doesn't actually count as part of the disk, in terms of capacity.

 

So if it's already moved everything out of the 20GB and onto the rest of the disk, then if you scan EVERYTHING, it won't scan that 20GB. And vice versa: if some stuff hasn't been moved to the shingled area, then when you scan those blocks you will get the non-shingled ones and it won't scan the shingled area.

 

 

I am not sure that that is true. If it is then I guess scanning SSDs is useless as well.

 

Regardless of the shingling, scanning the drive should cause it to read the entire media. This applies to both SMR drives and SSDs, so it's still a good idea.

In the case of SSDs, it may cause the controller to refresh the NAND if it detects an issue (or let you know if it's having problems).   Or it may let you know that the disk is having connectivity issues, or the SATA controller is having issues (as both of these can show up as unreadable sectors.... and a burst test will confirm that). 

 

So, regardless of the type of drive, yes, a surface scan is a great idea.


How can it scan the whole disk if it has areas unaddressable by the operating system, though?

 

The Seagate Archive drive is 8TB shingled plus an additional 20GB non-shingled area used for writes / temporary storage while re-shingling.

 

At any given time, that 20GB non-shingled slice could be mapped to any of the shingled blocks on the 8TB disk.

 

So if you check that a block is readable, how do you know whether you are getting the shingled block or the temporarily mapped non-shingled one (and obviously missing the other), since all of the block virtualization is handled within the disk?

 

Maybe a block in the 8TB shingled area is unreadable... but at the moment Scanner asks for it, there is some hot data sitting in the 20GB staging space waiting to be written to the shingled area, and it gets that instead?
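To illustrate the indirection being described (as a toy model only, not Seagate's actual firmware): a drive-managed SMR disk exposes one flat LBA space, keeps a small non-shingled media cache, and maintains its own internal LBA-to-physical mapping, so a host-side read by LBA returns the same data regardless of which physical region currently holds it. A rough sketch:

```python
# Toy model (not Seagate's firmware): drive-managed SMR indirection. The host
# only ever addresses LBAs; the drive decides whether an LBA currently lives in
# the small non-shingled media cache or in a shingled band, and it may move data
# between the two at any time. A surface scan reads by LBA, so it verifies the
# data wherever it happens to sit -- it cannot pin a read to a physical region.

class ToySMRDrive:
    def __init__(self):
        self.shingled = {}      # LBA -> data stored in the shingled bands
        self.media_cache = {}   # LBA -> data parked in the small non-shingled area

    def write(self, lba, data):
        # Drive-managed SMR typically lands incoming writes in the cache first.
        self.media_cache[lba] = data

    def destage(self):
        # Background housekeeping: rewrite cached LBAs into their shingled bands.
        self.shingled.update(self.media_cache)
        self.media_cache.clear()

    def read(self, lba):
        # The host sees one flat LBA space and cannot tell which region served it.
        if lba in self.media_cache:
            return self.media_cache[lba]
        return self.shingled.get(lba, b"\x00" * 512)  # unwritten sectors read as zeros


drive = ToySMRDrive()
drive.write(42, b"hot data")
before = drive.read(42)       # currently served from the media cache
drive.destage()
after = drive.read(42)        # same bytes, now served from a shingled band
assert before == after        # the host-visible data is identical either way
```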
