Covecube Inc.
acdcking12

Any recommendation advice on the new Seagate 8 TB SMR drive for Drive pool?

Question

So I bought 2 of these drives, and am interested in using both of them as 1 larger drivepool.

 

I do not care about performance at all. Have you guys done any research on these drives as to how well they would work in a drivepool?


13 answers to this question


acdc, some of this has already been written up in http://community.covecube.com/index.php?/topic/1025-budget-media-server-build/. I use two of these as WHS 2011 Server Backup drives, to my fullest satisfaction. Christopher (Drashna) has one or more in a Pool and is satisfied as well. If the data on that drive is mostly static, then the possible write penalty should not bother you, and even if you do not care about performance, they read like crazy.

 

There is also a review here: http://www.storagereview.com/seagate_archive_hdd_review_8tb

 

AFAICS, as long as you do not run transactional databases on these, they are excellent drives and at a very low cost per TB.


Yup. :)

 

I only have one in my pool at the moment. I had planned on buying more, but stuff came up (car repairs, broke the cap/crown off of one of my teeth, and a vet trip for one of my dogs), so I'm tapped out on my discretionary funds. :(

 

 

As Umfriend has said, the read speeds from these drives are fantastic (180 MB/s, reliably).

The write speeds were pretty good as well (~150 MB/s), but occasionally stalled (dropped to 0 bytes/s).

 

If write speed is important, then I would recommend using the "SSD Optimizer" balancer and setting the Archive drives to be ... well, Archive drives (and no, this wasn't planned; the matching names are just a coincidence!).
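
DrivePool's internals aren't shown here, but the landing-drive idea behind the SSD Optimizer can be sketched conceptually: new writes land on a fast SSD, and a later balancing pass drains them onto the slower SMR Archive drives. All sizes and names below are made up for illustration.

```python
# Conceptual sketch (not DrivePool's actual code) of an SSD "landing drive":
# new files go to the SSD while it has room, and a background balancing
# pass later moves everything onto the archive drives.

def place_new_file(size, ssd, archive, ssd_capacity):
    """Write to the SSD landing drive if it has room, else straight to archive."""
    if ssd["used"] + size <= ssd_capacity:
        ssd["used"] += size
        ssd["files"].append(size)
    else:
        archive["used"] += size
        archive["files"].append(size)

def balance(ssd, archive):
    """Background pass: drain the landing SSD onto the archive drives."""
    for size in ssd["files"]:
        archive["used"] += size
        archive["files"].append(size)
    ssd["files"].clear()
    ssd["used"] = 0

ssd = {"used": 0, "files": []}
archive = {"used": 0, "files": []}

for size in [10, 20, 15]:              # GB, hypothetical writes
    place_new_file(size, ssd, archive, ssd_capacity=40)

balance(ssd, archive)
print(archive["used"])                 # -> 45: everything ends up on the archive drives
```

The point of the pattern is that the writer only ever sees SSD-speed writes (no SMR stalls), while the archive drives receive the data later as large sequential transfers, which is the workload they handle well.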

 

 

Though, one thing here. If you're going to be using a lot of larger files (e.g., not pictures, but videos/ISOs/etc.), then I would highly recommend formatting the drive and setting the Allocation Unit size to 64k.

When I was testing the drive out, I found that the larger Unit size gave about 20MB/s more read speed. At least for these larger files.

The reason I recommend this is that the larger allocation unit (cluster) size means larger sequential data (hence the faster read speeds), and reduced file fragmentation.  However, you lose more space to partially allocated clusters. This means you may lose some space on the disk overall, but if you're using large files, this should be minimal. Also, this "lost" disk space shows up as "Other" space in DrivePool, so you may see more of that in your pool.
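
To see why the "lost" space stays minimal for large files, here is a quick back-of-the-envelope slack calculation (the file counts and sizes are made-up examples, not measurements):

```python
# Rough slack-space estimate: each file's final cluster is only partially
# filled, so small files waste up to nearly one cluster each, while large
# files waste a negligible fraction of their size.

import math

def slack_bytes(file_sizes, cluster_size):
    """Total space lost to partially filled final clusters."""
    return sum(math.ceil(s / cluster_size) * cluster_size - s for s in file_sizes)

KB, MB, GB = 1024, 1024**2, 1024**3
videos = [4 * GB] * 100                # 100 large video files (hypothetical)
metadata = [2 * KB] * 100              # 100 tiny sidecar files (hypothetical)

for cluster in (4 * KB, 64 * KB):
    lost = slack_bytes(videos + metadata, cluster)
    print(f"{cluster // KB}k clusters: {lost / MB:.1f} MB lost")
```

Even with 64k clusters, only a few megabytes are lost across hundreds of gigabytes of video, and it is the small sidecar files, not the videos, that account for almost all of it.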

 

 

 

Also, you may want to use the file placement rules to exclude the Client Computer Backup folder from the Archive Disks (if you're using Windows Home Server or Windows Server Essentials), as it is a database and the drives' stalling may severely impact its performance.


Since you brought it up, any plans to implement a balancer by file size?  That would allow DP to automatically shift large files to archive-oriented drives...


Quote: Since you brought it up, any plans to implement a balancer by file size? That would allow DP to automatically shift large files to archive-oriented drives...

Not currently, but I'll pass the suggestion/request along to Alex (the Developer)


IMHO, the perfect match might be a combo of those SMR drives (large capacity at low cost per TB) with SSDs as landing drives. I am just wondering how that would work with databases (say, SQL Server DBs) but, if you are looking for performance, you would probably want those on SSDs only anyway (e.g. with x2 duplication, use 2x 120GB SSDs, partitioning each as 40GB for the landing SSD/Pool and 80GB for databases, perhaps even unduplicated if your backup strategy is sound).


I mostly agree.

"Mostly" because the performance was fine most of the time, but because of the periodic stalling... yeah.

 

As for "how would that work", it depends on the size of the database. If it's going to stay fairly small (less than 10 GB), then I'd say use a file placement rule to lock the files down to the SSD drives. :)


Quote: Since you brought it up, any plans to implement a balancer by file size? That would allow DP to automatically shift large files to archive-oriented drives...

Alex has posted a somewhat technical description of why this would be difficult to implement:

https://stablebit.com/Admin/IssueAnalysis?Id=14535

 

But basically, it would require a significant change in the code to implement.

We may add this later, but not right now.


 

Quote: Though, one thing here. If you're going to be using a lot of larger files (e.g., not pictures, but videos/ISOs/etc.), then I would highly recommend formatting the drive and setting the Allocation Unit size to 64k.

 

 

Hello,

 

I just bought myself a Seagate 8 TB Archive disk, and am wondering about changing the default allocation unit size to 64k.

 

Would a single small file use one allocation unit? Or could several small files share an allocation unit?

 

I am asking since my video library consists of large video files together with small metadata files (.nfo, .jpg, ...).

 

Would you still recommend an allocation unit size of 64k?

 

Thanks in advance

 

Per


For the allocation unit size:

 

Only one file can be assigned per unit (or cluster); larger files are assigned multiple clusters. So, a small 1k file will take up a full 64k cluster, with a lot of "wasted space" (depending on the file's size). However, this happens regardless of cluster size: a file that is 68k will take up two 64k clusters/units, where the second is only partially used. This is normal, and happens at every cluster size.
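
The arithmetic in that paragraph can be checked directly, using the same file sizes mentioned in the text:

```python
# NTFS allocates whole clusters, so a file always occupies
# ceil(file_size / cluster_size) clusters on disk.

import math

def clusters_needed(file_size, cluster_size):
    """Number of clusters that must be allocated for a file of this size."""
    return math.ceil(file_size / cluster_size)

KB = 1024
# A 1k file still occupies one whole 64k cluster (63k of slack)...
print(clusters_needed(1 * KB, 64 * KB))    # -> 1
# ...and a 68k file needs two clusters, the second mostly empty.
print(clusters_needed(68 * KB, 64 * KB))   # -> 2
```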

 

The advantage of the larger cluster size is that you get larger sequential chunks on the disk, meaning better read and write speeds, depending on the workload.

In fact, I noticed that read speeds on large file transfers jumped from 160 MB/s to 180 MB/s when I switched the allocation unit size to 64k.

 

 

So, if you're storing large video files and metadata, it's definitely worth using the larger allocation unit sizes.

And if you're especially worried, you could use the File Placement Rules feature to limit the metadata to specific disks (add a rule that includes "\*.nfo", "\*.jpg", etc., and limits them to a specific drive, such as an SSD). That would prevent those files from being placed on the Archive drive.
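
Those placement patterns behave like ordinary filename wildcards. As an illustration only (DrivePool's own rule engine isn't shown here; Python's `fnmatch` is just a stand-in for the same wildcard style):

```python
# Illustration: "*.nfo" / "*.jpg" style patterns, as used in the File
# Placement Rules above, are standard filename wildcards, which Python's
# fnmatch module also implements.

from fnmatch import fnmatch

METADATA_PATTERNS = ["*.nfo", "*.jpg"]   # patterns from the post above

def goes_to_ssd(filename):
    """True if a file matches a metadata pattern and should be pinned to the SSD."""
    return any(fnmatch(filename.lower(), p) for p in METADATA_PATTERNS)

print(goes_to_ssd("movie.nfo"))    # -> True  (metadata: pin to the SSD)
print(goes_to_ssd("movie.mkv"))    # -> False (large video: archive drive)
```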


Well, please feel free to test out the drive performance on your own. Just keep in mind that you have to reformat the drive to change the allocation unit size, so you'd want to test it out with "dummy files" first (e.g., copy files to and from it to see how well it performs).

