Covecube Inc.
Penryn

What does "unusable for duplication" mean?

Question

I recently had to remove a drive from a pool when I realized it was failing during a duplication check. The drive was emptied OK, but I was left with 700MB of "unusable for duplication" even after repairing duplication and remeasuring the pool.

 

Then I decided to move a 3GB file off of the pool, and now it's saying it has 3.05GB of "unusable for duplication." But 700MB + 3GB != 3.05 GB...

 

I'm lost. What is going on? How can I see what is unusable for duplication? Did I lose data? How do I fix this?

 

EDIT: I also have 17.9GB marked "Other", but even after mounting each component drive with its own drive letter, no files or folders other than the hidden pool folder are present.


14 answers to this question



How many disks do you have in your pool? If you have just two, this can be caused by differing disk sizes, the amount of used space, etc. It just means the pool doesn't have enough room on one of the disks to store duplicates for that much data.
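For a two-disk pool, the arithmetic can be sketched roughly like this (the free-space figures are hypothetical, and this is a simplification of DrivePool's actual accounting):

```python
# Sketch: with 2x duplication on two disks, every file needs one copy on each
# disk, so duplicated capacity is limited by the disk with less free space.
# Free space beyond that on the larger disk has no partner for a second copy.
def unusable_for_duplication(free_a_gb, free_b_gb):
    return abs(free_a_gb - free_b_gb)

print(unusable_for_duplication(250, 247))  # -> 3 (GB with no room for a duplicate)
```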

 

As for the "Other" space (and well, the unusable for duplication):

http://community.covecube.com/index.php?/topic/37-faq-unduplicated-vs-duplicated-vs-other-vs-unusable/


Ok, so here's what happened:

 

I had 3x ~250GB disks in a pool. One was dying, so I removed it. This left me with the 3.05GB of unusable for duplication.

 

I replaced it with a 480GB disk. It then reported 17GB of "unusable for duplication."

 

So, freaking out, I did a balancing pass, and it went back down to about 3GB afterwards. I suppose there's some math that might leave 3GB of unduplicatable space, but given the slight variances in disk capacities (~240GB/~250GB/~480GB), I'll accept it.

 

No data lost and everything else appears to be in order. Thanks again, Drashna!


You're very welcome.

 

And "Unusable for duplication" is any space that we couldn't use, assuming that everything is 2x duplicated. This can occur because of files outside of the pool, because of "slack space" taking up more room on one drive than the others, etc.

 

There is a pretty good-sized list of things we check for, and we do try to be thorough here. The "Duplication Space Optimizer" balancer helps to fix this issue as well. In fact, if you set the ratio slider to "100%" and set balancing to occur immediately (and uncheck the "but not more ..." option), it may help get the number down to "0".


Drashna,

I know this is an old thread, but would cluster size have anything to do with 637GB of "unusable for duplication"?

 

I pulled a 4TB WD Red from an existing NTFS pool on my Win10 Pro workstation, formatted it ReFS (I can't remember whether the cluster setting was 64K or 4096 bytes), and started a new pool with it. I disabled duplication on both pools. Then I started moving files over to the new ReFS pool. This created enough space to remove a 2TB drive from the old pool. I formatted that drive ReFS and set the cluster size to 64K.

Then I removed my 2 SSDs from the old pool, formatted them ReFS (64K), and added them to the new pool.

I am now sitting at 637GB of space unusable for duplication. I haven't even turned duplication on yet, as I have another 24TB to move to the new pool!
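One way cluster size could contribute: each file wastes, on average, about half a cluster of slack, so a 64K cluster size wastes roughly sixteen times as much as 4096 bytes. A rough sketch with a hypothetical file count (not a measurement of this pool):

```python
# Rough slack-space estimate: each file wastes about half a cluster on average.
# The file count below is hypothetical, purely to show the scale difference.
def slack_gib(n_files, cluster_bytes):
    return n_files * (cluster_bytes / 2) / 2**30

print(round(slack_gib(2_000_000, 64 * 1024), 1))  # 64K clusters -> ~61.0 GiB
print(round(slack_gib(2_000_000, 4096), 1))       # 4K clusters  -> ~3.8 GiB
```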

I installed DrivePool version 2.2.0.906 when I first saw this problem, and both pools are on the same system. I have included captures of the old and new pools.

PS: Love DrivePool; can't wait for you guys to fully support ReFS!!

Thanks for a great product!

 

 

 

Capture New ReFS Pool.PNG

Capture Old NTFS Pool.PNG


It's because of the layout of the data on the drives, the differing sizes, and the free space. 

 

But if you're not using duplication, you can ignore it, as it won't affect your free space. 

 

As for why the other pool, with more disks, has no issue: with more disks, it can spread the data out more.
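A rough way to picture this (a sketch, not the balancer's actual logic): with 2x duplication, a copy can never live on the same disk as its original, so free space on any one disk beyond the combined free space of all the others can't hold duplicates. The disk sizes below are hypothetical:

```python
# Sketch: free space on the largest disk that exceeds the combined free space
# of all other disks can't be paired with a duplicate copy.
def unusable_2x(free_gb):
    largest = max(free_gb)
    rest = sum(free_gb) - largest
    return max(0, largest - rest)

print(unusable_2x([400, 50, 30]))    # one disk dominates -> 320 unusable
print(unusable_2x([150, 140, 130]))  # evenly spread      -> 0 unusable
```

This is also why adding disks (or rebalancing) tends to shrink the number: the more evenly the free space is spread, the easier it is to pair every byte with a duplicate.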

 

 


Thanks,

 

I was wondering about the free space on the new pool. Since my original question, I reformatted all the drives in the new ReFS pool with a cluster size of 4096, and with a second 4TB drive added, all my "problems" went away. I was getting worried for a bit there, but thanks to your comment, "But if you're not using duplication, you can ignore it, as it won't affect your free space," I feel a bit better. I will re-enable duplication once I am sure I have enough disks to support the new pool.

Wow, it takes a while to move 24TB of data!!

 

Thanks again everyone!

 

 


Yeah, it's mostly letting you know there is an issue. The Duplication Space Optimizer balancer will actively move data around to eliminate the "Unusable for duplication" space, so over time you'd (hopefully) see this number drop.

 

As for moving the data: roughly 4 hours per TB, longer if the pool is in use while it's happening. For 24TB, that works out to about 4 days.


Yep, that's about how long it's taking. The folders with larger files move a bit faster, but the folders with smaller files take a bit longer.

 

I know I am dragging this conversation out, but I have one more subject: ReFS.

 

From all that I have read, ReFS is better for data storage, even without Windows Storage Spaces. Is that correct from your point of view? I have already started the move, obviously, and no one popped up and said "Hold on, that's not a good idea!", so I hope the answer is "Yes, ReFS is better for data than NTFS, especially if you use StableBit DrivePool!"

I have had bitrot happen to me too many times, and I just got tired of it; I am hoping ReFS will put a stop to it.

And are you all still working on adding more ReFS-related benefits to DrivePool? Can we look forward to a DrivePool with a Storage Spaces flavor added?

 

Your thoughts?


Yeah, that sounds right. Which is part of why I try to give ballpark estimates rather than definitive ones. Small files take longer to copy!

 

And no worries about the conversation length, it's what I'm here for. :) 

 

For ReFS, "yes and no".  Let me talk about the "no" first: 

  • If "bad things happen", then there is no (cheap) way to recover your data. Very few data recovery tools support ReFS yet. So if you needed to run recovery, you'd be SOL (either having no options, or paying an obscene amount of money to get it recovered).
  • If there is file system corruption, there is no equivalent to CHKDSK (this shouldn't be as much of an issue, but it can still happen).
  • There is also a performance hit for using ReFS. Not just from the integrity streams, but because it's a COW (copy-on-write) file system. So it's not going to be as fast as NTFS, in most cases.
  • And integrity streams can take up a significant portion of the disk, so you end up losing space.

For the "yes":

  • However, because it's a COW file system, it's much less prone to data loss or corruption due to power loss (less likely to see "raw" partitions, for instance), since the data isn't allocated until the write is complete (IIRC).
    This alone makes ReFS very attractive for storage. 
  • Additionally, there is the file system integrity thing. However, this is NOT enabled for normal files, by default. It's only enabled for metadata, normally.  You'd have to use the "Set-FileIntegrity" command after formatting to enable this.  
  • Larger partition sizes are supported out of the box
  • Integrity checking

So, ReFS is good for storage. But as for using it or not ... that's a call that we can't really make. Not at this point.

But personally, I do use ReFS on my pool, but I also duplicate the entire pool. So if something does go wrong, it's a format, replace, and reduplicate.


Thanks again, might have to change the title of this discussion!

So, taking your points in order

On 3/30/2018 at 1:18 PM, Christopher (Drashna) said:

If "bad things happen", then there is no (cheap) way to recover your data. Very few data recovery tools support ReFS yet.  So if you needed to run recovery, you're SOL (either having no options, or paying an obscene amount of money to get it recovered). 

Well, I will enable pool duplication as you do. In fact, I have started it back up as I continue to move files over.

On 3/30/2018 at 1:18 PM, Christopher (Drashna) said:

If there is file system corruption, there is no equivalent to CHKDSK (this shouldn't be as much of an issue but it can still happen)

Now this is irritating, as I can get a little OCD, but I think I can get my head wrapped around it, especially if I can have file integrity. NO BIT ROT!!

On 3/30/2018 at 1:18 PM, Christopher (Drashna) said:

There is also a performance hit for using ReFS. Not just from the integrity streams, but because it's a COW (copy-on-write) file system. So it's not going to be as fast as NTFS, in most cases.

I have read that, but usually my write speed is not as important as my read speed, except for moving files over from old drives! LOL. Other than that, not so worried.

On 3/30/2018 at 1:18 PM, Christopher (Drashna) said:

And integrity streams can take up a significant portion of the disk, so you end up losing space.

True, but drives are cheap, right? LOL!! I am slowly upgrading all my drives to a standard of 4TB, specifically WD Red drives.

PROs ..

On 3/30/2018 at 1:18 PM, Christopher (Drashna) said:

However, because it's a COW file system, it's much less prone to data loss or corruption due to power loss (less likely to see "raw" partitions, for instance), since the data isn't allocated until the write is complete (IIRC).

I remember reading about this years ago, perhaps about ZFS?? Not sure but when I heard that ReFS would have it for Windows I was almost sold then and there. You are right, very attractive!

On 3/30/2018 at 1:18 PM, Christopher (Drashna) said:

Additionally, there is the file system integrity thing. However, this is NOT enabled for normal files, by default. It's only enabled for metadata, normally.  You'd have to use the "Set-FileIntegrity" command after formatting to enable this.  

This is a point I want to know more about .... 

On 3/30/2018 at 1:18 PM, Christopher (Drashna) said:

Larger partition sizes supported out of box

Absolutely, once I have standardized with 4TB then I will start upgrading those. Nice to have room to grow!

On 3/30/2018 at 1:18 PM, Christopher (Drashna) said:

Integrity checking

Okay, so you mention the "file system integrity thing" but say it is not enabled by default, and you follow that up with a PS command. Then your last line mentions "integrity checking". I need a breakdown of these two and the differences, as I thought this ability was enabled by default...

Is there a link you can send me to read about this, with the pros and cons? I am assuming that with file integrity checks there is more space lost...

My irritation has been that I will save a file, and then a month or more later I will go to open it and it won't open. I will go to play a video and be told that the system is unable to render the file! Bitrot drives me crazy! I have, over time, upgraded my server and am now running server memory (ECC), and I 'think' I have seen fewer errors and lost files, but ...

Do you think "file system integrity" is worth it or is "integrity checking" enough?

Thanks Drashna.

 

 

 

 


Drashna,

If I do want to enable file integrity streaming, I would need Storage Spaces, correct? Then I would run this command in PS: Set-FileIntegrity H:\ -Enable $True

Since I don't want to enable Storage Spaces (I would rather you guys worked out a StableBit version and saved some space. Pretty please!!), will I gain any value from enabling file integrity streaming?

So I guess my basic question is this: will ReFS by itself (no commands run other than a standard quick format as ReFS) stop bitrot?

2 hours ago, Scott said:

Well, I will enable pool duplication as you do. In fact I have started it back up as I continue to move files over..

Glad to hear it! 

2 hours ago, Scott said:

Now this is irritating as I am can get a little OCD but I think I can get my head wrapped around it, especially if I can have file integrity. NO BIT ROT!!

Yup, it can be. But the disk itself has some ECC, so bitrot shouldn't really be an issue, honestly. And if you have StableBit Scanner running, it should help the disk find and fix any potential problems before they become an issue.

2 hours ago, Scott said:

I have read that but usually my write speed is not so important as my read speed, except for moving files over from old drives! LOL Other than that not so worried.

Glad to hear it. And yeah, a lot of it is negligible for home use, unless you're "babysitting" the transfers.

2 hours ago, Scott said:

True but drive are cheap right? LOL!!  I am slowly upgrading all my drives to a standard of 4TB and specifically WD Red drives.

LOL, that depends on the amount of data you have and the size of the disks you're buying. And ... electricity costs for running the server. :)

2 hours ago, Scott said:

I remember reading about this years ago, perhaps about ZFS?? Not sure but when I heard that ReFS would have it for Windows I was almost sold then and there. You are right, very attractive!

Yup. ZFS is also a COW file system, as are a number of the newer file systems. It has significant advantages over more "traditional" file systems like NTFS, and it's a major selling point.

2 hours ago, Scott said:

Okay, so you mention the "file system integrity thing" but say it is not enabled by default and you follow that up with a PS Command. Then your last line mentions "Integrity Checking". I need a break down on these two and the differences as I thought this ability was enabled by default...

Sorry, yeah, that was redundant. It's the same thing. And yeah, you want to run the PowerShell command before you start putting data on the drive, so that it's enabled for all files by default.

2 hours ago, Scott said:

If I do want to enable file integrity streaming, I would need Storage Spaces, correct? Then I would run this command in PS: Set-FileIntegrity H:\ -Enable $True

No. However, without Storage Spaces, you can't repair any files that come back as corrupted. That's the advantage of running Storage Spaces.

And yes, that's the command you'd run, but on each of the pooled drives, not the pool itself. 
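Putting that together, enabling it per drive might look something like the following sketch (the drive letters are placeholders for your actual pooled ReFS volumes, not the pool letter; note that `-Enable` takes an explicit boolean):

```powershell
# Enable integrity streams on each pooled ReFS drive, not on the pool itself.
# D:, E:, F: are example letters; substitute your own pooled drives.
foreach ($drive in 'D:\', 'E:\', 'F:\') {
    Set-FileIntegrity -FileName $drive -Enable $True
}

# Spot-check the setting afterwards:
Get-FileIntegrity -FileName 'D:\'
```

Running it on the volume root before data is copied means new files inherit the setting, which matches the advice above about enabling it before populating the drive.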


Good points, I think I have my answers now. I did some more reading as well, and I have enabled file integrity on each of the drives in my pool. Any drives I add from now on, I will format with file integrity enabled from the start.

 

Thanks for the walk through!!

 

Best wishes,

Scott

