
Some questions about duplicating


Allineedis

Question

File limiter plug-in

 

I just got my first 8TB drive, and since I am not sure how it will hold up, plus the supposed write penalty, I thought: why not use it only for duplicated data? My reasoning was that duplicated data is written once, maybe twice in case something changes, and that's it. So this would be optimal for this type of drive. Also, if it dies prematurely, I only lose duplicated data.

 

So I ticked the "duplicated" box on the 8TB drive, and on all the others I only ticked the "unduplicated" boxes. So far so good, but I am also seeing duplicated data on other drives (light blue on the bars). Is this normal or not? I was expecting duplicated data to appear only on the 8TB drive. Prior to adding this drive I had no duplication set.

 

Does this choice of limiting duplicated data to the 8TB drive make sense? Or am I better off resetting the file placement plug-in to its defaults and forgetting about it?

 

File corruption

 

What happens to a duplicated file when it becomes corrupt? For example, I copy a file to DrivePool (100% OK). I let it duplicate, also fine. Now, for some reason, one of those copies becomes corrupt. Will DrivePool correct this by replacing the corrupt version with the second, still-OK version, or, worst case, will it replace the OK version with the corrupt one, or is it a totally different scenario?

 

Just curious how this works, and wondering whether duplication safeguards not only against data loss but also against degradation.


3 answers to this question



If it's called the "File Placement Limiter", then you're using an older version of StableBit DrivePool, and you may want to update. We changed the name a while ago to avoid confusion with the "File Placement Rules" feature we added.

 

If you're using the 1.3 version, then you're fine.

 

 

 

Either way, by unchecking the "duplicated" option for all of the other drives, you're telling the system that the 8TB drive is the ONLY valid target for duplication. Which... well, defeats the point. 

We don't differentiate between "original" and "copy" in the software, so if a drive is showing "duplicated" data, that means the files on it have duplication enabled and reside on more than one disk, not that they are "the copy".

 

If you want to ensure that a copy resides on this disk, then your setup is correct. However, if you don't care where the files end up and only want duplicated data on the 8TB drive, then leave all of the other drives alone and just uncheck "unduplicated" on the 8TB drive. This will make sure that no unduplicated data ends up on that disk.
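To make the difference concrete, here's a quick sketch of the two setups (the drive names and checkbox values are just made-up examples for illustration, not something read out of StableBit DrivePool):

```python
# Hypothetical per-drive balancer checkboxes, written out by hand.
def allowed_content(label, drives):
    """Show which kinds of file parts each drive may receive."""
    print(label)
    for name, flags in drives.items():
        kinds = [k for k in ("duplicated", "unduplicated") if flags[k]]
        print(f"  {name}: {', '.join(kinds) if kinds else 'nothing'}")

# Your current setup: the 8TB drive is the only drive allowed to hold
# duplicated file parts, so one copy of every duplicated file must live there.
current = {
    "8TB":     {"duplicated": True,  "unduplicated": False},
    "other-1": {"duplicated": False, "unduplicated": True},
    "other-2": {"duplicated": False, "unduplicated": True},
}

# Suggested setup: leave the other drives alone and only keep
# unduplicated data off the 8TB drive.
suggested = {
    "8TB":     {"duplicated": True,  "unduplicated": False},
    "other-1": {"duplicated": True,  "unduplicated": True},
    "other-2": {"duplicated": True,  "unduplicated": True},
}

allowed_content("current", current)
allowed_content("suggested", suggested)
```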

 

 

As for the write penalties, I'm assuming you're using a Seagate Archive drive, with SMR. 

If so, you should really check out this thread:

http://community.covecube.com/index.php?/topic/1625-do-i-start-buying-8tb-archive-drives-or-not/

 

 

 

As for corruption, this is a touchy subject for a lot of people.

I'm guessing you're referring to "random bit flips" here (commonly, and incorrectly, called bit rot). If so... these are so statistically unlikely to happen that... well, it's literally a white whale... and your drive is designed to detect and correct them at the physical and firmware level (invisibly to the OS). The chances that you'll experience this sort of issue are pretty much nil.

 

If you mean media degradation due to physical defects and age... then that is something different. And StableBit DrivePool does do a number of checks for that. Specifically, we check the file modify times when accessing the data. If they don't match between the file parts, then we compute a checksum of the files. If the checksums match, we update the time. Otherwise, we flag the file and notify the user that there is a file part mismatch that needs to be resolved.

Additionally, changing the duplication status or remeasuring the pool will trigger this check for all files in the pool.
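In rough terms, the check amounts to something like this (a simplified sketch assuming a file with exactly two parts on two disks; the function names and return values are made up and are not DrivePool's actual implementation):

```python
import hashlib
import os

def checksum(path, chunk_size=1 << 20):
    """Hash a file in chunks so large files don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

def check_file_parts(part_a, part_b):
    """Compare two copies of the same pooled file."""
    if os.path.getmtime(part_a) == os.path.getmtime(part_b):
        return "ok"  # modify times agree, nothing more to do
    if checksum(part_a) == checksum(part_b):
        # Contents still match, so only the timestamp drifted; sync it.
        os.utime(part_b, (os.path.getatime(part_a), os.path.getmtime(part_a)))
        return "timestamps updated"
    # Contents differ: the software can't tell which copy is "the good one",
    # so it flags a file part mismatch for the user to resolve.
    return "file part mismatch"
```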

 

Also, I apologize for the shameless self-promotion here, but that's exactly what StableBit Scanner SPECIFICALLY addresses. By default, StableBit Scanner performs a surface scan of all attached disks. This is a sector-by-sector scan of the drive, where it attempts to read the data on the drive. Any unreadable sectors are flagged in the system. StableBit Scanner will then prompt you to run a file scan to see if we can identify any affected files (sometimes these fall in free space, but we don't know until we check). Additionally, if both products are installed on the same system, StableBit DrivePool will evacuate the contents of ANY disk with unreadable sectors as marked by StableBit Scanner. This is an attempt to prevent the corruption of more data (as this sort of issue tends to only get worse).
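Conceptually, a surface scan boils down to something like the sketch below (this is not StableBit Scanner's code; the device path and block size are assumptions, and reading a raw device requires administrator rights):

```python
def surface_scan(device=r"\\.\PhysicalDrive1", block_size=1 << 20):
    """Read a device block by block and record regions that can't be read."""
    bad_offsets = []
    offset = 0
    with open(device, "rb", buffering=0) as disk:
        while True:
            try:
                data = disk.read(block_size)
            except OSError:
                bad_offsets.append(offset)  # unreadable region: flag it
                offset += block_size
                disk.seek(offset)           # ...and skip past it
                continue
            if not data:                    # end of device reached
                break
            offset += block_size
    return bad_offsets
```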



Drashna, thank you for your detailed explanation. It cleared up a few incorrect assumptions I had.

 

I am using the latest stable build on 3 WHS 2011 servers, and all 3 also have the Scanner software installed. So I am well armed, I guess.

 

I just migrated from 1.3 to 2.1.1.561 and have not fully checked all the new settings; I simply installed all the plug-ins I knew from 1.3. You are referring to the 3rd tab under Balancing, I guess; if so, I have to check that out.

 

I agree with the "white whale" analogy. Looking back, I cannot really remember ever having anything go magically corrupt. There have been occasions, but those were always down to software glitches (a file was generated incorrectly), hardware failures (drives starting to fail), or user error (in my case, usually impatience stopping a process that should not have been stopped).



You are very welcome. And it's not a problem at all. As for the assumptions, they're semi-common, so I definitely understand. 

 

 

 

As for "Well armed", it's not a bad thing at all! 

And yes, in general, you hope that StableBit Scanner sits there and finds nothing at all. Ideally, that's what should always happen. But since things tend to fall short of the ideal, you should, hopefully, at least have an early warning.

 

 

As for the white whale, absolutely. I've heard people talk about it all the time, but I've never heard of anyone actually experiencing a bit flip. Though we have definitely seen software cause corruption (very common with JPEG files, actually, for whatever reason).

From everything we've seen, it's really one of those urban legends that keeps getting passed around.

 

 

That said, because of the other issues, an inventorying system isn't a bad idea. I know a few other users have used "fv++" to generate and check file hashes.
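If you'd rather roll your own than use fv++, the same idea can be sketched in a few lines (the paths and manifest file name below are just placeholders):

```python
import hashlib
import json
import os

def hash_file(path, chunk_size=1 << 20):
    """Hash a file in chunks so large files don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

def build_manifest(root, manifest="manifest.json"):
    """Record a hash for every file under root."""
    hashes = {}
    for folder, _, files in os.walk(root):
        for name in files:
            path = os.path.join(folder, name)
            hashes[os.path.relpath(path, root)] = hash_file(path)
    with open(manifest, "w") as f:
        json.dump(hashes, f, indent=2)

def verify_manifest(root, manifest="manifest.json"):
    """Report files that have gone missing or whose contents changed."""
    with open(manifest) as f:
        hashes = json.load(f)
    for rel_path, expected in hashes.items():
        full_path = os.path.join(root, rel_path)
        if not os.path.exists(full_path):
            print(f"missing: {rel_path}")
        elif hash_file(full_path) != expected:
            print(f"changed: {rel_path}")
```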

