
NTFS compression and failed drive


korso

Question

Hello everyone!

 

Another happy user of DrivePool here, with a question regarding NTFS compression and file evacuation. A few days ago the reallocated sectors counter started rising on one drive. StableBit Scanner ordered DrivePool to evacuate all the files, but there was not enough space, so some of them remained.

 

I bought another drive, added it to the pool, and tried to remove the failing one, but got an "Access is Denied" error. I then tried to force evacuation of files from the damaged drive using the appropriate option in StableBit Scanner. This triggered a rebalance operation that was going very well, but then I noticed several hundred GB marked as "Other" that were not being moved.

 

Then it struck me that some files on the new drive are not NTFS-compressed, whereas on the old drives in the pool they are. Since the checksums are not the same for compressed and uncompressed files, I think this is somehow confusing the scanner. What I have done so far (for consistency at least; I hope this doesn't make things worse!!!) is to disable compression on all the folders where I had it enabled (on the old drives, including the faulty one) and wait for the rebalance to complete.

 

Is this the right approach? Is this also expected to happen when using NTFS compression? With DrivePool, is it actually not worth the hassle to have it enabled? (I was not getting fantastic savings, but hey, every little helps, and I wasn't noticing any performance degradation.) I hope the post makes sense, and I also hope my data is not compromised by my taking the wrong steps! Thanks!

 

DrivePool version: 2.2.0.738 Beta

Scanner: 2.5.1.3062

OS: Windows 2016

 

Attached DrivePool screenshot as well



11 answers to this question


OK, I just checked that enabling NTFS compression does not change the md5sum of a file, so I am no longer sure whether it has anything to do with this or not :(
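For anyone who wants to repeat that check: NTFS compression is transparent to reads (the filesystem decompresses on the fly), so a file's hash should indeed be identical before and after compressing it with `compact /c`. A minimal Python sketch for hashing a file (the `md5_of` helper name is just for illustration):

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large files don't load into RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Run this before and after toggling compression on the file with
# `compact /c <file>` / `compact /u <file>`; because reads always return
# the original (decompressed) bytes, the two digests should match.
```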

 

I will wait for the rebalancer to finish, to see if the files move off the damaged drive, since I cannot remove it ("Access is Denied"). Is there a newer version I can try? Are there logs I should collect?


Well, NTFS compression is generally "not good" to use, as it can cause all sorts of issues. It's generally recommended NOT to use it. (You'd be better off with compressed folders/zip/rar/7z files than with NTFS compression.)

 

As for the access denied when removing, make sure you're using the "force damaged disk removal" option, as that may "fix" this.

 

Otherwise, try installing this version, as it should fix the issue:

http://dl.covecube.com/DrivePoolWindows/beta/download/StableBit.DrivePool_2.2.0.829_x64_BETA.exe 



Do you mean never enable it at all, or just not with DrivePool? :P I think I will stay away from it anyway after these recent events, but could you elaborate a bit (if you don't mind) on the possible issues with NTFS compression? I use it regularly on non-pooled drives and have never had an issue so far (fingers crossed!)

 

Force removal, btw, didn't work, so I will upgrade as soon as the current balancing task completes.

 

Thanks for the super-quick answer btw! :)


Honestly: Ever.   

But generally, you definitely shouldn't enable it when working with the pool.  

 

The two main reasons not to are compatibility (some apps have issues with it) and performance, which can degrade when dealing with compressed files.

 

And... honestly, given current drive sizes and the ever-better price per GB, throwing more drives at the problem is going to be the best solution. Otherwise, a server OS and data deduplication may serve you MUCH better than compression.


Thanks Christopher! Much appreciated. I will disable NTFS compression (also to check what savings I was getting). I upgraded DrivePool to the provided beta, and the disk removal is progressing. I see some nice visualization bits added, so thank you and Alex for the hard work!

 

I will report back as soon as the disk is out. Speaking of dedup, I guess enabling Windows Server Deduplication on the DrivePool kind of defeats the purpose of data duplication, right? Is this even supported / recommended? I saw some posts about this but am not sure what to make of them. I guess that deduplicating the files will lead to data loss if one of the drives in the pool fails, plus the drive is not even readable if you plug it into a host that does not have the deduplication feature enabled.


You're very welcome!

 

And yeah, Alex has put in a lot of work into making the removal system much more robust, and to improving the UX and feedback. 

 

 

As for the deduplication, yes and no. Deduplication is done per disk, and cannot be done on the pool drive itself.

It works very differently from compression and the like. It looks for identical blocks of data in files and "cuts" them out. It stores the identical blocks in a hidden folder, and pieces the files back together as you read them.
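The block-level idea described above can be sketched in a few lines of Python. This is purely illustrative (the real Windows feature uses variable-size chunking and a far more sophisticated chunk store; all names here are made up):

```python
import hashlib

chunk_store = {}  # digest -> unique block (the "hidden folder")

def dedup(data, block=4096):
    """Split data into fixed-size blocks, keeping one copy of each unique block."""
    refs = []
    for i in range(0, len(data), block):
        blk = data[i:i + block]
        digest = hashlib.sha256(blk).hexdigest()
        chunk_store.setdefault(digest, blk)  # store the block only if it's new
        refs.append(digest)
    return refs  # the "file" is now just a list of references

def rehydrate(refs):
    """Piece the file back together as it is read."""
    return b"".join(chunk_store[d] for d in refs)
```

This also illustrates the risk Christopher mentions next: if the store loses one popular block, every file referencing it is damaged at once.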

 

It's actually fairly efficient, but there is definitely overhead in doing so, and a single disk error is more likely to take out more data. That's why I don't personally use the feature on my storage server, and why I don't generally recommend it for storage.

 

But it is supported. And you can see upwards of 50-60% space savings per disk (e.g., halving the used space), though this depends entirely on the content of the disk.

 

So you could fit more onto a single disk, but you're more likely to lose more if/when a disk fails. 



Hi again Christopher. Thanks for the explanation about dedup! I think I will refrain from using it on the drives that belong to the pool. I was finally able to remove the drive; it took a while, and I had to disable NTFS compression on all the files. The only strange thing is that there were some leftover files marked as "Other". I compared them with one of the healthy drives and all of them were there, so I don't know why they were not removed. Now I am formatting the drive and getting ready to send it in for RMA. Thanks!


You're very welcome.

 

As for the other data: 

http://community.covecube.com/index.php?/topic/37-faq-unduplicated-vs-duplicated-vs-other-vs-unusable/

 

This link covers what "Other" data is. It's normal to have a small chunk of it on your drive.

 

Yep, I know, but the weird thing is that the "Other" data was actually data that was duplicated. I guess it was leftover from all the balancing / moving data around, so I ended up with the same data thrice. In any case, all taken care of :)

