
Painted

Question

Hi, I've been beating my head against what appears to be an NTFS permissions issue. Suddenly, last week, my pool started returning Access Denied on any file or folder creation. I tried taking ownership and resetting permissions on the pool, but kept getting Access Denied. I've tried every solution I could think of (takeown, icacls, elevated and SYSTEM cmd windows) and everything returns Access Denied.
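For reference, the commands I ran were along these lines, from an elevated prompt (P:\ is a stand-in for my pool's drive letter):

    # Take ownership of the whole tree, auto-answering Yes to prompts
    takeown /f P:\ /r /d Y
    # Reset ACLs to the inherited defaults, continuing past errors
    icacls P:\ /reset /t /c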

 

I checked the forums here and saw that several other people have had this issue, and that the Wss.Troubleshoot tool had the magic solution, so I eagerly tried it out (version .165).

 

Unfortunately, the tool also reports "Fix was not successfully applied. Access is Denied."

 

Help would be appreciated!


Are there any missing disks in the pool?

 

Check Disk Management (run "diskmgmt.msc") and make sure the pool isn't showing up as read-only.
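For example, from an elevated PowerShell prompt (Get-Disk needs Windows 8 / Server 2012 or newer; diskpart works anywhere):

    # List disks and whether Windows considers them read-only
    Get-Disk | Select-Object Number, FriendlyName, IsReadOnly

    # Or interactively with diskpart:
    #   list disk
    #   select disk 3       (use the relevant disk's number)
    #   attributes disk     (look for "Read-only : Yes")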

 

Also, do you have StableBit Scanner? If so, does it detect any issues with any of the drives?

If not, check the individual drives in the pool yourself and see if any of them are having issues; if so, that is likely the cause.

 

Otherwise, do you have any antivirus, backup or disk tools installed?
And could you do this: http://wiki.covecube.com/StableBit_DrivePool_Q2159701

 

And could you enable file system logging, reproduce the issue, and then upload the logs?

http://wiki.covecube.com/StableBit_DrivePool_2.x_Log_Collection


> Are there any missing disks in the pool?
>
> Check Disk Management (run "diskmgmt.msc") and make sure the pool isn't showing up as read-only.

 

I was about to start a new Topic, but figured I'd find something close already posted.

 

I don't have time to write up many details at the moment, but I would like to understand this issue. I have only observed this concerning behavior once and have not attempted more tests yet.

Briefly:

Let's assume a simple two-drive SATA-over-USB-3.0 pool with 2x duplication; each disk should therefore contain a full copy of every file.

 

I pulled one disk out (attempting a USB eject first, which will likely report the drive is in use).

At that point, I was able to continue reading from the pool drive, but unable to write to it.

 

So...

Is this behavior normal for DrivePool? That is, when the pool is degraded, is write activity disabled? Why? Isn't a major feature of redundancy (duplication) that the pool drive continues to operate normally, with no interruption to the applications using the pool for reads and writes?

 

Thanks for your assistance!

...Terry.


Yes, this is completely normal, and expected.

 

However, there's something you're missing: what happens when you modify the files? Now you have a copy on two disks, but with differing content. And what happens if both copies are modified on different systems in different ways?

 

So, when a disk is removed, it's marked as missing, and the pool is put into a read-only state to prevent the data from getting out of sync.

 

Once you've removed the missing disk from the pool, it goes back to normal.

If you want to keep a USB drive in sync with the pool, then you may want to use a sync tool (such as FreeFileSync, Allway Sync, GoodSync, etc.) instead of duplication.


I am currently removing a 4 TB HDD, and at this rate it will take around 24 hours. I too have just noticed that my pool appears to be read-only (I cannot create any folders or files). I'm not using duplication. The following quote is from the online documentation:

> Drive removal can take a while, depending on how many pooled files are on that drive.
>
> While a drive is being removed, the pool will go into a read-only mode and some unprotected files may not be visible on the pool until drive removal completes.

I'm surprised this is considered normal behaviour for DP (as it was not the case with DB); it seems counter-intuitive to the whole purpose of pooling. This may cause issues for me, as an 8 TB HDD could take a couple of days to remove and I could be trying to write to a folder during that time (TV recordings, for example, which is how I found out it was read-only: JRiver complained it could not write to my folder).

 

Thinking out loud: would it be better to use the balancer to move all the files off first, then use the Remove Disk function once the drive is empty, to minimise downtime? And if that would work, would it not make sense for the drive removal option to function in the same way?

 

Sorry if I'm off the mark as a DP newbie and have interpreted the above posts incorrectly.

 

Thanks

Nathan


Specifically, this is done because when files are opened for writing or modification, the system "locks" them. That means we cannot move them off of the disk, which would cause the removal to fail. This is why the pool is set to "read only" during the removal.

 

However, if you want, this behavior can be disabled with the advanced config settings:

http://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings

Set the "RemoveDrive_PoolReadOnly" value to "False" and restart the StableBit DrivePool Service, or reboot the system. 

 

 

Alternatively, yes, you can use the balancing system to evacuate the drive prior to removal; this is actually what I recommend. Open the balancing settings, go to the "Balancing" tab, and click on the "Drive Usage Limiter" balancer. Uncheck both the "Duplicated" and "Unduplicated" options for the disk in question.

This will clear out the disk in question, and once the disk is empty, the removal should be very quick.


> Thanks - I stopped the Remove and kicked off the Balancing option last night... 35% done....

Well, hopefully it goes pretty fast for you. :)

 

And just keep in mind that a normal copy takes roughly 4 hours per TB, and balancing will take longer because of the limitations/configuration mentioned above.
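(That 4 hours per TB works out to roughly 1,000,000 MB / 14,400 s ≈ 70 MB/s, which is a realistic sustained average for a single spinning drive once seeks and small files are factored in.)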


FYI: I then stopped the balancer (it was doing OK moving files off the drive, but it was also doing other balancing) to test the Advanced Config "disable read-only on drive removal" setting, then finished the removal of the drive fairly quickly, and I could still make changes to my pool. All worked. I've started the balancer again, copied and deleted things on the pool, etc. It all seems robust and has had no issues with me starting, stopping, changing activities, moving HDDs between controllers, etc. Between each of my tests I compare the contents against my other pool, and so far so good!

 

I will have some questions on the balancer (Ordered File Placement), as it is not doing what I would expect in all cases, but I'll wait till it has finished (it's plodding along at 62.2%, so it will still take some time).

 

Thanks

Nathan


Well, all the tasks are performed by the background service; the UI just provides an interface to that service. :)

 

Additionally, ALL file operations are performed in a way that should survive sudden power loss: we use a "copytemp" file until the operation is completed, then rename it into place (and delete the original if it's a move). And stopping the service stops these tasks gracefully.

And the hard drives are identified by their volume IDs, which don't normally change. So they're very tolerant of being moved around, as long as the controller isn't doing something weird (e.g., making them part of a RAID/JBOD set; some USB chipsets also do weird things).
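To illustrate that "copytemp" pattern, here is a rough sketch of the general technique (not our actual code):

    # Crash-safe move: copy to a temporary name, rename into place, and only
    # then delete the source. A power cut mid-copy leaves a stray .copytemp
    # file behind, but the original file is never at risk.
    function Move-FileSafely([string]$Source, [string]$Destination) {
        $temp = "$Destination.copytemp"
        Copy-Item -LiteralPath $Source -Destination $temp        # interruptible step
        Move-Item -LiteralPath $temp -Destination $Destination   # rename into place
        Remove-Item -LiteralPath $Source                         # source goes last
    }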


> Yes, this is completely normal, and expected.
>
> However, there's something you're missing: what happens when you modify the files? Now you have a copy on two disks, but with differing content. And what happens if both copies are modified on different systems in different ways?
>
> So, when a disk is removed, it's marked as missing, and the pool is put into a read-only state to prevent the data from getting out of sync. Once you've removed the missing disk from the pool, it goes back to normal.
>
> If you want to keep a USB drive in sync with the pool, then you may want to use a sync tool (such as FreeFileSync, Allway Sync, GoodSync, etc.) instead of duplication.

 

I am familiar with Allway Sync, but it doesn't do background duplication (nor pooling).

 

In the RAID systems that I am familiar with, when the system is in degraded mode (i.e., missing a disk from a parity or mirror set), writes continue to be permitted. If the offline disk is brought back online, any files that were updated in the live pool are overwritten on the returned disk.

 

To avoid sync conflicts, you could either presume that a hot-removed drive (or a drive removed during a power-down) will not be attached to any other system in write mode, or the sync could check that every file needing reduplication has an older date on the target drive (and if there is a date conflict, either reverse the duplication direction or record a duplication failure in the log).
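A sketch of the date check I have in mind (purely illustrative; the paths and names are placeholders):

    # Only re-duplicate from the pool to the returning drive when the pool's
    # copy is at least as new; otherwise flag a conflict instead of overwriting.
    $poolCopy   = Get-Item 'P:\Media\file.dat'
    $targetCopy = Get-Item 'E:\PoolPart\Media\file.dat'
    if ($poolCopy.LastWriteTimeUtc -ge $targetCopy.LastWriteTimeUtc) {
        Copy-Item -LiteralPath $poolCopy.FullName -Destination $targetCopy.FullName -Force
    } else {
        Write-Warning "Date conflict: the target's copy is newer than the pool's"
    }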

 

In other words: having the pool go "read-only" due to the loss of one drive is confusing to me. I understand there are benefits to this choice, but it is definitely not the behavior that I want. The reason I use a pool with duplication is to keep the pool fully and automatically available at all times, even if one drive has partially or completely failed or has been removed.

 

Is there a configuration option for this, or could it be a future feature?

 

Is Windows "Storage Spaces" a better alternative for me? Do you have a pros/cons chart?

 

Thank-you!

...Terry.


I've created an Issue/Bug for this, and specifically requested that we add an advanced option to allow the pool to remain online/writable when a disk is missing.

https://stablebit.com/Admin/IssueAnalysis/23902

 

However, if this were implemented, we would absolutely require a complete recheck of the pool every time a disk goes missing. For a large pool, that can take quite a long time.

I don't think this request is unreasonable at all, and for people who want this functionality, I feel it would definitely be nice to have.

In the meanwhile, Storage Spaces absolutely does support this right now (at least, it's documented and designed to).

 

 

As for a pros/cons chart, no, we don't have one, but the main difference is that Storage Spaces is a block-based solution (like dynamic disks and RAID), whereas StableBit DrivePool is a file-based solution.

Additionally, there are very few recovery software solutions that can read and recover data from a damaged/degraded/malfunctioning array. Because StableBit DrivePool is file based, you can extract the data from even a single disk, and pretty much all recovery software can read NTFS.

 

Another difference is that with ReFS and a mirrored (and, I believe, parity) array, Storage Spaces can detect and correct corrupted files on the fly. However, there is a performance impact to using ReFS (it calculates checksums as data is written and verifies them on read), and a significant performance impact for parity arrays. In both cases, the system's CPU must do the heavy lifting, and this can and does adversely affect performance.
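(If you do try ReFS, the per-file integrity flags can be checked with the Storage module cmdlets, on ReFS volumes only; 'R:\Data\file.dat' below is just a placeholder path:)

    # Show whether integrity streams are enabled/enforced for a file
    Get-FileIntegrity -FileName 'R:\Data\file.dat'
    # Turn integrity streams on for that file
    Set-FileIntegrity -FileName 'R:\Data\file.dat' -Enable $true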


> I've created an Issue/Bug for this, and specifically requested that we add an advanced option to allow the pool to remain online/writable when a disk is missing.
>
> https://stablebit.com/Admin/IssueAnalysis/23902
>
> However, if this were implemented, we would absolutely require a complete recheck of the pool every time a disk goes missing. For a large pool, that can take quite a long time. I don't think this request is unreasonable at all, and for people who want this functionality, I feel it would definitely be nice to have.

Super, super, super appreciate this!

 

My requirements for a pool to remain online are not critical for most of my current use cases, so I was already leaning towards DrivePool as my storage management software, but having this functionality supported clinches the deal.

 

There are various scenarios where this will be helpful, even for "planned" disk removal. For example, I could set my pool up with 3x duplication so that I can pull a drive at any time to take offsite to secure storage. When convenient, I can then add a replacement disk and/or permanently change the pool to 2x duplication.

 

Yes, there are probably ways to do this step by step manually in the current version of DrivePool, but maintaining write capability through an unplanned single-disk failure is particularly important, even if the side benefits could be accomplished via manual methods.

 

With respect to the "complete recheck": I agree. When the original or a replacement drive is brought online, it is perfectly reasonable for performance to degrade while duplicates are checked and/or rebuilt as necessary. My interim plan is to use Allway Sync or a similar file-comparison/copy utility to bring the new drive up to date in the foreground; having DrivePool do this automatically is a logical feature enhancement.
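For the foreground catch-up, something like robocopy could also do the comparison and copy; the hidden PoolPart folder names below are placeholders for the ones on my disks:

    # Copy the full tree from a surviving pool disk to the replacement disk,
    # skipping files that are already newer on the destination (/XO)
    robocopy 'D:\PoolPart.aaaa' 'E:\PoolPart.bbbb' /E /XO /R:1 /W:1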

 

Thank-you very much!  :)
