
CosmicPuppy


Reputation Activity

  1. Like
    CosmicPuppy reacted to Christopher (Drashna) in Access Denied   
    I've created an Issue/Bug for this, and specifically requested that we add an advanced option to allow the pool to remain online/writable when a disk is missing.
    https://stablebit.com/Admin/IssueAnalysis/23902
     
    However, if this were implemented, we would absolutely require a complete recheck of the pool every time a disk goes missing. For a large pool, this can take quite a long time.

    I don't think this request is unreasonable at all, and for people that want this functionality, I feel it would definitely be nice to have.

    In the meantime, Storage Spaces absolutely does support this right now (at least, it is documented and developed as such).
     
     
    As for a pro/con chart, no, we don't have one, but the main difference is that Storage Spaces is a block-based solution (like dynamic disks and RAID), whereas StableBit DrivePool is a file-based solution.
    Additionally, very few recovery software solutions can read and recover data from a damaged/degraded/malfunctioning array. Because StableBit DrivePool is file-based, you can extract the data from even a single disk, and pretty much all recovery software can read NTFS.

    Another difference is that with ReFS and a mirrored (and I believe parity) array, Storage Spaces can detect and correct corrupted files on the fly. However, there is a performance impact with using ReFS (as it calculates checksums as data is being written, and reads from them), and there is a significant performance impact for parity arrays. In both cases, the system's CPU must do the heavy lifting, and this can and does adversely affect performance.
  2. Like
    CosmicPuppy reacted to jmone in Access Denied   
    I am currently removing a 4TB HDD and it will take around 24 hours at this rate. I too have just noticed that my Pool appears to be read-only (I cannot create any folders or files). I'm not using duplication. The following quote is from the online doco:

    I'm surprised this is considered normal behaviour for DP (as it was not the case with DB) and is counterintuitive to the whole purpose of pooling. This may cause issues for me, as an 8TB HDD could take a couple of days to remove and I could be trying to write to a folder during this time (TV recordings, for example - which is how I found out it was read-only - JRiver complained it could not write to my folder).
     
    Thinking out loud, would it be better to use the File Balancer to move all the files first, then use the Remove Disk function once the drive is empty, to minimise downtime? And if that would work, would it not make sense for the Drive Removal option to also function in the same way?
     
    Sorry if I'm off the mark as a DP newbie and have interpreted the above posts incorrectly.
     
    Thanks
    Nathan
  3. Like
    CosmicPuppy reacted to Christopher (Drashna) in SSD Optimizer Balancing Plugin   
    No, we haven't had a chance to really look into it.  
    Alex is the one that does this (he's the developer, and I'm the technical support and customer service guy).

    Though, I'll bring it up to Alex.
  4. Like
    CosmicPuppy reacted to Christopher (Drashna) in SSD Optimizer Balancing Plugin   
    @4Frame,
     
    Yes, that is absolutely correct. If you check the notes link, the 3rd bullet point states this. Specifically:
  5. Like
    CosmicPuppy reacted to Alex in SSD Optimizer Balancing Plugin   
    I've just finished coding a new balancing plugin for StableBit DrivePool; it's called the SSD Optimizer. This was actually a feature request, so here you go.
     
    I know that a lot of people use the Archive Optimizer plugin with SSD drives and I would like this plugin to replace the Archive Optimizer for that kind of balancing scenario.
     
    The idea behind the SSD Optimizer (as it was with the Archive Optimizer) is that if you have one or more SSDs in your pool, they can serve as "landing zones" for new files, and those files will later be automatically migrated to your slower spinning drives. Thus, your SSDs serve as a kind of super-fast write buffer for the pool.

    The new functionality of the SSD Optimizer is that it's now able to fill your Archive disks one at a time, like the Ordered File Placement plugin, but with support for SSD "Feeder" disks.

    Check out the attached screenshot to see what it looks like.
     
    Notes: http://dl.covecube.com/DrivePoolBalancingPlugins/SsdOptimizer/Notes.txt
    Download: http://stablebit.com/DrivePool/Plugins
     
    Edit: Now up on stablebit.com, link updated.

  6. Like
    CosmicPuppy reacted to Christopher (Drashna) in Some questions about duplicating   
    If it's called the "File Placement Limiter", then you're using an older version of StableBit DrivePool, and you may want to update. We changed its name a while ago to avoid confusion with the "File Placement Rules" feature we added.
     
    If you're using the 1.3 version, then you're fine.
     
     
     
    Either way, by unchecking the "duplicated" option for all of the other drives, you're telling the system that the 8TB drive is the ONLY valid target for duplication. Which... well, defeats the point. 
    We don't differentiate between "original" and "copy" in the software, so if it's showing "duplicated", that means the files have duplication enabled and reside on more than one disk, not that any one of them is the copy.
     
    If you want to ensure that a copy resides on this disk, then your setup is correct. However, if you don't care where the files end up, but you only want duplicated data on the 8TB drive, then leave all of the other drives alone and just uncheck "unduplicated" data on the 8TB drive. This will make sure that no unduplicated data ends up on that disk.
     
     
    As for the write penalties, I'm assuming you're using a Seagate Archive drive, with SMR.
    If so, you should really check out this thread:
    http://community.covecube.com/index.php?/topic/1625-do-i-start-buying-8tb-archive-drives-or-not/
     
     
     
    As for corruption, this is a touchy subject for a lot of people.
    I'm guessing you're referring to "random bit flips" here (commonly, and incorrectly, called bit rot). If so... these are so statistically unlikely to happen that... well, it's literally a white whale. And your drive is designed to detect and correct them at the physical and firmware level (invisibly to the OS). The chances that you'll experience this sort of issue are pretty much nil.
     
    If you mean media degradation due to physical defects and age... then that is something different, and StableBit DrivePool does do a number of checks for that. Specifically, we check the file modify time when accessing the data. If the times don't match, then we grab a checksum of the file parts. If the checksums match, we update the time; otherwise, we flag the file and notify the user that there is a file part mismatch that needs to be resolved.
    Additionally, changing the duplication status or remeasuring the pool will trigger this check for all files in the pool.
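    To illustrate the idea, here's a rough sketch in Python of how such a check could work. This is just an illustration with made-up names (file_checksum, check_file_parts), not our actual driver code:

    import hashlib
    import os

    def file_checksum(path, chunk_size=1 << 20):
        # Hash the file part's contents so two copies can be compared.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def check_file_parts(part_paths):
        # Cheap check first: do the modify times of all file parts agree?
        mtimes = {int(os.path.getmtime(p)) for p in part_paths}
        if len(mtimes) == 1:
            return "consistent"
        # Times differ, so compare the actual contents.
        checksums = {file_checksum(p) for p in part_paths}
        if len(checksums) == 1:
            # Contents match; only the timestamps drifted, so sync them.
            newest = max(os.path.getmtime(p) for p in part_paths)
            for p in part_paths:
                os.utime(p, (newest, newest))
            return "times resynced"
        # Contents differ: flag the file and notify the user.
        return "file part mismatch"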
     
    Also, I apologize for the shameless self-promotion here, but that's exactly what StableBit Scanner SPECIFICALLY addresses. By default, StableBit Scanner performs a surface scan of all attached disks. This is a sector-by-sector scan of the drive, where it attempts to read the data on the drive. Any unreadable sectors are flagged in the system. StableBit Scanner will then prompt you to run a file scan to see if we can identify any affected files (sometimes these can end up in free space, but we don't know until we check). Additionally, if both products are installed on the same system, StableBit DrivePool will evacuate the contents of ANY disk with unreadable sectors as marked by StableBit Scanner. This is an attempt to prevent any potential corruption of more data (as this sort of issue tends to only get worse).
  7. Like
    CosmicPuppy reacted to Alex in check-pool-fileparts   
    If you're not familiar with dpcmd.exe, it's the command-line interface to StableBit DrivePool's low-level file system and was originally designed for troubleshooting the pool. It's a standalone EXE that's included with every installation of StableBit DrivePool 2.X and is available from the command line.

    If you have StableBit DrivePool 2.X installed, go ahead and open up the Command Prompt with administrative access (hold Ctrl + Shift while launching it from the Start menu), and type in dpcmd to get some usage information.
     
    Previously, I didn't recommend that people mess with this command because it wasn't really meant for public consumption. But the latest internal build of StableBit DrivePool, 2.2.0.659, includes a completely rewritten dpcmd.exe which now has some more useful functions for more advanced users of StableBit DrivePool, and I'd like to talk about some of these here.
     
    Let's start with the new check-pool-fileparts command.
     
    This command can be used to:
      - Check the duplication consistency of every file on the pool and show you any inconsistencies.
      - Report any inconsistencies found to StableBit DrivePool for corrective action.
      - Generate detailed audit logs, including the exact locations where each file part of each file on the pool is stored.

    Now let's see how this all works. The new dpcmd.exe includes detailed usage notes and examples for some of the more complicated commands like this one.
     
    To get help on this command, type: dpcmd check-pool-fileparts
     
    Here's what you will get:
     
    dpcmd - StableBit DrivePool command line interface
    Version 2.2.0.659

    The command 'check-pool-fileparts' requires at least 1 parameters.

    Usage:

      dpcmd check-pool-fileparts [parameter1 [parameter2 ...]]

    Command:

      check-pool-fileparts - Checks the file parts stored on the pool for consistency.

    Parameters:

      poolPath    - A path to a directory or a file on the pool.
      detailLevel - Detail level to output (0 to 4). (optional)
      isRecursive - Is this a recursive listing? (TRUE / false) (optional)

    Detail levels:

      0 - Summary
      1 - Also show directory duplication status
      2 - Also show inconsistent file duplication details, if any (default)
      3 - Also show all file duplication details
      4 - Also show all file part details

    Examples:

      - Perform a duplication check over the entire pool, show any inconsistencies, and inform StableBit DrivePool
        >dpcmd check-pool-fileparts P:\

      - Perform a full duplication check and output all file details to a log file
        >dpcmd check-pool-fileparts P:\ 3 > Check-Pool-FileParts.log

      - Perform a full duplication check and just show a summary
        >dpcmd check-pool-fileparts P:\ 0

      - Perform a check on a specific directory and its sub-directories
        >dpcmd check-pool-fileparts P:\MyFolder

      - Perform a check on a specific directory and NOT its sub-directories
        >dpcmd check-pool-fileparts "P:\MyFolder\Specific Folder To Check" 2 false

      - Perform a check on one specific file
        >dpcmd check-pool-fileparts "P:\MyFolder\File To Check.exe"

    The above help text includes some concrete examples of how to use this command for various scenarios. To perform a basic check of an entire pool and get a summary back, you would simply type:
    dpcmd check-pool-fileparts P:\
     
    This will scan your entire pool and make sure that the correct number of file parts exists for each file. At the end of the scan, you will get a summary:
    Scanning...

    ! Error: Can't get duplication information for '\\?\p:\System Volume Information\storageconfiguration.xml'. Access is denied

    Summary:

      Directories: 3,758
      Files: 47,507   3.71 TB (4,077,933,565,417 B)
      File parts: 48,240   3.83 TB (4,214,331,221,046 B)

      * Inconsistent directories: 0
      * Inconsistent files: 0
      * Missing file parts: 0   0 B (0 B)

      ! Error reading directories: 0
      ! Error reading files: 1

    Any inconsistent files will be reported here, and any scan errors will be as well. For example, in this case I can't scan the System Volume Information folder because, as an Administrator, I don't have the proper access to do that (LOCAL SYSTEM does).
     
    Another great use for this command is something that has been requested often: the ability to generate audit logs. People want to be absolutely sure that each file on their pool is properly duplicated, and they want to know exactly where it's stored. This is where the maximum detail level of this command comes in handy:
    dpcmd check-pool-fileparts P:\ 4
     
    This will show you how many copies of each file are stored on your pool, and where they're stored.
     
    The output looks something like this:
    Detail level: File Parts

    Listing types:

      +  Directory
      -  File
      -> File part
      *  Inconsistent duplication
      !  Error

    Listing format:

      [{0}/{1} IM] {2}

      {0} - The number of file parts that were found for this file / directory.
      {1} - The expected duplication count for this file / directory.
      I   - This directory is inheriting its duplication count from its parent.
      M   - At least one sub-directory may have a different duplication count.
      {2} - The name and size of this file / directory.

    ...

    + [3x/2x] p:\Media
      -> \Device\HarddiskVolume2\PoolPart.5823dcd3-485d-47bf-8cfa-4bc09ffca40e\Media [Device 0]
      -> \Device\HarddiskVolume3\PoolPart.6a76681a-3600-4af1-b877-a31815b868c8\Media [Device 0]
      -> \Device\HarddiskVolume8\PoolPart.d1033a47-69ef-453a-9fb4-337ec00b1451\Media [Device 2]

    - [2x/2x] p:\Media\commandN Episode 123.mov (80.3 MB - 84,178,119 B)
      -> \Device\HarddiskVolume2\PoolPart.5823dcd3-485d-47bf-8cfa-4bc09ffca40e\Media\commandN Episode 123.mov [Device 0]
      -> \Device\HarddiskVolume8\PoolPart.d1033a47-69ef-453a-9fb4-337ec00b1451\Media\commandN Episode 123.mov [Device 2]

    - [2x/2x] p:\Media\commandN Episode 124.mov (80.3 MB - 84,178,119 B)
      -> \Device\HarddiskVolume2\PoolPart.5823dcd3-485d-47bf-8cfa-4bc09ffca40e\Media\commandN Episode 124.mov [Device 0]
      -> \Device\HarddiskVolume8\PoolPart.d1033a47-69ef-453a-9fb4-337ec00b1451\Media\commandN Episode 124.mov [Device 2]

    ...

    The listing format and listing types are explained at the top, and then a record like the above is generated for each folder and file on the pool.
     
    Of course, like any command output, it can always be piped into a log file, like so:
    dpcmd check-pool-fileparts P:\ 4 > check-pool-fileparts.log
     
    I'm sure that with a bit of scripting, people will be able to generate daily audit logs of their pool.
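    For example, something along these lines could be run daily from Windows' Task Scheduler. This is just a sketch in Python; the log folder and pool drive letter are placeholders, and it assumes dpcmd is on the PATH:

    import datetime
    import subprocess
    from pathlib import Path

    LOG_DIR = Path(r"C:\PoolAudits")   # placeholder folder for the audit logs
    POOL = "P:\\"                      # your pool's drive letter

    LOG_DIR.mkdir(exist_ok=True)
    stamp = datetime.date.today().isoformat()
    log_file = LOG_DIR / f"check-pool-fileparts-{stamp}.log"

    # Detail level 4 records every file part and exactly where it's stored.
    with open(log_file, "w") as log:
        subprocess.run(["dpcmd", "check-pool-fileparts", POOL, "4"],
                       stdout=log, stderr=subprocess.STDOUT)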
     
    Now, this is essentially the first version of this command, so if you have any ideas on how to improve it, please let us know.
     
    Also, check out set-duplication-recursive. It lets you set the duplication count on multiple folders at once using a file pattern rule (or a regular expression). It's pretty cool.
     
    That's all for now.
  8. Like
    CosmicPuppy reacted to Christopher (Drashna) in Keeping a record of what's on each disk?   
    In the newest beta builds, we include command-line auditing tools, so you could set up a script to dump the information on a schedule (using Windows' Task Scheduler).
     
    It requires 2.2.0.659 or higher, though.
    http://dl.covecube.com/DrivePoolWindows/beta/download/
     
    And details about the new commands here:
    http://community.covecube.com/index.php?/topic/1587-check-pool-fileparts/
     
    Specifically, the "4" detail level includes EVERYTHING, including the location of the files. 
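    For example, to capture everything for the entire pool into a log file:

    dpcmd check-pool-fileparts P:\ 4 > check-pool-fileparts.log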
     
     
     
    This will actually work, if you do it for each pooled disk, and make sure that you can enumerate the hidden "PoolPart" folder on the drive. 
     
    But you'd need to do this for each and every drive.
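    A small script along these lines could automate that. This is only a sketch in Python; the drive letters are placeholders for your own pooled disks, and it assumes the standard hidden "PoolPart.*" folder names:

    import subprocess
    from pathlib import Path

    POOLED_DRIVES = ["D:\\", "E:\\", "F:\\"]   # placeholders: one per pooled disk

    for drive in POOLED_DRIVES:
        # Each pooled disk keeps its slice of the pool in a hidden PoolPart.* folder.
        for part in Path(drive).glob("PoolPart.*"):
            out_file = f"{drive[0]}-drive-list.txt"   # e.g. D-drive-list.txt
            with open(out_file, "w") as out:
                # "tree <path> /f" lists every file under the PoolPart folder.
                subprocess.run(["tree", str(part), "/f"], stdout=out)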
  9. Like
    CosmicPuppy reacted to DriveDuck in Keeping a record of what's on each disk?   
    On Windows systems, how about the good old-fashioned tree command? "tree /f > list.txt" from the movie root should do it.