Posts posted by Christopher (Drashna)

  1. It contains internal data for the drive.  Primarily, information about reparse points on the pool (junctions, symlinks, etc.).

    If you have none of these (you can check by opening the folder), then it should be okay.  But if you do, then messing with the folder can and will break the reparse points.

  2. Unfortunately, there isn't an easy way to do this.  The "SSD Optimizer" balancer plugin won't accomplish this, since it moves files off the "ssd" drive as soon as possible.  And the other balancers don't check the file access/modify dates, either.

    This would have to be done manually, in some way (such as with file placement rules, and moving files around frequently).  A rough sketch of that approach is below.
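    For example, here's a minimal sketch of that manual approach, in Python.  The folder names and the age threshold are hypothetical: the idea is that you'd pin a "hot" folder to the fast drive and an "archive" folder to the other drives using file placement rules, and then periodically demote files that haven't been modified recently:

    import os
    import shutil
    import time

    # Hypothetical layout: adjust these to your own pool.
    HOT_DIR = r"P:\Hot"        # folder placed on the fast drive via file placement rules
    COLD_DIR = r"P:\Archive"   # folder placed on the archive drives
    AGE_DAYS = 30              # files untouched this long get demoted

    cutoff = time.time() - AGE_DAYS * 86400

    for root, _dirs, files in os.walk(HOT_DIR):
        for name in files:
            src = os.path.join(root, name)
            # Use the modification time; last-access timestamps are often
            # disabled or unreliable on NTFS.
            if os.path.getmtime(src) < cutoff:
                dst = os.path.join(COLD_DIR, os.path.relpath(src, HOT_DIR))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)  # placement rules handle the rest

    Run on a schedule (e.g., with Task Scheduler), this approximates the age-based tiering that the balancers don't do on their own.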

  3. On 11/21/2022 at 5:36 PM, greg43 said:

    I need to move my Scanner license from one computer to another. What's the procedure?

    Open the UI on the system that Scanner is installed on. Click on "Settings" and select "Scanner Settings". Open the "Licensing" tab, and click on the "Manage license" link.
    This will open a window that shows you the Activation ID, as well as a big button to "Deactivate" the license. Once you've done this, you can activate the license on a new system.

    If you don't have access to the old system, open a ticket at https://stablebit.com/Contact and we can see about resolving the issue.  And in this case, please activate the trial period if you haven't already, as the trial is fully featured and lasts for 30 days.
     

  4. Oh, definitely!  Balancing and duplication are already low priority tasks, to prevent performance impact.

    But the data scrubbing part, or even just reading the checksums on the files, is not low impact.  A good example of this is the pool measuring in StableBit DrivePool: you may notice that it HAMMERS the system when measuring the pool.  This is because it opens each file to read its attributes, which is very expensive in terms of system resources.  The more small files you have, the worse it is.  And that isn't even counting any sort of hashing, just opening the files.  (A rough illustration of the difference is below.)

     

    Also, an in-depth integrity checker is something I'd love to see, personally.
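    To put some numbers on that, here is a rough, hypothetical benchmark sketch in Python (assuming the pool is mounted at P:\).  It compares a metadata-only pass over a tree, which is roughly what pool measuring does, against a full hashing pass, which is what real integrity checking would add on top:

    import hashlib
    import os
    import time

    POOL = "P:\\"  # hypothetical pool root

    def stat_all(root):
        # Metadata-only pass: stat every file, read no contents.
        count = 0
        for r, _dirs, files in os.walk(root):
            for name in files:
                try:
                    os.stat(os.path.join(r, name))
                    count += 1
                except OSError:
                    pass
        return count

    def hash_all(root):
        # Full-content pass: read and hash every byte of every file.
        for r, _dirs, files in os.walk(root):
            for name in files:
                h = hashlib.sha256()
                try:
                    with open(os.path.join(r, name), "rb") as fh:
                        for chunk in iter(lambda: fh.read(1 << 20), b""):
                            h.update(chunk)
                except OSError:
                    pass

    t0 = time.time()
    n = stat_all(POOL)
    t1 = time.time()
    hash_all(POOL)
    t2 = time.time()
    print(f"{n} files: stat pass {t1 - t0:.1f}s, hash pass {t2 - t1:.1f}s")

    Even the stat-only pass gets slow with lots of small files; the hashing pass adds orders of magnitude more I/O on top of that.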

  5. There isn't a good answer here.  It's complicated, at best, and it is very hit or miss.

    And with USB, it is very much dependent on the chipset being used.  Also, if the drive is put into a RAID array, then SMART data won't be accessible by StableBit Scanner, but the drive should be usable by StableBit DrivePool in either case.

  6. On 1/8/2023 at 7:02 AM, Javen said:

    So with the SSD cache, new files are first written to the SSD but will be moved to the HDD later by the SSD Optimizer balancer? (so that read speed is not boosted)

    Correct.  The balancing system will move the files to the "archive" drives, based on the balancing settings. 

    On 1/8/2023 at 7:02 AM, Javen said:

    The thing is that I have lots of modern Windows apps (mostly Xbox app games downloaded from XGPU), WSL2-based Linux distros, and Docker Desktop. I learned here that the above usage is problematic. (However, I don't quite understand the limitation in WSL2, since after I put the WSL2 vhdx files on the pool and run them, I have faced no issues so far.)

    Putting the VHDX files on the pool should be fine, since a VHDX is a simple file.

    However, WSL and the like use special drivers on top of the file system, and may do "non-standard" things.  Unfortunately, a lot of these are not documented (or at least, not documented in a way that is useful for anyone implementing a file system or emulating one).  So, a lot of reverse engineering would be needed, at best.

     

  7. From what I've seen, that's fairly similar to what others have done. 

    As for a "better solution", that absolutely depends on what you're trying for.  If you want faster speeds, a block based RAID solution will absolutely be faster overall.  However, it can be a lot more fragile, unless you just have disks (and money) to throw at the problem. 

    But for use with StableBit DrivePool, that sounds reasonable.

    Personally, I like having an SSD for the pool and using the SSD Optimizer.  That way, you can use the Ordered File Placement option and fill the drives to capacity.  And if you only want to use one SSD but do have duplication, you can add the SSD and the pool to its own pool, use the SSD Optimizer there, and then just use the Ordered File Placement balancer on the "sub pool".  With PrimoCache, writes to the pool would still go to the SSD, so flushing the cache should be a lot faster, overall.

  8. 3 hours ago, kitonne said:

    1/  Is there any way to get data copy #1 or #2 or #3 from Drive Pool, instead of a random copy so I can implement external data integrity checks for the data stored in a DP (3 copies in this example)? 

    Not directly.  However, the dpcmd tool does have some options that will list the full location of each copy of a file.  Specifically, "dpcmd get-duplication (path-to-file)".  A rough sketch of an external check built on that is below.
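    As a sketch only (I'm assuming here that each copy's full path, inside its hidden PoolPart.* folder, appears on its own line of the dpcmd output; adjust the parsing to what dpcmd actually prints on your system), an external integrity check could hash each copy and compare:

    import hashlib
    import subprocess
    import sys

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    pool_file = sys.argv[1]  # a file on the pool drive
    out = subprocess.run(
        ["dpcmd", "get-duplication", pool_file],
        capture_output=True, text=True, check=True,
    ).stdout

    # Assumption: each physical copy's path appears on its own line
    # and contains the hidden PoolPart folder name.
    copies = [line.strip() for line in out.splitlines() if "PoolPart" in line]

    digests = {copy: sha256(copy) for copy in copies}
    if len(set(digests.values())) > 1:
        print("MISMATCH:", pool_file)
        for copy, digest in digests.items():
            print(" ", digest, copy)
    else:
        print(f"OK ({len(copies)} copies): {pool_file}")

    Wrapped in a loop over the pool's files, that would give you the external per-copy check you're describing.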

    3 hours ago, kitonne said:

    2/  Are there any plans for internal data integrity checks between the multiple copies in a pool (binary compare, MD5, CRC64, whatever)?

    In some cases, it does run a CRC comparison.  But adding more checks at runtime is problematic: since the driver runs in the kernel, any operations need to be very quick, and computing hashes of files is inherently very expensive.

    If this sort of functionality is added, it won't be directly part of StableBit DrivePool, for this reason and others.

    3 hours ago, kitonne said:

    3/  I did not find a clean way to replace a bad disk - I was expecting a "swap the disk, and run a duplicate_check" to make sure the files in the pool get the specified number of copies.  It looks like once a disk is removed, the rest is read-only and you have to jump through hoops to restore r/w functionality.  Is there a way to just remove a bad drive from the pool?  Adding a new one is easy.....

    Checking both the "Force damaged disk removal" and "Duplicate data later" options should make the removal happen much faster.  But it will still move data off the drive, if needed.

    Otherwise, data would be left on the disk, and if it's not duplicated data... 

    That said, you can immediately eject the disk from the pool using the dpcmd tool.  However, this does not move ANY data off the drive; doing so will require manual intervention.  Also, the disk still needs to be writable (the tool basically writes a "not part of a pool" tag to the PoolPart folder on the disk).

    3 hours ago, kitonne said:

    4/  For 3 times file redundancy, using 5 physical disks (same size), how many disks may fail before I lose data?  In other words, is there a risk of having 2 out of 3 copies on the same disk?

    2 disks.  That is, X-1 disks, where X is the duplication level: you can lose a number of disks equal to one less than the level of duplication.

    (Also note that no duplication is basically a duplication level of 1, so it can tolerate 0 disks failing.)

    And StableBit DrivePool is aware of partitions, and will actively avoid putting copies of a file on the same physical disk.  This is actually part of why we don't support dynamic disks: checking this becomes immensely more complicated with dynamic disks, and a lot more costly, since it is also done in the kernel driver.

    Also, even if you lose "too many" disks, the rest of the pool will continue to work with the data that is on the remaining disks.

     

  9. That may not be what is going on, actually.  The percentage shown for the scan isn't based on the whole disk, but on what is left to be scanned.  And since it tracks sections of the disk, it may effectively just be continuing to scan the drive.

    You can see the status in the sector map for the drive, and this may provide a more accurate picture of what is going on: 
    https://stablebit.com/Support/Scanner/2.X/Manual?Section=Disk Scanning Panel

    Otherwise, please open a ticket at https://stablebit.com/contact

     
