Posts posted by Christopher (Drashna)

  1. 3 hours ago, kitonne said:

    1/  Is there any way to get data copy #1 or #2 or #3 from DrivePool, instead of a random copy, so I can implement external data integrity checks for the data stored in a DP (3 copies in this example)?

    Not directly.  However, the dpcmd tool does have options that will list the full location of each copy of a file.  Specifically, "dpcmd get-duplication (path-to-file)".  For example:
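    A minimal sketch of calling that from a script, assuming dpcmd.exe is on the PATH and that you pass a file's path inside the pool; the raw output is just printed, since its exact format isn't shown here:

    ```python
    # Sketch: ask dpcmd where DrivePool has placed the copies of a file.
    # Assumes dpcmd.exe is on the PATH; the output is returned unparsed.
    import subprocess

    def get_duplication_info(pool_file_path: str) -> str:
        """Return dpcmd's duplication report for the given file in the pool."""
        result = subprocess.run(
            ["dpcmd", "get-duplication", pool_file_path],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        # Hypothetical pool drive letter and file, for illustration only.
        print(get_duplication_info(r"P:\some\file.bin"))
    ```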

    3 hours ago, kitonne said:

    2/  Are there any plans for internal data integrity checks between the multiple copies in a pool (binary compare, MD5, CRC64, whatever)?

    In some cases, it does run a CRC comparison.  But adding more checks at runtime is problematic, since the driver runs in the kernel, so any operations need to be very quick.  Computing hashes of files is inherently very expensive.

    If this sort of functionality is added, it won't be directly part of StableBit DrivePool, for this reason and others.  In the meantime, an external check along the lines of the sketch below is an option.
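    As a purely external workaround (not a DrivePool feature), you could hash each on-disk copy yourself and compare the digests.  This sketch assumes the pooled drives are mounted at the listed roots and that the copies live inside the hidden PoolPart.* folders on each drive; adjust the roots and paths for your own setup:

    ```python
    # External integrity check sketch: hash every found copy of a file and
    # confirm the digests match. The drive roots and the PoolPart.* layout
    # are assumptions about the local setup, not a documented API.
    import glob
    import hashlib
    import os

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        return h.hexdigest()

    def copies_match(drive_roots: list[str], relative_path: str) -> bool:
        """True if all found copies of relative_path have identical hashes."""
        digests = []
        for root in drive_roots:
            for poolpart in glob.glob(os.path.join(root, "PoolPart.*")):
                candidate = os.path.join(poolpart, relative_path)
                if os.path.isfile(candidate):
                    digests.append(sha256_of(candidate))
        return len(set(digests)) <= 1

    if __name__ == "__main__":
        ok = copies_match(["D:\\", "E:\\", "F:\\"], "some\\file.bin")
        print("copies match" if ok else "MISMATCH detected")
    ```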

    3 hours ago, kitonne said:

    3/  I did not find a clean way to replace a bad disk - I was expecting a "swap the disk, and run a duplicate_check" to make sure the files in the pool get the specified number of copies.  It looks like once a disk is removed, the rest is read only and you have to jump through hoops to restore r/w functionality.  Is there a way to just remove a bad drive from the pool?  Adding a new one is easy.....

    Checking both the "Force damaged disk removal" and "duplicate data later" options should make the removal happen much faster.  But it will still move data off the drive, if needed.

    Otherwise, data would be left on the disk, and if it's not duplicated data, it would be lost.

    That said, you can immediately eject the disk from the pool using the dpcmd tool.  However, this does not move ANY data from the drive, so doing so will require manual intervention.  Also, the disk still needs to be writable (it basically writes a "not part of a pool" tag to the PoolPart folder on the disk).

    3 hours ago, kitonne said:

    4/  For 3 times file redundancy, using 5 physical disks (same size), how many disks may fail before I lose data?  In other words, is there a risk of having 2 out of 3 copies on the same disk?

    2 disks.  That is, X-1 disks, where X is the duplication level.  So you can lose a number of disks equal to one less than the level of duplication.

    (Also note that no duplication is basically a duplication level of 1, so it can tolerate 0 disks failing.)

    And StableBit DrivePool is aware of partitions, and will actively avoid putting copies of a file on the same physical disk.  This is part of why we don't support dynamic disks, actually (checking this becomes immensely more complicated with dynamic disks, and a lot more costly, since this is also done in the kernel driver).

    Also, even if you lose "too many disks", the rest of the pool will continue to work, with the data that is on the remaining disks. 
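    A quick way to sanity-check that rule (an illustration, not DrivePool code), assuming each copy of a file really does land on a different physical disk:

    ```python
    # Illustration of the X-1 rule: with duplication level X and each copy on a
    # different disk, up to X - 1 disks can fail before a file can lose all copies.
    def tolerable_failures(duplication_level: int) -> int:
        """Number of disk failures that cannot wipe out every copy of a file."""
        return max(duplication_level - 1, 0)

    # 3x duplication on 5 disks: any 2 disks can fail and one copy still survives.
    assert tolerable_failures(3) == 2
    # No duplication is effectively a duplication level of 1: no failures tolerated.
    assert tolerable_failures(1) == 0
    ```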

     

    That may not be what is going on, actually.  The percentage of the scan isn't based on the whole disk, but on what is left to be scanned.  And since it tracks sections of the disk, it may effectively just be continuing to scan the drive.

    You can see the status in the sector map for the drive, and this may provide a more accurate picture of what is going on: 
    https://stablebit.com/Support/Scanner/2.X/Manual?Section=Disk Scanning Panel

    Otherwise, could you open a ticket at https://stablebit.com/contact

     

  3. Yes and no.

    Specifically, by default, StableBit Scanner will only scan one drive per controller.  And in fact, you have to get into the advanced configuration to increase that.

    So if you're seeing multiple drives being scanned at once, it's likely because they are connected to different controllers (you can verify this by selecting the "group by controllers" option in the UI).

    Yeah, it's a known issue.  It has to do with how the "on demand" feature works, and that it's not supported on the emulated drive that StableBit CloudDrive creates.

    Also, it's not just a sparse file; it's a special type of file that requires file system support.  And because we use an emulated drive, we'd have to reverse engineer that and add support for it.

    A possible solution is to create a StableBit CloudDrive disk on the pool and use that. But I can understand not wanting to do that. 

    For the surface scan, StableBit Scanner doesn't do anything to fix/correct the unreadable sectors.  You can clear the status, but on the next scan, they will likely come back.

    That said, the only way to permanently clear the status is to write to the affected sectors.  StableBit Scanner doesn't do this, as doing so would prevent the ability to recover the data from the disk.

    However, the simplest (but definitely not "best") way to clear the status is to do a full format pass of the drive.  This writes zeros to the entire drive, and may correct the issue.  If this doesn't work, then you may want to consider replacing the drive (RMA it if it's under warranty).
