Posts posted by Christopher (Drashna)

  1. Yup.  If you mark the bad blocks as unchecked, it will attempt to scan those blocks at the next available time (typically immediately).  If the rest of the disk is marked as healthy, it will only scan those blocks, since it only scans sections of the disk that are unchecked or that haven't been checked within the normal scan window (30 days).

     

  2. The problem is that it can cause issues, even when transcoding.  And Windows is notoriously sensitive to I/O issues.

     That said, the SSD may not mark the cells as unusable until data is written to those specific cells.  The only foolproof way to do that is a full format of the drive.  I've had a full format fix the issue on a number of disks, at least for a while.  So that may help here.

     But if the drive is still under warranty, you may be able to get it RMAed.  That way, it's not just being tossed.

  3. Also, when removing the drive, it adds a tag to the PoolPart folder that says "this folder/drive is removed".  (The dpcmd tool can also do this, but it does so without moving the data off first.)

     And yeah, what Shane said about the folder: it sounds like the folder entry may have gotten corrupted.  This can cause issues, up to and including the "cannot be added twice" type of thing.

     That said, if the data recovery software is able to get the files, those should be usable, in theory.  StableBit DrivePool stores whole files on each disk, so provided the recovered data isn't corrupted, the files should be fine.

  4. Can confirm that those should be the correct files. 

     There should be two groups, and the "Item" section has the location bay and case information.  This should match what is in StableBit Scanner, but not every entry will have this information.

      "Item": {
        "$type": "ScannerServiceLib.Settings.Disk, Scanner.ServiceLib",
        "Version": 9,
        "DeviceBlockVersion": 2,
        "TemperatureOverrideValue": null,
        "ChangeId": "----",
        "SmartWarningIgnores": null,
        "NameAlias": null,
        "NoSmart": false,
        "NoDirectIo": false,
        "NeverAutoScanSurface": false,
        "RecheckInterval": null,
        "NeverAutoScanFileSystem": false,
        "TemperatureOverrideC": null,
        "QueryPowerModeWithDirectIo": false,
        "LocationCase": "Column 2",
        "LocationBay": "Bay 1"
    

     

  5. It depends on how the API handles it, TBH.  As long as the data is accessible, that's the important part.  However, I do believe that the account has to be writable, as some data is written to the provider when the drive is mounted (at the very least, for the "attach"/lock file for the drive).  But as Shane mentioned, there was a fix related to this in the beta version.

     However, we've added Google Drive to the converter tool, so that you can download the contents and convert to the local disk provider.  This is also in the beta version.

    Specifically, you'd need to run "CloudDrive.Convert GoogleDriveToLocalDisk" to kick this off.

     

     

  6. There are a number of options that can cause the scans to stop.  Activity on the disks will throttle or stop the scans, the time-of-day scan window may, and so on.  You can see these settings here: 
    https://stablebit.com/Support/Scanner/2.X/Manual?Section=General

     That said, the percentage is a bit misleading.  StableBit Scanner uses a sector map for the scan, so each region is tracked independently.  The percentage is based on what is left to be scanned, *not* the entire disk's progress.  For example, if only a quarter of the disk still needs checking, the bar reflects progress through that remaining quarter, not through the whole disk.  You can see that represented here: 
    https://stablebit.com/Support/Scanner/2.X/Manual?Section=Disk Scanning Panel#Sector Map
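
     Purely as an illustration of the idea (this is a hypothetical sketch, not StableBit Scanner's actual data format): each region carries its own status and last-checked time, and only the regions still due for scanning count toward that percentage.

      {
        "_note": "Hypothetical example only, not Scanner's actual format",
        "Regions": [
          { "Start": 0,    "Status": "Healthy",   "LastScanned": "2023-06-01T02:10:00Z" },
          { "Start": 1024, "Status": "Unchecked", "LastScanned": null },
          { "Start": 2048, "Status": "Unchecked", "LastScanned": null }
        ]
      }

     With two of the three regions unchecked here, the percentage shown would track progress through just those two regions.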

     

  7. On 5/31/2023 at 6:19 PM, rogue_9 said:

    To make sure I understand, I can just add my current drive pool to StableBit, and will that let me push the capped capacity I'm stuck at to the actual capacity, or will I still be stuck at the capped 63 TB capacity because it is a Windows-based storage pool?

     The limit is actually based on the allocation unit size (aka cluster size); the normal 4 KB cluster size has that limit.  So each volume will still have this limit, regardless of whether it is in a pool or not.  However, the pool itself is an emulated drive and has no clusters/blocks, so technically it's limited only by what Windows will report at that point.

     Changing the allocation unit size requires reformatting the volume, so you can't do it "on the fly".

     Another fun caveat is that VSS snapshots aren't supported on NTFS volumes over 64 TB, which means no CHKDSK pass on volumes larger than that, either.
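
     For a rough sense of the numbers (based on NTFS's limit of roughly 2^32 clusters per volume, so the maximum volume size scales with the allocation unit size):

        4 KB clusters   ->  ~16 TB max NTFS volume
        16 KB clusters  ->  ~64 TB max NTFS volume
        64 KB clusters  -> ~256 TB max NTFS volume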

     

     And Shane's comments are spot on.  As usual.

  8. Just a heads up: duplication essentially copies files from one of the drives to another drive.  The speed depends on a number of factors, such as the size of the files being moved (many small files will take longer than a few large files, for the same amount of data).  Pool usage will also impact this, since the duplication pass runs at background I/O priority.

     You can temporarily boost the priority by clicking the >> button next to the progress bar.  There is also an advanced setting to boost the priority permanently.

    https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings

    The "FileDuplication_BackgroundIO" setting controls this. 
