Christopher (Drashna)

Administrators

  • Posts: 11700
  • Days Won: 388

Posts posted by Christopher (Drashna)

  1. Sorry, I missed this.  And it looks like you're talking about StableBit CloudDrive rather than StableBit Cloud. 

    1. For licenses for StableBit DrivePool, StableBit Scanner, and StableBit CloudDrive, each license is valid for only one system at a time.  You can have up to 8 licenses of each product per Activation ID; after that, you'd need additional Activation IDs (at least for the personal license).
    2. Basically, it's like a container.  You either encrypt it at the start or you don't; you can't change it after the drive has been created.  However, you could enable BitLocker on the drive (either as an alternative, or in addition).  So yes, you'd have to create a new drive and reupload.
    3. You would want to tweak the I/O performance settings for the drive.  More threads and/or a higher minimum download size can improve speeds. 
  2. Unfortunately, the only way to change the cluster size is to reformat the drive. 

    Also, the reason that it's not pinning the metadata is likely that these are dynamic disks, and the only way to change that is to delete the partitions and recreate them. 

    In both cases, it may be simpler to move the data off of the drive, destroy it and recreate it.  
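    If it helps, here's a minimal sketch (assuming a Windows system and Python) for checking what cluster/allocation unit size a volume is currently using, via the Win32 GetDiskFreeSpaceW call; the D: drive letter is just a placeholder for your own volume:

        # Report a volume's cluster (allocation unit) size on Windows.
        import ctypes

        def cluster_size(root: str) -> int:
            sectors_per_cluster = ctypes.c_ulong(0)
            bytes_per_sector = ctypes.c_ulong(0)
            free_clusters = ctypes.c_ulong(0)
            total_clusters = ctypes.c_ulong(0)
            ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
                ctypes.c_wchar_p(root),
                ctypes.byref(sectors_per_cluster),
                ctypes.byref(bytes_per_sector),
                ctypes.byref(free_clusters),
                ctypes.byref(total_clusters),
            )
            if not ok:
                raise ctypes.WinError()
            return sectors_per_cluster.value * bytes_per_sector.value

        print("Cluster size:", cluster_size("D:\\"), "bytes")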

     

  3. Unfortunately, no.  StableBit DrivePool doesn't keep a list of all the files on the pool and their locations. 

    However, in StableBit DrivePool's UI, the pool icon (top left corner) should have a list of feedback/notifications.  That should include messages like "Complete: Ensure file consistency on pool", if the duplication pass completed. 

  4. Open up the UI, and you should see a "missing disk" listed.  Remove the missing disk, and the pool should return to a normal state. 

  5. FWIW, this appears to be an issue with the file system API that Google Drive is using (e.g., they're using the wrong one).  We've tried reaching out to get this resolved, but have gotten absolutely no response (not even an acknowledgement that the message was received). 

  6. Definitely overthinking it.  Specifically, while StableBit DrivePool will rebalance data, most of the default-enabled balancers handle edge cases, so there should be very little balancing that occurs once the pool has "settled". 

    There is a brief summary of these balancers here: 
    https://stablebit.com/Support/DrivePool/2.X/Manual?Section=Balancing Plug-ins#Default Plug-ins

    But for ease of reference:

    • StableBit Scanner
      • This plug-in is designed to work in conjunction with StableBit Scanner version 2.2 and newer.  It performs automatic file evacuation from damaged drives, as well as temperature control.
    • Volume Equalization
      • This balancer is responsible for equalizing the disk space used on multiple volumes that reside on the same physical disk.  It has no user-configurable settings.
    • Disk Usage Limiter
      • This plug-in lets you designate which disks are allowed to store unduplicated vs. duplicated files. It doesn't do anything unless you change its settings to limit file placement on a disk.
    • Prevent Drive Overfill
      • This plug-in tries to keep an empty buffer of free space on each drive that is part of the pool, in order to facilitate expansion of existing files.
    • Duplication Space Optimizer
      • This plug-in examines the current data distribution on all of your pooled disks and decides if some data needs to be rebalanced in order to provide optimal disk space availability for duplicated files (see About Balancing for more information).

     

    The StableBit Scanner balancer may move stuff around a lot, but only if it detects issues with a drive.  And the Duplication Space Optimizer will try to rebalance the data to minimize the amount of "Unusable for duplication" space on the pool.  Aside from that, none of these should move data around much, normally. 
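    Just to illustrate the general idea behind something like the Prevent Drive Overfill balancer (this is only a rough sketch, not StableBit DrivePool's actual logic; the mount points and the 10% buffer are assumptions):

        # Flag drives whose free space has dropped below a target buffer.
        import shutil

        BUFFER_FRACTION = 0.10  # assumed target: keep at least 10% of each drive free

        def drives_below_buffer(mount_points):
            flagged = []
            for mp in mount_points:
                usage = shutil.disk_usage(mp)
                if usage.free / usage.total < BUFFER_FRACTION:
                    flagged.append(mp)
            return flagged

        # Hypothetical per-disk mount points under the pool:
        print(drives_below_buffer([r"C:\DrivePool\Disk1", r"C:\DrivePool\Disk2"]))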

  7. Sorry about that! We've been working on improving the contact site, but I'm not sure what happened here.  We are looking into it.

    It looks like it should be working fine now.  But just in case, stablebit.cloud has a separate support link that will still work, and I can be emailed directly at "christopher@covecube.com".

  8. Sorry about that! We've been working on improving the contact site, but I'm not sure what happened.   

    It looks like it should be working fine now.  But just in case, stablebit.cloud has a separate support link that will still work, and I can be emailed directly at "christopher@covecube.com".

  9. Especially if this is not an NVMe drive, SSDs use all sorts of different values for SMART.  What is valid and okay for one drive may be out of spec on another drive.  And that's assuming the OEM isn't using some sort of encryption/obfuscation for the numbers, which is super common. 

    NVMe has an actual, published standard, and is generally better about this (though, we've seen a few instances of issues with this). 
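    If you're curious what your drive actually reports, one way to look at the raw values is smartmontools (7.0 or newer for JSON output); a rough sketch, where the device name is just an assumption that will differ per system:

        # Dump the raw health/SMART data a drive reports, via smartctl.
        import json
        import subprocess

        def smart_data(device="/dev/nvme0"):  # device name is system-specific
            out = subprocess.run(["smartctl", "--json", "-a", device],
                                 capture_output=True, text=True).stdout
            data = json.loads(out)
            # NVMe drives expose a standardized health log; SATA SSDs expose
            # vendor-defined attribute tables instead.
            return (data.get("nvme_smart_health_information_log")
                    or data.get("ata_smart_attributes"))

        print(smart_data())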

  10. StableBit Scanner throttles the scans when there is activity, so that it does not interfere with normal usage.  That may be why you're seeing the reduced load. 

    Also, by default it only scans one drive per controller at a time, so this may prevent it from scanning very many drives. 

    Both of these can be configured: the throttling options are in the Scanner settings, while the concurrency limit is in the advanced settings. 

     

    As for the crashes, depending on the specifics, these may be harmless.  Otherwise, head to https://stablebit.com/contact/

  11. I highly recommend against ReFS.  NTFS works fine, is well supported, and doesn't suffer from some of the issues that ReFS does. 

    For instance, ReFS may not work at all on external drives.  And by "external", I mean any drive that Windows thinks is external, which can include internal controller cards.  This is something I found out last year, when my server's boot disk decided to become unbootable and I upgraded the OS. 

    Additionally, integrity checking on ReFS is only enabled for file system metadata by default; it is not normally enabled for all of the files.  And if something goes wrong, there is no easy way to recover from file system errors. 
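    If you want to verify whether integrity streams are enabled for a particular file, one way is the Storage module's Get-FileIntegrity cmdlet; here's a rough sketch that calls it from Python (the file path is hypothetical, and it has to point at a ReFS volume):

        # Query ReFS integrity-stream status for a file by shelling out to PowerShell.
        import subprocess

        def file_integrity(path):
            command = ("Get-FileIntegrity -FileName '{0}' | "
                       "Format-List Enabled, Enforced").format(path)
            result = subprocess.run(
                ["powershell.exe", "-NoProfile", "-Command", command],
                capture_output=True, text=True)
            return result.stdout

        print(file_integrity(r"R:\Pool\example.bin"))  # hypothetical ReFS path
        # Enabling it for that file would be:  Set-FileIntegrity -FileName <path> -Enable $true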

    So, most of the advantages that ReFS has are outweighed by its issues.  To the point that I have converted my entire pool from ReFS to NTFS.  Just ... no desire to deal with those issues again. 
