Posts posted by Christopher (Drashna)

  1. The main reason there is no VSS support for the pool is that there is zero documentation on how to implement it on the file system side (there is plenty on the API side, though). So, at best, we'd have to completely reverse engineer it, do a bunch of testing of that code (stress testing, compatibility testing, consistency/integrity testing), and hope it works right.

    While it's something I'd love to see, the main issue is resources. We're a very, very small company and simply don't have the resources to do this.

    As Shane mentions, there are other approaches that you could take to accomplish this. One that isn't mentioned: you could use the Local Disk provider from StableBit CloudDrive to create a drive on the pool. That drive would be VSS compatible, since StableBit CloudDrive isn't emulating the file system, it's emulating the raw data, and Windows handles the file system and the VSS implementation.

    However, backing up the individual pool drives, or using a file-based solution, is going to be much simpler and less fragile (less complexity generally means less fragile).

  2. Just to be clear, StableBit Scanner actively avoids writing to the drives, except in very specific cases.

    As mentioned, there are some other utilities that can be used to accomplish this.

    However, I personally use a full (non-quick) format of the drives. Also, diskpart's "clean all" command will write over every sector on the drive, the same as a full format, but it also wipes the partition info.
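
    For reference, a typical diskpart session for that looks like the below. The disk number is only an example, and "clean all" is destructive, so verify the target against the "list disk" output first:

        diskpart
        DISKPART> list disk
        DISKPART> select disk 2
        DISKPART> clean all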

  3. That's very odd, and I can't reproduce it locally.

    Most likely, this is an issue with the drivers for the drives/enclosure, with HWiNFO causing the driver/controller to crash.

    I'm guessing that this happens regardless of whether StableBit DrivePool is installed?

    If so, then you may want to check for updated drivers and/or firmware for the system, and see if that helps.

  4. Alex has said that he plans on posting an announcement about this on the forums, and it may be best to wait for that.

    That said, the writing has been on the wall for a while: Google Drive has always had a 750GB per-account, per-day upload limit (which is pretty low), odd issues pop up with it regularly, and Google has recently limited accounts to 1TB (or 5TB) of data, locking the account (i.e., stopping uploads) if it exceeds that.

  5. I'll post it here too.

    There is a fix in the latest betas involving memory corruption of file IDs.

    However, ... the issue may also be the wrong API being used:

    Quote

    ... incorrectly using File IDs as persistent file identifiers, which they should not be. File IDs in Windows can change from time to time on some filesystems.

    Source: https://learn.microsoft.com/en-us/windows/win32/api/fileapi/ns-fileapi-by_handle_file_information

    The identifier that is stored in the nFileIndexHigh and nFileIndexLow members is called the file ID. Support for file IDs is file system-specific. File IDs are not guaranteed to be unique over time, because file systems are free to reuse them. In some cases, the file ID for a file can change over time.

    If this is the case, then it is expected behavior.

    The correct API for getting a persistent file identifier is FSCTL_CREATE_OR_GET_OBJECT_ID or FSCTL_GET_OBJECT_ID: https://learn.microsoft.com/en-us/windows/win32/api/winioctl/ni-winioctl-fsctl_create_or_get_object_id

    Object IDs are persistent and do not change over time.

    We support both Object IDs and File IDs.
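
    If it helps to make the difference concrete, here is a minimal Win32 sketch (C++) contrasting the two: GetFileInformationByHandle returns the volatile file ID, while DeviceIoControl with FSCTL_CREATE_OR_GET_OBJECT_ID returns (or assigns) the persistent object ID. The path is just a placeholder; this isn't code from any of our products, only an illustration of the two APIs.

        // Contrast a volatile file ID with a persistent object ID.
        #include <windows.h>
        #include <winioctl.h>
        #include <stdio.h>

        int main(void)
        {
            // Placeholder path; substitute a real file on an NTFS volume.
            // Write-attribute access is requested because the FSCTL below
            // may assign a new object ID to the file.
            HANDLE h = CreateFileW(L"C:\\example.txt",
                                   FILE_READ_ATTRIBUTES | FILE_WRITE_ATTRIBUTES,
                                   FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                   OPEN_EXISTING, 0, NULL);
            if (h == INVALID_HANDLE_VALUE) return 1;

            // File ID: file systems are free to reuse these, and they
            // can change over time.
            BY_HANDLE_FILE_INFORMATION info;
            if (GetFileInformationByHandle(h, &info))
                printf("File ID:   %08lx%08lx\n",
                       info.nFileIndexHigh, info.nFileIndexLow);

            // Object ID: persistent on NTFS; created here if the file has
            // none yet. Use FSCTL_GET_OBJECT_ID to only read an existing one.
            FILE_OBJECTID_BUFFER objId;
            DWORD bytes = 0;
            if (DeviceIoControl(h, FSCTL_CREATE_OR_GET_OBJECT_ID, NULL, 0,
                                &objId, sizeof(objId), &bytes, NULL))
            {
                printf("Object ID:");
                for (int i = 0; i < 16; i++) printf(" %02x", objId.ObjectId[i]);
                printf("\n");
            }

            CloseHandle(h);
            return 0;
        }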

  6. If you want to use the SSD Optimizer but still use the rest of the pool, the "simplest" option may be to use hierarchical pools: add the SSD/NVMe drives to one pool, add the hard drives to another pool, and then add both of these pools to a top-level pool. Enable the SSD Optimizer on that "pool of pools", and enable whatever balancers you want on the sub-pools, as sketched below.
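
    As a rough sketch (the pool names here are just for illustration):

        Pool C  <- the pool you actually use; SSD Optimizer enabled here
        ├── Pool A: the SSD/NVMe drives (marked as "SSD" targets)
        └── Pool B: the hard drives (marked as "Archive" targets)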

  7. Sync.com cannot be added, as there is no publicly documented API. Without that API, and an official way to read and write files/data on the provider, there is no way to support it.

  8. There isn't a set amount of time, because tasks like balancing, duplication, etc. run at background priority. This means that normal usage will take precedence over these tasks.

    Additionally, it has the same issue as any normal file move/copy: estimates can jump around radically. A bunch of small files takes a lot more time than a few large files, because the file system is being updated much more frequently. And on hard drives, that means the read/write heads are jumping back and forth constantly.

    But 6-12 hours per TB is a decent estimate for removal.
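
    To put that in perspective: at that rate, emptying a mostly full 8TB drive would land somewhere around 48-96 hours, or roughly 2-4 days.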

  9. StableBit Scanner won't repair the drives. Specifically, it never writes to the drive (the exceptions being the settings store, and file recovery, if you run it).

    That said, it will rescan the drives and update the results.

    The important bit here is the long (full) format, though. That writes to the ENTIRE drive, and can/will cause the drive to reallocate or correct bad sections of the disk. As for the status not correcting right away: StableBit Scanner has to run its scan again, and unless you manually clear the status, it won't do this immediately, but will wait for the 30 days (or whatever the rescan interval is configured to).