Posts posted by Christopher (Drashna)

  1. Alex has said that he plans on posting an announcement about this on the forums, and it may be best to wait for that.

    That said, between the fact that Google Drive has always had a 750 GB per-account, per-day upload limit (which is pretty low), some of the odd issues that pop up with it, and the fact that they've recently limited accounts to 1 TB (or 5 TB) of data and lock the account if that is exceeded (i.e., uploads stop), the writing has been on the wall for a while.
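    To put those caps in perspective, here is a quick back-of-the-envelope sketch in Python (the 750 GB/day figure is the limit mentioned above; the function name is my own):

```python
import math

def days_to_upload(total_gb: float, daily_cap_gb: float = 750.0) -> int:
    """Minimum whole days needed to push `total_gb` through a per-day upload cap."""
    return math.ceil(total_gb / daily_cap_gb)

# E.g., filling a 5 TB (5000 GB) allowance takes at least a week of saturated uploads:
print(days_to_upload(5000))  # -> 7
```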

  2. I'll post it here too.  

    There is a fix in the latest betas involving memory corruption of file IDs.

    However, ... the issue may also be that the wrong API is being used:
     

    Quote

    ... incorrectly using File IDs as persistent file identifiers, which they should not be. File IDs in Windows can change from time to time on some filesystems.

    Source: https://learn.microsoft.com/en-us/windows/win32/api/fileapi/ns-fileapi-by_handle_file_information

    The identifier that is stored in the nFileIndexHigh and nFileIndexLow members is called the file ID. Support for file IDs is file system-specific. File IDs are not guaranteed to be unique over time, because file systems are free to reuse them. In some cases, the file ID for a file can change over time.

    If this is the case, then it is expected behavior.

    The correct API to use to get a persistent file identifier is FSCTL_CREATE_OR_GET_OBJECT_ID or FSCTL_GET_OBJECT_ID: https://learn.microsoft.com/en-us/windows/win32/api/winioctl/ni-winioctl-fsctl_create_or_get_object_id

    Object IDs are persistent and do not change over time.

    We support both Object IDs and File IDs.
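    The distinction above is easy to see from Python: `os.stat().st_ino` surfaces the file ID on NTFS (the inode number on Unix). As the quoted documentation says, it is stable while the file exists but may be reused after deletion, which is why a persistent identifier needs the object-ID ioctls instead. A minimal sketch:

```python
import os
import tempfile

# st_ino is the file ID on Windows/NTFS and the inode number on Unix. Either may
# be handed out again by the file system after the file is deleted, which is why
# it is unsafe as a *persistent* identifier.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

id_first = os.stat(path).st_ino
id_again = os.stat(path).st_ino
assert id_first == id_again  # stable while the file exists...

os.unlink(path)  # ...but after deletion, the same ID may be reused for a new file
```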

     

  3. If you want to use the SSD Optimizer and still use the rest of the pool, the "simplest" option may be hierarchical pools.  E.g., add the SSD/NVMe drives to one pool, add the hard drives to another pool, and then add both of those pools to a top-level pool.  Enable the SSD Optimizer on the "pool of pools", and enable whichever balancers you want on the sub-pools.
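    As a sketch of that layout (these names and structures are purely illustrative; DrivePool exposes no such API, this just models the hierarchy being described):

```python
from dataclasses import dataclass, field

@dataclass
class Pool:
    name: str
    members: list = field(default_factory=list)   # drives, or nested sub-pools
    balancers: list = field(default_factory=list)

ssd_pool = Pool("SSDs", members=["nvme0", "nvme1"])
hdd_pool = Pool("HDDs", members=["hd0", "hd1", "hd2"])

# The top-level "pool of pools" contains the two sub-pools, and only it runs the
# SSD Optimizer; each sub-pool runs its own balancers independently.
top = Pool("PoolOfPools", members=[ssd_pool, hdd_pool], balancers=["SSD Optimizer"])
hdd_pool.balancers.append("Duplication Space Optimizer")

assert all(isinstance(m, Pool) for m in top.members)
```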

  4. Sync.com cannot be added, as there is no publicly documented API.  Without that API, and an official way to read and write files/data on the provider, there is no way to support it.

  5. There isn't a set amount of time, because tasks like balancing, duplication, etc. run at background priority.  This means that normal usage will take precedence over these tasks.
    Additionally, it has the usual file move/copy issue: estimates can jump radically.  A bunch of small files takes a lot more time than a few large files, because the file system is being updated much more frequently.  And on hard drives, that means the read/write heads are jumping back and forth, frequently.

    But 6-12 hours per TB is a decent estimate for removal. 
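    That rule of thumb is easy to apply. A tiny helper (my own naming, just restating the 6-12 hours/TB figure above):

```python
def removal_estimate_hours(terabytes: float) -> tuple:
    """Low/high estimate for drive removal, at 6-12 hours per TB of data to move."""
    return (6 * terabytes, 12 * terabytes)

low, high = removal_estimate_hours(4)   # removing a drive holding 4 TB
print(f"{low:.0f}-{high:.0f} hours")    # -> 24-48 hours
```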

  6. StableBit Scanner won't repair the drives.  That is, it never writes to the drive (the exceptions being the settings store, and file recovery, if you run it).

    That said, it will rescan the drives, and update the results.   

    The important bit here is the long format, though.  That writes to the ENTIRE drive, and can/will cause the drive to reallocate or correct bad sections of the disk.  As for it not correcting right away: the scan has to run again, and unless you manually cleared the status, it won't do that immediately, but will wait for the 30 days (or whatever interval it is configured to use).

     

  7. 5 hours ago, Gabe said:

    Just to clarify based on the second half of your post... Am I correct in interpreting that if I set up any drive (cloud, local, NAS share, etc) in CloudDrive, and I add that to a pool of only CloudDrives in DrivePool (X:), hardlinking could theoretically work on X:? And if that's the case, would there be issues in terms of balancing? 

    Thanks!

    No.  Hardlinking doesn't work on the pool drive at all, and never will.  Hard links are an object/feature of the volume, not the disk, and require that all instances of the file be on the same *physical* volume.

    They do work on StableBit CloudDrive, because it doesn't emulate the file system the way that StableBit DrivePool does.  It handles things at the block level (below the file system, basically), and never directly deals with the file system.  Because of this, just about anything you can do on a normal disk, you can do on a StableBit CloudDrive disk.

    But if they're pooled, then the pool's limitations still apply (at least to the pool drive).
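    The "same volume" requirement is easy to demonstrate: a hard link is just a second directory entry pointing at the same underlying file record (same inode / file ID), which only makes sense within one file system. A quick Python check:

```python
import os
import tempfile

# Hard links only exist within one volume/file system: linking creates a second
# directory entry for the same file record, not a copy of the data.
d = tempfile.mkdtemp()
src = os.path.join(d, "a.txt")
dst = os.path.join(d, "b.txt")
with open(src, "w") as f:
    f.write("data")

os.link(src, dst)  # would raise OSError (EXDEV) if dst were on a different volume

s1, s2 = os.stat(src), os.stat(dst)
assert s1.st_ino == s2.st_ino   # both names resolve to the same file record
assert s1.st_nlink == 2         # the file now has two directory entries
```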

  8. Correct.  Duplication is inherited unless explicitly set.  Enabling pool file duplication enables it for the root, and everything else inherits it.  And when you change it, it checks to see which files need to be duplicated or unduplicated (the "Checking duplication" part that you may have seen).  So it shouldn't mess with the existing data.
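    The "inherited unless explicitly set" lookup can be sketched as walking up the folder tree until an explicit setting is found (the folder names and the function are hypothetical, not DrivePool internals):

```python
import posixpath

# Folder -> duplication count; only folders with an *explicit* setting appear here.
# The root entry acts as the pool-wide default everything else inherits.
explicit = {"/": 2, "/media/temp": 1}

def duplication_for(folder: str) -> int:
    """Walk up from `folder` to the nearest ancestor with an explicit setting."""
    while True:
        if folder in explicit:
            return explicit[folder]
        if folder == "/":
            return 1  # no setting anywhere: unduplicated
        folder = posixpath.dirname(folder)

assert duplication_for("/media/temp/scratch") == 1  # explicitly set on a parent
assert duplication_for("/docs/work") == 2           # inherited from the root
```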

  9. On 8/31/2023 at 1:11 PM, beatle said:

    Maybe it works now, but my trial expired last week.  What a shame.

    As shane said, please contact us.

    Also, even with the trial expired, the software will continue to work, but uploads are heavily throttled, so you can get data off, but it's impractical to use.
