Posts posted by Christopher (Drashna)
-
-
The main reason there is no VSS support for the pool is that there is zero documentation on how to implement it on the file system side (there's plenty on the API side, though). So, at best, we'd have to completely reverse engineer it, do a bunch of testing of that code (stress testing, compatibility testing, consistency/integrity testing), and hope it works right.
While it's something I'd love to see, the main issue is resources. We're a very, very small company and simply don't have the resources to do this.
As Shane mentions, there are other approaches you could take to accomplish this. One that isn't mentioned, though, is that you could use the Local Disk provider from StableBit CloudDrive and create a drive on the pool. This drive would be VSS compatible, since StableBit CloudDrive isn't emulating the file system, it's emulating the raw data, and Windows handles the file system and the VSS implementation.
However, backing up the individual pool drives, or using a file based solution are going to be much simpler, and less fragile (less complexity generally means less fragile).
-
Just to be clear, StableBit Scanner actively avoids writing to the drives, except for in very specific cases.
As mentioned, there are some other utilities that can be used to accomplish this.
However, I personally use a full (non-quick) format of the drives. Also, diskpart's "clean all" command will clear the entire drive, writing over every sector (same as a full format, but it also overwrites the partition information).
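For reference, a "clean all" run looks something like this. Run it from an elevated Command Prompt, and double-check the disk number with "list disk" first; "disk 2" below is just an example, and this erases everything on that drive:

```shell
diskpart

REM inside the diskpart prompt:
list disk
select disk 2
clean all
exit
```

"clean all" can take many hours on a large drive, since it writes to every single sector.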
-
The red and orange arrows are Realtime Placement limiters. And yeah, there was a recent change that makes them show up when they wouldn't have before.
Eg, this should be fine. Especially as it more accurately reflects the status of the pool and balancers.
-
For now, see:
-
That's very odd. And I can't reproduce locally.
Most likely, this is an issue with the drivers for the drives/enclosure, and hwinfo causing it to crash the driver/controller.
I'm guessing that this happens regardless if StableBit DrivePool is installed or not?
If so, then you may want to check for updated drivers and/or firmware for the system, and see if that helps.
-
Alex has said that he plans on posting an announcement about this on the forums, and it may be best to wait for that.
That said, between the fact that Google Drive has always had a 750GB per account per day upload limit (which is pretty low), some of the odd issues that pop up with it, and that they've recently limited accounts to 1TB (or 5TB) of data and lock accounts that exceed it (eg, uploads stop), the writing has been on the wall for a while.
-
-
Also, make sure that you're up to date. There was a recent release that included some updates to the cloud integration code, which may affect this.
-
I'll post it here too.
There is a fix in the latest betas involving memory corruptions of file IDs.
However, ... the issue may also be the wrong API being used:
Quote: "... incorrectly using File IDs as persistent file identifiers, which they should not be. File IDs in Windows can change from time to time on some filesystems."
Source: https://learn.microsoft.com/en-us/windows/win32/api/fileapi/ns-fileapi-by_handle_file_information
The identifier that is stored in the nFileIndexHigh and nFileIndexLow members is called the file ID. Support for file IDs is file system-specific. File IDs are not guaranteed to be unique over time, because file systems are free to reuse them. In some cases, the file ID for a file can change over time.
If this is the case, then it is expected behavior.
The correct API to use to get a persistent file identifier is FSCTL_CREATE_OR_GET_OBJECT_ID or FSCTL_GET_OBJECT_ID: https://learn.microsoft.com/en-us/windows/win32/api/winioctl/ni-winioctl-fsctl_create_or_get_object_id
Object IDs are persistent and do not change over time.
We support both Object IDs and File IDs.
-
If you want to use the SSD Optimizer and use the rest of the pool, the "simplest" option may be to use hierarchical pools. Eg, add the SSD/NVMe drives to a pool, add the hard drives to another pool, and then add both of these pools to a pool. Enable the SSD optimizer on the "pool of pools", and then enable the balancers you want on the sub-pools.
-
Unfortunately, some enclosures do this. Fortunately, it's not very common, but it sucks when you find one that does.
-
I believe that you've opened a ticket for this already, and I've replied there.
-
Make sure that you're changing the "override" value, and not the default.
The default may get overwritten, but the override value shouldn't.
-
Well, the biggest issue is that the folders are likely not actually empty. There are some hidden files that are in use by StableBit DrivePool. For instance, we use Alternate Data Streams to store the duplication settings. These don't show up normally, even with the "show protected system files" option enabled.
-
Sync.com cannot be added, as there is no publicly documented API. Without that API, and a way to officially read and write files/data on the provider, there is no way to support it.
-
No, no changes have been made in this regard.
-
That's positively bizarre. Though, I can't really say that it's surprising, unfortunately.
-
You can use the "Quick settings" to reset these values.
And the different profiles do have slightly different settings.
As for the settings, and what they do, there is some basic information about that here:
https://stablebit.com/Support/Scanner/2.X/Manual?Section=Scanner
-
Setting the balancing ratio to 100% may help.
But to be blunt, the SSD Optimizer and the Disk Space Equalizer balancer are at odds. The Disk Space Equalizer wants to fill every drive, equally, but the SSD Optimizer wants to clear out several of the drives.
-
There isn't a set amount of time, because tasks like balancing, duplication, etc run as a background priority. This means that normal usage will trump these tasks.
Additionally, it has the usual file move/copy issue: estimates can jump around radically. A bunch of small files takes a lot more time than a few large files, because the file system is being updated much more frequently. And for hard drives, this means that the read/write heads are jumping back and forth frequently. But 6-12 hours per TB is a decent estimate for removal.
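As a back-of-the-envelope helper using that 6-12 hours per TB rule of thumb (nothing more scientific than that):

```python
def removal_estimate_hours(terabytes: float,
                           hours_per_tb_low: float = 6.0,
                           hours_per_tb_high: float = 12.0) -> tuple:
    """Rough lower/upper bound for drive-removal time on a pool.

    Uses the 6-12 hours per TB rule of thumb; real times vary wildly
    with file sizes (many small files are much slower) and other load.
    """
    return (terabytes * hours_per_tb_low, terabytes * hours_per_tb_high)

# Removing a 4 TB drive: roughly 24 to 48 hours.
low, high = removal_estimate_hours(4.0)
print(f"estimated removal time: {low:.0f}-{high:.0f} hours")
```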
-
Could you open a ticket at https://stablebit.com/contact about it?
-
Glad to hear it!
-
That's definitely not normal, but I think you've already opened a ticket for this, as well.
-
StableBit Scanner won't repair the drives. Eg, it never writes to the drive (the exceptions being the settings store, and if you run file recovery).
That said, it will rescan the drives, and update the results.
The important bit here is the long format, though. This writes to the ENTIRE drive, and can/will cause the drive to reallocate or correct bad sections of the disk. As for it not correcting right away: it has to run the scan again, and unless you manually cleared the status, it won't do this immediately, but will wait for the 30 days (or whatever interval it is configured to use).
HWiNFO causing my hard drives to go “missing” with DrivePool..?
in General
Posted
Welcome! And if it helps, I'm using an old... old Haswell based system (Xeon E3-1245v3), and an LSI SAS 2008 based card (9240-8i). I ran HWiNFO64; it loaded without issues, showed everything, and no drives crashed. Which is why I suspect that it's a system config related issue.