
Christopher (Drashna)

Everything posted by Christopher (Drashna)

  1. The Performance panel only shows activity that happens through the pool driver; balancing and duplication activity is not reflected there. The reason for this is how Windows handles the performance counters, and we don't want to use any hacks to make the panel reflect duplication/balancing activity, since those tend to cause more issues than they solve.
  2. Okay, thank you for confirming that, at least. If this happens again, please open a ticket before doing anything else.
  3. Just a heads up, the Set-Disk command is effectively doing the same thing that the diskpart commands are. Both interface with the Virtual Disk Service, which manages the drives, volumes, etc. Also, it should be noted that Windows Defender has a number of ransomware prevention options that can be configured, and it may be worth looking into those.
  4. Reparse point information is stored in the .covefs folder in the root of the pool. Worst case, delete the link, remove the contents of the .covefs folder, and then reboot.
  5. Remeasuring the pool may have fixed this as well. As may have resetting the settings.
  6. IIRC, the pick of files is more or less random.
  7. It contains internal data for the drive. Primarily, information about reparse points on the pool (junctions, symlinks, etc). If you have none of these (you can check by opening the folder), then it should be okay. But if you do, then messing with the folder can/will break the reparse points.
  8. No worries! Sometimes, internet stuff .... But glad that we got it sorted.
  9. Unfortunately, there isn't an easy way to do this. The "SSD Optimizer" balancer plugin won't accomplish it, since it moves files off the "ssd" drive as soon as possible. And the other balancers don't check the file access/modify dates, either. This would have to be done in some sort of manual way (such as with file placement rules, and moving stuff around frequently).
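As a sketch of the "manual way" mentioned above: a small script that sweeps files whose modification time is older than a cutoff from an SSD landing folder into an archive folder. The paths, threshold, and function name are all assumptions for illustration; nothing here uses a DrivePool API.

```python
# Hypothetical sketch: move files untouched for N days off an "ssd" folder
# into an "archive" folder -- the kind of age-based placement the built-in
# balancers don't do. All names/paths here are illustrative assumptions.
import os
import shutil
import time

def move_stale_files(src, dst, max_age_days):
    """Move files whose mtime is older than max_age_days from src to dst."""
    cutoff = time.time() - max_age_days * 86400
    moved = []
    os.makedirs(dst, exist_ok=True)
    for entry in os.scandir(src):
        if entry.is_file() and entry.stat().st_mtime < cutoff:
            shutil.move(entry.path, os.path.join(dst, entry.name))
            moved.append(entry.name)
    return moved
```

Run on a schedule (Task Scheduler, cron, etc.), this approximates "move anything I haven't touched in a month to the archive drives", with file placement rules keeping the two folders on the intended disks.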
  10. Open the UI on the system that Scanner is installed on. Click on "Settings" and select "Scanner Settings". Open the "Licensing" tab, and click on the "Manage license" link. This will open a window that shows you the Activation ID, as well as a big button to "Deactivate" the license. Once you've done this, you can activate the license on a new system. If you don't have access to the old system, open a ticket at https://stablebit.com/Contact and we can see about resolving the issue. And in this case, please activate the trial period if you haven't already, as the trial period is fully featured and lasts for 30 days.
  11. The only tickets I see from you are from this morning. Both of which have been responded to.
  12. Could you open a ticket at https://stablebit.com/contact about this?
  13. Oh, definitely! Balancing and duplication is already a low priority task, to prevent performance impact. But the data scrubbing part, or even just reading the checksums on the files, is not low impact. A good example of this is the measuring for StableBit DrivePool. You may notice that it HAMMERS the system when measuring the pool. This is because it's opening each file to read its attributes, which is very expensive to do in terms of system resources. The more small files you have, the worse it is. And this isn't even counting any sort of hashing, just opening the files. Also, an in-depth integrity checker is something I'd love to see, personally.
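To give a sense of why measuring scales with file count rather than data size, here is a minimal stand-in for a measuring pass: it has to stat every file individually, one syscall per file. This is an illustration only, not how StableBit DrivePool's driver actually measures.

```python
# Rough illustration of why measuring is expensive: every file must be
# opened/stat-ed individually, so the cost grows with the number of files,
# not with the amount of data. (A stand-in, not DrivePool's actual code.)
import os

def measure(root):
    """Walk root, stat every file, and return (file_count, total_bytes)."""
    count = 0
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            st = os.stat(os.path.join(dirpath, name))  # one syscall per file
            count += 1
            total += st.st_size
    return count, total
```

A million small files means a million stat calls, regardless of how few gigabytes they add up to; actual hashing would add a full read of every file on top of that.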
  14. Welcome! And if it makes you feel any better, all of my testing with stuff is done in Hyper-V VMs. Using VHDX files for them.
  15. I was until about April of last year, because my system installation nuked itself (restores didn't work, but a reinstall did). I haven't bothered to reinstall it, at this point.
  16. There isn't a good answer here. It's complicated, at best, and it is very hit or miss. And with USB, it is very much dependent on the chips being used. Also, if it's put into a RAID array, then SMART won't be accessible by StableBit Scanner. But it should be accessible by StableBit DrivePool, in either case.
  17. Correct. The balancing system will move the files to the "archive" drives, based on the balancing settings. Putting the VHDX files on the pool should be fine, since a VHDX is a simple file. However, WSL and the like use special drivers on top of the file system, and may do "non standard" things. Unfortunately, a lot of these are not documented (or at least, not documented in such a way that is useful for anyone implementing a file system or emulating it). So, a lot of reverse engineering would be needed, at best.
  18. Double check to see if there are any other pools. Likely, it's not mounted to a drive letter, so it won't show up in "Computer". But it should show up in the drop-down.
  19. They may/should be in the service logs ("C:\ProgramData\StableBit Scanner")
  20. From what I've seen, that's fairly similar to what others have done. As for a "better solution", that absolutely depends on what you're trying for. If you want faster speeds, a block based RAID solution will absolutely be faster overall. However, it can be a lot more fragile, unless you just have disks (and money) to throw at the problem. But for use with StableBit DrivePool, that sounds reasonable. Personally, I like having an SSD for the pool and using the SSD Optimizer. That way, you can use the Ordered File Placement option and fill the drives to capacity. And if you only want to use one disk and do have duplication, you can add the SSD and the pool to its own pool, use the SSD Optimizer there, and then just use the Ordered File Placement balancer on the "sub pool". With PrimoCache, writes to the pool would still be done to the SSD drive, so flushing the cache should be a lot faster, overall.
  21. Most likely, "because Windows". Specifically, the performance counters on the system are probably corrupted. Fortunately, that is a simple, two-line fix: https://wiki.covecube.com/StableBit_DrivePool_Q2150495
  22. I think you have a ticket for this open already. But yeah, that's very unusual. I'm personally running it on several Windows 11 systems (a mix of 21H2 and 22H2), and don't see this issue. That said, does Disk Management take forever to open too?
  23. Likely, it was some SMART value that triggered the issue, and then corrected itself. This can happen, especially if problem sectors get written to (the drive can correct them, usually by remapping). If you haven't, set up email notifications (or mobile/text notifications in StableBit Cloud), and it should give you more details about any errors that occur.
  24. Not directly. However, the dpcmd tool does have some options that will list the full location of each file. Specifically "dpcmd get-duplication (path-to-file)". In some cases, it does run a CRC comparison. But adding more at runtime is problematic, since the driver runs in the kernel, so any operations need to be very quick. Computing hashes of files is inherently very expensive. If this sort of functionality is added, it won't be directly part of StableBit DrivePool, for this reason and others. Checking both the "Force damaged disk removal" and "Duplicate data later" options should make the removal happen much faster. But it will still move data off the drive, if needed; otherwise, data would be left on the disk, and if it's not duplicated data... That said, you can immediately eject the disk from the pool using the dpcmd tool. However, this does not move ANY data from the drive, so doing so will require manual intervention. Also, the disk still needs to be writable (it basically writes a "not part of a pool" tag to the PoolPart folder on the disk). 2 disks. E.g., X-1 disks, X being the duplication level. So you can lose a number of disks equal to one less than the level of duplication. (Also note that no duplication is basically a duplication level of 1, so it can tolerate 0 disks failing.) And StableBit DrivePool is aware of partitions, and will actively avoid putting copies of a file on the same physical disk. This is part of why we don't support dynamic disks, actually (checking this becomes immensely more complicated with dynamic disks, and a lot more costly, since it's also done in the kernel driver). Also, even if you lose "too many disks", the rest of the pool will continue to work with the data that is on the remaining disks.
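As a rough illustration of what an out-of-band integrity check involves, the sketch below hashes each copy of a file (for example, the per-disk copies that "dpcmd get-duplication" reports) and compares the digests. Every byte of every copy has to be read, which is exactly the kind of cost that can't be absorbed in a kernel driver. The function name and parameters are illustrative assumptions, not part of any StableBit tooling.

```python
# Hypothetical external check: hash each duplicate of a file and compare
# the digests. Reading the full contents of every copy is why this kind of
# scrubbing is expensive and belongs outside the kernel driver.
import hashlib

def digests_match(paths, chunk_size=1 << 20):
    """Return True if all files in 'paths' have identical SHA-256 digests."""
    digests = set()
    for path in paths:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        digests.add(h.hexdigest())
    return len(digests) == 1
```

Fed the paths of a file's duplicates on the underlying PoolPart folders, a mismatch would flag a corrupted copy; scaled to a whole pool, this is a full read of every byte stored, which is why it only makes sense as a scheduled background job.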