
Christopher (Drashna)

Administrators
  • Posts: 11568
  • Joined
  • Last visited
  • Days Won: 366

Everything posted by Christopher (Drashna)

  1. IIRC, the pick of files is more or less random.
  2. It contains internal data for the drive. Primarily, information about reparse points on the pool (junctions, symlinks, etc). If you have none of this (you can check by opening the folder), then it should be okay. But if you do, then messing with the folder can/will break the reparse points.
  3. No worries! Sometimes, internet stuff .... But glad that we got it sorted.
  4. Unfortunately, there isn't an easy way to do this. The "SSD Optimizer" balancer plugin won't accomplish this, since it moves the files off the "ssd" drive as soon as possible. And the other balancers don't check the file access/modify dates, either. This would have to be done manually in some way (such as with file placement rules, and moving stuff around frequently).
  5. Open the UI on the system that Scanner is installed on. Click on "Settings" and select "Scanner Settings". Open the "Licensing" tab, and click on the "Manage license" link. This will open a window that shows you the Activation ID, as well as a big button to "Deactivate" the license. Once you've done this, you can activate the license on a new system. If you don't have access to the old system, open a ticket at https://stablebit.com/Contact and we can see about resolving the issue. And in this case, please activate the trial period if you haven't already, as the trial period is fully featured and lasts for 30 days.
  6. The only tickets I see from you are from this morning. Both of which have been responded to.
  7. Could you open a ticket at https://stablebit.com/contact about this?
  8. Oh, definitely! Balancing and duplication are already low priority tasks, to prevent performance impact. But the data scrubbing part, or even just reading the checksums on the files, is not low impact. A good example of this is the measuring for DrivePool. You may notice that it HAMMERS the system when measuring the pool. This is because it's opening each file to read its attributes, which is very expensive in terms of system resources. The more small files you have, the worse it is. And this isn't even counting any sort of hashing, just opening the files. Also, an in-depth integrity checker is something I'd love to see, personally.
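To get a feel for why measuring hammers a system, here's a minimal sketch (not DrivePool's actual code, just an illustration) that walks a directory tree and stats every file. Each file costs at least one syscall before any hashing even starts, so trees with many small files dominate the runtime:

```python
import os
import time

def measure_tree(root):
    """Walk a directory tree and stat every file, roughly mimicking
    the kind of per-file metadata pass that pool measuring performs.
    Returns (file_count, elapsed_seconds)."""
    count = 0
    start = time.perf_counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                os.stat(os.path.join(dirpath, name))  # one syscall per file
                count += 1
            except OSError:
                pass  # file vanished or is inaccessible; skip it
    return count, time.perf_counter() - start

if __name__ == "__main__":
    files, secs = measure_tree(".")
    print(f"stat'ed {files} files in {secs:.2f}s")
```

Actually opening each file (rather than just stat'ing it) is even more expensive, and hashing the contents would add a full read of every byte on top of that.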
  9. Welcome! And if it makes you feel any better, all of my testing with stuff is done in Hyper-V VMs. Using VHDX files for them.
  10. I was until about April of last year, because my system installation nuked itself (restores didn't work, but a reinstall did). I haven't bothered to reinstall it at this point.
  11. There isn't a good answer here. It's complicated, at best, and very hit or miss. And with USB, it is very much dependent on the chips being used. Also, if the drive is put into a RAID array, then SMART won't be accessible by StableBit Scanner, but the drive should still be accessible by StableBit DrivePool, in either case.
  12. Correct. The balancing system will move the files to the "archive" drives, based on the balancing settings. Putting VHDX files on the pool should be fine, since a VHDX is a simple file. However, WSL and the like use special drivers on top of the file system, and may do "non-standard" things. Unfortunately, a lot of these are not documented (or at least, not documented in a way that is useful for anyone implementing or emulating a file system). So a lot of reverse engineering would be needed, at best.
  13. Double check to see if there are any other pools. Likely, the pool isn't mounted to a drive letter, so it won't show up in "Computer", but it should show up in the drop-down.
  14. They may/should be in the service logs ("C:\ProgramData\StableBitScanner")
  15. From what I've seen, that's fairly similar to what others have done. As for a "better solution", that absolutely depends on what you're trying for. If you want faster speeds, a block-based RAID solution will absolutely be faster overall. However, it can be a lot more fragile, unless you just have disks (and money) to throw at the problem. But for use with StableBit DrivePool, that sounds reasonable. Personally, I like having an SSD for the pool and using the SSD Optimizer. This way, you can use the Ordered File Placement option and fill the drives to capacity. And if you only want to use one disk and do have duplication, you can add the SSD and the pool to its own pool, use the SSD Optimizer there, and then just use the Ordered File Placement balancer on the "sub-pool". With PrimoCache, writes to the pool would still be done to the SSD drive, so flushing the cache should be a lot faster, overall.
  16. Most likely, "because Windows". Specifically, the performance counters on the system are probably corrupted. Fortunately, that is a simple, two-line fix: https://wiki.covecube.com/StableBit_DrivePool_Q2150495
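For reference, the standard Windows repair for corrupted performance counters is to rebuild them with `lodctr /R`; this sketch assumes that's the fix the linked wiki article describes, so check the article for the authoritative steps:

```shell
:: Run from an elevated (administrator) Command Prompt.
:: Rebuild the performance counter registry from the backup store;
:: on 64-bit Windows, run it from both System32 and SysWOW64.
cd /d C:\Windows\System32
lodctr /R
cd /d C:\Windows\SysWOW64
lodctr /R
```

After the rebuild, restarting the StableBit DrivePool service (or rebooting) lets it pick up the repaired counters.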
  17. I think you have a ticket for this open already. But yeah, that's very unusual. I'm personally running it on several Windows 11 systems (a mix of 21H2 and 22H2), and don't see this issue. That said, does Disk Management take forever to open too?
  18. Likely, it was some SMART value that triggered the issue, and then corrected itself. This can happen, especially if problem sectors get written to (the drive can correct them, usually by remapping). If you haven't, set up email notifications (or set up mobile/text notifications in StableBit Cloud), and it should give you more details about any errors that occur.
  19. Not directly. However, the dpcmd tool does have some options that will list the full location of each file. Specifically, "dpcmd get-duplication (path-to-file)". In some cases, it does run a CRC comparison. But adding more checks at runtime is problematic, since the driver runs in the kernel, so any operations need to be very quick. Computing hashes of files is inherently expensive. If this sort of functionality is added, it won't be directly part of StableBit DrivePool, for this reason and others.

      Checking both the "Force damaged disk removal" and "duplicate data later" options should make the removal happen much faster. But it will still move data off the drive, if needed. Otherwise, data would be left on the disk, and if it's not duplicated data... That said, you can immediately eject the disk from the pool using the dpcmd tool. However, this does not move ANY data from the drive, so doing so will require manual intervention. Also, the disk still needs to be writable (it basically writes a "not part of a pool" tag to the PoolPart folder on the disk).

      As for fault tolerance: you can lose X-1 disks, X being the duplication level. So you can lose a number of disks equal to one less than the level of duplication. (Also note that no duplication is basically a duplication level of 1, so it can tolerate 0 disks failing.) And StableBit DrivePool is aware of partitions, and will actively avoid putting copies of a file on the same physical disk. This is part of why we don't support dynamic disks, actually (because checking this becomes immensely more complicated with dynamic disks, and a lot more costly, since this is also done in the kernel driver). Also, even if you lose "too many" disks, the rest of the pool will continue to work, with the data that is on the remaining disks.
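The fault-tolerance rule above is simple arithmetic, and a small sketch makes it concrete (the function name is mine, just for illustration):

```python
def disks_tolerable(duplication_level: int) -> int:
    """How many disk failures a file can survive, given its
    duplication level: X copies tolerate X - 1 lost disks.
    No duplication is treated as a level of 1 (0 failures)."""
    if duplication_level < 1:
        raise ValueError("duplication level must be at least 1")
    return duplication_level - 1

# 1x (no duplication) -> 0, 2x -> 1, 3x -> 2
for level in (1, 2, 3):
    print(f"{level}x duplication tolerates {disks_tolerable(level)} lost disk(s)")
```

This only holds because the pool avoids placing copies of a file on the same physical disk; if two copies could land on one disk, a single failure could take out both.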
  20. That may not be what is going on, actually. The percentage of the scan isn't based on the whole disk, but on what is left to be scanned. And since it tracks sections of the disk, it may effectively just be continuing to scan the drive. You can see the status in the sector map for the drive, and this may provide a more accurate picture of what is going on: https://stablebit.com/Support/Scanner/2.X/Manual?Section=Disk Scanning Panel. Otherwise, could you open a ticket at https://stablebit.com/contact?
  21. Regardless of the quality of the software, this isn't helpful nor constructive, and provides nothing to the conversation.
  22. To make sure, you mean in a VM, correct? If so, then there is absolutely no issue, as the VHD/VHDX are presented as physical disks to the VM.
  23. Please head to https://stablebit.com/contact and open a ticket there, if you haven't already. We don't handle licensing issues on the forum, due to generally needing sensitive information.
  24. Just a heads up: the SSD Optimizer balancer is mostly meant for writes, and writes of new files. It doesn't affect reads, unless those files are still on the SSD (or locked there). If you're looking to boost read speeds, you'd want something like PrimoCache.