Everything posted by Christopher (Drashna)

  1. From what I've seen, that's fairly similar to what others have done. As for a "better solution", that absolutely depends on what you're trying to do. If you want faster speeds, a block-based RAID solution will absolutely be faster overall. However, it can be a lot more fragile, unless you just have disks (and money) to throw at the problem. But for use with StableBit DrivePool, that sounds reasonable. Personally, I like having an SSD for the pool and using the SSD Optimizer. This way, you can use the Ordered File Placement option and fill the drives to capacity. And if you only want to use one disk and do have duplication, you can add the SSD and the pool to its own pool, use the SSD Optimizer there, and then just use the Ordered File Placement balancer on the "sub pool". With PrimoCache, writes to the pool would still be done to the SSD drive, so flushing the cache should be a lot faster, overall.
  2. Most likely, "because Windows". Specifically, the system's performance counters are probably corrupted. Fortunately, that is a simple, two-line fix: https://wiki.covecube.com/StableBit_DrivePool_Q2150495
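For reference, the usual repair for corrupted performance counters (from memory; verify against the linked wiki article) is rebuilding the counter registry from an elevated command prompt:

```
cd /d C:\Windows\System32
lodctr /R
cd /d C:\Windows\SysWOW64
lodctr /R
```

The `/R` flag tells `lodctr` to rebuild the performance counter settings from the system backup store; running it from both System32 and SysWOW64 covers the 64-bit and 32-bit counter sets.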
  3. I think you have a ticket for this open already. But yeah, that's very unusual. I'm personally running it on several Windows 11 systems (a mix of 21H2 and 22H2), and don't see this issue. That said, does Disk Management take forever to open too?
  4. Likely, it was some SMART value that triggered the issue and then corrected itself. This can happen, especially if problem sectors get written to (the drive can correct them, usually by remapping). If you haven't, set up email notifications (or mobile/text notifications in StableBit Cloud), and it should give you more details about any errors that occur.
  5. Not directly. However, the dpcmd tool does have some options that will list the full location of each file; specifically, "dpcmd get-duplication (path-to-file)". In some cases, it does run a CRC comparison. But adding more checks at runtime is problematic, since the driver runs in the kernel, so any operations need to be very quick, and computing hashes of files is inherently very expensive. If this sort of functionality is added, it won't be directly part of StableBit DrivePool, for this reason and others.

     Checking both the "Force damaged disk removal" and "Duplicate data later" options should make the removal happen much faster. But it will still move data off the drive, if needed. Otherwise, data would be left on the disk, and if it's not duplicated data... That said, you can immediately eject the disk from the pool using the dpcmd tool. However, this does not move ANY data off the drive, so doing so will require manual intervention. Also, the disk still needs to be writable (it basically writes a "not part of a pool" tag to the PoolPart folder on the disk).

     As for failure tolerance: you can lose X-1 disks, X being the duplication level (e.g., 2 disks with x3 duplication). So you can lose a number of disks equal to one less than the level of duplication (also note that no duplication is basically a duplication level of 1, so it can tolerate 0 disks failing). And StableBit DrivePool is aware of partitions, and will actively avoid putting copies of a file on the same physical disk. This is part of why we don't support dynamic disks, actually (checking this becomes immensely more complicated with dynamic disks, and a lot more costly, since it is also done in the kernel driver). Also, even if you lose "too many" disks, the rest of the pool will continue to work with the data that is on the remaining disks.
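The failure-tolerance rule described above can be sketched in a few lines (my own illustration, not DrivePool code; the function name is hypothetical):

```python
# With xN duplication, N copies of each file exist on N distinct physical
# disks, so up to N-1 of the disks holding a given file can fail before
# that file is lost.
def tolerable_failures(duplication_level: int) -> int:
    """Disks that can fail without data loss, for a given duplication level."""
    if duplication_level < 1:
        raise ValueError("duplication level is at least 1 (no duplication)")
    return duplication_level - 1

print(tolerable_failures(1))  # 0 -> no duplication tolerates no failures
print(tolerable_failures(2))  # 1
print(tolerable_failures(3))  # 2
```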
  6. That may not be what is going on, actually. The percentage shown isn't based on the whole disk, but on what is left to be scanned. And since it tracks sections of the disk, it may effectively just be continuing to scan the drive. You can see the status in the sector map for the drive, and this may provide a more accurate picture of what is going on: https://stablebit.com/Support/Scanner/2.X/Manual?Section=Disk Scanning Panel
     Otherwise, could you open a ticket at https://stablebit.com/contact
  7. Regardless of the quality of the software, this isn't helpful nor constructive, and provides nothing to the conversation.
  8. To make sure, you mean in a VM, correct? If so, then there is absolutely no issue, as the VHD/VHDX are presented as physical disks to the VM.
  9. Please head to https://stablebit.com/contact and open a ticket there, if you haven't already. We don't handle licensing issues on the forum, since they generally require sensitive information.
  10. Just a heads up: the SSD Optimizer balancer is mostly meant for writes, and writes of new files. It doesn't affect reads, unless those files are still on the SSD (or locked there). If you're looking to boost read speeds, you'd want something like PrimoCache.
  11. It's a one time thing, basically. Newer versions have significantly reworked update code.
  12. "0x80070643" indicates that the system has a pending reboot. Rebooting the system will usually fix this issue. If not, this will: https://support.microsoft.com/en-us/topic/fix-problems-that-block-programs-from-being-installed-or-removed-cca7d1b6-65a9-3d98-426b-e9f927e1eb4d
  13. This is for older versions of StableBit DrivePool and StableBit Scanner. Updating to the latest version will fix this.
  14. I don't believe it does. But I'm not certain. However, let me flag this for Alex: https://stablebit.com/Admin/IssueAnalysis/28716
  15. No. There are no plans on implementing this sort of functionality.
  16. Yes and no. Specifically, by default, StableBit Scanner will only scan one drive per controller. And in fact, you have to get into the advanced configuration to increase that. So if you're seeing multiple drives being scanned at once, it's likely because they are connected to different controllers (you can verify this by selecting the "group by controllers" option in the UI).
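The scheduling rule above can be illustrated with a toy sketch (my own illustration, not Scanner's actual code; all names are made up): group drives by controller, then let at most one drive per group scan concurrently.

```python
# Toy model of "one scan per controller at a time": drives on the same
# controller queue up; drives on different controllers can scan in parallel.
from collections import defaultdict

# (controller, disk) pairs -- hypothetical example data.
drives = [("sata0", "disk1"), ("sata0", "disk2"), ("usb0", "disk3")]

by_controller = defaultdict(list)
for controller, disk in drives:
    by_controller[controller].append(disk)

# One "round" of scanning: the first unscanned drive from each controller.
active = [disks[0] for disks in by_controller.values()]
print(active)  # ['disk1', 'disk3'] -> two drives scan at once only because
               # they sit on different controllers
```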
  17. If you're using duplication, then definitely, as each copy of a duplicated file is written in parallel. Having multiple SSDs (and using the SSD Optimizer balancer) will get better write speeds, as it is much less likely to have to use a spinning drive for writing.
  18. Yeah, it's a known issue. It has to do with how the "on demand" feature works, and that it's not supported on the emulated drive that StableBit CloudDrive creates. Also, it's not just a sparse file; it's a special type of file that requires file system support. And because we use an emulated drive, we'd have to reverse engineer that and add support for it. A possible workaround is to create a StableBit CloudDrive disk on the pool and use that. But I can understand not wanting to do that.
  19. Yes. New data will be stored in the local cache until it's able to be uploaded. If this is in regards to the 750 GB per day, per account upload limit, throttling the upload speed to ~80 Mbps should help prevent hitting that limit.
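For the arithmetic behind that figure (my own back-of-the-envelope numbers, not official guidance):

```python
# How many gigabytes a sustained upload rate moves in 24 hours, to compare
# against a 750 GB/day cap. Uses decimal units (1 GB = 10^9 bytes).
def gb_per_day(mbps: float) -> float:
    """Gigabytes uploaded in 24h at `mbps` megabits per second."""
    bytes_per_second = mbps * 1_000_000 / 8
    return bytes_per_second * 86_400 / 1e9

print(gb_per_day(70))  # 756.0 -> right around the 750 GB/day cap
print(gb_per_day(80))  # 864.0
```

So a sustained rate in the low 70s of Mbps works out to roughly 750 GB per day in decimal gigabytes; the exact safe throttle depends on how the provider measures the limit and how continuously you actually upload.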
  20. Well thanks! And yeah, it's a perpetual problem. Antivirus vendors don't like new releases from small software vendors.
  21. A quick look into this shows that there may not really be any performance difference, especially as data may not be staying on the drive (depending on how you use them).
  22. Do you have a "found.###" folder in the root of your drive? If so, the data may be there. Also something like WinDirStat or WizTree can be good for visualizing where the data is.
  23. It will see the drives as non-pooled drives, so you should be fine in that regard.
  24. If you sent a support ticket, those should be addressed rapidly. If not, send an email to me at christopher@covecube.com