Covecube Inc.


Popular Content

Showing content with the highest reputation since 04/13/19 in all areas

  1. 1 point
    Mick Mickie: Thanks for jumping in Mick! Christopher just said "Crashes", so I thought the "Fix" might not help me. I don't usually upgrade to Beta versions of anything, but in this case I think I might make an exception. I was just upgrading to sort of get away from the built-in wssx versions in preparation for my upcoming move to Server 2016 Essentials. Again, thanks! Gary
  2. 1 point
    I've had it happen with normal reboots as well, just not as often as with crashes. It just depends on the timing. Imagine what happens on a reboot: Windows forcefully shuts down services, including the StableBit Scanner service. If that service gets shut down in the window where it is writing new DiskId files, the files can end up corrupted, so after the reboot the service creates new DiskId files, meaning all previous scan status is lost. The DiskId files are no longer written literally every second (which significantly increased the risk of the service being killed mid-write) but instead every 20-40 minutes (I don't know the exact interval). That's a reduction by a factor of 1200 to 2400, so the risk that you reboot at the exact time the files are written should basically be zero now.
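The factor quoted above follows directly from the interval change; a quick back-of-the-envelope check (interval values taken from the post, the exact new interval is unknown):

```python
# Old behavior: DiskId files written roughly every second.
old_interval_s = 1
# New behavior: written every 20-40 minutes (exact value unknown).
new_intervals_s = [20 * 60, 40 * 60]

# Fewer writes per unit time means proportionally fewer chances that a
# forced shutdown lands exactly during a write.
factors = [n // old_interval_s for n in new_intervals_s]
print(factors)  # [1200, 2400]
```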
  3. 1 point
    Well, a number of others were having this as well, and I've posted this info in a number of those threads, so hopefully, confirmation will come soon.
  4. 1 point
    The cause of the issue is fixed in Beta 3246. Since my machine crashes infrequently, maybe once per month, it will take a while to verify the fix on my side.
  5. 1 point
    I think you mean Mbit :-P Yes, it all depends on the response time you have. Speed is not the issue; it's my response time to Google's servers. You're just lucky to be closer. Plus I have upload verification on, which also cuts upload speeds. I get around 2500-2800 ms response time per thread and then instant download, so fewer calls and bigger downloads would do wonders for me.
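The point about fewer, bigger calls can be sketched with simple arithmetic. This is an illustrative model only: the function name, the 1 Gbit link speed, and the chunk sizes are assumptions, with the ~2.5 s per-call latency taken from the post.

```python
def effective_throughput_mbit(chunk_mb, latency_s, link_mbit=1000):
    """Rough effective throughput when each call pays a fixed latency."""
    transfer_s = chunk_mb * 8 / link_mbit   # time on the wire for one chunk
    return chunk_mb * 8 / (latency_s + transfer_s)

# With ~2.5 s latency per call, larger chunks amortize the wait:
print(round(effective_throughput_mbit(20, 2.5)))   # 20 MB chunks  -> ~60 Mbit/s
print(round(effective_throughput_mbit(100, 2.5)))  # 100 MB chunks -> ~242 Mbit/s
```

When latency dominates, doubling the chunk size nearly doubles effective throughput, which is why bigger downloads help high-latency users most.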
  6. 1 point
    We've definitely talked about it. And to be honest, I'm not sure what we can do. We already store the file system data, in theory, if you have pinning enabled, though there are circumstances that can cause that info to be purged. The other issue is that by default, every block is checksummed, and that is checked on download. So if corrupted data is downloaded, you would get errors and a warning about it. However, that didn't happen here, and if that is the case, more than likely it sent old/out-of-date data. Which ... I'm not sure how we can handle that in a way that isn't extremely complex. But again, this is something that is on our mind.
  7. 1 point

    Request: Increased block size

    Again, other providers *can* still use larger chunks. Please see the changelog: this was because of issue 24914, documented here. Again, this isn't really correct. The problem, as documented above, is that larger chunks result in more retrieval calls to particular chunks, thus triggering Google's download quota limitations. That is the problem that I could not remember. It was not because of concerns about speed, and it was not a general problem with all providers. EDIT: It looks like the issue with Google Drive might be resolved with an increase in the partial read size, as you discussed in this post, but the code change request for that is still incomplete. So this prerequisite still isn't met. Maybe something to follow up with Christopher and Alex about.
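The chunk-size/quota relationship described above can be sketched as follows. This is a simplified illustration, not CloudDrive's actual logic; the function name and sizes are hypothetical, and the idea is simply that each partial read is a separate download request against the same stored chunk file.

```python
import math

def calls_per_chunk(chunk_mb, partial_read_mb):
    """How many partial-read requests it takes to read one chunk in full."""
    # Each partial read hits the same chunk file as a separate download,
    # which is what can trip a per-file download quota on the provider side.
    return math.ceil(chunk_mb / partial_read_mb)

print(calls_per_chunk(20, 1))    # 20 calls per chunk
print(calls_per_chunk(100, 1))   # 100 calls: larger chunks => more calls each
print(calls_per_chunk(100, 10))  # 10 calls: larger partial reads compensate
```

This is why increasing the partial read size is described as a prerequisite for larger chunks: without it, every chunk is re-requested far more often.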
  8. 1 point
    This information is pulled from Windows' performance counters, so it may not have been working properly temporarily. Worst case, you can reset them: http://wiki.covecube.com/StableBit_DrivePool_Q2150495
  9. 1 point
    I'm not sure what you mean here. There is the read striping feature, which may boost read speeds for you. Aside from that, there are the file placement rules, which you could use to lock certain files or folders to the SSDs for better read speeds.
  10. 0 points

    10Gb speeds using SSD cache?

    I doubt StableBit would want to go the RAM-cache route because of the risk of any system failure causing the loss of (more) data compared to an SSD cache or normal storage. I don't, but I know there are people here that successfully use the SSD cache. And it really depends on what SSD you are using: if it is a SATA SSD, then you would not expect the 10G link to be saturated. In any case, @TeleFragger (OP) does use duplication, so he/you will need two SSDs for this to work.
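The SATA point above comes down to simple bandwidth arithmetic. The drive figures below are typical ballpark numbers, not measurements from any specific hardware:

```python
# Typical sequential throughput, in MB/s (ballpark figures, assumed):
sata_ssd_mb_s = 550          # SATA III SSDs top out around here
nvme_ssd_mb_s = 3000         # NVMe SSDs vary widely; this is a common figure
ten_gbe_mb_s = 10_000 / 8    # 10 Gbit/s link = 1250 MB/s

print(sata_ssd_mb_s < ten_gbe_mb_s)  # True: a SATA cache can't saturate 10 GbE
print(nvme_ssd_mb_s > ten_gbe_mb_s)  # True: an NVMe cache can
```

And with duplication enabled, each write lands on two cache SSDs, which is why two drives are needed for the SSD cache to work in that setup.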

