
mcrommert

Members
  • Posts: 26
  • Joined
  • Last visited

Reputation Activity

  1. Like
    mcrommert got a reaction from Jaga in Damaged Flash Drive - CHKDSK finds nothing   
    It's now showing 1.22 MB of damage, but I see no red sectors... I'll just treat this USB key as suspect; no more time should be wasted on it.
  2. Like
    mcrommert reacted to Julius in Damaged Flash Drive - CHKDSK finds nothing   
    Have you considered that it may not be the flash drive causing this? Try another USB controller or motherboard, and especially check whether the board gets enough power. I've seen USB drives unjustifiably get the blame more often than I can count, when in fact the mainboard was broken or the PSU didn't have enough power left for the USB ports.
    If that's ruled out and it still fails, try running a SpinRite level 2 scan on the USB drive (or similar software). It may work wonders.
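    The suggestion above amounts to a surface read scan. As a rough illustration of the idea (not SpinRite itself — the `read_scan` name and block size are my own choices), here is a minimal Python sketch that reads a file or raw device block by block and records the offsets that fail to read:

```python
def read_scan(path, block_size=64 * 1024):
    """Sequentially read every block of a file or raw device.

    Returns (blocks_read, bad_blocks), where bad_blocks lists the byte
    offsets of blocks that raised an I/O error while reading.
    """
    bad_blocks = []
    blocks_read = 0
    # buffering=0 gives unbuffered access so each read maps to one request.
    with open(path, "rb", buffering=0) as f:
        offset = 0
        while True:
            try:
                chunk = f.read(block_size)
            except OSError:
                # Unreadable block: note its offset and skip past it.
                bad_blocks.append(offset)
                offset += block_size
                f.seek(offset)
                continue
            if not chunk:  # end of file/device
                break
            blocks_read += 1
            offset += len(chunk)
    return blocks_read, bad_blocks
```

    On Linux you could point this at a raw device node (root required); on a healthy drive `bad_blocks` comes back empty. It only tests reads, so it's far weaker than SpinRite's level 2 pass, but it shows the shape of the technique.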
  3. Like
    mcrommert reacted to Christopher (Drashna) in Confused about Duplication   
    Okay, so, you do want to do what I've posted above.
    Actually, the RC version of StableBit DrivePool will automatically prefer local disks (and "local" pools) over "remote" drives for reads, so there's nothing you need to do here.
    As for writes, if real-time duplication is enabled, there isn't anything we can really do: both copies are written out at the same time, to the local pool and to the CloudDrive disk.
    But the writes happen to the cache, and then are uploaded.  There are some optimizations there to help prevent inefficient uploads. 
    No, this is to make sure that unduplicated data is kept local instead of remote.
    As for the drive size, it will ALWAYS report the raw capacity, no matter the duplication settings. So this pool WILL report 80 TB. We don't do any processing of the drive size because there is no good way to do it: each folder can have a different duplication level (off, x2, x3, x5, x10, etc.), and the contents may vary drastically in size, so there is no good way to compute an "effective" size other than showing the raw capacity.
    There isn't a (good) way to fix this.  
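    To see why duplication can't be folded into a single reported size, consider a toy calculation (hypothetical numbers and function name, not anything DrivePool actually computes):

```python
GIB = 2**30

def raw_consumption(files):
    """Raw pool bytes consumed by (logical_size, duplication_level) pairs.

    The pool still reports its full raw capacity; only how fast that
    capacity is consumed depends on each folder's duplication level.
    """
    return sum(size * dup for size, dup in files)

# Two folders holding 15 GiB of logical data, at different levels:
files = [
    (10 * GIB, 2),  # x2 duplication -> consumes 20 GiB of raw space
    (5 * GIB, 3),   # x3 duplication -> consumes 15 GiB of raw space
]
```

    The same 15 GiB of logical data consumes 35 GiB here, but would consume 15 GiB with duplication off — so "remaining usable space" depends entirely on where future files land, and only the raw capacity is well defined.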
    You could turn off real-time duplication, which would do this, but it means that new data wouldn't be protected for up to 24 hours, or more.
    Also, files that are in use cannot be duplicated in this configuration.
    So, it leaves your data more vulnerable, which is why we recommend against it. 
     
    The other option is to add a couple of small drives and use the "SSD Optimizer" balancer plugin. You would need a number of drives equal to the highest duplication level, and they don't need to be SSDs. New files would be written to the drives marked as "SSD", and then moved to the other drives later (depending on the balancing settings for that pool).
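    The landing-drive requirement can be sketched as a toy placement rule (a hypothetical function, not DrivePool's actual balancer): each copy of a new file must land on a distinct "SSD"-marked drive, which is why you need at least as many landing drives as the highest duplication level.

```python
def pick_landing_drives(dup_level, landing_drives):
    """Choose one landing ("SSD"-marked) drive per copy of a new file.

    Each copy must go to a distinct drive, so the number of landing
    drives must be at least the duplication level. A later background
    balancing pass would move the copies off to the archive drives.
    """
    if len(landing_drives) < dup_level:
        raise ValueError("need at least as many landing drives as the duplication level")
    return landing_drives[:dup_level]
```

    With x2 duplication and two landing drives, every new file gets one copy on each; with x3 the same setup would refuse, matching the rule quoted above.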