manticore.mm

Posts posted by manticore.mm

  1. Hi

    I have a really bad problem here and I don't know what to do anymore.

    I had to reinstall my Server 2016 because I changed my boot drive to an NVMe drive and had to switch the BIOS to UEFI for that.

    After reinstalling, StableBit Scanner no longer shows any SMART data at all; the only message is "The on-disk SMART check is not accessible..."

     

    Before the new installation everything was working great; all my disks are connected to the onboard SATA controller.

     

    Hardware:

    Supermicro X10SDV-TLN4F, Xeon D-1541
    128 GB DDR4-2400 RAM
    Boot: Intel 760p 128 GB NVMe
    SSD: Micron 5100 Pro 960 GB
    Data: 4x WD Gold 12 TB

     

    I am using v2.5.2.3153 BETA of StableBit Scanner.

     

    I installed the latest Intel chipset drivers, and there is no problem with e.g. CrystalDiskInfo, so SMART is enabled (a quick way to double-check this from Windows itself is sketched at the end of this post).

    Please help me.

    stable1.JPG

    stable2.JPG

    stable3.JPG

    stable4.JPG

     

    This is "by controller"

    stablebit_controller.JPG
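
    To double-check, independently of Scanner or CrystalDiskInfo, whether Windows itself exposes SMART failure-prediction data for each disk, here is a minimal sketch. It assumes the third-party "wmi" Python package (pip install wmi, which pulls in pywin32) and an elevated prompt, and only reads the standard MSStorageDriver_FailurePredictStatus WMI class.

        # Minimal sketch: list the SMART failure-prediction status that Windows
        # exposes in the root\wmi namespace. Assumes the third-party "wmi"
        # package (pip install wmi) and an administrator prompt.
        import wmi

        c = wmi.WMI(namespace="root\\wmi")

        # This class is only populated for disks whose driver/controller
        # passes SMART through to Windows.
        statuses = c.MSStorageDriver_FailurePredictStatus()

        if not statuses:
            print("Windows exposes no SMART failure-prediction data at all.")
        for s in statuses:
            # InstanceName identifies the physical disk; PredictFailure is the
            # overall SMART "failure predicted" flag.
            print(f"{s.InstanceName}: PredictFailure={s.PredictFailure}")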

  2. Thanks for your answer.

     

    So what is your "best practice" for situations like this?

     

    Leave the option "move duplicated files from damaged disks" on?

     

    I let the balancer do the job and the drive is now empty without any problems, but a file scan is no longer possible.

     

    I am trying to find out the best way to deal with errors like current pending sectors. If a drive dies completely, fine, you have to remove and replace it; that's easy. But what do you do with drives showing a few unstable sectors? When can you trust them again? Never?

     

    I removed the drive from the pool and am now writing zeros to it.

  3. Hm, I checked the manual section of DrivePool and it says:

     

    Protected files are never copied from the drive that's being removed. Instead, they are regenerated from one of the drives that holds the other duplicated file parts.

    This is important when you're removing a drive that's already damaged, as the integrity of the drive will not affect duplicated files.

     

    This is OK. So I think it would be best to set up the Scanner plugin to move only non-protected (unduplicated) files away from the damaged disk.

    And then, when you remove the damaged disk, the healthy copy will be re-duplicated as part of that process.

     

    Am I right?

  4. At the moment StableBit Scanner has detected an unreadable sector on one of my 1.5 TB Samsung drives.

     

    I don't know which files are damaged, because the disk surface scan is at 97.41% and it says "you will be able to start a file scan when the disk surface scan completes" -> this is OK.

     

    The surface scan has stopped because the plugin is evacuating the files from this disk (99% duplicated files).

     

    So what happens to the corrupted files? Are they evacuated in that corrupted state to another disk? That would be bad: thanks to duplication I have a healthy copy of these files, but I would never find out which copy was healthy and which was corrupted.

     

    If this is true, then moving duplicated files from damaged disks is a really bad default setting for the StableBit Scanner plugin.

  5. After some RAID configurations like Storage Spaces and SnapRAID, I tried StableBit DrivePool and I like it; I will buy it. At the moment I have 19 days of trial left.

     

    Case: 2x Sharkoon Rebel 9 Economy
    OS: Windows Server 2012 R2 Standard
    CPU: Intel Core i7-2600K 3.4 GHz
    MoBo: Asus V Gene (Z77)
    RAM: 4x 8 GB Kingston HyperX Black
    PSU: Enermax Revolution 1050 W
    OS Drive: Crucial M4 SSD 512 GB

    2 storage controllers:

    1. IBM M1015 flashed to 9211-8i IT mode, firmware v20
    2. IBM 9211-8i, IT firmware v20

    HDD:

    • 4x Toshiba HDWE140 4 TB
    • 3x Samsung HD154UI 1.5 TB
    • 1x Seagate ST3000DM001 3 TB
    • 2x WD30EZRX-00M 3 TB
    • 1x WD20EARS-07M 2 TB
    • 5x WD10EAVS-00D 1 TB
    • ---> 33.2 TB pool

    With the two cases (standing next to each other, connected only via long power and long SATA cables) and my two controllers I can attach 20 drives.

    One case has 4x3 IcyDock hot-swap bays; the other currently holds only internal HDDs.

     

    The NUC 6i3 with 2x 4 TB WD40EZRX, 16 GB DDR4 RAM, a 2 TB 2.5" drive and a Samsung 950 Pro 256 GB is for 24/7 usage: a 24/7 server with 3 VMs and Windows 10 as the host OS.

    I do all my office/internet work on the NUC; the big storage server only runs the Plex server.

     

    fullsizerenderuckow.jpg

  6. I have tried it on two different PCs in two different countries, on 50/10 and 100/100 lines. Upload was always good, but for downloading big files, a direct download from the web page was way faster than StableBit. I tested with 10/20 MB chunks, high thread counts, prefetch on/off, etc. I don't see a reason to pay for this at the moment.

  7. First, I love the idea behind StableBit CloudDrive.

     

    I have an unlimited Google Drive account, and I like the idea that I can use StableBit as an archiving tool in combination with Google Drive.

     

    First, my specs:

    - 50/10 Mbit internet line
    - 100 TB Google Drive created with default settings: 10 MB chunks, 2 download and 2 upload threads, pinning and prefetching on, etc., cache set to minimum
    - latest beta, 1.0.0.595 x64
    - Win 10 Pro x64

     

    Upload -> all OK; I see 10-12 Mbit upload and that's really fine.

    Download -> horrible. If I try to copy a file from the pool to a local hard drive, it starts at 1 MB/s and drops toward 0 KB/s and so on. Really slow, not usable.

     

    If I download a file directly from the Google Drive web page I get 6-7 MB/s.

     

    I changed the pool settings, but nothing helped. I am still evaluating your product, but with download speeds like that it really sucks :(

     

    I tried it on a 100/100 line too, and even there the download rate is much worse compared with a download from the web page.

     

    It's not usable. Even if I were to create a 100 GB cache, I don't want to copy files that are not in the cache at an effective 0.5-1 MB/s.
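
    To put hard numbers on comparisons like the one above, here is a minimal sketch (standard-library Python only, with a hypothetical file path) that measures sequential read throughput from any mounted drive. Running it once against a large file on the CloudDrive mount and once against the same file downloaded locally from the Google Drive web page gives directly comparable MB/s figures.

        import time

        # Hypothetical path on the CloudDrive mount; substitute your own test file.
        PATH = r"D:\archive\big_test_file.bin"
        CHUNK = 10 * 1024 * 1024  # read in 10 MB pieces, matching the drive's chunk size

        def measure(path, chunk=CHUNK):
            """Return sequential read throughput for 'path' in MB/s."""
            total = 0
            start = time.monotonic()
            # buffering=0 disables Python-level buffering so each read hits the drive
            with open(path, "rb", buffering=0) as f:
                while True:
                    data = f.read(chunk)
                    if not data:
                        break
                    total += len(data)
            elapsed = time.monotonic() - start
            return total / (1024 * 1024) / elapsed

        print(f"{PATH}: {measure(PATH):.1f} MB/s sequential read")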
