Posts posted by exterrestris

  1. My snapraid.conf is pretty standard - I haven't really changed any of the defaults (so I haven't included them). I choose to keep a copy of the content file on every disk, but that's not strictly necessary.

    # Defines the file to use as parity storage
    # It must NOT be in a data disk
    # Format: "parity FILE [,FILE] ..."
    parity C:\Snapraid\Parity\1\snapraid.parity
    
    # Defines the files to use as content list
    # You can use multiple specifications to store more copies
    # You must have at least one copy for each parity file, plus one. More don't hurt
    # They can be in the disks used for data, parity or boot,
    # but each file must be in a different disk
    # Format: "content FILE"
    content C:\Snapraid\Parity\1\snapraid.content
    content C:\Snapraid\Data\1\snapraid.content
    content C:\Snapraid\Data\2\snapraid.content
    content C:\Snapraid\Data\3\snapraid.content
    content C:\Snapraid\Data\4\snapraid.content
    
    # Defines the data disks to use
    # The name and mount point association is relevant for parity, do not change it
    # WARNING: Adding here your boot C:\ disk is NOT a good idea!
    # SnapRAID is better suited for files that rarely change!
    # Format: "data DISK_NAME DISK_MOUNT_POINT"
    data d1 C:\Snapraid\Data\1\PoolPart.a5f57749-53fb-4595-9bad-5912c1cfb277
    data d2 C:\Snapraid\Data\2\PoolPart.7d66fe3d-5e5b-4aaf-a261-306e864c34fa
    data d3 C:\Snapraid\Data\3\PoolPart.a081b030-04dc-4eb5-87ba-9fd5f38deb7b
    data d4 C:\Snapraid\Data\4\PoolPart.65ea70d5-2de5-4b78-bd02-f09f32ed4426
    
    # Excludes hidden files and directories (uncomment to enable).
    #nohidden
    
    # Defines files and directories to exclude
    # Remember that all the paths are relative to the mount points
    # Format: "exclude FILE"
    # Format: "exclude DIR\"
    # Format: "exclude \PATH\FILE"
    # Format: "exclude \PATH\DIR\"
    exclude *.unrecoverable
    exclude Thumbs.db
    exclude \$RECYCLE.BIN
    exclude \System Volume Information
    exclude \Program Files\
    exclude \Program Files (x86)\
    exclude \Windows\
    exclude \.covefs
    

    As for the DrivePool balancers: yes, turn them all off. The Scanner balancer is worth keeping if you want automatic evacuation of a failing drive, but it's not essential, and the SSD Optimiser is only needed if you have a cache drive to use as a landing zone. If you don't use a landing zone, you can disable automatic balancing entirely; if you do use one, you need DrivePool to balance periodically - once a day rather than immediately is best, as you ideally want the SnapRAID sync to happen shortly after the balance completes.
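
    To give an idea of what I mean by the timing, here's a minimal sketch of a daily sync script - the install path, schedule and scrub settings are just assumptions for illustration, not what you have to use:

    rem daily-sync.bat - minimal sketch; adjust paths and settings to your setup
    @echo off
    set SNAPRAID=C:\Snapraid\snapraid.exe

    rem Set a sub-second timestamp on files that have none, to help SnapRAID detect moves/copies
    "%SNAPRAID%" touch

    rem Update the parity; stop here if the sync fails
    "%SNAPRAID%" sync
    if errorlevel 1 exit /b 1

    rem Optionally verify a small slice of older data (up to 5% of the array, blocks not checked in 30+ days)
    "%SNAPRAID%" scrub -p 5 -o 30

    You could then register it to run an hour or so after the balancing window (e.g. balance at 02:00, sync at 03:00) from an elevated prompt with something like:

    schtasks /create /tn "SnapRAID Sync" /tr "C:\Snapraid\daily-sync.bat" /sc daily /st 03:00 /ru SYSTEM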

    I'm not sure what the default behaviour of DrivePool is supposed to be when all balancers are disabled, but I think it does split evenly across the disks.

  2. DrivePool doesn't care about the drives in the pool having drive letters, so you'll be fine there. SnapRAID on the other hand needs to be able to access the drives - if you don't want drive letters for each, then you can mount each drive in an empty folder instead (as long as the drive is NTFS).

    I use the following folder structure:

    C:\Snapraid
        \Data
            \1
            \2
            \3
            \4
        \Parity
            \1
        \Cache
            \1
            \2

    Each of the numbered folders is a mount point for a disk - SnapRAID is then configured to point at each of the data drives and the parity drive. Best practice is to set the "root" of each data disk in the SnapRAID config to the hidden DrivePool poolpart directory: if/when you swap out a disk, DrivePool assigns a new ID to the poolpart directory, so with the poolpart as the root you only have to update that one path in the config, rather than having SnapRAID see every file on the disk as moved. Also make sure that DrivePool isn't going to do any balancing between the data drives, as that will just cause an excessive number of changes each time SnapRAID runs.
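
    For reference, those numbered folders can be turned into mount points either through Disk Management or from an elevated command prompt with mountvol - a rough sketch, with the volume GUID being a placeholder for your own:

    rem List volume GUIDs and their current mount points
    mountvol

    rem Mount a volume into one of the empty folders
    mountvol C:\Snapraid\Data\1 \\?\Volume{00000000-0000-0000-0000-000000000000}\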

    Since SnapRAID doesn't like it when you write to the disk whilst it's doing a sync/scrub, I also use two cache drives with the SSD Optimizer plugin. These are not protected by SnapRAID, but are instead set up as another Drivepool with duplication enabled, which is then part of the main pool, like so:

    SnapRAID (Pool) - Mounted as E:\
     - C:\Snapraid\Data\1
     - C:\Snapraid\Data\2
     - C:\Snapraid\Data\3
     - C:\Snapraid\Data\4
     - Data Cache (Pool)
        - C:\Snapraid\Cache\1
        - C:\Snapraid\Cache\2

    This means that new files are always written to the cache drives, and will not interfere with SnapRAID if it happens to be running, but those files still have protection against disk failure. The SSD Optimizer is the only balancer enabled on the main pool (well, except for the Scanner drive evacuation balancer), and balances into the main pool once a day, so the cache disks only need to be big enough to hold the largest file that is likely to be written to the pool.

  3. I'm running StableBit Scanner 2.5.4.3216. I've also had to change the driver I'm using back to the default Windows (non-UASP) driver, as the ASUS UAS Storage Driver seems to be a bit buggy, but this hasn't changed the lack of health data. With the ASUS UAS Storage Driver, attempting to do a surface scan causes the drive to stop responding after completing ~5% (read speed drops to 0 in Task Manager, but the disk remains 100% active), and it'll never pass that point even after stopping and restarting the scan. Switching back to using the default Windows driver (USB Mass Storage Device) allows the entire disk to be scanned without problems.

  4. I have an Icy Box IB-1816M-C31 USB 3.1 Gen 2 enclosure containing a 512GB Samsung 950 Pro NVMe SSD, for which Scanner cannot read the SMART data. This enclosure uses a JMicron JMS583 USB-NVMe bridge, and shows up as a UASP device using the ASUS UAS Storage Driver (removing this driver and using the default driver, which takes it out of UASP mode, doesn't change anything).

    CrystalDiskInfo is able to read the SMART data, as is smartctl when run with the -d sntjmicron option to correctly identify the USB device.

    C:\Program Files\smartmontools\bin>smartctl -a -d sntjmicron /dev/sdg
    smartctl 7.0 2018-12-30 r4883 [x86_64-w64-mingw32-w10-1803] (sf-7.0-1)
    Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
    
    === START OF INFORMATION SECTION ===
    Model Number:                       Samsung SSD 950 PRO 512GB
    Serial Number:                      S2GMNXAH113261W
    Firmware Version:                   2B0QBXX7
    PCI Vendor/Subsystem ID:            0x144d
    IEEE OUI Identifier:                0x002538
    Controller ID:                      1
    Number of Namespaces:               1
    Namespace 1 Size/Capacity:          512,110,190,592 [512 GB]
    Namespace 1 Utilization:            237,899,358,208 [237 GB]
    Namespace 1 Formatted LBA Size:     512
    Namespace 1 IEEE EUI-64:            002538 5161b033cd
    Local Time is:                      Sat Jul 20 17:41:36 2019 GMTST
    Firmware Updates (0x06):            3 Slots
    Optional Admin Commands (0x0007):   Security Format Frmw_DL
    Optional NVM Commands (0x001f):     Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat
    Maximum Data Transfer Size:         32 Pages
    
    Supported Power States
    St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
     0 +     6.50W       -        -    0  0  0  0        5       5
     1 +     5.80W       -        -    1  1  1  1       30      30
     2 +     3.60W       -        -    2  2  2  2      100     100
     3 -   0.0700W       -        -    3  3  3  3      500    5000
     4 -   0.0050W       -        -    4  4  4  4     2000   22000
    
    Supported LBA Sizes (NSID 0x1)
    Id Fmt  Data  Metadt  Rel_Perf
     0 +     512       0         0
    
    === START OF SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    
    SMART/Health Information (NVMe Log 0x02)
    Critical Warning:                   0x00
    Temperature:                        51 Celsius
    Available Spare:                    100%
    Available Spare Threshold:          10%
    Percentage Used:                    2%
    Data Units Read:                    40,017,269 [20.4 TB]
    Data Units Written:                 74,499,658 [38.1 TB]
    Host Read Commands:                 586,596,545
    Host Write Commands:                842,689,794
    Controller Busy Time:               3,890
    Power Cycles:                       3,841
    Power On Hours:                     10,578
    Unsafe Shutdowns:                   66
    Media and Data Integrity Errors:    0
    Error Information Log Entries:      13,386
    
    Error Information (NVMe Log 0x01, max 64 entries)
    Num   ErrCount  SQId   CmdId  Status  PELoc          LBA  NSID    VS
      0      13386     0  0x0b0e  0x4004  0x000            0     0     -
      1      13385     0  0x0609  0x4004  0x000            0     0     -
      2      13384     0  0x0508  0x4004  0x000            0     0     -
      3      13383     0  0x0407  0x4004  0x000            0     0     -
      4      13382     0  0x0306  0x4004  0x000            0     0     -
      5      13381     0  0x0a06  0x4004  0x000            0     0     -
      6      13380     0  0x010a  0x4004  0x000            0     0     -
      7      13379     0  0x0e06  0x4004  0x000            0     0     -
      8      13378     0  0x0d01  0x4004  0x000            0     0     -
      9      13377     0  0x0c00  0x4004  0x000            0     0     -
     10      13376     0  0x0b0f  0x4004  0x000            0     0     -
     11      13375     0  0x0a0e  0x4004  0x000            0     0     -
     12      13374     0  0x090d  0x4004  0x000            0     0     -
     13      13373     0  0x080c  0x4004  0x000            0     0     -
     14      13372     0  0x070b  0x4004  0x000            0     0     -
     15      13371     0  0x060a  0x4004  0x000            0     0     -
    ... (48 entries not shown)
    
    

    The stickied Direct I/O Test tool cannot read the SMART data either - it can only retrieve the following SSD data, no matter what method is used:

      Version:   40
    
      Model:   SamsungSSD 950 PRO 2B0Q
      Serial number:   DD56419883A90
    
      Bus type:   Usb
      Command queuing:   True
      Device type:   0x00
      Raw device properties:   0
      Removable media:   False
    
      Vendor ID:   Samsung
      Product ID:   SSD 950 PRO 2B0Q
      Product revision:   0204
    

    Every other option shows as a red cross, unless ScsiPassthroughJmicron is selected. In that case, the SMART status under Direct I/O becomes available (but none of the other options); however, it reports as Status: Failing.

    [attachment: cdi.PNG]

  5. I'm using DrivePool (without duplication) in combination with SnapRAID. This is working well, but there's an issue I'd like to solve - SnapRAID doesn't tolerate writing to the disks while it's running a sync (creating a snapshot). Most of the time this isn't a problem, as my daily sync is scheduled late in the morning, after any backups or similar would usually have completed. However, the sync periodically overlaps with other activity and fails - for example, when a full backup is being written (and so takes longer), or when the sync includes a full backup (which takes longer to calculate parity).

    What I thought I could do is add another disk and use the SSD Optimizer plugin to create a landing zone. This would be in the pool, but not protected by SnapRAID, which would allow writes to happen without interfering with SnapRAID if a sync is running. Most of the time I'd want this to balance pretty quickly, but only if a sync isn't running. What would be ideal is if there were a way to enable/disable balancing from the command line - then the script I use to run my daily sync could disable balancing as it starts and re-enable it once complete. Is this possible?
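
    To make the intent concrete, the wrapper would be something along these lines - purely a sketch, with the balancing toggle left as placeholders, since that's exactly the command I don't know exists:

    rem sync-wrapper.bat - sketch only; the two placeholder lines are hypothetical,
    rem as I don't know of a real DrivePool command for toggling balancing
    @echo off

    rem <placeholder: disable/pause automatic balancing>

    C:\Snapraid\snapraid.exe sync
    set SYNC_RESULT=%errorlevel%

    rem <placeholder: re-enable automatic balancing>

    exit /b %SYNC_RESULT%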
