
Posts posted by Shane

  1. Click on the affected drive in Scanner, then click the plus sign next to it. If any part of the disk is grey instead of green or red, click the Start Check button on the left (hover over the buttons to see their tooltips if needed) so that the scan can complete; this can take a long while. Once the whole disk is coloured there should be three green ticks below it, indicating the disk is now healthy. However, if there is a red cross instead of the first green tick, click the underlined text beside it; this should bring up the File Scan window. Start the file scan to proceed with checking for damaged files and attempting to recover them.

    https://stablebit.com/Support/Scanner/2.X/Manual?Section=File Recovery

     Note: after you've used Scanner to run file recovery, if the disk is still complaining and you want to force it to confirm its unused sectors as good or bad, open a command prompt as an administrator and run "chkdsk driveletter: /r" (where driveletter is the drive letter of the affected drive). This will also take a considerable amount of time.
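
     For example, if the affected drive is mounted as D:, you would run from an elevated (administrator) command prompt:

     chkdsk d: /r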


  2. Looks like it might still be an issue - depending on the utility you're trying to use.

     I'm running some tests on my home server using DrivePool 2.3.2.1493 (the latest version at the time of writing) with Read Striping temporarily enabled (it defaults to off). I haven't reproduced the issue with FLAC, though I am using the more recent 1.4.2 release. HashCheck v2.4.0 (gurnec) sometimes returns MISMATCH or UNREADABLE (occasionally, on the 1st and 3rd files of my first sample set of 79 files totalling 1.2GB) or just UNREADABLE (much more frequently, on my second sample set of 82 files totalling 185GB), while HashTools 4.6 (Binary Fortress) hasn't had any problems at all on any pass so far of either sample set. Update: I also tested the Windows file comparison tool FC.exe and found it was susceptible too, sometimes returning mismatching bytes or failing to find a file on the pool despite it being present.

     From what I can tell this issue does not affect normal file copy operations: both loop copy (a->b->a) and cascade copy (a->b->c) tests of the smaller sample set showed no corruption at all after 512 iterations. I am still running these tests with the larger sample set and will edit this post with the results after they finish in a day or two. Edit: the larger sample set also showed no corruption after 24 iterations (cascade; could not run more due to available space) and 77 iterations (loop).

     Given the above, my guess would be that different file-checking utilities use different functions/calls to read files, and some of those functions/calls may be too low-level or not designed to allow for virtual drives, although I'm not sure how that would result in different data being delivered from the virtual drive, and only sometimes at that. Some sort of race condition or timeout? Thankfully, as mentioned above, the Read Striping option is not enabled by default (which might also be why this issue hasn't seen much attention).
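
     For anyone wanting to test their own pool, here's a minimal batch sketch that uses Windows' built-in certutil to hash the same file over and over and report any pass whose hash differs from the first (the pool letter P: and the file P:\sample.bin are illustrative assumptions):

     @echo off
     setlocal enabledelayedexpansion
     rem Hash the same pooled file repeatedly; any MISMATCH line suggests
     rem inconsistent reads (e.g. with Read Striping enabled).
     set "FILE=P:\sample.bin"
     set "BASE="
     for /l %%i in (1,1,100) do (
         rem certutil prints a header, the hash, then a footer - keep only the hash line
         for /f "delims=" %%h in ('certutil -hashfile "%FILE%" SHA256 ^| findstr /v /c:"hash of" /c:"CertUtil"') do set "HASH=%%h"
         if not defined BASE (set "BASE=!HASH!") else if not "!HASH!"=="!BASE!" echo Pass %%i: MISMATCH !HASH!
     )
     echo Baseline hash was: !BASE!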

    @Christopher (Drashna) It might be a good idea to add a Y/N dialog warning (or at least update the tooltip) to the Read Striping option, perhaps along the lines of "some utilities that depend on reading files in chunks, e.g. for comparison or integrity checking, may experience issues with this enabled"?

  3. If DrivePool has duplicates available on other drives to replace damaged files that are detected and removed by chkdsk, then it will do so when it next runs a consistency check (which you can also trigger manually: Settings -> Troubleshooting -> Recheck duplication).

  4. DrivePool works in a way that doesn't keep an "original" and a "copy" - a duplicated file simply exists on multiple drives.

     Folder duplication has priority over file placement where they conflict - for example, if a folder should have 3x duplication but file placement is restricted to 2 drives, it will place 2 duplicates on those drives and another duplicate somewhere else - and will warn you with a banner in the File Placement tab that it was unable to fully comply with your placement rules.

  5. 1. If duplication is not set to real-time (Manage Pool -> Performance -> Real-time duplication), it will only run duplication every 24 hours or when a duplication level is enabled/changed, and the duplication pass can take some time to complete depending on the amount that needs to be duplicated. Could that be the issue?

     2. You may need to tick "Balancing plug-ins respect file placement rules" in Manage Pool -> Balancing -> Settings. Also, if you have duplication enabled for a folder, you might want to check whether that is overriding any File Placement rules that say to store that folder on only one drive (because then DrivePool is being told contradictory things).


  6. The latest 64-bit editions of both should work with it?

     Do note that DrivePool requires simple, non-dynamic NTFS (or ReFS) drives, though; as far as I know you can't pool drives that are created using Microsoft's Storage Spaces or Microsoft's software RAID (you can pool hardware RAID drives if they meet the aforementioned criteria).
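
     If you're not sure whether a disk is basic or dynamic, one quick way to check is DISKPART's "list disk" command from an administrator command prompt - e.g. diskpart /s z:\list-disks.txt (the filename is just an example) where list-disks.txt consists of

     list disk
     exit

     Dynamic disks are marked with an asterisk in the Dyn column of the output.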

  7. I don't think there's any option to adjust it. This probably merits being submitted via the Contact form as well; making the consistency-checking schedule controllable to reduce wear and resource usage is something I think the community would be interested in, but it can only be added by the developer.

  8. @Roger79 If the SSD Optimizer counts as "emptying" the SSD drives into the Archive drives, you might need the following under Balancing -> Settings -> File placement settings:

    • (uncertain) File placement rules respect...
    • (ticked) Balancing plug-ins respect file placement rules
    • (unticked) Unless the drive is being emptied.

    The other possibility is that File Placement being told "use drive X as permanent storage" isn't compatible with the SSD Optimizer being told "use drive X as temporary cache".

  9. Sounds like you'd want the following for each pool?

    1. Balancing -> Balancers -> only SSD Optimizer ticked
      • under Drives, tick the SSD / Archive drives as appropriate to set your cache drive
      • under Options, set sliders as desired (these only concern filling, they don't empty)
      • under Ordered placement, leave both Unduplicated and Duplicated unticked (or, if you want to use it, make sure "Leave the files as they are" is selected).
    2. Balancing -> Settings
      • under Automatic balancing, select Balance immediately, with the "Not more often than..." unticked
      • under Automatic balancing - Triggers, select 100% / unticked (as you want it always moved straight away)
      • under Plug-in settings, "Allow balancing plug-ins to force immediate balancing..." ticked (so it should move straight away anyway)
      • under File placement settings, should be irrelevant since you're not using the File Placement section.

    This should result in any files copied to the pool going via your SSD cache drive first then being immediately moved to the others. As always with "production" data, I recommend confirming the behaviour is as expected with a test pool.

  10. Marking a pooled drive read-only may cause DrivePool to not write to the pool at all? EDIT: it can cause DrivePool to complain that the disk is write-protected if an operation on the pool involves that disk.

    My suggestion (assuming otherwise defaults) would be:

    • Manage Pool -> Balancing... -> Balancers tab -> Ordered File Placement plugin -> move the problem drive to the bottom -> select "Only control new file placement".
    • Manage Pool -> Balancing... -> Settings tab -> tick "File placement rules respect real-time file placement limits set by the balancing plugins"

     This should still allow the other disks in the pool to be written to.

  11. Given you've already chosen not to keep personal data, at this point I'd personally do a bare-metal install of Windows - that is, using external install media (e.g. USB or DVD) and choosing to delete the existing OS partition entirely so that it doesn't retain any system files or configuration at all from the previous install.

  12. Fair enough. Hmm. In the meantime, I did a quick search and you can use a DISKPART script to make a disk volume read-only or writable via the command line (DISKPART needs to be run as an administrator), for example:

     diskpart /s z:\make-readonly.txt
     diskpart /s z:\make-writable.txt

    where make-readonly.txt would be a text file consisting of

    select volume driveletter
    attributes volume set readonly
    exit

    and make-writable.txt would be a text file consisting of

    select volume driveletter
    attributes volume clear readonly
    exit

    where "driveletter" would be the drive letter of your cloud drive (e.g. "select volume p").

    Give it a try on a test cloud drive? It doesn't work with DrivePool, but maybe it will work with CloudDrive?

  13. You could use the Contact form to submit a feature request to StableBit for such a command line / scheduled operation, but I'm not sure how effective it would be: if your computer can be scripted or scheduled to toggle the read-only state of a cloud resource, then ransomware could potentially be written to do the same.

    Edit: I can see some value in a "lock this drive (make it read-only) until I manually enter a password to unlock it" command.
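
     As a rough illustration of that idea, here's a minimal batch sketch building on the DISKPART scripts above (the script paths and the password are placeholders, and it's no real protection, since anything running as administrator could simply clear the read-only flag itself):

     @echo off
     rem Sketch: set the volume read-only, then require a password before
     rem clearing it again. Script paths and password are placeholders.
     diskpart /s z:\make-readonly.txt
     :ask
     set /p PASS=Enter password to unlock drive: 
     if not "%PASS%"=="mysecret" goto ask
     diskpart /s z:\make-writable.txt
     echo Drive is writable again.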

  14. The way DrivePool's duplication works, there isn't really an "original" and a "copy" of a file; the file just exists on two or more drives at the same time (exception: if real-time duplication isn't turned on, the file starts off on only one drive until the next scheduled duplication pass).
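
     You can see this by looking inside the hidden PoolPart folder on each pooled drive - for example (the drive letters, folder names and file path below are illustrative; the real folders are named PoolPart.<guid>):

     rem the same file, present in full on two separate pooled drives
     dir /a "D:\PoolPart.AAAA\Music\song.flac"
     dir /a "E:\PoolPart.BBBB\Music\song.flac"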

    You can use the Ordered File Placement balancer (OFP) to fill drives in a pool in a certain order, so long as there's room to do so. So a file might end up on drive 1 and 2, or 1 and 3, or 2 and 3, etc, but OFP can't ensure a file exists on two particular separate groups of drives. OFP can use separate priority lists for non-duplicated and duplicated files, so you could have non-duplicated and duplicated files filling the drives of a pool in different directions.

    You can use the Drive Usage Limiter balancer (DUL) to place files only on certain drives, so long as there's room to do so. Like the OFP, the DUL can be set to restrict non-duplicated and duplicated files differently but can't ensure a file exists on separate groups of drives.

     If however what you want is to ensure that a group of drives never holds the only copies of your files, use a super-pool to duplicate the files (and/or keep backups).

  15. That would work (presuming you have enough free space).

    Alternatively you could use a USB drive dock to connect and add one of the new drives before removing one of the old drives, then repeat the process with the other new and old drives. Though this assumes you have a spare USB port and a USB dock to plug into it.

     There are also manual tricks you can use to replace pooled drives with new ones more quickly (it still takes a while), but they require a certain level of "knowing what you're doing" in case anything doesn't go according to plan, as they involve copying from inside the pooled drives' hidden PoolPart folders.
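
     As a rough sketch of the core step (assuming the old drive is D:, the new drive already added to the pool is E:, and the PoolPart folder names below stand in for the real hidden PoolPart.<guid> folders), run from an elevated command prompt:

     robocopy "D:\PoolPart.AAAA" "E:\PoolPart.BBBB" /e /copyall /dcopy:t /xj /r:1 /w:1

     You'd want DrivePool to not be moving files around while you do this, and to re-measure the pool afterwards, so do check the full procedure before attempting it.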
