
number_one

Members · Posts: 4 · Days Won: 3

Everything posted by number_one

  1. I also don't really have the bandwidth to set up a test at the moment, but my issues were easily reproducible by using rsync through a Git for Windows Bash environment. I was using rsync to copy files from a DrivePool volume to a remote machine via SSH (a rough example of the command is below). Granted, that's not a super common scenario, but it's also not at all rare for developers or for anyone who also works in Linux or Mac environments. I don't doubt that this is a tricky issue to debug, but with all the scenarios detailed here I find it hard to believe that no one on the development team can reproduce it at all.
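     Roughly, the transfers looked like the following, run from a Git Bash prompt (the paths and hostname here are placeholders, not my exact setup):

         rsync -avh --progress /d/Pool/Media/ user@server:/backup/media/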
  2. Yes, I haven't heard of any update from Covecube about a resolution to this (or even that they're working on it), so you should DEFINITELY disable read striping. Quite frankly, I'm a bit alarmed that there seems to be no official acknowledgement of the issue at present; the only post in this thread from an actual employee is from nearly five years ago. I understand that it likely takes specific edge cases to be affected, but those edge cases are clearly not rare or hard to demonstrate. In my case, all it took was using rsync through a Git Bash environment for the bug to cause massive corruption, and it is easily repeatable: with read striping enabled, essentially every rsync transfer from a DrivePool volume produced at least some corruption.
  3. This issue is definitely NOT resolved and is really extremely serious. I can't believe it hasn't gotten more attention given the high potential for data corruption. I'm on v2.3.11.1663 (latest at this time) and was highly perplexed to see random corruption throughout thousands of files I copied to a Linux server via an rsync command. This sent me on a wild goose chase looking into bad RAM modules and bugs in rsync, but it is now clear that the issue was DrivePool all along (it didn't help that I actually did have some bad RAM on the Linux server, but that was a red herring, as it has since been replaced with ECC RAM that has been tested).
     After noticing that the source data on the DrivePool volume "seemed" valid while thousands of the files copied to the Linux server were corrupt, I spent weeks trying to figure out what was going on. To narrow down the issue I started working with individual files, in particular some MP3 files that were corrupt on the remote side. When I re-copied a file via rsync with the --checksum parameter, it would always report the mismatch and act like it was re-copying the file, but sometimes the file would STILL be corrupt on the remote side. WTF? Apparently the bug was causing the rsync re-copy to send yet another corrupted version of the file, though it would occasionally copy a good version. Super weird and very inconsistent.
     So then I wrote a Node.js script to iterate through a folder and generate/compare MD5 hashes of the source files (on the DrivePool volume) and the target files (on the remote Linux server); a simplified sketch of it is below. I started with a small dataset of around 4000 files (22 of which were corrupt). Things got even weirder, with multiple runs of the script flagging different files as mismatched, and I realized it was frequently generating an incorrect hash for the SOURCE file. The results could differ on every run; sometimes hundreds of files would show a hash mismatch.
     It's only been a short time since I disabled read striping, so I can't verify that it has fixed everything, but with read striping disabled I haven't yet experienced a single corrupt transfer. An rsync run comparing by checksum completed and fixed all 22 remaining corrupt files, and a couple more runs of my hash-compare script over the small 4000-file dataset show no mismatches.
     The only thing preventing this from becoming an utter disaster is that I hadn't yet deleted the source material after copying it to the remote server, so I still have the original files to compare against while repairing the whole mess. However, some of the files were already reorganized on the remote server, so it is still going to take a lot of manual work to get everything fixed. Sorry for the rant, but if the devs are not going to actually fix DrivePool I'm about done with this software. There are too many "weird" things going on (not just this particularly bad bug).
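     In case it helps anyone check their own pool, here is a minimal sketch of that kind of hash-compare script. It is not my original script: the directory paths are placeholders, and it assumes the copy on the Linux server is reachable as a locally mounted path (e.g. over SMB/NFS) rather than hashing the remote files over SSH.

        // Minimal sketch: recursively MD5-hash every file under SRC_DIR and compare it
        // with the file at the same relative path under DST_DIR. Paths are placeholders.
        const fs = require('fs');
        const path = require('path');
        const crypto = require('crypto');

        const SRC_DIR = 'D:\\Pool\\Music';            // DrivePool source (placeholder)
        const DST_DIR = '\\\\server\\backup\\Music';  // mounted copy of the target (placeholder)

        // Hash one file via a stream so large files don't need to fit in memory.
        function md5(file) {
          return new Promise((resolve, reject) => {
            const hash = crypto.createHash('md5');
            fs.createReadStream(file)
              .on('data', (chunk) => hash.update(chunk))
              .on('end', () => resolve(hash.digest('hex')))
              .on('error', reject);
          });
        }

        // Recursively yield every file path under dir.
        function* walk(dir) {
          for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
            const full = path.join(dir, entry.name);
            if (entry.isDirectory()) yield* walk(full);
            else if (entry.isFile()) yield full;
          }
        }

        (async () => {
          let mismatches = 0;
          for (const srcFile of walk(SRC_DIR)) {
            const rel = path.relative(SRC_DIR, srcFile);
            const dstFile = path.join(DST_DIR, rel);
            if (!fs.existsSync(dstFile)) {
              console.log(`MISSING  ${rel}`);
              continue;
            }
            const [srcHash, dstHash] = await Promise.all([md5(srcFile), md5(dstFile)]);
            if (srcHash !== dstHash) {
              mismatches++;
              console.log(`MISMATCH ${rel}  src=${srcHash}  dst=${dstHash}`);
            }
          }
          console.log(`Done. ${mismatches} mismatched file(s).`);
        })();

     Save it as something like compare-hashes.js and run it with node; any MISSING or MISMATCH lines are candidates for re-copying (e.g. with rsync --checksum). Running it a few times against the same pool is also a quick way to spot the read-striping problem, since the SOURCE hashes themselves should never change between runs.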
  4. Unfortunately this is still an issue in 2022 with v2.5.7.3565. I've been unable to safely remove most USB flash drives for over a year now and finally got annoyed enough to look into it. As soon as I stop the service, ejecting works fine. "Do not automatically scan removable media" is selected, and I also have automatic scanning disabled, so it shouldn't be doing ANYTHING for ANY disk, yet the issue persists. I'm going to have to leave the service disabled and only run it manually. That kind of defeats the purpose, though, and doesn't offer much beyond what various free SMART monitoring apps provide; in fact, if I have to enable it manually, I don't even get basic ongoing SMART monitoring (only snapshots when it's run by hand). It would certainly be nice if the problem with removable media could be fixed once and for all so the software could be used for its intended purpose.