This issue is definitely NOT resolved and is extremely serious. I can't believe it hasn't gotten more attention given the high potential for data corruption. I'm on v2.3.11.1663 (latest at this time) and was baffled to find random corruption throughout thousands of files I had copied to a Linux server via rsync. This sent me on a wild goose chase investigating bad RAM modules and bugs in rsync, but it is now clear that the issue was DrivePool all along. (It didn't help that I actually did have some bad RAM on the Linux server, but that was a red herring, as it has since been replaced with ECC RAM that has been tested.)
After noticing that the source data on a DrivePool volume "seemed" valid while thousands of the files copied to the Linux server were corrupt, I spent weeks trying to figure out what was going on. To narrow down the issue I started working with individual files, in particular some MP3 files that were corrupt on the remote side. When I re-copied a file via rsync with the --checksum parameter, it would always report the mismatch and act like it was re-copying the file, but then sometimes the file would STILL be corrupt on the remote side. WTF? Apparently this bug was causing the rsync re-copy to send yet another corrupted version of the file, though it would occasionally copy a good version. Super weird and very inconsistent.
So then I wrote a Node.js script to iterate through a folder and generate/compare MD5 hashes of source files (on the DrivePool volume) and target files (on the remote Linux server). I started with a small dataset of around 4000 files (22 of which were corrupt). Things got even weirder: multiple runs of the script flagged different files as having mismatched hashes, and I realized it was frequently generating an incorrect hash for the SOURCE file. The results could differ on every run; sometimes hundreds of files would show a hash mismatch.
It's only been a short time since I disabled read-striping, so I can't verify that it has fixed everything, but with read-striping disabled I haven't yet experienced a single corrupt transfer. An rsync run with --checksum completed and repaired all 22 remaining corrupt files, and another couple of runs of my hash-compare script over the small 4000-file dataset show no mismatches.
The only thing preventing this from becoming an utter disaster is that I hadn't yet deleted the source material after copying it to the remote server, so I still have the original files to compare against while repairing the whole mess. However, some of the files had already been reorganized on the remote server, so it's still going to take a lot of manual work to get everything fixed.
Sorry for the rant, but if the devs are not going to actually fix DrivePool I'm about done with this software. There are too many "weird" things going on (not just this particularly bad bug).