Ignore drive marked as Damaged with virtually no damage
Shane replied to Parallax Abstraction's question in Nuts & Bolts
@Christopher (Drashna) Any ideas?
Ignore drive marked as Damaged with virtually no damage
Shane replied to Parallax Abstraction's question in Nuts & Bolts
If you manually put a test file into that disk's poolpart and do a Re-measure, does it evacuate the file?
-
Ignore drive marked as Damaged with virtually no damage
Shane replied to Parallax Abstraction's question in Nuts & Bolts
Hmm. Scanner checks by disk, not by volume, so I suspect the old trick of splitting the drive into multiple partitions with the bad sectors left in unallocated space won't work:

Disk: |<- volume 1 -> <- unallocated space containing 8 bad sectors -> <- volume 2 ->|

The only other option I can think of is telling Scanner NOT to scan the surface of that disk automatically: in the Scanner GUI, click that disk, click "Disk settings" (it has the three green horizontal bars next to it), tick "Never scan surface automatically", and click Save. Then expand the disk's entry so you can click on Edit blocks and select "Mark all Unreadable blocks Unchecked". Scanner should then still alert on any new SMART errors, but it'll be up to you to remember to manually scan that disk from time to time.
Wouldn't work. For example, say our SSD drives are D and E and our HDD drives are F and G, so we pool D and E as S, then pool S, F and G as T, and enable duplication on T. This effectively gets us D:\poolpart.SD and E:\poolpart.SE and S:\poolpart.TS (and thus D:\poolpart.SD\poolpart.TS and E:\poolpart.SE\poolpart.TS) plus F:\poolpart.TF and G:\poolpart.TG, which means any data placed on S never goes on the HDDs (regardless of whether duplication is enabled).

If you did it the other way around (so the pool of HDDs is added to the pool of SSDs), you'd at least get data placed on the SSD pool going onto the HDDs sometimes, but you wouldn't be able to guarantee that every file ends up on both a HDD and a SSD unless you started manually controlling it via (I suspect) at least as many placement rules as you have SSDs, which rapidly defeats the purpose of automating things simply.

FWIW, I use the volume labels of the drives to indicate their position and serial. E.g. a drive's label might be cardC-portP-bayB-diskD. That way, if all the drives whose labels began with card1 suddenly showed in the GUI as missing from my pool, I'd have good reason to suspect a problem with that particular card.
-
Hi, the PoolPart.* folders cannot be renamed. You could create a shorter symbolic link or junction to a poolpart, but since it is generally not recommended to access poolparts directly (except, e.g., for initial seeding), is there a particular reason you need to?
-
-
Just to explicitly confirm, you did then disable Read Striping and further attempts to re-rip and recopy have all been successful since?
-
Hmm. Some digging around suggests you might need to add "DisablePayloadSigning": false to your particular provider's section in the "%programdata%\StableBit CloudDrive\ProviderSettings.json" file (put a comma after the word "false" if it is not the last entry in the section).
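A rough sketch of what that section might end up looking like — the provider name and the other entry here are purely hypothetical placeholders; only the "DisablePayloadSigning" line comes from the suggestion above:

```json
{
  "SomeProvider": {
    "SomeExistingSetting": true,
    "DisablePayloadSigning": false
  }
}
```

Note the comma after the existing entry, since "DisablePayloadSigning" is the last entry in this example.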
-
Are other SMART utilities (e.g. CrystalDiskInfo) also seeing the wrong age?
-
I don't think the GUI should be that slow (even with that many drives). You could make a note of your settings and try a clean install of the DrivePool software or you could open a support ticket with StableBit to see if they can figure out what is causing it?
-
The issue with OneDrive referenced via their link to reddit is due to OneDrive relying on the expected NTFS behaviour of FileIDs persisting across reboots, which DrivePool critically does not implement, rather than relying on a file's "last modified" time stamp. If that were the cause with regard to the new version of Synology Drive (a switch to relying on FileIDs), then I would expect that you should (only) be seeing resyncing of unmodified files upon rebooting the DrivePool machine (at which point it would presume all the files had changed and try to resync everything, which could take considerable time).

EDIT: The reddit post does link to a post on this forum that discusses both the problem with FileID and a separate problem with DrivePool's interaction with the file change notification API, and if the new version of Synology Drive is using the latter, this could also be a source of compatibility issues.

BTW, re "Z:\DrivePool\" — are you linking or mounting your pool drive as a folder in another drive? I notice Synology claims that "Sync or Backup Tasks are not supported if the file and drive types are:" [...] "Windows symbolic links", and "If local folder contains a mount point, files inside the mount point might not be synced or backed up because Synology Drive cannot detect file changes inside a mount point." Did you try this and did it function correctly?
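If you want to observe the FileID behaviour for yourself, a small script can record each file's ID (on Windows, os.stat().st_ino returns the NTFS FileID) and compare a snapshot taken before a reboot with one taken after; on a plain NTFS volume the IDs should match, while on a DrivePool volume they reportedly may not. A minimal sketch — the snapshot filename and pool path are placeholders, not anything from DrivePool itself:

```python
import json
import os


def snapshot_file_ids(root):
    """Map each file path under root to its file ID (the NTFS FileID on Windows)."""
    ids = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            ids[path] = os.stat(path).st_ino
    return ids


def compare_snapshots(old, new):
    """Return paths present in both snapshots whose file IDs differ."""
    return [p for p in old if p in new and old[p] != new[p]]


if __name__ == "__main__":
    SNAPSHOT = "fileids.json"                      # placeholder snapshot location
    current = snapshot_file_ids(r"Z:\DrivePool")   # placeholder pool path
    if os.path.exists(SNAPSHOT):
        with open(SNAPSHOT) as f:
            previous = json.load(f)
        changed = compare_snapshots(previous, current)
        print(f"{len(changed)} file(s) changed ID since last snapshot")
    with open(SNAPSHOT, "w") as f:
        json.dump(current, f)
```

Run it once, reboot, then run it again: a flood of changed IDs would point at the FileID problem rather than the notification API one.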
-
Yes, sounds reasonable.
-
Thank you. Would it be possible for you to run a separate SHA-1 checksumming utility against both your seeding and non-seeding files in the pool, at least thrice before and at least thrice after a rebalance, from the same VM as DrivePool? Optionally, if practical, also from the same VM as qbittorrent (it can be a different container)? This is to check whether:
- the utility produces consistent results (i.e. the multiple runs before the rebalance should all return the same result if only the rebalance is at fault, ditto after, even if the results before and after differ)
- the utility detects the same changes in checksums that qbittorrent is detecting (i.e. that it is not a bug in that build of qbittorrent/libtorrent)
- closed files are affected like seeding files
- (optional) the checksumming is affected by being performed over the network

I'm a bit busy with work at present, but I'll see if I can get some time to set up some tests this weekend with DrivePool and qbittorrent on my own machines. I haven't noticed any checksum changes from rebalancing (note: I use syncthing to do mirroring and versioning between a pair of active pools, and if rebalancing caused damaged files I would have expected that to show up), but I don't use qbittorrent and my instances of DrivePool run on metal rather than in a VM.

P.S. Regarding potential bitrot of old data in general: for long-term static files I use a script with MultiPar to build a recovery file for each of them, then every once in a long while I run a script using MultiPar in verify mode to check for damage. May or may not be suitable for your use case?
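If a ready-made checksumming utility isn't handy, a few lines of Python can produce the per-file SHA-1 list for comparison between runs. This is just a generic sketch, not anything DrivePool-specific; point it at your pool path:

```python
import hashlib
import os


def sha1_of_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-1 in 1 MiB chunks so large files aren't loaded whole."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def checksum_tree(root):
    """Return {relative_path: sha1_hexdigest} for every file under root."""
    sums = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            sums[os.path.relpath(path, root)] = sha1_of_file(path)
    return sums
```

Run checksum_tree over the same folder before and after the rebalance (saving each result to a file) and diff the two outputs; any file whose hash changes without having been written to is a candidate for the corruption you're seeing.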
-
Ouch. Just to check I'm understanding this:
1. You have files in your pool that have finished downloading via qBittorrent (no more writes) and are now seeding for others (they remain open for reads).
2. Every time a rebalance is performed while files are seeding, qBittorrent detects that the checksums of all seeding files have changed and redownloads the damaged parts.
3. You don't know whether files that weren't being seeded at the time are or aren't being damaged by the rebalancing.
Is that exactly correct? Also:
4. Is the qBittorrent application being run on the same machine as DrivePool, or is it accessing the pool over a network?
5. Which version of qBittorrent are you running (latest stable is 5.1.4)?
6. Which version of DrivePool are you running (latest stable is 2.3.13.1687)?
-
Hi dropshadow, if you're not needing them you can un-tick them; it's less work for the balancer. (E.g. I currently have ticked on my main pool, in order: StableBit Scanner, Prevent Drive Overfill and Duplication Space Optimizer.)
-
I was going to suggest that maybe the DrivePool service store might be borked, but "installed a clean copy of windows" implies that would've been wiped clean too. All I can suggest at this point is opening a support ticket with StableBit; hopefully they can figure it out.
