Reputation Activity
-
Shane got a reaction from AtariBaby in Best way to cut and paste?
If you're not doing anything complicated with your pool, that should be it: Manage Pool -> Re-measure, and if you're using duplication, Cog Icon -> Troubleshooting -> Recheck duplication in case it doesn't happen automatically (if you've got duplication turned on for the whole pool, the big pie chart in the GUI should show no unduplicated data at the end).
You can also inspect/alter the duplication levels of folders and subfolders manually via the table provided by Manage Pool -> File Protection -> Folder duplication if you only want duplication for parts of the pool.
-
Shane got a reaction from mikernet in Duplicating a pool then keeping it in sync across a network
Hi mikernet. For whatever it is worth, I am successfully using SyncThing with my primary pool (currently only ~ 16TB before duplication) without any fine-tuning, but the content is reasonably static (e.g. photo albums, family videos, financial records, various documents) and I have DrivePool's Read Striping disabled.
Note that SyncThing is not intended for situations where changes are likely to occur faster than the ability to distribute those changes between devices (e.g. if device A renames a file and device B also renames that file before it can receive the update from device A, then SyncThing is going to complain).
Diving in...
DrivePool FileID Notification bug: as I understand it, this risk involves both excessive notifications (reads being classified as writes) and insufficient notifications (e.g. moves not being reported), to which applications that rely on FileID notifications are vulnerable. The original thread for reference.
SyncThing uses two independent mechanisms to detect changes, a scheduled full scan (default hourly) and a watcher that listens for notifications (default enabled); link to documentation. This means SyncThing's scheduled scans will catch any moves that are missed by its watcher, resulting in at most a delay of one hour (or as configured) to update other nodes. I do not know how well its watcher would handle excessive notifications if those occur, but it can be easily disabled if the load becomes a problem.
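To illustrate that dual mechanism, here's a rough Node.js sketch of the same idea - not Syncthing's actual code, and the folder path and interval are made up:
```
// Fast path: a filesystem watcher reacts to change notifications as they
// arrive, but notifications can be missed (or, per the bug above, excessive).
// Slow path: a periodic full scan bounds how stale a missed change can get.
const fs = require("fs");
const path = require("path");

const ROOT = "D:\\stufftosync";          // hypothetical folder being synced
const SCAN_INTERVAL_MS = 60 * 60 * 1000; // hourly, like Syncthing's default

fs.watch(ROOT, { recursive: true }, (event, filename) => {
  console.log(`watcher: ${event} on ${filename}`);
});

// Walk the whole tree so anything the watcher missed (e.g. an unreported
// move) is still noticed within one interval.
function fullScan(dir) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const p = path.join(dir, entry.name);
    if (entry.isDirectory()) fullScan(p);
    else console.log(`scan: ${p} modified ${fs.statSync(p).mtime}`);
  }
}
setInterval(() => fullScan(ROOT), SCAN_INTERVAL_MS);
```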
DrivePool FileID Generation bug: as I understand it, this risk involves DrivePool's use of an incrementing-from-zero counter that resets on reboot, resulting in data corruption/loss for any application (local or remote) that relies on FileID having cross-boot permanence for identifying files (q.v. above original thread link).
As far as I have determined SyncThing should not be affected by this bug so long as its watcher is not being used to monitor pool content via a remote share (e.g. if for some strange esoteric reason you mapped a share "\\pc\pool" to a drive "X:" and then told SyncThing to add "X:\stufftosync" as a folder instead of following the instruction to only add local folders). I'm... actually not sure if the watcher can even do that, but if so that's the only risk.
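If you want to watch FileIDs yourself, Node.js happens to expose them on Windows as the `ino` field of a BigInt stat, as I understand it (a quick sketch; the path is hypothetical):
```
const fs = require("fs");

const file = "P:\\some\\pooled\\file.txt"; // hypothetical file on a pool drive
const { ino } = fs.statSync(file, { bigint: true }); // NTFS FileID on Windows
console.log(`FileID of ${file}: ${ino}`);
// On a plain NTFS volume this number survives reboots; per the bug described
// above, on a pool it may not - which is exactly what breaks software that
// treats FileID as permanent.
```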
DrivePool Read Striping bug: as I understand it, this risk involves DrivePool sometimes returning invalid hashes or even invalid content to certain applications when this feature is enabled to read a duplicated file via concurrent access to the drives that file is stored on. Some systems apparently do not experience this bug despite using the same application as others. The original thread for reference.
I have NOT tested striping with SyncThing, as I keep DrivePool's Read Striping pre-emptively disabled rather than have to exhaustively test all the apps I use (and given the possibility that the problem could involve some kind of race condition, which is a PITA to avoid false negatives on, I am happy to continue erring on the side of safety over performance).
-
Shane reacted to number_one in Pool file duplication causing file corruption under certain circumstances
This issue is definitely NOT resolved and is really extremely serious. I can't believe it hasn't gotten more attention given the high potential for data corruption. I'm on v2.3.11.1663 (latest at this time) and was highly perplexed to see random corruption throughout thousands of files I copied to a Linux server via an rsync command. This sent me on a wild goose chase looking into bad RAM modules and bugs in rsync, but it is now clear that the issue was DrivePool all along (it didn't help that I actually did have some bad RAM in the Linux server, but that was a red herring, as it has since been replaced with ECC RAM that has been tested).
After noticing that the source data on a DrivePool volume "seemed" valid but thousands of the files copied to the linux server were corrupt I spent weeks trying to figure out what was going on. Trying to narrow down the issue I started working with individual files. In particular I looked at some MP3 files that were corrupt on the remote side. When I would re-copy a file via rsync with the --checksum parameter it would always report the mismatch and act like it was re-copying the file, but then sometimes the file would STILL be corrupt on the remote side. WTF? Apparently this bug was causing the rsync re-copy to send yet another corrupted version of the file to the remote side, though it would occasionally copy a good version. Super weird and very inconsistent.
So then I wrote a Node.js script to iterate through a folder and generate/compare MD5 hashes of source files (on the DrivePool volume) and target files (on the remote linux server). I started with a small dataset of around 4000 files (22 of which were corrupt). Things got even weirder with multiple runs of this script showing different files with mismatched hashes, and I realized it was frequently generating an incorrect hash for the SOURCE file. There could be different results each time the script was run. Sometimes hundreds of files would show a hash mismatch.
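For reference, a stripped-down sketch of that kind of compare script (not my exact code, and the two roots here are just examples - my real target was a remote Linux server, simplified here to a second locally accessible tree):
```
const crypto = require("crypto");
const fs = require("fs");
const path = require("path");

const SRC = "P:\\music"; // tree on the DrivePool volume
const DST = "Z:\\music"; // copy of the same tree (e.g. a mapped share)

// MD5 of a whole file; rereading the same source file should always produce
// the same hash - under the read-striping bug it sometimes didn't.
function md5(file) {
  return crypto.createHash("md5").update(fs.readFileSync(file)).digest("hex");
}

// Recursively list every file under a directory.
function walk(dir, out = []) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const p = path.join(dir, entry.name);
    entry.isDirectory() ? walk(p, out) : out.push(p);
  }
  return out;
}

for (const src of walk(SRC)) {
  const dst = path.join(DST, path.relative(SRC, src));
  if (!fs.existsSync(dst)) console.log(`missing: ${dst}`);
  else if (md5(src) !== md5(dst)) console.log(`hash mismatch: ${src}`);
}
```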
It's only been a short time since I disabled read striping, so I can't yet verify that it has fixed everything, but with read striping disabled I haven't experienced a single corrupt transfer. An rsync command to compare based on checksum completed and fixed all 22 remaining corrupt files, and another couple of runs of my hash-compare script on the small 4000-file dataset show no hash mismatches.
The only thing preventing this from becoming an utter disaster is that I hadn't yet deleted the source material after copying to the remote server, so I still have the original files to compare and try to repair the whole mess. However, some of the files were already reorganized on the remote server, so it is still going to take a lot of manual work to get everything fixed.
Sorry for the rant, but if the devs are not going to actually fix DrivePool I'm about done with this software. There are too many "weird" things going on (not just this particularly bad bug).
-
Shane got a reaction from AtariBaby in Best way to cut and paste?
Hi, if you're moving data into the pool from outside it but the source drive is one you've already added to the pool, then cut-and-pasting directly into that drive's hidden poolpart will:
* be immensely quicker (because Windows will do a 'move' rather than a 'copy to destination, remove from source' operation)
* skip all the normal checks and rules (like updating its statistics, calculating any balancing/placement requirements, warning if that content overlaps with existing folders/files on other drives in the pool, performing any real-time duplication, etc), so you have to check for yourself and run a re-measure / duplication check / anything else needed.
Basically it's like driving a car - leave it in automatic and it handles all the gear shifts for you; put it in manual and you can get more performance, but it's up to you to know whether it's safe to use gear X at speed Y.
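To see the move-versus-copy distinction concretely, here's a small Node.js illustration (hypothetical paths; a real poolpart folder name has a GUID suffix, elided here):
```
const fs = require("fs");

try {
  // Same volume: a rename just updates directory metadata, so it is
  // near-instant regardless of how much data is being "moved".
  fs.renameSync("D:\\Media", "D:\\PoolPart.xxxx\\Media");
} catch (err) {
  if (err.code === "EXDEV") {
    // Different volume: a rename is impossible - every byte must be copied
    // and the source then deleted, which is what Explorer's cut-and-paste
    // quietly does for you across drives.
    console.log("cross-device move: copy + delete required");
  } else {
    throw err;
  }
}
```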
-
Shane reacted to GoreMaker in Pool file duplication causing file corruption under certain circumstances
100%
Once I disabled it, all my data corruption issues vanished. I haven't turned it back on since. It was a nightmare restoring all my backups and re-doing days of work from scratch.
-
Shane reacted to number_one in Pool file duplication causing file corruption under certain circumstances
Yes, I haven't heard of any update from Covecube about a resolution to this (or even that they're working on it), so you should DEFINITELY disable read striping. I'm quite frankly a bit alarmed that there seems to be no official acknowledgement of the issue at present. The only post in this thread from an actual employee is from nearly five years ago. I do understand that it likely takes specific edge cases to be affected by the issue, but those edge cases are clearly not rare or hard to demonstrate. In my case all it took was using rsync through a Git Bash environment for the bug to cause massive corruption. And it is easily repeatable: with read striping enabled, essentially every rsync from a DrivePool volume produced at least some corruption.
-
Shane reacted to dominator99 in stablebit drivepool reports a disk as missing
Hi All
This post is more information than asking a question.
I run a Windows 10 backup PC with 13 HDDs; randomly, DrivePool & Scanner would report a missing disk. A Windows reboot usually fixed the problem, but recently it had been occurring more frequently. I ran Seagate diagnostics & the HDD passed all tests. I swapped out data & power cables & tried connecting the HDD to another SATA port; the problem was still there.
Finally, I upgraded to a larger PSU, 700W to 850W, & the problem has not reappeared since.
The big difference between the 700W and the 850W PSU was the +12V rail: 48A max in the 700W PSU versus 70.8A max in the 850W PSU. (A 3.5" HDD can draw roughly 2A at +12V during spin-up, so 13 drives starting at once could transiently pull around 25A from that rail before the rest of the system is counted, which would explain why the extra headroom helped.)
-
Shane reacted to 51d in Drive not detected as part of pool
I recovered it by running a regular 30-second disk drive scan from the Properties tab, which I find incredible to say the least! Thanks for the help though, as the fix was easier than I initially thought.
```
Chkdsk was executed in scan mode on a volume snapshot.
Checking file system on M:
Volume label is EXOS_SATA_2.
Stage 1: Examining basic file system structure ...
136448 file records processed.
File verification completed.
Phase duration (File record verification): 3.77 seconds.
11868 large file records processed.
Phase duration (Orphan file record recovery): 5.13 milliseconds.
0 bad file records processed.
Phase duration (Bad file record checking): 0.64 milliseconds.
Stage 2: Examining file name linkage ...
128 reparse records processed.
149924 index entries processed.
Index verification completed.
Phase duration (Index verification): 11.28 seconds.
Phase duration (Orphan reconnection): 12.50 milliseconds.
Found lost file "<0x1,0x28>"; requesting reconnection to index "$I30" of directory "\ <0x5,0x5>"
... repaired online.
Phase duration (Orphan recovery to lost and found): 12.09 milliseconds.
128 reparse records processed.
Phase duration (Reparse point and Object ID verification): 4.82 milliseconds.
Stage 3: Examining security descriptors ...
Security descriptor verification completed.
Phase duration (Security descriptor verification): 22.34 milliseconds.
6739 data files processed.
Phase duration (Data attribute verification): 7.89 milliseconds.
CHKDSK is verifying Usn Journal...
Usn Journal verification completed.
Windows has found problems and they were all fixed online.
No further action is required.
15259645 MB total disk space.
12984412 MB in 73609 files.
28672 KB in 6740 indexes.
679835 KB in use by the system.
65536 KB occupied by the log file.
2274541 MB available on disk.
4096 bytes in each allocation unit.
3906469375 total allocation units on disk.
582282643 allocation units available on disk.
Total duration: 15.12 seconds (15126 ms).
----------------------------------------------------------------------
Stage 1: Examining basic file system structure ...
Stage 2: Examining file name linkage ...
CHKDSK is scanning unindexed files for reconnect to their original directory.
Recovering orphaned file PoolPart.81f616f5-be54-46c7-93f3-f25ebce744e6 (28) into directory file 5.
Recovering orphaned file PoolPart.81f616f5-be54-46c7-93f3-f25ebce744e6 (28) into directory file 5.
1 unindexed files recovered to original directory.
Stage 3: Examining security descriptors ...
```
-
Shane reacted to jamesbk11218 in NTFS Permissions and DrivePool
I know this is old but just wanted to say thank you. The steps in the second post fixed this issue for me, even though I was not sure whether "p" meant the drive where the DrivePool program is installed or the individual pool drive letters (I have 4 pools). Thanks again!
-
Shane got a reaction from Saaiberke in Changing from WHS2011 (old version of Drivepool) to Windows 10 on same machine
Hi Saaiberke,
You may need to re-enter your Balancing and/or File Placement settings, so I suggest taking note of them first.
If you are using your boot drive as part of a pool, you should remove the boot drive from that pool for the migration.
(Anything else on the boot drive that you want to keep should be copied to another drive.)
You should deactivate your StableBit license(s) on the WHS2011 install as a final step before replacing it, so that you can reactivate the license(s) on the Windows 10 install without having to contact StableBit for help. Note that while the DrivePool license is deactivated your pool(s) will be read-only.
I recommend powering down your system to physically disconnect your pool drives before proceeding with the OS replacement, then powering down to physically reconnect the pool drives after you have installed DrivePool on the new OS.
P.S. You should also ensure that Read Striping is disabled on your pool(s) when they come back, as there is currently a Read Striping bug that can cause the wrong data to be read from duplicated files in the pool.
-
Shane got a reaction from muaddib in Pool file duplication causing file corruption under certain circumstances
I've pinned this topic and have added a mod notice to the top, summarising the issue and how to disable Read Striping, at least until StableBit can release a fix.
-
Shane got a reaction from DMZH in strange balancing?
It won't affect unduplicated mode. It's just that there's currently a bug that may cause duplicated files to be read incorrectly when read striping is enabled - so disabling it now means no risk from the bug if you happen to turn on duplication in the future.
-
Shane got a reaction from DMZH in strange balancing?
Far as I can tell it's a normal situation, the pool's working properly.
-
Shane got a reaction from DMZH in strange balancing?
Hi DMZH, it's likely the result of that disk being so much bigger than all the others, which interacts with the default balancers (most likely the Duplication Space Optimizer) and DrivePool's preference to otherwise write to the fastest disk with the most free space.
If you're not using duplication and/or want an even distribution of used space, you could turn off that balancer and turn on the Disk Space Equalizer balancer (set it to equalize by space used). You may need to adjust the DSE balancer's priority (e.g. if you are using Scanner, make sure DSE sits below the Scanner balancer).
-
Shane reacted to GoreMaker in Need Advice on Balancing
I don't really need those tools because I also set up a drive hosted on my home server via SMB using StableBit CloudDrive, and that drive is now part of the M:SATA pool for lazy (not real-time) duplication of files that are hosted on the SATA disk. That means I have a current copy of all my pool's files on a different computer. I also run a nightly backup of the directories hosted on the N: pool as a historical backup, which is accomplished with Duplicati and stored off-site. I've come across no issues with Duplicati and DrivePool so far; the resulting backups can be restored without problems.
-
Shane got a reaction from GoreMaker in Need Advice on Balancing
Great to hear. However, be aware that currently FileID does not behave as expected on pools, so software that assumes FileID is perfect may break badly after a reboot. Link.
Currently there is no ETA on a fix from StableBit. In summary: apps that use FileID generally presume it to be unique and persistent on a given volume that reports itself as NTFS (collisions are actually possible, albeit astronomically unlikely), but DrivePool's implementation is such that collisions after a reboot are effectively guaranteed on a given pool. Affected software is anything that decides historical file A (pre-reboot) is current file B (post-reboot) because they share the same FileID, and proceeds to read or write the wrong file. TLDR: if you're not using such apps (so far, some backup/sync tools - e.g. OneDrive, and some have also reported FreeFileSync) you're unaffected; if you are, be careful not to use them with anything you keep on a pool.
-
Shane reacted to GoreMaker in Need Advice on Balancing
That worked perfectly. I chose a variant of the first option you suggested:
- a 4x 2TB NVME pool at L: with...
> Duplication Space Optimizer plugin enabled,
> Disk Space Equalizer plugin enabled,
> duplication enabled,
> real-time duplication disabled,
> read striping enabled
- a 1x 4TB SATA pool at M: with...
> Disk Space Equalizer plugin enabled (for when I add disks to this pool in the future),
> duplication disabled,
> read-striping enabled (for when I add disks to this pool in the future)
- a pool at N: made up of pools L: and M: with...
> the Ordered File Placement plugin enabled prioritizing the L:NVME pool for writing new files,
> duplication enabled,
> real-time duplication disabled,
> read-striping disabled (I don't want to include slow M:SATA disks in a read stripe)
With these settings:
- sequential writing to pool N: happens as fast as the max speed of one NVME disk (7000+ MBps)
- sequential reading for unduplicated files happens as fast as the max speed of one NVME disk (+/- 7000 MBps)
- sequential reading for duplicated files happens almost twice as fast as the max speed of one NVME disk (12,000+ MBps)
- my usable NVME pool capacity is 3.65TB (half of 7.3TB formatted)
- all files are hosted and duplicated on the L:NVME pool
- a 3rd copy of all files exists on the M:SATA pool
- everything can be expanded or repaired at a later date by adding or replacing disks as needed
- this all happens seamlessly through a single partition at N:
I'm particularly impressed that the M:SATA pool is intelligently only keeping 1 copy of every unique file from the L:NVME pool, and not duplicating duplicates. It just lists the duplicated files on L: as "Other".
I'm using this pool to host all my Windows User directory's libraries (Documents, Pictures, Videos, Music, etc, with the sole exception of AppData). So far, this is almost as good as ZFS in some ways, and better in other ways. It blows Intel RST, Microsoft Storage Spaces, or Windows Disk Management arrays right out of the water.
I'm annoyed I didn't discover DrivePool sooner.
-
Shane got a reaction from GoreMaker in Need Advice on Balancing
"I have 4x 2TB nvme SSDs and 1x 4TB SATA SSD in one pool. Ideally, I'd like all my files to be balanced evenly across the 4x nvme disks, and the duplicates stored on the SATA SSD. Eventually, I plan to add another 1x 4TB SATA SSD to equal the total volume of the 4x nvme disks, but for now that's not necessary for the amount of storage my files are taking up."
Hi GoreMaker! If this is the goal I would be recommending a multi-pool arrangement, not the SSD Optimizer - the latter is intended for using faster disks as cache rather than as storage and will want to empty the NVMe disks to fill the SATA disk(s).
Example #1 (let DrivePool handle the duplication and decide whether user IO will be to/from your NVMe or your SATA disks):
Create a pool (let's call it N) and add your 4 NVMe disks to it. Set this pool's balancing to evenly distribute via the Disk Space Equalizer plugin.
Create a pool (let's call it S) and add your 1 SATA disk (and later, the second SATA disk) to it. Set it as above.
Create a pool (let's call it P) and add your pools N and S to it. Set it to x2 duplication.
You can now put files on P and they will be evenly distributed across your NVMe disks with their duplicates distributed on your SATA disk(s).
Example #2 (you decide whether user IO will be to/from your NVMe or SATA disks):
Create a pool (let's call it N) and add your 4 NVMe disks to it. Set this pool's balancing to evenly distribute via the Disk Space Equalizer plugin.
Create a pool (let's call it S) and add your 1 SATA disk (and later, the second SATA disk) to it. Set it as above.
Set up a scheduled task or similar to mirror the content of your N pool to your S pool (e.g. robocopy N:\ S:\ /mir /dcopy:dat /xa:s).
You can now put files on N and they will be mirrored on S whenever the task runs. Or you could set up a two-way sync via a third-party utility if you wished to have N and S synchronising bidirectionally.
-
Shane got a reaction from Mesonto in Is there DrivePool per disk balancing?
Hi Mesonto, it's not possible within a single pool.
If the requirement is just to avoid having to change shares on the LAN you could consider using nested pools (e.g. E and F supporting D)?
Issues to consider would be: 1) as the drives are already in use for D, it would involve either a lot of background time adding/removing drives or some delicate manual work migrating/reseeding the pool structures, and 2) if you have any exceedingly deep path lengths (over 32 thousand characters!) in your existing pool, you may not be able to nest it.
-
Shane got a reaction from BlueDragon in Reserving a disk for evacuation
A way to do it would be Manage Pool -> Balancing -> Settings -> tick "File placement rules respect real-time file placement limits set by the balancing plug-ins", then Manage Pool -> Balancing -> File Placement -> add a Rule for "*" with all drives ticked except the one you want to leave empty.
-
Shane reacted to servonix in File Placement Rules not being respected as pool is filling up
Each file placement rule is set to never allow files to be placed on other disks.
Since posting this I have actually been able to correct the issue. I added another file placement rule to the list for .svg images in my media library, as I noticed Emby fetched one for a newly added movie. When DrivePool started balancing I noticed it placing files onto the SSDs excluded from the media file placement rules. I stopped the balancing, then started again, and the issue started correcting itself. All media files on the SSDs were moved to the hard drives, and all images/metadata were moved to the correct SSDs.
I strongly suspect that somehow the Scanner balancer, combined with old file placement rules from using those SSDs with the SSD Optimizer plugin, was at fault.
One of my drives had developed a couple of unreadable sectors a few weeks ago and started evacuating files onto the SSDs. At the time the SSDs were included in the file placement rules from previously being used with the SSD Optimizer plugin. I had stopped the balancing, removed the culprit drive from the pool (choosing to duplicate later) and did a full reformat of the drive to overwrite the bad sectors. When the format completed I did a full surface scan and the drive showed as healthy, so I was comfortable re-adding it back into the pool. DrivePool reduplicated the data back onto the drive and a balancing pass was also done to correct the balancing issues from the drive being partially evacuated. The file placement rules were also changed at this time to exclude those SSDs from being used by my media folders and metadata/images. Everything was working as expected until the other day.
For whatever reason only a small portion of newly added files were actually written to the excluded SSDs. It was literally only stuff added within the last couple of days, despite a significant amount of data being added since re-adding that drive back into the pool and adjusting the file placement rules to exclude the SSDs that were previously used by the optimizer. It's as if some kind of flag set by the Scanner balancer or the old file placement rules wasn't fully cleared.
Regardless of the cause I'm glad that everything seems to be chugging along like normal now. If I start noticing any more weird behavior I intend on enabling logging to help support identify what's going on under the hood.
-
Shane got a reaction from epfromer in Markers on Drive Pool disk list
The light blue, dark blue, orange and red triangle markers on a Disk's usage bar indicate the amounts of those types of data that DrivePool has calculated should be on that particular Disk to meet certain Balancing limits set by the user, and it will attempt to accomplish that on its next balancing run. If you hover your mouse pointer over a marker you should get a tooltip that provides information about it.
https://stablebit.com/Support/DrivePool/2.X/Manual?Section=Disks List (the section titled Balancing Markers)
-
Shane got a reaction from Fox01 in covefs.sys causing my computer to crash?
If you uninstall DrivePool 2.3.11.1663 and install DrivePool 2.3.8.1600 (download folder link) does the BSOD stop happening?
-
Shane got a reaction from sardonicus in Lost the option to manually rebalance after adding a drive.
Directly under the pie graph, "Manage Pool" is clickable and it highlights when you place the mouse cursor over it; I've linked a screenshot:
-
Shane reacted to IanR in Drive Pool 2.3.10.1661_x64 on WHS2011 - Failing to start.
With 1600 Installed, go into Settings -> Select Updates... -> Settings -> Disable Automatic Updates
Mine now runs without the constant notification that an update is available.