Fair. For what it's worth, the mantra "RAID is not backup" is shorthand for "RAID should never be a substitute for having independent copies of your data somewhere else", and this applies to DrivePool as well: if enough drives die, you will lose data from the pool/array. TLDR, the only way to do backups is to actually have backups.

The business mantra for that one is of course "3-2-1": have at least three copies*, on two different media, with at least one kept offsite. The middle part used to mean HDD versus tape, but these days also means using different HDD brands/models to minimise the risk from a potential manufacturing defect.

*the original and two separate backups

Re hardware RAID being obsolete... it depends on the use case. For home use? Basically, yeah, if the alternative is using an OS with good software RAID support. But sadly DrivePool is only available for Windows, and DrivePool doesn't support dynamic drives, so I can't even use Windows software RAID1 with it.
-
I have seen this happen before... not recently... maybe twice since I started using DrivePool (2013)? When I checked, the couple of "leftover" files had been properly copied to the pool where they were supposed to be, along with everything else that had been on the removed drive, so in the end... ¯\_(ツ)_/¯ @Alex @Christopher (Drashna) any ideas?
-
I'd suspect some of the NTFS permissions on the drive have been damaged, given that you as a user can read the files but DrivePool can't (note that chkdsk does not fix ACLs). One way I'd tackle it:

1. Make sure nobody/nothing is using the pool.

2. "Ignore" the bad drive on the pool - from an administrator command prompt, run: dpcmd -ignore-poolpart poolpath UID
- where poolpath is the path to the pool (e.g. "P:\") and UID is the name of the hidden poolpart folder on the bad drive (e.g. "PoolPart.123-abc")
- its content will disappear from the pool but still be on the drive itself, and the pool will alert that the drive is "missing"
- if this doesn't work: turn off the computer, unplug the bad drive, turn the computer back on, remove the missing drive, plug the bad drive into a non-drivepool computer networked to the drivepool computer, and proceed as below, adapted as necessary (e.g. by sharing the pool and mapping a network drive to it)

3. Remove the missing bad drive from the pool.
- if this doesn't work, you might have damaged permissions on one or more of your "good" drives - see https://community.covecube.com/index.php?/topic/5810-ntfs-permissions-and-drivepool/

4. Copy only the user content inside the hidden poolpart folder of the bad drive to the pool, excluding system folders and drivepool metadata, stripping ACLs and skipping files already at the destination.
- e.g. assuming you aren't using nested pools, you could use: robocopy "m:\poolpart.abc-123\" "p:\" /dcopy:dat /copy:dat /e /xc /xn /xo /xx /xd "System Volume Information" /xd "$RECYCLE.BIN" /xd ".covefs"
- it's important to include the backslashes at the end of the source and target paths
- if you're wondering, "extra" files are files that were found only on the destination
- if you're concerned that you might not have the command exactly how you want it, instead of starting with robocopy start with robocopy /L (this is basically test mode, it shows you what it would do without actually doing it) - a worked example follows below

5. Confirm that the pool is now working properly and all user content is present.
- if the pool still gives unreadable errors, see https://community.covecube.com/index.php?/topic/5810-ntfs-permissions-and-drivepool/
- if the pool still won't duplicate... ouch, I'd try starting a new pool and migrating all your content across.

Hope this helps!
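As a worked version of the copy step - this is just the same command from step 4 repeated, with "m:" as the bad drive, "poolpart.abc-123" as its hidden poolpart folder and "p:" as the pool, all placeholders to substitute with your own - I'd do the dry run first and then the real copy:

rem dry run: /L only lists what robocopy would copy, without copying anything
robocopy "m:\poolpart.abc-123\" "p:\" /dcopy:dat /copy:dat /e /xc /xn /xo /xx /xd "System Volume Information" /xd "$RECYCLE.BIN" /xd ".covefs" /L

rem if the dry run output looks right, run it again without /L to actually copy
robocopy "m:\poolpart.abc-123\" "p:\" /dcopy:dat /copy:dat /e /xc /xn /xo /xx /xd "System Volume Information" /xd "$RECYCLE.BIN" /xd ".covefs"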
-
Does DrivePool record a list of which files are on which disks anywhere?
Shane replied to HarveyBeans's question in General
Hi, the short answer is that DrivePool doesn't keep a persistent record of which files are on which disks. The longer answer is that File IDs are "kept" in the sense that DrivePool needs to map the File IDs of accessed files in the poolparts to a unique ID representing them in the pool (and the current implementation isn't persistent across reboots), but that's basically just a table of numbers that won't be linkable to the actual files, since the failed disk is no longer communicating with the pool.
-
If the same file(s) that can't be copied with Directory Opus can be copied with Windows Explorer at the time, then it might be Directory Opus - but the error text and the file sizes you mention made me wonder if it might also involve a Windows memory paging issue and/or an SMB File Server tuning issue, which can be a bit of a rabbit hole (a lot of the tips that can be found on the internet are for older/32-bit versions of Windows).
-
Does it only happen with Directory Opus? e.g. try Windows File Explorer, then try the command line?
Is there any antivirus running on the pool machine?
Does the problem go away (temporarily) if the pool machine is rebooted?
Is there plenty of free space on the drive(s) that would receive the file in the pool (usually the drive(s) with the most free space)?
Which OS version is running on the pool machine?
Check the Windows Task Manager (Performance, Memory) to record the pool machine's "Paged pool" and "Non-paged pool" memory sizes when the problem is and isn't occurring, and compare? (see the command below if you'd rather log them)
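For that last check, if you'd rather log the numbers than sit watching Task Manager, the built-in typeperf tool should do it - a rough sketch, and the counter names assume an English-language install, so verify them on your machine:

rem sample both pool sizes every 60 seconds, 60 times, writing to a CSV file
typeperf "\Memory\Pool Paged Bytes" "\Memory\Pool Nonpaged Bytes" -si 60 -sc 60 -o pools.csv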
-
There are some SSDs with 10 year warranties, but SSDs are still a lot more expensive than HDDs so... I hear you on the brittleness of UEFI.

A midway option between N disks (no duplication) and 2N disks (2x duplication) that I've occasionally considered (but haven't really had a pool big enough to make it worth buying the cards/hubs) is pooling RAID6 arrays instead of using duplication with individual disks, basically offloading the redundancy onto the hardware.

Con: needs one or more RAID cards (or RAID-capable DAS boxes), it's not full duplication, any given array would be vulnerable while rebuilding (though I have backups), and card management means yet another UI pane to deal with.

Pro: an array only needs 6+ disks to store 4+ disks worth of data (more disks = better ratio but more time to rebuild), 3 disks would have to die concurrently to actually lose any data, it offers a boost to read performance, and RAID6 allows detection and repair of bitrot.
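To put rough numbers on that ratio: RAID6 reserves two disks' worth of space for parity, so a 6-disk array gives 4 disks of usable capacity (~67%) and an 8-disk array gives 6 (75%), compared to a flat 50% usable with 2x duplication - which is where the appeal comes from, at the cost of the rebuild windows mentioned above.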
-
Does it correctly show the space on the disks with no letters if you perform a Manage Pool -> Re-measure after you've removed the letters?
-
For whatever it's worth, I've currently got a pair of WD Red Plus at over 9 years run time and a WD Red at over 7 years run time; I've generally had good luck with Reds of all stripes outlasting their warranties by a year or more and hope that continues. ... I'm reasonably sure 20+ TB spinners first retailed back in 2021, so technically none of them have lasted over four years yet. 😜
-
For real-time duplication you need two separate physical disks; if your disks have multiple partitions it will still pick partitions on separate disks, and the SSD Optimizer reflects this in how it treats multiple partitions on a single disk as a collective entry. So if you're wanting to keep it simple with one pool and 2x real-time duplication, you'd need 2 physical SSDs to avoid immediate writes to your HDDs. If you have only 1 physical SSD, you'd need to either use Christopher's suggestion above, or turn off real-time duplication and let the pool duplicate at night, to avoid immediate writes to your HDDs.
-
Drivepool recycle bin, constant error when windows loads
Shane replied to jamieuk147's question in General
Hi, to delete the Recycle Bin on the pool so Windows can recreate it, you can try the following from a command prompt run as an Administrator:

net stop "stablebit drivepool service"
rmdir /q /s P:\$RECYCLE.BIN
net start "stablebit drivepool service"

Where P is the drive letter of the pool. If that doesn't work, then replace the middle line with lines consisting of:

rmdir /q /s X:\PoolPart.guid\$RECYCLE.BIN

Where X is the letter of each drive that is in the pool and PoolPart.guid is the hidden poolpart folder on that drive used by DrivePool (protip: once you've typed up to the full stop, hit the tab key and it should fill out the rest), so if you had five disks in the pool you'd end up with five lines of rmdir, as in the example below. If that doesn't work, let us know what it had trouble with!
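For instance, a five-disk pool might end up looking like the following - purely illustrative, the drive letters and poolpart names are placeholders to substitute with whatever your own disks use:

net stop "stablebit drivepool service"
rem placeholder drive letters and poolpart names - substitute your own
rmdir /q /s D:\PoolPart.1111aaaa\$RECYCLE.BIN
rmdir /q /s E:\PoolPart.2222bbbb\$RECYCLE.BIN
rmdir /q /s F:\PoolPart.3333cccc\$RECYCLE.BIN
rmdir /q /s G:\PoolPart.4444dddd\$RECYCLE.BIN
rmdir /q /s H:\PoolPart.5555eeee\$RECYCLE.BIN
net start "stablebit drivepool service"
-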
Hi RSD, that likely means the NTFS permissions are damaged beyond the ability of the Windows dialog to repair. You may need to try the linked SetACL method instead. Presuming the damage is not irreparable even with SetACL, that should at least allow you to access the poolpart folder and then deal with the System Volume Information folder that is inside that poolpart.
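(The linked thread walks through the exact SetACL commands recommended for DrivePool poolparts; purely to illustrate the general shape of the tool's syntax - the drive letter and poolpart name here are placeholders, and you should follow the thread rather than these two lines - taking ownership and resetting child permissions looks roughly like:

rem take ownership of the poolpart folder and everything inside it
SetACL.exe -on "D:\PoolPart.xxxx" -ot file -actn setowner -ownr "n:Administrators" -rec cont_obj
rem remove explicit permissions from children and restore inheritance
SetACL.exe -on "D:\PoolPart.xxxx" -ot file -actn rstchldrn -rst "dacl,sacl"

so it's less daunting than it might sound.)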
-
Hi mikernet. For whatever it is worth, I am successfully using SyncThing with my primary pool (currently only ~16TB before duplication) without any fine-tuning, but the content is reasonably static (e.g. photo albums, family videos, financial records, various documents) and I have DrivePool's Read Striping disabled. Note that SyncThing is not intended for situations where changes are likely to occur faster than the ability to distribute those changes between devices (e.g. if device A renames a file and device B also renames that file before it can receive the update from device A, then SyncThing is going to complain). Diving in...

DrivePool FileID Notification bug: as I understand it, this risk involves both excessive notifications (reads being classified as writes) and insufficient notifications (e.g. moves not being reported), to which applications that rely on FileID notifications are vulnerable. The original thread for reference. SyncThing uses two independent mechanisms to detect changes, a scheduled full scan (default hourly) and a watcher that listens for notifications (default enabled); link to documentation. This means SyncThing's scheduled scans will catch any moves that are missed by its watcher, resulting in at most a delay of one hour (or as configured) to update other nodes. I do not know how well its watcher would handle excessive notifications if those occur, but it can be easily disabled if the load becomes a problem.

DrivePool FileID Generation bug: as I understand it, this risk involves DrivePool's use of an incrementing-from-zero counter that resets on reboot, resulting in data corruption/loss for any application (local or remote) that relies on FileID having cross-boot permanence for identifying files (q.v. above original thread link). As far as I have determined, SyncThing should not be affected by this bug so long as its watcher is not being used to monitor pool content via a remote share (e.g. if for some strange esoteric reason you mapped a share "\\pc\pool" to a drive "X:" and then told SyncThing to add "X:\stufftosync" as a folder instead of following the instruction to only add local folders). I'm... actually not sure if the watcher can even do that, but if so that's the only risk.

DrivePool Read Striping bug: as I understand it, this risk involves DrivePool sometimes returning invalid hashes or even invalid content to certain applications when this feature is enabled to read a duplicated file via concurrent access to the drives that file is stored on. Some systems apparently do not experience this bug despite using the same application as others. The original thread for reference. I have NOT tested striping with SyncThing, as I keep DrivePool's Read Striping pre-emptively disabled rather than have to exhaustively test all the apps I use (and given the possibility that the problem could involve some kind of race condition, which are a PITA to avoid false negatives on, I am happy to continue erring on the side of safety over performance).
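Regarding turning the watcher off or tuning the scan interval if they ever become a problem: I believe both can be set per folder in SyncThing's GUI (the folder's Advanced settings), and they also live as folder attributes in config.xml - a rough sketch of what that looks like, with the folder id and path as placeholders and the attribute names taken from my reading of the SyncThing docs, so double-check against your version:

<folder id="stufftosync" path="D:\Pool\stufftosync" rescanIntervalS="3600" fsWatcherEnabled="false" fsWatcherDelayS="10">
</folder>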
-
If you're not doing anything complicated with your pool, that should be it. Do a Manage Pool -> Re-measure, and if you're using duplication, a Cog Icon -> Troubleshooting -> Recheck duplication if it doesn't do it automatically (if you've got duplication turned on for the whole pool, then at the end you should see the big pie chart in the GUI showing no unduplicated data). You can also inspect/alter the duplication levels of folders and subfolders manually via the table provided by Manage Pool -> File Protection -> Folder duplication if you only want duplication for parts of the pool.
-
A description of the bug can be found here. Read striping can provide a serious performance boost when reading from duplicated files; if, however, you are affected by the bug, then read striping can also result in errors and/or corruption for affected apps. Personally, I use a wide variety of apps that interact with my pool and I don't need the extra performance, so I prefer to err on the side of caution and pre-emptively disable it.