Posts posted by Shane
-
10 hours ago, Salim said:
I'm not sure how drivepool is setup exactly (I haven't used it yet because of the lack of hardlink support), but I would assume
Salim, what you're asking for isn't simple. To try to use a bad car analogy to give you an idea: "I don't know how this truck works that can tow multiple trailers with the abilities to redistribute the cargo between trailers and even switch out whole trailers, safely, while the truck is still being driven on the highway, but I would assume it can't be difficult to make it so it could also/instead run on train tracks with the press of a button."
However looking at your previous thread:
On 1/28/2025 at 7:00 PM, Salim said:
The main drive is already a 4 disk 2-way mirror storage space, but this only protects against 1 drive failure, if for any reason 2 drives fail, my data is nuked. So I am syncing this storage space onto another single drive, so if the storage space fails for any reason, my files are immediately accessible on the single drive (and vice versa) without having to perform any restore or rebuild or unhide folders first.
So I am wanting to write the data simultaneously onto both drives (the storage space and the single drive) instead of writing to one and then syncing to the other.
Presuming you still want this, on Windows, AND hardlinks, I'm again going to suggest a CloudDrive-on-DrivePool setup, something like:
- Create a DrivePool pool, let's call it "Alpha", add your "main drive" as the first disk (let's call it "Beta") and your "single drive" as the second disk (let's call it "Gamma"), enable real-time duplication, disable read-striping, set whole-pool duplication x2 (you could use per-folder only if you knew exactly what you were doing).
  - Note: if you plan to expand/replace Beta and/or Gamma in the future, and don't mind a bit of extra complexity now, I would suggest adding each of them to a pool of their own and THEN adding those pools to Alpha to help with future-proofing. YMMV.
- Connect "Alpha" as a Local Disk provider in CloudDrive, then Create your virtual drive on it, let's call that "Zod"; make sure its chosen size is not more than the free space of each of Beta and Gamma (so if Beta had 20TB free and Gamma had 18TB free you'd pick a size less than 18TB) so that it'll fit.
There might be some fiddly bits to expand upon in that but that's the gist of it. Then you could create hardlinks in Zod to your heart's content and they'll be replicated across all disks. The catch would be that you couldn't "read" Zod's data by looking individually at Alpha, Beta or Gamma because of the block translation - but if you "lost" either Beta or Gamma due to physical disk failure you could still recover by replacing the relevant disk(s), with Zod switching to a read-only state until you did so. You could even attach Gamma to another machine that also had CloudDrive installed, to extract the data from it, but you'd have to be careful to avoid turning that into a one-way trip.
-
Hmm. Try the following from a command prompt run as an administrator:
- diskpart
- list volume
Then for each volume:
- select volume # (e.g. select volume 1)
- attributes volume
- attributes disk
You are looking for any and all volumes and/or disks that are Read-only. Each time you find one, if any, try:
- attributes volume clear readonly
- attributes disk clear readonly
(and/or as appropriate; then repeat attributes volume and/or attributes disk as above to check that it cleared)
If you don't find any such then I'm currently out of ideas.
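(If you have a lot of volumes to check, you can also put the diskpart commands in a plain text file and run them in one go with diskpart's /s switch. Note that diskpart scripts can't loop, so you still have to list each volume/disk number you want to inspect; the file name and numbers below are just examples.)
Example script file, check-ro.txt:
list volume
list disk
select volume 1
attributes volume
select disk 0
attributes disk
Run it from an administrator command prompt with:
diskpart /s check-ro.txt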
-
My first guess would be that the pool's file permissions have been messed up by the bad disk. I would first try the solutions linked/described here: https://community.covecube.com/index.php?/topic/5810-ntfs-permissions-and-drivepool/#findComment-34550
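(For reference, the usual shape of that kind of fix is taking ownership and then resetting the ACLs so they inherit from the parent again - roughly the below, with P: standing in as a placeholder for your pool drive letter. Read the linked post first though, as it covers the caveats and whether you want to do this to the pool, the poolparts or both.)
takeown /f P:\ /r /d y
icacls P:\ /reset /t /c /l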
-
Try running the following in a command prompt run as administrator:
- dpcmd refresh-all-poolparts
And where poolPartPath is the full path of the poolpart folder of the missing drive:
- dpcmd unignore-poolpart poolPartPath
  - for example, dpcmd unignore-poolpart e:\poolpart.abcdefgh-abcd-abcd-abcd-abcdefgh1234
And where volumeid is that of the "missing" drive (you can use the command 'mountvol' to get a list of your volume identifiers):
- dpcmd hint-poolpart volumeid
  - for example, dpcmd hint-poolpart \\?\Volume{abcdefgh-abcd-abcd-abcd-abcdefgh1234}\
  - note: while poolparts and volumes happen to use the same format for their identifying string, they are different objects
If that doesn't help, you may try one or more of the following, in no particular order of preference:
- repairing DrivePool (Windows Control Panel, Programs and Features, select StableBit DrivePool, select Change, select Repair)
- uninstalling, rebooting and reinstalling DrivePool (may lose configuration but not duplication or content, and keep your license key handy just in case)
- resetting the pool (losing balancing/placement configuration, but not duplication or content):
- Open up StableBit DrivePool's UI.
- Click on the "Gear" icon in the top, right-hand corner.
- Open the "Troubleshooting" menu, and select "Reset all settings..."
- Click on the "Reset" button.
Alternatively, you may try manually reseeding the "missing" drive:
- if - and only if - you already had and have real-time duplication enabled for the entire pool and you just want to get going again ASAP:
- remove the "missing" drive from DrivePool
- quick format it (just make sure it's the right drive that you're formatting!!!)
- re-add the "missing" drive, and let DrivePool handle re-propagating everything from your other drives in the background.
- otherwise:
- open the "missing" drive in Windows File Explorer (or other explorer of your choice)
- find the hidden poolpart.guid folder in the "missing" drive
- rename it from poolpart.guid to oldpart.guid (don't change the guid)
- remove the "missing" drive from DrivePool
- if the "missing" drive is now available to add again, proceed to:
- re-add the "missing" drive
- move all your folders and files (but not $recycle.bin, system volume information or .covefs) from the old oldpart.guid folder to the new poolpart.guid folder that's just been created
- tell the pool to re-measure
- once assured everything is showing up in the pool, you can delete the oldpart.guid folder.
- else if the "missing" drive is still not available to add again, proceed to:
- copy your folders and files (but not $recycle.bin, system volume information or .covefs) from the old oldpart.guid folder to the pool drive
- duplicates of your folders/files may exist on the pool, so let it merge existing folders but skip existing files
- you may need to add a fresh disk to the pool if you don't have enough space on the remaining disks in the pool
- check that everything is showing up in the pool
- if all looks good, quick format the "missing" drive and it should then show up as available to be re-added.
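(For the move/copy steps above, robocopy is one way to do it in bulk from an administrator command prompt; it keeps NTFS attributes, security and, as far as I'm aware, alternate data streams along with the files. The drive letter and GUIDs below are placeholders - substitute your own.)
robocopy "e:\oldpart.abcdefgh-abcd-abcd-abcd-abcdefgh1234" "e:\poolpart.12345678-1234-1234-1234-123456789012" /E /COPY:DATSO /DCOPY:DAT /XD "$RECYCLE.BIN" "System Volume Information" ".covefs" /R:1 /W:1
Add /MOVE if you want robocopy to delete from the source as it copies (the first branch above); for the second branch, point the destination at the pool drive instead and robocopy's default behaviour will skip files that are already identical.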
-
The FileID issue has not been fixed in DrivePool as of v2.3.12.1683.
-
Hmm, in theory even if it's duplicating the metadata to other drives, it should still prefer to read from the fastest drive - but that presumes the metadata gets written to a SSD in the first place. Looks like forcing it can't be done from the GUI.
@Christopher (Drashna) does DrivePool internally prefer SSDs when it decides where to place the [METADATA] folder(s)? If not, can that be added as an option to DrivePool?
Actually, that would be a useful GUI-definable option to add to the Manage Pool -> Balancing -> File Placement -> Folders/Rules -> File placement options dialog in general, between the "New drives" and "Overflow" sections:
Preference -----------
[✓] Prefer fast drives from those selected (e.g. SSD)
[✓] Prefer slow drives from those selected (e.g. HDD)
(i) If both or neither are selected, no preference will be made.
The [METADATA] folder would also be visible and customisable here instead of being hidden (perhaps lock the other options for it).
-
Hi, dpcmd delete-poolpart-reparse-points deletes all reparse points in the poolparts of the nominated pool drive. Using the command with no parameters will provide a (brief) description of how to use the command.
E.g. dpcmd delete-poolpart-reparse-points p: will delete all such points on the poolparts of pool drive "p:"
-
Hi! If your storage is only HDD consider upgrading that to SSD or a mix of SSD and HDD (note that if you duplicate a file across an SSD and a HDD, DrivePool should try to pick the SSD to access first where possible). Also consider upgrading your network (e.g. use 5GHz instead of 2.4GHz bands for your WiFi, and/or use 2.5, 5 or 10 Gbit instead of 1 Gbit links for your cabling and NICs, etc) depending on what your bottleneck(s) are.
You can try prioritising network access over local access to DrivePool via the GUI: in Manage Pool -> Performance make sure Network I/O boost is ticked.
(And also make sure 'Read Striping' is NOT ticked as that feature is bugged).
Fine-tuning SMB itself can be complicated; see https://learn.microsoft.com/en-us/windows-server/administration/performance-tuning/role/file-server/ and https://learn.microsoft.com/en-us/windows-server/administration/performance-tuning/role/file-server/smb-file-server amongst others.
PrimoCache is good for local use but doesn't help with mapped network drives according to its FAQ, and DrivePool user gtaus reported that they didn't see any performance benefits when accessing a pool (that had PrimoCache enabled) over their network.
-
On 6/21/2025 at 5:54 AM, Salim said:
In my example, the file is duplicated, but it's only hardlinked when on the same drive. As soon as one of the hardlinked copies (that are on the same drive) is moved to another drive, then the hardlink would break and a copy is made instead.
This shouldn't be difficult to implement.
Those last six words are written on the gravestones of many a project.
The problem with breaking hardlinks is that from the perspective of someone externally accessing the pool, this would mean that making changes to certain "files" would automagically update other "files" (because they're hardlinks on the same physical disk within the pool) right up until they suddenly didn't (because they're no longer hardlinks on the same physical disk within the pool). Any reliable implementation of hardlinks in DrivePool would have to ensure that hardlinks did not break.
On 6/21/2025 at 8:07 PM, Salim said:
If File A and File B are hardlinked on Drive A and both are duplicated onto Drive B, they will be hardlinked on Drive B as well.
If File A and File B are hardlinked on Drive A and File A is duplicated onto Drive B and File B is duplicated onto Drive C, then a copy of the file is made onto Drive C.
Note that if we're getting technical - and we need to when we're "opening up the hood" to look at how hardlinks actually work - there is no "File A and File B hardlinked together"; there is only one file with multiple links to it in the allocation system of the volume. If you make a change to the content of what you call "File A" then you are making a change to the content of what you call "File B", because it's one content.
This is not unlike an inverse of DrivePool's duplication where one file on the pool can be stored as multiple instances in the disks of the pool and making a change to that file involves simultaneously propagating that change to all instances of that file in the disks of the pool.
Now in theory this should "just" mean that (at minimum) whenever DrivePool performs balancing, placement, duplication or evacuation (so basically quite often by default) it would have to include something like the equivalent of "fsutil hardlink list" on the operational file(s) on the source disk(s) to check for hardlinks and then (de)propagate any and all such to the target disk(s) as part of the copy and/or delete process.
But in practice this means (at minimum) squeezing more code into a part of the kernel that complains about every literal millisecond of performance sacrificed to have such nice things. And extrapolating hardlinks isn't a simple binary decision, it's more along the lines of a for-do array. The word "just" is doing a lot of work here - and we haven't even gotten into handling the edge cases Christopher mentioned. DrivePool needs to include code to handle "File A" potentially being in a folder with a different duplication level to a folder containing "File B" (and potentially "File C", "File D", etc as NTFS supports up to 1024 hardlinks per file). Even if we KISS and "just" pick the highest level out of caution, DrivePool also has to check whether "File A" is in a folder with a placement rule that is different to the folder with "File B" (or, again, potentially "File C", "File D", etc). What is DrivePool supposed to do when "File A" is in a folder that must be kept only on disks #1 and #2 while "File B" is in a folder that must be kept only on disks #3 and #4? That's a user-level call, which means yet more lookups in kernel space (plus additions to the GUI).
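(As an aside, you can see the "one file, many links" behaviour for yourself on any NTFS volume with fsutil; the path below is just a made-up example:)
fsutil hardlink list d:\poolpart.abcdefgh-abcd-abcd-abcd-abcdefgh1234\documents\fileA.txt
It prints every path on that volume that refers to the same underlying file - which is the lookup DrivePool would need to repeat, per file, during every balancing/duplication/evacuation pass.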
On 6/21/2025 at 8:07 PM, Salim said:
There must be a way to implement this instead of just plainly not supporting hardlinks for any scenario.
TLDR? There is but Christopher is right: "the additional overhead of all of these edge cases" ... "make things a LOT more complex." That's generally the problem in a lot of these situations - the more things a tool needs to be able to do the harder it gets to make it do any/all of those things without breaking, and at the end of the day the people making the tools need to put food on their table.
Maybe you could try a local CloudDrive-on-DrivePool setup? I don't know how much that would affect performance or resilience, but you'd get hardlinks (because the combination lets you shift the other features into a separate layer). Other alternatives... mount a NTFS-formatted iSCSI LUN from a ZFS NAS (e.g. QNAP, Synology, TrueNAS, etc)?
-
Yes; x2 on all but the criticals (e.g. financials) which are x3 (they don't take much space anyway).
It is, as you say, a somewhat decent and comfortable middle road against loss (in addition to the backup machine of course). Drive dies? Entire pool is still readable while I'm getting a replacement.
I do plan to eventually - unless StableBit releases a "DrivePool v3" with parity built in - move to a ZFS NAS or similar but today is not that day (nor is this month, maybe not this year either).
-
Does the problem go away if you revert the update (navigate to Settings > Windows Update > Update history > Uninstall updates and find the most recent, or if it was a full version upgrade, e.g. 23H2 to 24H2, you can do so within 10 days or so via Settings > System > Recovery > Go Back)?
(either way, particularly the latter, I recommend you have a backup handy in case Windows gets worse instead of better)
Also if you can identify the particular update causing the problem please let us know!
-
1) If you open Windows Disk Management, are any of the Disks "Read Only"? Try the solution in this post (it also shows a screenshot of what to look for).
2) If that doesn't help or isn't applicable, try resetting the pool's NTFS permissions using the solutions in this post.
Please let us know how it goes, and if the above don't help I'd suggest opening a support ticket.
-
In Windows Disk Management (which has a GUI) you can choose to mount a drive to a path instead of to a letter (or both). The advantage of using paths instead of letters is that this lets Explorer (and any other file manager) access any number of drives instead of only 26.
For example say you have a folder "c:\mount": you can create a bunch of empty subfolders within it called "disk1", "disk2", etc and then in Disk Management you add each drive in your pool to the subfolder you created for it (it doesn't have to be disk1, etc; it could be the serial number of each drive or any other distinctive naming method). You can then browse those drives via those subfolders.
Then (for example) dir c:\mount /s /b > c:\mount\list.txt would output a file containing a sorted-by-disk list of all your files on those drives:
c:\mount\disk1\poolpart.A\doc.txt
c:\mount\disk2\poolpart.B\apple.jpg
c:\mount\disk3\poolpart.C\video.avi
If you wanted a separate file for each drive you could create a batch that did separate dir /s /b commands for each disk.
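(A minimal batch sketch for that, assuming the c:\mount\disk1, disk2, ... layout above - adjust the paths to match your own:)
@echo off
rem one bare-format recursive listing file per mounted poolpart disk
for /d %%D in (c:\mount\disk*) do (
    dir "%%D" /s /b > "c:\mount\list-%%~nxD.txt"
)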
Just don't put the mounting folder in your pool drive (e.g. if your pool uses P: don't use P:\mount) as that risks an endless loop.
Otherwise, to have DrivePool save file tables per disk as a built-in option, you could try submitting a feature request via the Contact form.
-
If you have all your poolpart disks mounted to a path under the same folder, e.g. e:\disks\, then you can use the Windows Task Scheduler to set a daily task that runs a command along the lines of dir e:\disks\ /s /b > "e:\pooldirs%date%%time%.log" or similar (if you've got a LOT of files per disk, you might want to do some parallelisation).
Note that if your date and time format involves slashes or colons you'll need to use something different, e.g. %date:/=%%time::=% to strip those characters.
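(Putting that together, a minimal batch sketch the scheduled task could run - it assumes the e:\disks\ layout above and writes the log to the same e:\pooldirs... name as in the example:)
@echo off
rem build a timestamp that's safe to use in a filename
set stamp=%date:/=%%time::=%
rem some locales put a leading space in %time%; swap it for a zero
set stamp=%stamp: =0%
rem recursive bare-format listing of every poolpart disk mounted under e:\disks\
dir e:\disks\ /s /b > "e:\pooldirs%stamp%.log"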
-
9 hours ago, Ben436123 said:
I recently had a 16TB disk start failing in my DrivePool setup. DrivePool is currently moving data off that drive, but the process is quite slow... I assume it's moving one file at a time..
Would it be a safe and viable alternative to use a disk imaging tool to directly image the failing drive to a new healthy one, and let DrivePool pick up the new drive with the same data?
Just trying to minimize the risk of file loss and speed things up if possible. Any recommendations?
Hi Ben, yes it's moving one file at a time (it's also doing stuff that lets you continue to use the pool and those files even while they're being moved).
Without duplication enabled (I presume), faster alternatives pretty much require stopping the pool in some fashion:
1. You can clone the disk but you should do so on a different machine so that DrivePool never sees the original and the clone at the same time. If the clone isn't perfect enough DrivePool might not recognise the poolpart as part of the pool but then you'd just have to seed the new disk. The pool will be read-only while the disk is missing.
2. You can use dpcmd to ignore the old disk (the disk will show up as "missing" in the GUI and the pool drive will become read-only), use the GUI to add the new disk, then copy/move the user content* you want out of the old disk's hidden poolpart folder into the new disk's hidden poolpart folder (with whatever method you prefer, so long as it supports alternate data streams**), then remove the "missing" old disk via the GUI (at which point the pool drive will be writable again).
* For example don't move the .covefs, $RECYCLE.BIN or System Volume Information folders.
** If it doesn't, you'll need to redo any duplication levels you've set on the pool.
P.S. Either way you'd have to cancel the current removal process first.
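(For the ignore step in option 2, the command should look something like the below - I'm assuming it takes the same poolpart path argument as its unignore-poolpart counterpart, so run dpcmd on its own first if you want to confirm the exact usage; the path is a placeholder:)
dpcmd ignore-poolpart e:\poolpart.abcdefgh-abcd-abcd-abcd-abcdefgh1234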
-
That description looks a bit confusing?
Checking if I'm understanding this:
- you have two machines, I'll call them host1 and host2
- host1 has a 4TB physical drive, mounted as drive "W" and shared as "\\host1\something"
- host1's drive (and thus share) contains a CloudDrive data folder (that I'm guessing was created at some point in the past, since you mention "given the data that I believed was on it"?)
- host2 has CloudDrive installed, you've connected to "\\host1\something" via the File Share provider and you're trying to use it to get access to the content stored in the above folder?
If so:
- you should be using the Attach function in host2's CloudDrive (this is to access an existing data folder on the connected provider and mount the resulting virtual drive), not the Create function (this is to create a new data folder on the connected provider and mount the resulting virtual drive)
- if the data folder was originally created via a provider other than the File Share provider you will need to convert the folder first (via the command-line utility CloudDrive.Convert.exe) and I strongly recommend backing up the data folder before conversion if it contains any irreplaceable data.
-
Hi, changing the drive letter/path (e.g. via Windows Disk Management) of a disk in a pool doesn't affect DrivePool. Just don't change the hidden PoolPart.* folders on the disks.
-
Hi, if the trial's expired your config and data should all be still there; trial pools become read-only when a trial expires (and if activated become editable again).
As to the trial expiring in hours instead of lasting the month it should have, I suggest opening a support ticket with StableBit to sort it out!
-
You should be fine. DrivePool identifies poolpart drives by the data internal to each drive, not their position on a controller. Basically so long as Windows can find and read the drives (e.g. they show up in Disk Management as basic NTFS volumes), so too should DrivePool.
-
This thread may help you with regard to NTFS permissions:
-
Hi, DrivePool does not inspect - nor does it block the inspection of - the content inside your files. If you're getting warnings about suspicious files, that's whatever security software you're using (or the one built into Windows) doing its own job. If you're certain they're false positives then most security software allows you to set exclusions on particular files, folders or drives; depending on how pervasive the security software is you may need to exclude both the pool and the poolparts because it may scan both (e.g. I leave my pool drive included but exclude all the hidden poolpart folders on drives that form it so that the security software doesn't "double up" its scanning).
-
I've seen this a couple of times over the years with large old(er) pools/drives. If I had to make a wild guess I'd suspect the removal process gets stuck while attempting to clean up the empty folders after it's moved all the files, maybe from either a bug (too much folder depth hitting a pathing limit?) or Windows locking a folder when it shouldn't or a broken NTFS ACL somewhere. After double-checking the duplication was in order (it was each time) I removed the "missing" drive and it carried on normally.
-
Have you opened a support ticket with Stablebit?
-
If the only balancers you are using are the Ordered File Placement and Scanner, you could try ticking all the boxes under "Plug-in settings" and "File placement settings"?
-
Posted in "Pool file duplication causing file corruption under certain circumstances" (General):
There's no mention of such in the changelog, which I'd expect if it had been.