Reputation Activity
-
Shane got a reaction from roirraWedorehT in Beware of DrivePool corruption / data leakage / file deletion / performance degradation scenarios Windows 10/11
I have tested files stored on a CloudDrive (v1.2.12.1836) drive mounted on a DrivePool (v2.3.12.1683) pool; fileID (and objectID) is stable and persistent until the file is deleted.
So you can safely use a CloudDrive on top of a DrivePool with respect to this bug.
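If you want to verify this on your own setup, you can query a file's ID before and after renames, moves or reboots; a minimal sketch using the built-in fsutil tool (Z:\test.txt is a placeholder for a file on your CloudDrive volume):

  # Print the NTFS file ID; run again after renames/moves/reboots - it should stay the same until the file is deleted
  fsutil file queryFileID Z:\test.txt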
Pros: a fully NTFS-compliant drive that pools the capacity of multiple physical disks; it would even let you write a file bigger than the capacity of any individual disk!
Cons: slower than using DrivePool directly; content is not readable directly from the physical disks; it may need to be manually resized to take advantage of additional disks (or if disks are to be removed); and some or all of the CloudDrive may become unreadable if X disk(s) are lost, unless (the CloudDrive folder in) the pool is duplicated at an X+1 multiplier.
Kind of like a software RAID that has no ECC but can still survive a certain number of disk failures, and that, unlike a normal RAID, lets the user safely add and remove disks?
-
Shane reacted to Christopher (Drashna) in Drivepool shows "starting service" for hours even though drive is usable
No, it doesn't have any preference. But if you have read striping enabled, it should use that to optimize the reads.
However, you might be able to use the file placement rules to hack this. E.g. adding a rule for "\.covefs\*" should work (or at least, doesn't error out). This should allow you to specify a drive (and in testing, it looks to be respected).
-
Shane reacted to Christopher (Drashna) in Replaced bad drive, now the entire pool is read-only?
You can also do this with PowerShell, with "two" commands.
In both cases, this gets a list of disks -> checks for any items that have the "read only" property set -> and for each, runs the command to disable read-only.
These are safe to run, and I have verified that they cause no issue on my personal system/server.
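For reference, a minimal sketch of that pipeline, assuming the built-in Storage module cmdlets (verify the list from the first command before running the second):

  # List any disks currently flagged as read-only
  Get-Disk | Where-Object IsReadOnly

  # Clear the read-only flag on each of them
  Get-Disk | Where-Object IsReadOnly | Set-Disk -IsReadOnly $false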
-
Shane got a reaction from Salim in Hardlinks
Salim, what you're asking for isn't simple. To try a bad car analogy to give you an idea: "I don't know how this truck works that can tow multiple trailers with the ability to redistribute the cargo between trailers and even switch out whole trailers, safely, while the truck is still being driven on the highway, but I would assume it can't be difficult to make it so it could also/instead run on train tracks at the press of a button."
However looking at your previous thread:
Presuming you still want this, on Windows, AND hardlinks, I'm again going to suggest a CloudDrive-on-DrivePool setup, something like:
Create a DrivePool pool, let's call it "Alpha":
- add your "main drive" as the first disk (let's call it "Beta") and your "single drive" as the second disk (let's call it "Gamma")
- enable real-time duplication, disable read-striping, and set whole-pool duplication x2 (you could use per-folder only if you knew exactly what you were doing)
- note: if you plan to expand/replace Beta and/or Gamma in the future, and don't mind a bit of extra complexity now, I would suggest adding each of them to a pool of their own and THEN adding those pools instead to Alpha, to help with future-proofing. YMMV.
Connect "Alpha" as a Local Disk provider in CloudDrive, then create your virtual drive on it, let's call that "Zod":
- make sure its chosen size is not more than the free space of each of Beta and Gamma (so if Beta had 20TB free and Gamma had 18TB free you'd pick a size less than 18TB) so that it'll fit.
There might be some fiddly bits to expand upon in that, but that's the gist of it. Then you could create hardlinks in Zod to your heart's content (a sketch follows below) and they'll be replicated across all disks. The catch would be that you couldn't "read" Zod's data by looking individually at Alpha, Beta or Gamma because of the block translation - but if you "lost" either Beta or Gamma due to physical disk failure you could still recover by replacing the relevant disk(s), with Zod switching to a read-only state until you did so. You could even attach Gamma to another machine that also had CloudDrive installed, to extract the data from it, but you'd have to be careful to avoid turning that into a one-way trip.
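As a quick illustration, once Zod exists you can create hardlinks on it like on any other NTFS volume; a minimal sketch, assuming Zod is mounted as Z: (the file paths are placeholders):

  # Create a hardlink so Z:\copy.txt and Z:\original.txt reference the same data
  New-Item -ItemType HardLink -Path Z:\copy.txt -Target Z:\original.txt

The point is that the link lives entirely inside Zod's NTFS volume; DrivePool only ever sees CloudDrive's block data, never the links themselves.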
-
Shane reacted to servonix in Hardlinks
Since you don't use/haven't used drivepool I would suggest learning more about how it works so you can see why supporting hardlinks isn't as simple as you might think, and also why having drivepool try to support them conditionally isn't a great idea either.
I've been using drivepool for about 12 years now myself. I have full confidence that if there was a simple way for Alex to add support for hardlinks on the pool it would have been done a long time ago.
I also want to be 100% clear that I'd love to see hardlinks supported on the pool myself, but I also understand why it hasn't happened yet.
-
Shane reacted to denywinarto in 1 Poolpart cannot be read by drivepool (SOLVED)
Whew, I don't know which one actually fixed it, but it suddenly showed up in my pool!
I did everything above until the step where the poolpart.guid could not be renamed, so I tried putting the drive offline and online; DP said one of the disks was missing, and that's when I noticed it was back in the pool. Many thanks Shane, you saved me from the massive headache of copying 20TB of files!
I almost reinstalled DP as well; I was so focused on finding the missing drive in the non-pooled list that I didn't notice it was back in the pool.
-
Shane got a reaction from denywinarto in 1 Poolpart cannot be read by drivepool (SOLVED)
Try running the following in a command prompt run as administrator:
dpcmd refresh-all-poolparts

And where poolpartPath is the full path of the poolpart folder of the missing drive:

dpcmd unignore-poolpart poolPartPath

for example: dpcmd unignore-poolpart e:\poolpart.abcdefgh-abcd-abcd-abcd-abcdefgh1234

And where volumeid is that of the "missing" drive (you can use the command 'mountvol' to get a list of your volume identifiers):

dpcmd hint-poolpart volumeid

for example: dpcmd hint-poolpart \\?\Volume{abcdefgh-abcd-abcd-abcd-abcdefgh1234}\

(while poolparts and volumes happen to use the same format for their identifying string, they are different objects)
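If mountvol's output is hard to sift through, a PowerShell alternative for matching drive letters to volume identifiers (a sketch using the built-in Get-Volume cmdlet; the Path column is the \\?\Volume{...}\ string that hint-poolpart expects):

  Get-Volume | Select-Object DriveLetter, FileSystemLabel, Path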
If that doesn't help, you may try one or more of the following, in no particular order of preference:
- repairing DrivePool (Windows Control Panel, Programs and Features, select StableBit DrivePool, select Change, select Repair)
- uninstalling, rebooting and reinstalling DrivePool (may lose configuration but not duplication or content; keep your license key handy just in case)
- resetting the pool (losing balancing/placement configuration, but not duplication or content): open up StableBit DrivePool's UI, click on the "Gear" icon in the top right-hand corner, open the "Troubleshooting" menu and select "Reset all settings...", then click on the "Reset" button.
Alternatively, you may try manually reseeding the "missing" drive (a robocopy sketch for the move/copy steps follows below):
- if - and only if - you already had and still have real-time duplication enabled for the entire pool and you just want to get going again ASAP:
  - remove the "missing" drive from DrivePool
  - quick format it (just make sure it's the right drive that you're formatting!!!)
  - re-add the "missing" drive, and let DrivePool handle re-propagating everything from your other drives in the background.
- otherwise:
  - open the "missing" drive in Windows File Explorer (or another explorer of your choice)
  - find the hidden poolpart.guid folder in the "missing" drive
  - rename it from poolpart.guid to oldpart.guid (don't change the guid)
  - remove the "missing" drive from DrivePool
  - if the "missing" drive is now available to add again:
    - re-add the "missing" drive
    - move all your folders and files (but not $recycle.bin, system volume information or .covefs) from the old oldpart.guid folder to the new poolpart.guid folder that's just been created
    - tell the pool to re-measure
    - once assured everything is showing up in the pool, you can delete the oldpart.guid folder.
  - else, if the "missing" drive is still not available to add again:
    - copy your folders and files (but not $recycle.bin, system volume information or .covefs) from the old oldpart.guid folder to the pool drive
    - duplicates of your folders/files may exist on the pool, so let it merge existing folders but skip existing files
    - you may need to add a fresh disk to the pool if you don't have enough space on the remaining disks in the pool
    - check that everything is showing up in the pool
    - if all looks good, quick format the "missing" drive and it should then show up as available to be re-added.
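For the move/copy steps above, a sketch using the built-in robocopy tool (run elevated; the drive letter and guids are placeholders - double-check both paths before running):

  # Copy everything except the excluded system folders; verify, then delete oldpart.guid afterwards
  robocopy 'E:\oldpart.abcd1234' 'E:\PoolPart.efgh5678' /E /COPYALL /DCOPY:DAT /XD '$RECYCLE.BIN' 'System Volume Information' '.covefs'
-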
Shane reacted to servonix in Hardlinks
Your examples seem to only take a 2 disk pool into consideration. Adding a 3rd drive (or more) into the pool immediately complicates things when it comes to the balancing and duplication checks that would be needed to make this possible.
Also, even if you try to control the conditions under which hardlinks are allowed to be created, those conditions can change due to adding/removing a drive or any automatic balancing or drive evacuation that takes place. Allowing hardlinks to be created under certain conditions when those conditions could change at any point afterwards probably isn't a good idea.
Any implementation to fully support hardlinks has to work properly and scale with large pools without any (or at least minimal) performance penalties, and be 100% guaranteed to not break the links regardless of duplication, balancing, etc.
-
Shane got a reaction from Source_man in Regarding read performance from SMB
Hi! If your storage is HDD-only, consider upgrading to SSD or a mix of SSD and HDD (note that if you duplicate a file across an SSD and a HDD, DrivePool should try to pick the SSD to access first where possible). Also consider upgrading your network (e.g. use the 5 GHz instead of the 2.4 GHz band for your WiFi, and/or use 2.5, 5 or 10 Gbit instead of 1 Gbit links for your cabling and NICs, etc.) depending on what your bottleneck(s) are.
You can try prioritising network access over local access to DrivePool via the GUI: in Manage Pool -> Performance make sure Network I/O boost is ticked.
(And also make sure 'Read Striping' is NOT ticked as that feature is bugged).
Fine-tuning SMB itself can be complicated; see https://learn.microsoft.com/en-us/windows-server/administration/performance-tuning/role/file-server/ and https://learn.microsoft.com/en-us/windows-server/administration/performance-tuning/role/file-server/smb-file-server amongst others.
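Before changing anything there, it can help to snapshot the current SMB settings so you can compare or revert later; a sketch using the built-in SMB cmdlets (run in an elevated PowerShell on the server and client respectively):

  # On the server: save the current SMB server settings for reference
  Get-SmbServerConfiguration | Out-File smb-server-baseline.txt

  # On the client: save the current SMB client settings
  Get-SmbClientConfiguration | Out-File smb-client-baseline.txt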
PrimoCache is good for local use but doesn't help with mapped network drives according to its FAQ, and DrivePool user gtaus reported that they didn't see any performance benefits when accessing a pool (that had PrimoCache enabled) over their network.
-
Shane reacted to gtaus in DrivePool + Primocache
FWIW, if anybody is still interested in this topic, I tried PrimoCache with DrivePool on my home media server, and the transfer speeds did indeed show significant improvement within that computer. However, most of my workflow consists of transferring files from client computers to my host server. In that case, PrimoCache has NO effect at all. PrimoCache does not work with network drives, which is how my client computers see the DrivePool volume on my server. So transferring files from my client computers to DrivePool on my server shows absolutely no improvement in transfer rate with PrimoCache.
I have found my better option is to use the SSD Optimizer as a frontend cache for DrivePool, which allows me to transfer my files from a client computer into the SSD frontend of DrivePool on the server. Then the SSD will rebalance the files as needed on to the archive HDDs in the background. Of course, file transfers are still limited by your network speed, but having the SSD frontend on DrivePool pick up the files is usually much faster than writing to the archive HDDs directly over the network.
I contacted PrimoCache support about this issue, and they verified that PrimoCache does not work with network drives. Since network transfer speed is the bottleneck in my system, I ended up not purchasing PrimoCache, although I do think it's a very good solution for other setups.
-
Shane reacted to Jaga in DrivePool + Primocache
I didn't really do it to compare speeds, but rather to show that a popular, robust, and established caching program works with DrivePool. And it's one I highly recommend (I run it on all my workstations, laptops, and server). The only caveat when using it with a DrivePool is that pool drives are usually very large, so you need a lot of RAM to get even a small hit rate on the cache. But for writes, recently accessed files and frequently used files, it's quite effective.
I ran a similar test with Anvil Pro, despite it being geared more toward SSD testing (which really is how you should benchmark Primocache). Here's the test with Primocache paused:
And the test with Primocache re-enabled:
It shows a ridiculous increase in speed, as you'd expect. The read/write speeds for both tests are exactly where I'd expect to see them. And since Windows' cache isn't all that aggressive (nor does it auto-scale well), I would not expect to see any difference using a benchmark that was intended to use it. Anvil Pro may - I don't know. Certainly the Windows cache wouldn't have much of an impact once you start scaling the test over 1GB.
Feel free to run the test yourself, with whatever benchmark software you want. Primocache has a free 60-day trial license with full features so people can test it themselves.
-
Shane reacted to Jaga in DrivePool + Primocache
I recently found out these two products were compatible, so I wanted to check the performance characteristics of a pool with a cache assigned to its underlying drives. Pleasantly, I found there was a huge increase in pool drive throughput using Primocache and a good-sized Level-1 RAM cache.
This pool uses a simple configuration: 3 WD 4TB Reds with 64KB block size (both volume and DrivePool). Here are the raw tests on the Drivepool volume, without any caching going on yet:
After configuring and enabling a sizable Level-1 read/write cache in Primocache on the actual drives (Z: Y: and X:), I re-ran the test on the DrivePool volume and got these results:
As you can see, not only do both pieces of software work well with each other, but the speed increase on all DrivePool operations (the D: in the benchmarks was my DrivePool letter) was vast. For anyone looking to speed up their pool, Primocache is a viable and effective means of doing so. It would even work well with the SSD Cache feature in DrivePool - simply cache the SSD with Primocache, and boost read (and write, if you use a UPS) speeds. Network speeds are, of course, still limited by bandwidth, but any local pool operations will run much, much faster.
I can also verify this setup works well with SnapRAID, especially if you also cache the Parity drive(s).
I honestly wasn't certain if this was going to work when I started thinking about it, but I'm very pleased with the results. If anyone else would like to give it a spin, Primocache has a 60-day trial on their software.
-
Shane reacted to Christopher (Drashna) in Do you duplicate everything?
Ah, okay. So just another term like "spinning rust"
And that's ... very neat!
You are very welcome!
And I'm in that same boat, TBH. ZFS and btrfs look like good pooling solutions, but there's a lot that goes into them. Unraid is another option, but honestly one I'm not fond of, mainly because of how the licensing works (I ... may be spoiled by our own licensing, I admit).
And yeah, the recovery and such for the software is great. A lot of time and effort has gone into making it so easy, and we're glad you appreciate that! And the file recovery does make things nice, when you're in a pinch!
And I definitely understand why people keep asking for a linux and/or mac version of our software. There is a lot to say about something that "just works".
-
Shane reacted to Thronic in Do you duplicate everything?
Thx for replying, Shane.
Having played around with unraid the last couple of weeks, I like the zfs raidz1 performance when using pools instead of the default array.
Using 3x 1TB e-waste drives lying around, I get ~20-30MB/s writing speed on the default unraid array with single parity. This increases to ~60MB/s with turbo write/reconstruct write. It all tanks down to ~5MB/s if I do a parity check at the same time - in contrast with people on /r/DataHoarder claiming it would have no effect. I'm not sure the flexibility is worth the performance hit for me, and I don't wanna use cache/mover to make up for it. I want storage I can write directly to in real time.
Using a raidz1 vdev of the same drives, also in unraid, I get a consistent ~112 MB/s writing speed - even when running a scrub operation at the same time. I then tried raidz2 with the same drives, just to have done it - same speedy result, which kinda impressed me quite a bit.
This is over SMB from a Windows 11 client, without any tweaks, tuning settings or modifications.
If I park the mindset of using DrivePool duplication as a middle road that avoids needing backups (from a purely trying-to-save-money standpoint), parity becomes a lot more interesting - because then I'd want a backup anyway, just because of the more inherently dangerous nature of striping and how all data relies on it.
DDDDDDPP
DDDDDDPP
D=data, P=parity (raidz2)
Would just cost 25% storage and have backup, with even backup having parity. Critical things also in 3rd party provider cloud somewhere. The backup could be located anywhere, be incremental, etc. Comparing it with just having a single duplicated DrivePool without backups it becomes a 25% cost if you consider row1=main files, row2=duplicates. Kinda worth it perhaps, but also more complex.
If I stop viewing duplication in DrivePool as a middle road for not having backups at all and convince myself I need complete backups as well, the above is rather enticing - if wanting to keep some kind of uptime redundancy. Even if I need to plan multiple drives, sizes etc. On the other hand, with zfs, all drives would be very active all the time - haven't quite made my mind up about that yet.
-
Shane got a reaction from Thronic in Do you duplicate everything?
Yes; x2 on all but the criticals (e.g. financials) which are x3 (they don't take much space anyway).
It is, as you say, a somewhat decent and comfortable middle road against loss (in addition to the backup machine of course). Drive dies? Entire pool is still readable while I'm getting a replacement.
I do plan to eventually - unless StableBit releases a "DrivePool v3" with parity built in - move to a ZFS NAS or similar but today is not that day (nor is this month, maybe not this year either).
-
Shane reacted to Scott H in A file cannot be opened because the share access flags are incompatible
A re-install of drivepool fixed the issue in my case.
-
Shane reacted to ppmguire in 0x8000ffff catastrophic failure after 2.3.12.1683 update
1) As mentioned in my OP, they aren't marked as read-only.
2) Funny thing. As I was preparing to do this and grabbed the SetACL stuff, I did a laughable SFC /Scannow, then for sanity's sake another reboot, and it corrected itself. SFC found no corrupted files, but luckily the reboot fixed a potentially annoying problem.
I will add the SetACL instructions under the read-only section in my documentation in case this happens again. After several years of using DrivePool in multiple different areas, this is the first time I've seen this and my usual troubleshooting didn't work. Thank you for the response.
-
Shane reacted to Fritz the Cat in Kudos to Drivepool
While updating my backup server to Server 2022 I accidentally deleted the partition on one of the 4 14TB HDs. I was in full panic mode but continued with the upgrade intending to deal with the deleted partition later. Well, the pool was x2 and after installing Drivepool, to my surprise, it fixed itself without me having to do anything at all. So cool.
-
Shane reacted to viperpray in Cant find new versions
I'm not positive what version the original server was running at the time, and I have no way to check - sadly, it's not even the same OS drive, so I can't just use Event Viewer to try and go back.
However, I did state that ChatGPT didn't get any of the version numbers correct for any of the StableBit products - it thinks DrivePool is somehow on something like 2.4.XX.XXXX, and as far as I can tell no such version has ever existed. As I said, AI is cool and can give helpful advice, but for the most part it's kind of trash in its current state for most anything. It's crazy to think that multi-billion-dollar companies are relying on this thing to manage people or even other AIs. Either way, I'm happy it's working; if I could give you a reason it's working, beyond "it just is", I would. I just know I tried the lower version multiple times over the last 2-3 months and had no luck, and my pool is about 284TB, so every time I ran one of those commands to force any type of access it took many hours.
-
Shane got a reaction from Thronic in Beware of DrivePool corruption / data leakage / file deletion / performance degradation scenarios Windows 10/11
SyncThing does not rely at all on persistent FileID and does not rely solely on watcher notifications, thus it is not vulnerable to DrivePool's FileID collisions and is insulated against DrivePool's change notification bug.
However, as MitchC mentioned, there is another DrivePool issue, the Read Striping bug. While that bug is apparently not omnipresent across installs (so you may just be one of the lucky ones) and requires Read Striping to be both enabled (warning: this is currently the default on new installs) and active (even if enabled, it only affects pools that are using duplication), it is definitely something to be neutralised.
TL;DR: as far as solely SyncThing + DrivePool is concerned, if you're not using duplication you're fine, and if you are using duplication but not Read Striping you're fine; if you have been using duplication and Read Striping, then you should disable the latter and check for corrupted files (the nature of the bug means a file corrupted on one pool could then be synced to the other pool in that corrupted state).
-
Shane reacted to MitchC in Beware of DrivePool corruption / data leakage / file deletion / performance degradation scenarios Windows 10/11
If this is bi-directional syncing as well, this is a bit of a nightmare scenario for this bug. 80TB is a massive amount; if just 0.1% of your data (on the order of 100GB) changed, would you notice?
SHA hashing of every file is good at detecting ordinary corruption but would likely not catch the data loss this set of bugs can cause. The issue here would appear more as if you had intentionally overwritten a file with new content (or, depending on the application, as if you had copied one file over another).
If your data is all pristine, I can assume Syncthing probably doesn't use file IDs right now.
It is well shown that DrivePool's file change notification system has multiple large flaws. Many applications can trigger a 'file content changed' notification from Windows even when they are only reading a file. Maybe Syncthing checks the file timestamp and size at that point and, if they're the same, does nothing. If it listens to the file system, though, at best the file gets completely re-read and re-hashed just for Syncthing to decide nothing changed; at worst the file just gets queued for syncing and you sync extra data. Either way you could be wearing the drives out faster than needed, losing performance, or potentially wasting bandwidth and backup time. We also know that DrivePool does not properly bubble up file change notifications when writes actually happen, which, depending on how Syncthing is watching, could mean it misses some files that change. Not a huge deal if it does a full scan monthly to make sure it has all file changes detected, but if in between you rely on file change notifications to catch files to sync, it means you might think everything was in sync right up to a crash when in reality it might be up to a month out of date.
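If you want to observe this notification behaviour yourself, a minimal sketch using .NET's FileSystemWatcher from PowerShell (P:\ is a placeholder for your pool drive; open files read-only in other applications and watch which events fire):

  $watcher = New-Object System.IO.FileSystemWatcher 'P:\'
  $watcher.IncludeSubdirectories = $true
  $watcher.EnableRaisingEvents = $true
  # Log every 'Changed' notification the OS raises for the pool
  Register-ObjectEvent $watcher Changed -Action {
      Write-Host "Changed: $($Event.SourceEventArgs.FullPath)"
  } | Out-Null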
If the file is likely to actually have changed (say, a log file), I would say it's unrelated. Even for one-time writes, it could be that the application was still writing the file when Syncthing started hashing it - again unrelated. It is also possible, though, that Syncthing goes to read a changed file but triggers the notification bug itself, so it sees the file as changed again and then gives that warning. This could be a race condition: the bug would likely fire right at the start of the read, so depending on when Syncthing starts treating post-read notifications as a change, it may only happen sometimes. Another option: if something else is also watching for file change notifications, and that other app reads the file after Syncthing starts reading it, the other app causes a 'write notification' even though it is only reading, due to this bug.
First, there is 0% chance these bugs are not critically problematic with DrivePool. They can lead to direct data loss or corruption and to sensitive data being leaked, which is a horrid statement for a file system. The question is whether these bugs affect your specific use case.
The problem is that it may not present uniformly. Maybe Syncthing does diff-based syncing of only the changed parts for bigger files (say, over 10MB), but blindly syncs any files it thinks have changed that are under 10MB, since they're so small and it keeps CPU usage down. Maybe it uses a simpler solution: if a file is 'changed', it tries to optimize for append-based changes - it hashes the file up to the old file size, and if that equals the old hash it knows it only needs to sync the newer bytes; otherwise it syncs the whole file.
Even if the worst that happens right now is excess drive reads or bandwidth spend, that says nothing about tomorrow. Maybe Syncthing decides it does not need to hash a file when it gets a change notification, as that causes a full additional read of the entire file (hurting performance and drive life), so it starts to just trust Windows file change notifications. Maybe you never even upgrade Syncthing, but right now you don't use an application that triggers the 'file content changed' notification when it just opens a file for reading (e.g. VSCode might not, but something like Notepad does). You start using a new file editor or video player and now it does trigger that bug, so Syncthing is getting a whole lot more of these change notifications. When you upgrade Syncthing, do you read about all the changes between versions? Who knows if some internal changes would even make the change log. If Syncthing starts relying on FileIDs more in the next version, then your data may slowly corrupt.
If most of your data doesn't change, then hashing it all now, hashing it again down the line, and comparing would show you if a file changes that shouldn't. This is not the same hashing that Syncthing does, as that is looking for corruption from the system/disk/transfer and not for file contents being updated on purpose. Still, as these bugs are likely to affect frequently-changing files first, even that may not catch things quickly (mainly you are waiting for a write against the FileID of one of the new files to end up overwriting an old file instead).
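A sketch of that baseline-and-compare approach in PowerShell (the paths and file names are placeholders; run the first part now and the second part later against the saved CSV):

  # Snapshot: hash every file under the pool path and save to CSV
  Get-ChildItem P:\data -Recurse -File |
      Get-FileHash -Algorithm SHA256 |
      Export-Csv baseline.csv -NoTypeInformation

  # Later: re-hash and report files whose hash no longer matches the baseline
  $old = Import-Csv baseline.csv
  $new = Get-ChildItem P:\data -Recurse -File | Get-FileHash -Algorithm SHA256
  Compare-Object $old $new -Property Path, Hash | Where-Object SideIndicator -eq '=>'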
I briefly looked at Syncthing's documentation; it says it compares file modification times to detect whether a file has changed. I don't know if the documentation just didn't mention it using size as well, or if it actually only looks at the modification time. If the latter, this could be riskier as well.
Personally, I moved to TrueNAS, which, while not as flexible in terms of drive pooling, got me what I wanted, and the snapshotting makes backups fantastic. For others, if Unraid or similar is possible, you could still have such flexibility without the liability of DrivePool. This is not a fun game of chance to play, where you are hoping you don't find yourself using the wrong combination of apps that leads to data loss.
DrivePool clearly works correctly for most people (at least mostly - many may not know that performance or other issues are caused by DrivePool). Because of the exceptional difficulty of knowing how these bugs could affect you today or in the future, I still see it as quite reckless that DrivePool does not at least warn you of these possibilities. This is not dissimilar to the fact that there seems to be a significant bug in reading striped data from DrivePool for multiple users, yet there is no warning that striping can cause such a bad issue:
-
Shane reacted to Christopher (Drashna) in Cant find new versions
2.3.11.1663 should be safe for people with existing pools.
However, if you're on an older version of Windows, then you may need to use 2.3.8.1600 due to signing issues.
As for the folder enumeration issues, there isn't a specific fix for this, and ChatGPT is *not* a reliable source of information, ever. LLMs are incredibly prone to "hallucinating" (a nice way to say "making shit up with no basis in reality").
And the absolute latest release is:
https://dl.covecube.com/DrivePoolWindows/release/download/StableBit.DrivePool_2.3.12.1680_x64_Release.exe
If you're having issues on the newer versions, please open a ticket at https://stablebit.com/Contact
-
Shane reacted to Christopher (Drashna) in LAN control. Sometimes remote devices appear in the drop down, sometimes they don't.
It's flaky because multicast detection can be flaky.
That said, you can manually add entries, if you need to:
https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings#RemoteControl.xml
-
Shane reacted to dominator99 in 2.14 GB unreadable (4492426 sectors) reported by SB scanner
Hi Shane
I ran HDD Regenerator & SeaTools on a different PC, which doesn't have SB Scanner installed. Although HDD Regenerator reported issues with 7 sectors & recommended backing up the HDD, it didn't actually report any bad sectors in the section below the progress graph; but obviously it found something it didn't like, hence the suggestion.
I've 6 HDDs & 1 SSD on the PC that reported the problem; one of the HDDs I only replaced after the issue with the problem drive, & everything is normal even using the same SATA port & data cable, so it seems the problem HDD is actually faulty even though SeaTools says it's OK!
I've RMA'd the HDD & returned it to Seagate so can no longer test it.
-
Shane got a reaction from JasonC in Show better information to find corresponding disk in Drivepool
A few options that come to mind:
You can right-click the column labels row in the Scanner GUI and enable display of drive letters if it isn't showing those.
You can right-click a specific disk and select "Disk Settings" to (amongst other options) change the Name of it from the default (the latter usually being its model code) to something you can more easily match with DrivePool.
If you don't use drive letters for your poolpart drives, you can mount them to a path in Windows Disk Management (e.g. c:\disks\1, c:\disks\2, c:\disks\3, etc.) and both Scanner and DrivePool will show that path instead of the drive letter.
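The same mounting can also be done from the command line; a sketch using the built-in mountvol tool (the folder and volume GUID are placeholders - running 'mountvol' with no arguments lists your volumes and their current mount points):

  # Create the mount point folder, then mount the volume onto it
  mkdir c:\disks\1
  mountvol c:\disks\1 \\?\Volume{abcdefgh-abcd-abcd-abcd-abcdefgh1234}\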