Leaderboard

Popular Content

Showing content with the highest reputation since 06/02/25 in Posts

  1. Ah, okay. So just another term like "spinning rust". And that's ... very neat! You are very welcome! And I'm in that same boat, TBH. ZFS and Btrfs look like good pooling solutions, but there's a lot that goes into them. Unraid is another option, but honestly one I'm not fond of, mainly because of how the licensing works (I ... may be spoiled by our own licensing, I admit). And yeah, the recovery and such for the software is great. A lot of time and effort has gone into making it so easy, and we're glad you appreciate that! The file recovery does make things nice when you're in a pinch! And I definitely understand why people keep asking for a Linux and/or Mac version of our software. There is a lot to be said for something that "just works".
    2 points
  2. No, it doesn't have any preference. But if you have read striping enabled, it should use that to optimize the reads. However, you might be able to use the file placement rules to hack this. E.g. adding a rule for "\.covefs\*" should work (or at least, doesn't error out). This should allow you to specify a drive (and testing this, it looks to be respected).
    1 point
  3. You can also do this with PowerShell, with "two" commands (see the sketch below this post). In both cases, this gets a list of disks -> checks for any items that have the "read only" property -> and for each, runs the command to disable read only. These are safe to run, and I have verified that they cause no issue on my personal system/server.
    1 point
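    A minimal sketch of the kind of PowerShell pipeline described above - the original post's exact commands aren't reproduced here, so treat this as an assumption-based illustration using the built-in Storage module cmdlets:

      # List the disks and show which ones are flagged read-only
      Get-Disk | Select-Object Number, FriendlyName, IsReadOnly

      # For every disk that reports read-only, clear the read-only flag
      Get-Disk | Where-Object { $_.IsReadOnly } | Set-Disk -IsReadOnly $false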
  4. Shane

    Hardlinks

    Salim, what you're asking for isn't simple. To try a bad car analogy to give you an idea: "I don't know how this truck works that can tow multiple trailers with the ability to redistribute the cargo between trailers and even switch out whole trailers, safely, while the truck is still being driven on the highway, but I would assume it can't be difficult to make it so it could also/instead run on train tracks with the press of a button."

    However, looking at your previous thread: presuming you still want this, on Windows, AND hardlinks, I'm again going to suggest a CloudDrive-on-DrivePool setup, something like:

    - Create a DrivePool pool, let's call it "Alpha"; add your "main drive" as the first disk (let's call it "Beta") and your "single drive" as the second disk (let's call it "Gamma"). Enable real-time duplication, disable read-striping, and set whole-pool duplication x2 (you could use per-folder only if you knew exactly what you were doing).
    - Note: if you plan to expand/replace Beta and/or Gamma in the future, and don't mind a bit of extra complexity now, I would suggest adding each of them to a pool of their own and THEN adding those pools instead to Alpha, to help with future-proofing. YMMV.
    - Connect "Alpha" as a Local Disk provider in CloudDrive, then create your virtual drive on it; let's call that "Zod". Make sure its chosen size is not more than the free space of each of Beta and Gamma (so if Beta had 20TB free and Gamma had 18TB free you'd pick a size less than 18TB) so that it'll fit.

    There might be some fiddly bits to expand upon in that, but that's the gist of it. Then you could create hardlinks in Zod to your heart's content and they'll be replicated across all disks (see the sketch below this post). The catch is that you couldn't "read" Zod's data by looking individually at Alpha, Beta or Gamma because of the block translation - but if you "lost" either Beta or Gamma due to physical disk failure you could still recover by replacing the relevant disk(s), with Zod switching to a read-only state until you did so. You could even attach Gamma to another machine that also had CloudDrive installed, to extract the data from it, but you'd have to be careful to avoid turning that into a one-way trip.
    1 point
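    As a quick illustration of creating a hardlink on the CloudDrive volume ("Zod") described above - the Z: drive letter and file paths here are hypothetical examples, not taken from the original post:

      # Create a hardlink so both names point at the same file data on the CloudDrive volume
      New-Item -ItemType HardLink -Path "Z:\Media\episode01-link.mkv" -Target "Z:\Media\episode01.mkv"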
  5. servonix

    Hardlinks

    Since you don't use/haven't used drivepool I would suggest learning more about how it works so you can see why supporting hardlinks isn't as simple as you might think, and also why having drivepool try to support them conditionally isn't a great idea either. I've been using drivepool for about 12 years now myself. I have full confidence that if there was a simple way for Alex to add support for hardlinks on the pool it would have been done a long time ago. I also want to be 100% clear that I'd love to see hardlinks supported on the pool myself, but I also understand why it hasn't happened yet.
    1 point
  6. Whew, I don't know which one actually fixed it, but it suddenly showed up in my pool! I did everything above until the step where the poolpart.guid couldn't be renamed, so I tried putting the drive offline and online. DP said one of the disks was missing, and that's when I noticed it was back in the pool. Many thanks Shane, you saved me from the massive headache of copying 20TB of files! I almost reinstalled DP as well; I was so focused on finding the missing drive in the non-pooled list that I didn't notice it was back in the pool.
    1 point
  7. Try running the following in a command prompt run as administrator:

    dpcmd refresh-all-poolparts

    And where poolpartPath is the full path of the poolpart folder of the missing drive:

    dpcmd unignore-poolpart poolPartPath
    for example, dpcmd unignore-poolpart e:\poolpart.abcdefgh-abcd-abcd-abcd-abcdefgh1234

    And where volumeid is that of the "missing" drive (you can use the command 'mountvol' to get a list of your volume identifiers):

    dpcmd hint-poolpart volumeid
    for example, dpcmd hint-poolpart \\?\Volume{abcdefgh-abcd-abcd-abcd-abcdefgh1234}\
    (while poolparts and volumes happen to use the same format for their identifying string, they are different objects)

    If that doesn't help, you may try one or more of the following, in no particular order of preference:
    - repairing DrivePool (Windows Control Panel, Programs and Features, select StableBit DrivePool, select Change, select Repair)
    - uninstalling, rebooting and reinstalling DrivePool (may lose configuration but not duplication or content, and keep your license key handy just in case)
    - resetting the pool (losing balancing/placement configuration, but not duplication or content): open up StableBit DrivePool's UI, click on the "Gear" icon in the top right-hand corner, open the "Troubleshooting" menu and select "Reset all settings...", then click on the "Reset" button.

    Alternatively, you may try manually reseeding the "missing" drive (a small PowerShell sketch for the rename step follows this post).
    If - and only if - you already had and have real-time duplication enabled for the entire pool and you just want to get going again ASAP:
    - remove the "missing" drive from DrivePool
    - quick format it (just make sure it's the right drive that you're formatting!!!)
    - re-add the "missing" drive, and let DrivePool handle re-propagating everything from your other drives in the background.

    Otherwise:
    - open the "missing" drive in Windows File Explorer (or other explorer of your choice)
    - find the hidden poolpart.guid folder in the "missing" drive
    - rename it from poolpart.guid to oldpart.guid (don't change the guid)
    - remove the "missing" drive from DrivePool
    - if the "missing" drive is now available to add again, proceed to:
      - re-add the "missing" drive
      - move all your folders and files (but not $recycle.bin, system volume information or .covefs) from the old oldpart.guid folder to the new poolpart.guid folder that's just been created
      - tell the pool to re-measure
      - once assured everything is showing up in the pool, you can delete the oldpart.guid folder.
    - else, if the "missing" drive is still not available to add again, proceed to:
      - copy your folders and files (but not $recycle.bin, system volume information or .covefs) from the old oldpart.guid folder to the pool drive
      - duplicates of your folders/files may exist on the pool, so let it merge existing folders but skip existing files
      - you may need to add a fresh disk to the pool if you don't have enough space on the remaining disks in the pool
      - check that everything is showing up in the pool
      - if all looks good, quick format the "missing" drive and it should then show up as available to be re-added.
    1 point
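    For the manual reseeding steps above, a minimal PowerShell sketch of finding and renaming the hidden poolpart folder - the E: drive letter and the GUID shown are hypothetical placeholders:

      # Show hidden PoolPart.* folders on the "missing" drive
      Get-ChildItem -Path E:\ -Force -Directory | Where-Object { $_.Name -like 'PoolPart.*' }

      # Rename the folder so it becomes an "oldpart" (keep the GUID part unchanged)
      Rename-Item -Path 'E:\PoolPart.abcdefgh-abcd-abcd-abcd-abcdefgh1234' -NewName 'OldPart.abcdefgh-abcd-abcd-abcd-abcdefgh1234'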
  8. servonix

    Hardlinks

    Your examples seem to only take a 2-disk pool into consideration. Adding a 3rd drive (or more) into the pool immediately complicates things when it comes to the balancing and duplication checks that would be needed to make this possible. Also, even if you try to control the conditions under which hardlinks are allowed to be created, those conditions can change due to adding/removing a drive or any automatic balancing or drive evacuation that takes place. Allowing hardlinks to be created under certain conditions when those conditions could change at any point afterwards probably isn't a good idea. Any implementation to fully support hardlinks has to work properly and scale with large pools without any (or at least minimal) performance penalties, and be 100% guaranteed to not break the links regardless of duplication, balancing, etc.
    1 point
  9. Hi! If your storage is only HDDs, consider upgrading to SSDs or a mix of SSDs and HDDs (note that if you duplicate a file across an SSD and an HDD, DrivePool should try to pick the SSD to access first where possible). Also consider upgrading your network (e.g. use the 5 GHz instead of the 2.4 GHz band for your WiFi, and/or use 2.5, 5 or 10 Gbit instead of 1 Gbit links for your cabling and NICs, etc.) depending on what your bottleneck(s) are. You can try prioritising network access over local access to DrivePool via the GUI: in Manage Pool -> Performance, make sure Network I/O boost is ticked. (And also make sure 'Read Striping' is NOT ticked, as that feature is bugged.) Fine-tuning SMB itself can be complicated (a small sketch for inspecting your SMB settings follows this post); see https://learn.microsoft.com/en-us/windows-server/administration/performance-tuning/role/file-server/ and https://learn.microsoft.com/en-us/windows-server/administration/performance-tuning/role/file-server/smb-file-server amongst others. PrimoCache is good for local use but doesn't help with mapped network drives according to its FAQ, and DrivePool user gtaus reported that they didn't see any performance benefits when accessing a pool (that had PrimoCache enabled) over their network.
    1 point
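    As a starting point for the SMB tuning mentioned above, a minimal sketch using the built-in SmbShare cmdlets in an elevated PowerShell - this only inspects the current state; which settings are worth changing depends entirely on your setup:

      # Check whether SMB multichannel is in use between client and server
      Get-SmbMultichannelConnection

      # Review the current client- and server-side SMB configuration before changing anything
      Get-SmbClientConfiguration
      Get-SmbServerConfiguration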
  10. Thronic

    Do you duplicate everything?

    Thx for replying, Shane. Having played around with Unraid the last couple of weeks, I like the ZFS raidz1 performance when using pools instead of the default array. Using 3x 1TB e-waste drives lying around, I get ~20-30MB/s write speed on the default Unraid array with single parity. This increases to ~60MB/s with turbo write/reconstruct write. It all tanks down to ~5MB/s if I do a parity check at the same time - in contrast with people on /r/DataHoarder claiming it will have no effect. I'm not sure if the flexibility is worth the performance hit for me, and I don't wanna use cache/mover to make up for it. I want storage I can write directly to in real time.

    Using a raidz1 vdev of the same drives, also in Unraid, I get a consistent ~112 MB/s write speed - even when running a scrub operation at the same time. I then tried raidz2 with the same drives, just to have done it - same speedy result, which kinda impressed me quite a bit. This is over SMB from a Windows 11 client, without any tweaks, tuning settings or modifications.

    If I park the mindset of using DrivePool duplication as a middle road of not needing backups (from just a trying-to-save-money standpoint), parity becomes a lot more interesting - because then I'd want a backup anyway, just because of the more inherently dangerous nature of striping and how all data relies on it.

    DDDDDDPP
    DDDDDDPP
    D=data, P=parity (raidz2)

    Would just cost 25% storage and have backup, with even the backup having parity. Critical things also in a 3rd party provider cloud somewhere. The backup could be located anywhere, be incremental, etc. Comparing it with just having a single duplicated DrivePool without backups, it becomes a 25% cost if you consider row 1 = main files, row 2 = duplicates. Kinda worth it perhaps, but also more complex.

    If I stop viewing duplication in DrivePool as a middle road for not having backups at all and convince myself I need complete backups as well, the above is rather enticing - if wanting to keep some kind of uptime redundancy. Even if I need to plan multiple drives, sizes etc. On the other hand, with ZFS, all drives would be very active all the time - haven't quite made my mind up about that yet.
    1 point
  11. Shane

    Do you duplicate everything?

    Yes; x2 on all but the criticals (e.g. financials) which are x3 (they don't take much space anyway). It is, as you say, a somewhat decent and comfortable middle road against loss (in addition to the backup machine of course). Drive dies? Entire pool is still readable while I'm getting a replacement. I do plan to eventually - unless StableBit releases a "DrivePool v3" with parity built in - move to a ZFS NAS or similar but today is not that day (nor is this month, maybe not this year either).
    1 point
  12. A re-install of drivepool fixed the issue in my case.
    1 point
  13. 1) As mentioned in my OP, they aren't marked as read only. 2) Funny thing. As I was preparing to do this and grabbed the SetACL stuff, I did a laughable SFC /scannow and then, for sanity's sake, another reboot, and it corrected itself. SFC found no corrupted files, but luckily the reboot fixed a potentially annoying problem. I will add the SetACL instructions under the read-only section in my documentation in case this happens again. After several years of using DrivePool in multiple different areas, this is the first time I've seen this, and my usual troubleshooting didn't work. Thank you for the response.
    1 point
  14. For StableBit DrivePool, the only setting that can impact this is the "BitLocker" detection setting: https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings (coincidentally, the setting in question is the one used on that page as the example of how to edit and disable a setting). Also, make sure that you do not use A:\ or B:\ for the pool's drive letter, as Windows has some hard-coded behavior that pings these drive letters frequently under the assumption that they're floppy drives. If you have StableBit Scanner installed, then in the Scanner settings enable the "throttle queries" option in the SMART tab, as these queries can cause the drives to wake up or stay active.
    1 point
  15. For any and all licensing requests, you should always head to https://stablebit.com/Contact first. We should get to licensing tickets within the day or so, and to support tickets within a business day or so. I say "or so" given time zones, scheduling, etc. - it may not be exactly a day. If it takes longer than that, please feel free to bump the ticket, or email me directly at christopher@covecube.com. (Sorry, this may/does get a bit ranty.) For some of those threads, we've looked into the issues and can't reproduce them. Without being able to do so, it is very hard to a) verify that the issue exists, b) verify that the issue is actually related to our software, and c) address the issue. And there are only so many ways to say that before it sounds canned and/or condescending. That said, if we could "wave our hands and make the issues go away", we'd absolutely do so (and not in a "delete the thread and pretend it never happened" sort of way). Also, in general, I have personally found forums and other public places a bad place to do support. There's too much dogpiling of issues that may not actually be related. This isn't something that is strictly related to StableBit DrivePool; I've seen it in open source projects I've worked on too, for instance. It makes things messier and harder to actually deal with real issues. One on one is always ideal for support. If there are patterns that emerge, those can be reported internally, and much more naturally/organically. E.g. "Hey, I have these few tickets that are exhibiting the same or very similar behavior, so there definitely looks to be an issue here" sort of thing (which has happened a number of times).
    1 point
  16. Uninstalling doesn't remove or reset the settings, so you should be fine with just uninstalling and reinstalling. Specifically, there's no need to delete the ProgramData folder. The "covefs failed to start" error is the crux of the issue here; covefs is the driver for the pool. Without it, the pool can't run. And if it's not started/installed, then the service will generate a version mismatch and stall out (a quick way to check the driver's state is sketched below this post). Specifically, this version should work: https://dl.covecube.com/DrivePoolWindows/release/download/StableBit.DrivePool_2.3.8.1600_x64_Release.exe
    1 point
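    A minimal sketch of checking whether the covefs driver mentioned above is installed and running - covefs is the driver name quoted in the post, the rest is a generic Windows driver query run from an elevated PowerShell:

      # Query the driver's state via the service control manager
      sc.exe query covefs

      # Or via CIM, which also shows the start mode if the driver is registered
      Get-CimInstance Win32_SystemDriver -Filter "Name='covefs'" | Select-Object Name, State, StartMode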
  17. Jaga

    DrivePool + Primocache

    I recently found out these two products were compatible, so I wanted to check the performance characteristics of a pool with a cache assigned to its underlying drives. Pleasantly, I found there was a huge increase in pool drive throughput using PrimoCache and a good-sized Level-1 RAM cache. This pool uses a simple configuration: 3x WD 4TB Reds with 64KB block size (both volume and DrivePool). I first ran the raw tests on the DrivePool volume, without any caching going on yet. After configuring and enabling a sizable Level-1 read/write cache in PrimoCache on the actual drives (Z:, Y: and X:), I re-ran the test on the DrivePool volume. As you can see, not only do both pieces of software work well with each other, the speed increase on all DrivePool operations (the D: in the benchmarks was my DrivePool letter) was dramatic. For anyone looking to speed up their pool, PrimoCache is a viable and effective means of doing so. It would even work well with the SSD Cache feature in DrivePool - simply cache the SSD with PrimoCache, and boost read (and write, if you use a UPS) speeds. Network speeds are, of course, still limited by bandwidth, but any local pool operations will run much, much faster. I can also verify this setup works well with SnapRAID, especially if you also cache the parity drive(s). I honestly wasn't certain if this was going to work when I started thinking about it, but I'm very pleased with the results. If anyone else would like to give it a spin, PrimoCache has a 60-day trial on their software.
    1 point
  18. gtaus

    DrivePool + Primocache

    FWIW, if anybody is still interested in this topic, I tried PrimoCache with DrivePool on my home media server, and the transfer speeds did indeed show significant improvement within that computer. However, most of my workflow is transferring files from client computers into my host server. In that case, PrimoCache has NO effect at all. PrimoCache does not work with network drives, which is how my client computers see the DrivePool volume on my server. So transferring files from my client computers to DrivePool on my server shows absolutely no improvement in transfer rate with PrimoCache. I have found my better option is to use the SSD Optimizer as a frontend cache for DrivePool, which allows me to transfer my files from a client computer into the SSD frontend of DrivePool on the server. The files on the SSD are then rebalanced as needed onto the archive HDDs in the background. Of course, file transfers are still limited by your network speed, but having the SSD frontend on DrivePool pick up the files is usually much faster than writing to the archive HDDs directly over the network. I contacted PrimoCache support about this issue, and they verified that PrimoCache does not work with network drives. Since network transfer is the bottleneck in my system, I ended up not purchasing PrimoCache, although I do think it's a very good solution for other setups.
    1 point
  19. I keep getting a notification in the bottom right corner saying "update found", but it doesn't give me the option to update.
    1 point
  20. Jaga

    DrivePool + Primocache

    I didn't really do it to compare speeds, but rather to show that a popular, robust, and established caching program works with DrivePool. And it's one I highly recommend (I run it on all my workstations, laptops, and my server). The only caveat when using it with a DrivePool pool is that pool drives are usually very large, so you need a lot of RAM to get even a small hit rate on the cache. But for writes, recently accessed files and frequently used files, it's quite effective. I ran a similar test with Anvil Pro, despite it being geared more toward SSD testing (which really is how you should benchmark PrimoCache) - once with PrimoCache paused and once with PrimoCache re-enabled. It shows a ridiculous increase in speed, as you'd expect. The read/write speeds for both tests are exactly where I'd expect to see them. And since Windows' cache isn't all that aggressive (nor does it auto-scale well), I would not expect to see any difference using a benchmark that was intended to use it. Anvil Pro may - I don't know. Certainly the Windows cache wouldn't have much of an impact once you start scaling the test over 1GB. Feel free to run the test yourself, with whatever benchmark software you want. PrimoCache has a free 60-day trial license with full features so people can test it themselves.
    1 point
  21. Hey all, I've decided to migrate from DrivePool to Storage Spaces + ReFS recently due to file integrity concerns. One of the things I loved about DrivePool is the StableBit Scanner integration. Unfortunately, it seems that Scanner is unable to detect/read SMART data from drives in the storage pool. CrystalDiskInfo sees those drives just fine, so I was wondering if Scanner could implement that as well (in the meantime, a rough way to pull some health counters is sketched below this post). Cheers.
    1 point
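    Not a substitute for Scanner's SMART view, but as a stopgap for drives in a Storage Spaces pool you can pull basic health and reliability counters with the built-in Storage cmdlets - a minimal sketch, assuming a default Storage Spaces setup:

      # Overall health of the physical disks in the pool
      Get-PhysicalDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus

      # Reliability counters (temperature, read/write errors, wear) where the drive reports them
      Get-PhysicalDisk | Get-StorageReliabilityCounter | Select-Object DeviceId, Temperature, ReadErrorsTotal, WriteErrorsTotal, Wear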