Shane

Moderators
  • Posts: 1036
  • Joined
  • Last visited
  • Days Won: 105

Shane last won the day on July 5

Shane had the most liked content!

About Shane

Profile Information

  • Gender: Not Telling
  • Location: Australia
  • Interests: Eclectic.

Recent Profile Visitors

274739 profile views

Shane's Achievements

  1. Hi Jeffamp, did you mean to add something to your quote of Ratti3?
  2. Have tested files stored in a CloudDrive (v1.2.12.1836) drive mounted on a DrivePool (v2.3.12.1683) pool; fileID (and objectID) is stable and persistent until the file is deleted, so you can safely use a CloudDrive on top of a DrivePool with respect to this bug.
    Pros: a fully NTFS-compliant drive that pools the capacity of multiple physical disks; it would even let you write a file bigger than the capacity of any individual disk!
    Cons: slower than using DrivePool directly; content is not readable directly from the physical disks; may need to be manually resized to take advantage of additional disks (or if disks are to be removed); some or all of the CloudDrive may become unreadable if X disk(s) are lost, unless (the CloudDrive folder in) the pool is duplicated at an X+1 multiplier (see the worked example below).
    Kind of like a software RAID that has no ECC but can still survive a certain number of disk failures, and unlike a normal RAID can let the user safely add and remove disks?
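    To make the X+1 multiplier concrete: duplication x2 keeps two copies of each CloudDrive chunk on different physical disks, so the pool can lose any one disk and the CloudDrive stays intact; surviving the loss of any two disks at once would need x3, and in general surviving X simultaneous disk losses needs a multiplier of X+1.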
  3. There's no mention of such in the changelog, which I'd expect if it had been.
  4. Hardlinks

    Salim, what you're asking for isn't simple. To try to use a bad car analogy to give you an idea: "I don't know how this truck works that can tow multiple trailers, with the ability to redistribute the cargo between trailers and even switch out whole trailers, safely, while the truck is still being driven on the highway, but I would assume it can't be difficult to make it so it could also/instead run on train tracks at the press of a button."

    However, looking at your previous thread: presuming you still want this, on Windows, AND hardlinks, I'm again going to suggest a CloudDrive-on-DrivePool setup, something like:

    • Create a DrivePool pool, let's call it "Alpha"; add your "main drive" as the first disk (let's call it "Beta") and your "single drive" as the second disk (let's call it "Gamma"). Enable real-time duplication, disable read-striping, and set whole-pool duplication x2 (you could use per-folder only if you knew exactly what you were doing).
      Note: if you plan to expand/replace Beta and/or Gamma in the future, and don't mind a bit of extra complexity now, I would suggest adding each of them to a pool of their own and THEN adding those pools instead to Alpha, to help with future-proofing. YMMV.
    • Connect "Alpha" as a Local Disk provider in CloudDrive, then create your virtual drive on it; let's call that "Zod". Make sure its chosen size is not more than the free space of each of Beta and Gamma (so if Beta had 20TB free and Gamma had 18TB free, you'd pick a size less than 18TB) so that it'll fit.

    There might be some fiddly bits to expand upon in that, but that's the gist of it. You could then create hardlinks in Zod to your heart's content and they'll be replicated across all disks (see the quick test below). The catch is that you couldn't "read" Zod's data by looking individually at Alpha, Beta or Gamma, because of the block translation - but if you "lost" either Beta or Gamma due to physical disk failure, you could still recover by replacing the relevant disk(s), with Zod switching to a read-only state until you did so. You could even attach Gamma to another machine that also had CloudDrive installed, to extract the data from it, but you'd have to be careful to avoid turning that into a one-way trip.
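    A quick way to sanity-check that hardlinks behave on the finished Zod volume, from a command prompt (a sketch; the Z: drive letter and file names are hypothetical placeholders):

      Z:\> echo hello > fileA.txt
      Z:\> mklink /h fileB.txt fileA.txt
      Z:\> fsutil hardlink list Z:\fileA.txt

    The last command should list both \fileA.txt and \fileB.txt as links to the same underlying file.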
  5. Hmm. Try the following from a command prompt run as an administrator:

       diskpart
       list volume

     Then for each volume:

       select volume #   (e.g. select volume 1)
       attributes volume
       attributes disk

     You are looking for any and all volumes and/or disks that are Read-only. Each time you find such, if any, try:

       attributes volume clear readonly
       attributes disk clear readonly

     (and/or as appropriate), then repeat attributes volume and/or attributes disk as above to check that the flag cleared. If you don't find any such then I'm currently out of ideas.
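     If you need to re-run the fix, diskpart can also execute a script file (a sketch; "fix-vol3.txt" and the volume number are hypothetical - substitute the read-only volume you actually found):

       rem -- fix-vol3.txt --
       select volume 3
       attributes volume clear readonly
       attributes disk clear readonly

     and run it with:

       diskpart /s fix-vol3.txt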
  6. My first guess would be that the pool's file permissions have been messed up by the bad disk. I would first try the solutions linked/described here: https://community.covecube.com/index.php?/topic/5810-ntfs-permissions-and-drivepool/#findComment-34550
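     If you end up resetting permissions by hand, the usual generic approach from an elevated command prompt is something like the following (a sketch, not necessarily the exact steps in the linked post; assumes P: is your pool drive - adjust to suit, and note it recurses through the entire tree):

       takeown /f P:\ /r /d y
       icacls P:\ /reset /t /c

     The first command takes ownership of everything; the second resets each file and folder to inherit permissions from its parent.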
  7. Try running the following in a command prompt run as administrator:

       dpcmd refresh-all-poolparts

     And where poolpartPath is the full path of the poolpart folder of the missing drive:

       dpcmd unignore-poolpart poolpartPath

     for example,

       dpcmd unignore-poolpart e:\poolpart.abcdefgh-abcd-abcd-abcd-abcdefgh1234

     And where volumeid is that of the "missing" drive (you can use the command 'mountvol' to get a list of your volume identifiers):

       dpcmd hint-poolpart volumeid

     for example,

       dpcmd hint-poolpart \\?\Volume{abcdefgh-abcd-abcd-abcd-abcdefgh1234}\

     (while poolparts and volumes happen to use the same format for their identifying string, they are different objects)

     If that doesn't help, you may try one or more of the following, in no particular order of preference:
     • repairing DrivePool (Windows Control Panel -> Programs and Features -> select StableBit DrivePool -> Change -> Repair)
     • uninstalling, rebooting and reinstalling DrivePool (this may lose configuration but not duplication or content; keep your license key handy just in case)
     • resetting the pool (losing balancing/placement configuration, but not duplication or content): open up StableBit DrivePool's UI, click on the "Gear" icon in the top right-hand corner, open the "Troubleshooting" menu, select "Reset all settings..." and click on the "Reset" button.

     Alternatively, you may try manually reseeding the "missing" drive. If - and only if - you already had and still have real-time duplication enabled for the entire pool and you just want to get going again ASAP:
     • remove the "missing" drive from DrivePool
     • quick format it (just make sure it's the right drive that you're formatting!!!)
     • re-add the "missing" drive and let DrivePool handle re-propagating everything from your other drives in the background.

     Otherwise:
     • open the "missing" drive in Windows File Explorer (or another explorer of your choice)
     • find the hidden poolpart.guid folder in the "missing" drive
     • rename it from poolpart.guid to oldpart.guid (don't change the guid)
     • remove the "missing" drive from DrivePool
     • if the "missing" drive is now available to add again:
       • re-add the "missing" drive
       • move all your folders and files (but not $recycle.bin, system volume information or .covefs) from the old oldpart.guid folder to the new poolpart.guid folder that's just been created (see the robocopy sketch below)
       • tell the pool to re-measure
       • once assured everything is showing up in the pool, you can delete the oldpart.guid folder.
     • else, if the "missing" drive is still not available to add again:
       • copy your folders and files (but not $recycle.bin, system volume information or .covefs) from the old oldpart.guid folder to the pool drive
       • duplicates of your folders/files may exist on the pool, so let it merge existing folders but skip existing files
       • you may need to add a fresh disk to the pool if you don't have enough space on the remaining disks in the pool
       • check that everything is showing up in the pool
       • if all looks good, quick format the "missing" drive and it should then show up as available to be re-added.
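     For the bulk move step, robocopy from that same elevated command prompt is one convenient way to do it (a sketch; the drive letter and guid paths are placeholders - the old and new guids will differ, so substitute your actual poolpart paths):

       robocopy "E:\oldpart.guid" "E:\poolpart.guid" /e /copyall /dcopy:dat /move /xd "$RECYCLE.BIN" "System Volume Information" ".covefs"

     Here /e copies subfolders (including empty ones), /copyall and /dcopy:dat preserve permissions, attributes and timestamps, /move deletes from the source as it goes, and /xd excludes the three folders that shouldn't be carried over.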
  8. Hmm, in theory even if it's duplicating the metadata to other drives, it should still prefer to read from the fastest drive - but that presumes the metadata gets written to an SSD in the first place. Looks like forcing it can't be done from the GUI. @Christopher (Drashna) does DrivePool internally prefer SSDs when it decides where to place the [METADATA] folder(s)? If not, can that be added as an option to DrivePool? Actually, that would be a useful GUI-definable option to add to the Manage Pool -> Balancing -> File Placement -> Folders/Rules -> File placement options dialog in general, between the "New drives" and "Overflow" sections:

       Preference
       ----------
       [✓] Prefer fast drives from those selected (e.g. SSD)
       [✓] Prefer slow drives from those selected (e.g. HDD)
       (i) If both or neither are selected, no preference will be made.

     with the [METADATA] folder also visible and customisable here instead of being hidden (perhaps lock the other options for it).
  9. Hi, dpcmd delete-poolpart-reparse-points deletes all reparse points in the poolparts of the nominated pool drive; using the command with no parameters will print a (brief) description of how to use it. E.g.

       dpcmd delete-poolpart-reparse-points p:

     will delete all such points on the poolparts of pool drive "p:".
  10. Hi! If your storage is only HDD, consider upgrading to SSD or a mix of SSD and HDD (note that if you duplicate a file across an SSD and a HDD, DrivePool should try to pick the SSD to access first where possible). Also consider upgrading your network (e.g. use 5GHz instead of 2.4GHz bands for your WiFi, and/or use 2.5, 5 or 10 Gbit instead of 1 Gbit links for your cabling and NICs, etc) depending on what your bottleneck(s) are - it's worth measuring that first (see the sketch below). You can try prioritising network access over local access to DrivePool via the GUI: in Manage Pool -> Performance make sure Network I/O boost is ticked (and also make sure 'Read Striping' is NOT ticked, as that feature is bugged). Fine-tuning SMB itself can be complicated; see https://learn.microsoft.com/en-us/windows-server/administration/performance-tuning/role/file-server/ and https://learn.microsoft.com/en-us/windows-server/administration/performance-tuning/role/file-server/smb-file-server amongst others. PrimoCache is good for local use but doesn't help with mapped network drives according to its FAQ, and DrivePool user gtaus reported that they didn't see any performance benefits when accessing a pool (that had PrimoCache enabled) over their network.
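      To find where the bottleneck actually is, test raw network throughput separately from the disks. A sketch using the third-party iperf3 tool (assumes iperf3 is installed on both machines; "servername" is a hypothetical hostname):

        rem on the machine hosting the pool:
        iperf3 -s

        rem on the client machine:
        iperf3 -c servername

      If iperf3 reports close to your link's rated speed but file copies over SMB are much slower, the bottleneck is more likely the disks or SMB tuning than the network itself.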
  11. Hardlinks

    Those last six words are written on the gravestones of many a project.

    The problem with breaking hardlinks is that, from the perspective of someone externally accessing the pool, making changes to certain "files" would automagically update other "files" (because they're hardlinks on the same physical disk within the pool) right up until they suddenly didn't (because they're no longer hardlinks on the same physical disk within the pool). Any reliable implementation of hardlinks in DrivePool would have to ensure that hardlinks did not break.

    Note that if we're getting technical - and we need to when we're "opening up the hood" to look at how hardlinks actually work - there is no "File A and File B hardlinked together"; there is only one file with multiple links to it in the allocation system of the volume. If you make a change to the content of what you call "File A" then you are making a change to the content of what you call "File B", because it's one content (see the demonstration below). This is not unlike an inverse of DrivePool's duplication, where one file on the pool can be stored as multiple instances on the disks of the pool, and making a change to that file involves simultaneously propagating that change to all instances of that file on the disks of the pool.

    Now in theory this should "just" mean that (at minimum) whenever DrivePool performs balancing, placement, duplication or evacuation (so basically quite often by default) it would have to include something like the equivalent of "fsutil hardlink list" on the operational file(s) on the source disk(s) to check for hardlinks, and then (de)propagate any and all such to the target disk(s) as part of the copy and/or delete process. But in practice this means (at minimum) squeezing more code into a part of the kernel that complains about every literal millisecond of performance sacrificed to have such nice things. And extrapolating hardlinks isn't a simple binary decision; it's more along the lines of a for-do array. The word "just" is doing a lot of work here - and we haven't even gotten into handling the edge cases Christopher mentioned.

    DrivePool needs to include code to handle "File A" potentially being in a folder with a different duplication level to a folder containing "File B" (and potentially "File C", "File D", etc, as NTFS supports up to 1024 hardlinks per file). Even if we KISS and "just" pick the highest level out of caution, DrivePool also has to check whether "File A" is in a folder with a placement rule that is different to the folder with "File B" (or, again, potentially "File C", "File D", etc). What is DrivePool supposed to do when "File A" is in a folder that must be kept only on disks #1 and #2 while "File B" is in a folder that must be kept only on disks #3 and #4? That's a user-level call, which means yet more lookups in kernel space (plus additions to the GUI).

    TLDR? There is one, and it's that Christopher is right: "the additional overhead of all of these edge cases" ... "make things a LOT more complex." That's generally the problem in a lot of these situations - the more things a tool needs to be able to do, the harder it gets to make it do any/all of those things without breaking, and at the end of the day the people making the tools need to put food on their table.

    Maybe you could try a local CloudDrive-on-DrivePool setup? I don't know how much that would affect performance or resilience, but you'd get hardlinks (because the combination lets you shift the other features into a separate layer). Other alternatives: mount a NTFS-formatted iSCSI LUN from a ZFS NAS (e.g. QNAP, Synology, TrueNAS, etc)?
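    To see the "one file, multiple links" behaviour concretely, a quick demonstration from a command prompt (a sketch; the file names are hypothetical):

      echo original content > fileA.txt
      mklink /h fileB.txt fileA.txt
      echo more content >> fileA.txt
      type fileB.txt
      rem prints both lines - "fileB.txt" is the same content as "fileA.txt"
      fsutil hardlink list fileA.txt
      rem lists every path that links to that one file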
  12. Yes; x2 on all but the criticals (e.g. financials) which are x3 (they don't take much space anyway). It is, as you say, a somewhat decent and comfortable middle road against loss (in addition to the backup machine of course). Drive dies? Entire pool is still readable while I'm getting a replacement. I do plan to eventually - unless StableBit releases a "DrivePool v3" with parity built in - move to a ZFS NAS or similar but today is not that day (nor is this month, maybe not this year either).
  13. Does the problem go away if you revert the update? (Navigate to Settings > Windows Update > Update history > Uninstall updates and find the most recent; or, if it was a full version upgrade, e.g. 23H2 to 24H2, you can do so within 10 days or so via Settings > System > Recovery > Go Back.) Either way, and particularly for the latter, I recommend you have a backup handy in case Windows gets worse instead of better. Also, if you can identify the particular update causing the problem, please let us know!
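      Quality updates (though not full version upgrades) can also be removed from an elevated command prompt via wusa (a sketch; the KB number here is hypothetical - substitute the one shown in your update history):

        wusa /uninstall /kb:5031354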
  14. 1) If you open Windows Disk Management, are any of the disks "Read Only"? Try the solution in this post (it also shows a screenshot of what to look for). 2) If that doesn't help or isn't applicable, try resetting the pool's NTFS permissions; try the solutions in this post. Please let us know how it goes, and if the above don't help I'd suggest opening a support ticket.