Everything posted by Shane

  1. The light blue, dark blue, orange and red triangle markers on a Disk's usage bar indicate the amounts of those types of data that DrivePool has calculated should be on that particular Disk to meet the Balancing limits set by the user; DrivePool will attempt to accomplish that on its next balancing run. If you hover your mouse pointer over a marker you should get a tooltip that provides information about it. https://stablebit.com/Support/DrivePool/2.X/Manual?Section=Disks List (the section titled Balancing Markers)
  2. 1. It might be a damaged volume. You could try running a chkdsk scan to see if you need to run a repair, perhaps a chkdsk /r to look for bad sectors (example commands below). 2. It might be damaged file permissions. I would suggest trying these fixes (the linked wiki method and the SetACL method) to see if that solves things. 3. Another possibility is some sort of problem involving reparse points or symlinks. If none of the above help, I'd suggest opening a support ticket.
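     For reference, a minimal sketch of the chkdsk commands mentioned above, assuming X: is a placeholder for the drive letter of the affected disk (substitute your own; run from a Command Prompt opened as Administrator):
     chkdsk X: <-- read-only scan, reports whether a repair is needed
     chkdsk X: /f <-- fixes file system errors it finds
     chkdsk X: /r <-- implies /f and also scans for bad sectors; can take a long time on large drives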
  3. As a general caution (I haven't tried kopia) be aware that some backup tools rely on fileid queries to identify files, and unfortunately DrivePool's current implementation of fileid is flawed (the primary concern being that files on a pool do not keep the same fileid between reboots, and the resulting chance of collision is very high). I would test thoroughly before committing.
  4. Hi David, DrivePool does not split any files across disks. Each file is stored entirely on a disk (or if duplication is enabled, each copy of that file is stored entirely on a separate disk to the others). You could use CloudDrive on top of DrivePool to take advantage of the former's block-based storage, but note that doing so would give up one of the advantages of DrivePool (that if a drive dies the rest of the pool remains intact); if you did this I would strongly recommend using either DrivePool's duplication (which uses N times as much space to allow N-1 drives to die without losing any data from the pool, though it will be read-only until the dead drive is removed) or hardware RAID5 in arrays of three to five drives each or RAID1 in pairs or similar (which allows one drive per array/pair to die without losing any data and would also allow the pool to remain writable while you replaced the dead drive).
  5. Hi, first do a Manage Pool -> Remeasure and a Cog icon -> Troubleshooting -> Recheck Duplication just in case. If that doesn't fix it and you want to just get rid of the duplication, you can then Manage Pool -> File Protection -> Pool File Duplication -> Disable Duplication. If you want to see which folders are duplicated first, you can either:
     Manage Pool -> File Protection -> Folder Duplication, to go through all the folders via the GUI; or
     if you have a LOT of folders and are having trouble isolating the culprits, run the following two commands from a Command Prompt run as Administrator, where T is your pool drive:
     dpcmd check-pool-fileparts T: 3 true 0 >c:\poolcheck.txt <-- creates a text file showing the actual and expected duplication level of all files in the pool; can take a long time, especially if you have a lot of files in the pool
     type c:\poolcheck.txt | find /V "/x1] t:\" >c:\pooldupes.txt <-- creates a text file from the above that shows only the duplicates; ditto.
  6. Hi haoma. The corruption isn't being caused by DrivePool's duplication feature (and while read-striping can have issues with some older or, let's say, "edge-case" utilities, which is why I usually just leave it off anyway, it's also not the cause here). The corruption risk comes in if any app relies on a file's ID number to remain unique and stable unless the file is modified or deleted, because DrivePool changes that number even when the file is simply renamed or moved, or when the DrivePool machine is restarted - and in the latter case the number may be re-used for completely different files. TLDR: as far as I know, currently the most we can do is change the Override value for CoveFs_OpenByFileId from null to false (see Advanced Settings, and the example below). At least as of this post date it doesn't fix the File ID problem, but it does mean affected apps should either safely fall back to alternative methods or at least return some kind of error so you can avoid using them with a pool drive.
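     For illustration only - I'm assuming the advanced settings file is the same Settings.json ('C:\ProgramData\StableBit DrivePool\Service\Settings.json') with the Default/Override layout mentioned in another post here - the entry would end up looking something like the following; only change the Override value, then restart the DrivePool machine:
     "CoveFs_OpenByFileId": { "Default": true, "Override": false }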
  7. If you have DrivePool version 2.3.4.1542 or a later release, when you click Remove on the old Ironwolf you have the option to tick "Leave pooled file copies on the removed drive (archival remove)" in the removal options dialog that pops up. So you could just add the new 20TB, click Remove on the old 8TB and tick that option before confirming. (if you feel anxious about it you could still copy the old 8TB to your external 14TB first)
  8. Hi Austin, I would either revert to the previous release (2.6.8.4044) or update to the newer releases (2.6.10.4074 or 2.6.11.4077) of Scanner and see if the problem remains or goes away (link to releases). Scanner's logs are written to "C:\ProgramData\StableBit Scanner\Service\Logs".
  9. It's not possible to 'replace' your boot drive with a pool drive; is that what you're wanting?
  10. That does sound like the app could be using FileID to do syncs; DrivePool's implementation of FileID reuses old IDs for different files after a reboot instead of keeping them until files are deleted (the latter is what real NTFS does). This confuses any apps that assume an ID will forever point to the same file (which is also naughty), but currently there's no setting in DrivePool to make it return the codes that officially tell any apps doing a FileID query not to use it for a particular volume or file. Please let us know what you find out / hear back.
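     If you want to see this for yourself, a quick check (a sketch; the path is a placeholder for any file on your pool) is to note a file's ID, reboot the DrivePool machine, and query it again from a Command Prompt run as Administrator:
     fsutil file queryfileid X:\SomeFolder\SomeFile.ext <-- prints the file's current File ID; on a pool drive, expect it to differ after a reboot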
  11. Hi Shinji, this might be related to a fileid problem some of us have encountered re DrivePool. Try opening the advanced settings file and setting the CoveFs_OpenByFileId override to false (edit: and then restart the drivepool machine and set up the sync again) to see if that helps?
  12. My testing so far hasn't seen DrivePool automatically generating Object-ID records for files on the pool; if all the files on your pool have Object-ID records you may want to look for whatever on your system is doing that. I suspect that trivial in theory ends up being a lot of work in practice. Not only do you need to populate the File-ID records in your virtual MFT from the existing Object-ID records in the underlying physical $ObjID files across all of your poolpart volumes, you also need to be able to generate new File-ID records whenever new files are created on the pool and immediately update those $ObjID files accordingly; you need to ensure these records are correctly propagated during balancing/placement/duplication changes; you need to add the appropriate detect/repair routines to your consistency checking; and you need to do all of this as efficiently and safely as possible.
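     If you want to check whether files on your pool actually have Object-ID records (and narrow down what is creating them), you can query individual files - a sketch, with a placeholder path, from a Command Prompt run as Administrator:
     fsutil objectid query P:\SomeFolder\SomeFile.ext <-- shows the file's object id, birth volume id, birth object id and domain id if it has them, or an error if it has none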
  13. Object-id follows RFC 4122, so expect the last 52 bits of those last 64 bits to be the same for every file created on a given host, and the first 4 of those last 64 bits to be the same for every file created on every NTFS volume. You'd want to use the 60 bits corresponding to the object-id's creation time and, hmm, I want to say the 4 least significant bits of the clock sequence section? The risk here would be if the system's clock was ever faulty (e.g. dead CMOS battery and/or faulty NTP sync) during object-id creation, but that's a risk you're supposed to monitor if you're using object-id anyway.
     First catch would be the overhead. Every file in NTFS automatically gets a file-id as part of being created by the OS, and it's a computationally simple process; creating an object-id (and associated birthobject-id, birthvolume-id and domain-id) is both optional and computationally more complex. That said, it is used in corporate environments (e.g. for/by Distributed Link Tracking and File Replication Service); I'd be curious as to how significant this overhead actually is.
     Second catch would again be overhead. DrivePool would have to ensure that all duplicates have the same birthobject-id and birthvolume-id, with queries to the pool returning the birthobject-id as the pool's object-id, I think... which either way means another subroutine call upon creating each duplicate. Again, I don't know how significant the overhead here would be. But together they'd certainly involve more overhead than just "hey, grab file-id". How much? Dunno, I'm not a virtual file system developer. ... I'd still want to beta test (or even alpha test) it.
  14. I should think good practice would be to respect a zero value regardless (one should always default to failing safely), but the other option would be to return maxint, which means "this particular file cannot be given a unique File ID", and just do so for all files (basically a way of saying "in theory yes, in practice no").
     DrivePool does have an advanced setting CoveFs_OpenByFileId; however currently #1 it defaults to true, and #2 when set to false any query of file name by file id fails but querying file id by file name still returns the (broken) file id instead of zero. I've just made a support request to ask StableBit to fix that.
     Note that if any application is using File ID to assume "this is the same file I saw last whenever" (rather than "this is probably the same file I saw last whenever") for any volume that has users or other applications independently able to delete and create files, consider whether you need to start looking for a replacement application. While the odds of collision may be extremely low, it's still not what File ID is for, and in a mission-critical environment it's taunting Murphy.
     A direct passthrough has the problem that any given FileID value is only guaranteed to be unique within a single volume, while a pool is almost certainly multiple volumes. As the Microsoft 64-bit File ID (for NTFS, two concatenated incrementing DWORDs, basically) isn't that much different from DrivePool's ersatz 64-bit File ID (one incrementing QWORD, I think) in the ways that matter for this, it'd still be highly vulnerable to file collisions and that's still bad.
     ... Hmm. On the other hand, if you used the most significant 8 of the 64 bits to instead identify the source poolpart within a pool, theoretically you could still uniquely represent all or at least the majority of files in a pool of up to 255 drives, so long as you returned maxint for any other files ("other files" being any query where the File ID in a poolpart set any of those 8 bits to 1, the file only resided on the 256th or higher poolpart, or every File ID returned by the first 255 poolparts was itself maxint), and still technically meet the specification for Microsoft's 64-bit File ID? I think it should at least "fail safely", which would be an improvement over the current situation? Does that look right?
     @Christopher (Drashna) does CloudDrive use File ID to track the underlying files on Storage Providers that are NTFS or ReFS volumes in a way that requires a File ID to remain unique to a file across reboots? I'd guess not, and that CloudDrive on top of DrivePool is fine, but...
  15. As I understood it the original goal was always to aim for having the OS see DrivePool as much of a "real" NTFS volume as possible. I'm probably not impressed nearly enough that Alex got it to the point where DrivePool became forget-it's-even-there levels of reliable for basic DAS/NAS storage (or at least I personally know three businesses who've been using it without trouble for... huh, over six years now?). But as more applications exploit the fancier capabilities of NTFS (if we can consider File ID to be something fancy) I guess StableBit will have to keep up. I'd love a "DrivePool 3.0" that presents a "real" NTFS-formatted SCSI drive the way CloudDrive does, without giving up that poolpart readability/simplicity. On that note, while I have noticed StableBit has become less active in the "town square" (forums, blogs, etc) they're still prompt re support requests, and development certainly hasn't stopped, with beta and stable releases of DrivePool, Scanner and CloudDrive all still coming out. Dragging back on topic - if there are any beta updates re File ID, I'll certainly be posting my findings.
  16. Hi ToooN, try also editing the 'DrivePool_CultureOverride' section of the 'Settings.json' file ('C:\ProgramData\StableBit DrivePool\Service\Settings.json') as below, then close the GUI, restart the StableBit DrivePool service and re-open the GUI?
     "DrivePool_CultureOverride": { "Default": "", "Override": null }
     to
     "DrivePool_CultureOverride": { "Default": "", "Override": "ja" }
     I'm not sure if "ja" needs to be in quotes or not in the Settings.json file. Hope this helps! If it still doesn't change, you may need to open a support ticket directly so StableBit can investigate.
  17. "Am I right in understanding that this entire thread mainly evolves around something that is probably only an issue when using software that monitors and takes action against files based on their FileID? Could it be that Windows apps are doing this?" This thread is about two problems: DrivePool incorrectly sending file/folder change notifications to the OS and DrivePool incorrectly handling file ID. From what I can determine the Windows/Xbox App problem is not related to the above; an error code I see is 0x801f000f which - assuming I can trust the (little) Microsoft documentation I can find - is ERROR_FLT_DO_NOT_ATTACH, "Do not attach the filter to the volume at this time." I think that means it's trying to attach a user mode minifilter driver to the volume and failing because it assumes that volume must be on a single, local, physical drive? TLDR if you want to use problematic Windows/Xbox Apps with DrivePool, you'd have to use CloudDrive as an intermediary (because that does present itself as a local physical drive, whereas DrivePool only presents itself as a virtual drive). "But this still worries me slightly; who's to say e.g. Plex won't suddenly start using FileID and expect consistency you get from real NTFS?" Well nobody except Plex devs can say that, but if they decide to start using IDs one hopes they'll read the documentation as to exactly when it can and can't be relied upon. Aside, it remains my opinion that applications, especially for backup/sync, should not trust file ID as a sole identifier of (persistent) uniqueness on a NTFS volume, it is not specced for that, that's what object ID is for, and while its handling of file ID is terrible DrivePool does appear to be handling object ID persistently (albeit it still has some problems with birthobject ID and birthvolume ID, however it appears to use zero appropriately for the former when that happens). P.S. "I have had server crashes lately when having heavy sonarr/nzbget activity. No memory dumps or system event logs happening" - that's usually indicative of a hardware problem, though it could be low-level drivers. When you say heavy, do you mean CPU, RAM or HDD? If the first two, make sure you have all over-clocking/volting disabled, run a memtest and see if problem persists? Also a new stable release of DrivePool came out yesterday, you may wish to try it as the changelog indicates it has performance improvements for high load conditions.
  18. Windows Server 2019 (any version) doesn't have the Windows Home Server Console nor the Client PC Backup app. If it helps, I've been using Windows 10 Pro as a headless home server for some years now instead of WHS 2011, and it works quite well; 10 Pro allows up to 20 concurrent devices to access its shares, which is usually plenty for a household (YMMV), and the free version of Veeam Agent is installed on my family's PCs to back them up nightly to a share on the server's pool (that particular share is restricted to be writable only by a 'vbackup' user account I created on the server and use only with the Agents) and I also do separate backups of the server itself (in case someone deletes something in a family share). It sits in the rear of the house, gets a midnight reboot once a month, and I access it remotely if I need to check anything.
  19. Hi, if the drive is coming back healthy but is not being filled, try removing and re-adding the drive.
  20. Basically, with all balancers/placement off, DrivePool prefers to fill whatever disk has the most free space at the time of accepting a file.
     "now my 8TB contain 4.45TB data and my other 4TB contain 0.82TB data so drivepool will work correctly when the 8TB reach 4.82 TB if I turn off all the balancers?"
     Yes, after that point DrivePool will tend to alternate between them as each gets less free space than the other from receiving new files.
     "Or which balancer you recommended to use it for minimal balancing while using snapraid?"
     With SnapRAID, and considering no other factors? Only the Scanner balancer, if at all. I feel the extra load during a sync from dealing with an evacuated drive is preferable to having to rebuild a drive from parity, but that's just my feeling and your use case and mileage may vary.
     "For your Point 2, if i did not get it wrong, i can cut out 2 4TB partitions from the 8TB and add them all to the pool so that i should get what I want right XD?"
     If you're just wanting to get all the bar graphs in DrivePool to line up prettily, yep, that's a way. Just note you'd then need to use a minimum of two parity levels of protection instead of one in your SnapRAID config (I'm assuming you currently have a single 8TB parity disk or multiple split disks totalling 8TB of single parity somewhere, since you have an 8TB data disk), because if your 8TB disk partitioned as 2x4TB partitions failed you'd simultaneously lose two "disks" from SnapRAID's perspective (see the sketch below).
     If however you were wanting to maximise that free space on the 8TB to keep in reserve for some reason... maybe you could do something like use the SSD Optimizer balancer and time/script it so that your regular SnapRAID syncs happen just after the 8TB disk finishes emptying the new files into all the 4TB disks. And/or put in a feature request for the Drive Space Equalizer to be able to set real-time file placement limits and hope that's feasible. DrivePool and SnapRAID can work very well together... so long as you are careful to line them up nicely.
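     To illustrate the parity point with a sketch (the drive letters, paths and names below are placeholders, not taken from your setup): if the 8TB disk were added as two 4TB data "disks", the snapraid.conf would need at least two parity levels, something along the lines of:
     parity F:\snapraid.parity
     2-parity G:\snapraid.2-parity
     content C:\snapraid\snapraid.content
     data d1 D:\
     data d2 E:\
     data d3 H:\ <-- first 4TB partition of the 8TB disk
     data d4 I:\ <-- second 4TB partition of the 8TB disk
     so that losing the physical 8TB disk (i.e. both d3 and d4 at once) can still be recovered from parity.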
  21. Hi! The short answer is that the goal of "all new files distribute across the storage array in real time while the location of old files in the array remain unchanged while balancing" can't generally be accomplished, because DrivePool considers "old files" to be "all files already in the pool" while SnapRaid considers "old files" to be "any file that is neither new nor changed since the last sync".
     Before continuing, it may be useful to understand the following about DrivePool:
     Placement of existing files on the pool, a.k.a. balancing, is controlled by the Balancers and the File Placement rules.
     Placement of new files on the pool, a.k.a. real-time file placement, defaults to whichever drive(s) have the most free space at the time. This is done primarily to minimise the risk of having insufficient free space to complete the writing of a new file. The real-time file placement limits that can be set, by the File Placement rules and by some Balancers, can override this. If you hover your mouse cursor over each of the Balancers, the tool-tip will tell you if it can set real-time file placement limits. Unfortunately the Drive Space Equalizer balancer is not one of those able to set a real-time file placement limit.
     So it's generally recommended that DrivePool's balancers be turned off (except maybe Scanner) when used together with SnapRaid, to avoid the problems balancing causes with syncing. DrivePool will still distribute new files across a pool's disks according to their free space (or under whatever placement limits you've set), and that still results in an even distribution of used space across those disks over time if they are of the same size. In your case however you have one data disk that is significantly larger than the rest, and that skews the new file distribution accordingly. I'd suggest turning off the balancers and either:
     Accept the skew; once the free space remaining on the 8TB data disk falls to the same level as the 4TB data disks, additional new files will tend to spread evenly over time, there'll just always be 4TB extra data on the disk that is 4TB bigger than the others. The simplest alternative.
     Shrink the partition on the 8TB disk to 4TB for now, or replace the 8TB disk with a 4TB disk; new files will spread evenly without the skew. The next simplest alternative.
     I hope this helps.
  22. Hi Lanti! DrivePool only reports the used, free and total size of a pool drive from the totals of the disks that form it, and the size of any given file as if it were only on one disk. There are multiple reasons for this: it's much simpler and faster to calculate, DrivePool can't guarantee that per-folder duplication settings won't be changed by the user, DrivePool's metadata is stored on the disks at varying duplication levels depending on the number of disks in the pool and this may not match the user's level(s), and reporting any other size for individual files or a pool drive can prevent Windows file operations from working properly. I hope that helps.
  23. Manage Pool -> File Protection -> Folder Duplication... shows you the duplication status of folders in the pool (and allows you to change them). That sounds like you might be running the command line as a standard user but have UAC disabled. dpcmd needs to be run in a command line that's already being run as an administrator - otherwise its console output goes to a temporary window, if it works at all - so try opening the command line as an administrator (e.g. via Windows Start Menu, Command Prompt, Run as administrator) and then running dpcmd from that.
  24. It's located at C:\Windows\System32\dpcmd.exe. You should access it by opening a Command Prompt run as Administrator and simply entering "dpcmd" or "dpcmd [command] [parameter1 [parameter2 ...]]", where "[command] [parameter1 [parameter2 ...]]" are what you're wanting it to do (you'll get the list of those options by entering "dpcmd" by itself).
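     For example, from a Command Prompt run as Administrator (T: being a placeholder for your pool drive):
     dpcmd <-- by itself, lists the available commands and their parameters
     dpcmd check-pool-fileparts T: 3 true 0 <-- the duplication-listing example used in another post here, showing the actual and expected duplication level of every file in the pool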
  25. Yes, a volume needs to be Simple and its disk needs to be Basic to be usable by DrivePool. Windows Disk Management unfortunately does NOT support converting an in-use disk from Dynamic back to Basic without erasing it first. To perform the conversion without data loss you would need to use a third-party partition manager with that capability, and there is still a risk; I would strongly advise having a backup first.