Shane · Moderators · Posts: 730 · Days Won: 64

Everything posted by Shane

  1. Hi Steve, please use the contact form at https://stablebit.com/Contact and choose "An issue with licensing" as the topic.
  2. You could try forcing a remeasure: Manage Pool -> Re-measure. Otherwise, you could take a look at Manage Pool -> Balancing and look through the Settings, Balancers and File Placement tabs to see if anything looks responsible. If all of the above fails, and you don't mind having to retweak anything to your preferences, there is also (cog icon) -> Troubleshooting -> Reset All Settings which will reset all of the balancing and application preferences to default values while leaving your pool(s) intact.
  3. Hmm. How long is long? Or (wild guess) the long directory/filename contains unusual/mangled characters which the duplication process can't parse correctly? Do the drives in the pool all pass a "chkdsk" command?
  4. Any luck with (cog icon)->Troubleshooting->Recheck duplication? You could also try a tool like Everything from Voidtools running as administrator to see if there are any hidden system files in the poolpart folders (other than the [Metadata] subfolder). "Other" usually refers to data stored on the pooled drives outside of the pool - e.g. if you've added drive "F:" to a pool, anything in F:\PoolPart.guid is inside the pool while anything in (for example) F:\SomeFiles\ is outside the pool - as well as all the standard filesystem metadata and overhead that takes up space on a formatted drive. For example, the hidden protected system folder "System Volume Information" created by Windows will report a size of zero even if you are using an Administrator account, despite possibly being many gigabytes in size (at least if you are using the built-in Explorer; other apps such as JAM's TreeSize may show the correct amount).
  5. To clear the Driver Verifier entries, you can open a Command Prompt and enter the following: Verifier /Reset
  6. 1. I believe that's normal and expected. 2. This is a quirk of how the "partition" geometry is seen by Windows. If you look at the volume section, you can see Windows can still see the true "volume" capacity. 3. I believe SB decided to prioritise DP's reliability and performance over the additional code necessary to reflect the impact of duplication - especially since it is possible to set pool duplication per-folder and to use a disk for both pool and non-pool storage, which would make it impossible to accurately show how much free "post-duplication" space remains. Pretty much the "collected location" is these forums? There's also the dev wiki.
  7. In the "what happens when I write to the pool" scenario, I imagine it would depend on whether you have duplication set to real-time or not; if it's real-time it will immediately notice it can't write to multiple drives, while if it's deferred it will only run into the problem when it does the nightly duplication pass. In the "what happens if I remove a bad drive" scenario, the pool should keep functioning but warn you that it can't duplicate the files due to insufficient space. At worst you would have to try again and this time tick the "Duplicate files later" box in the Remove options to proceed.
  8. It may work? Stop the "Stablebit DrivePool Service" service beforehand, rename the folder, start the service again afterwards. Be prepared to Undo the rename if it doesn't work. Though honestly if there's no desktop.ini file involved I don't know how it's happening in the first place.
  9. There are at least two methods to do this. Simplest method would be to create a separate pool for each grouping of disks (e.g. "enclosure A disks" and "enclosure B disks") then create a super-pool consisting of those pools and turn on duplication just for the super-pool (my preference). If you want more granular control, you could use the Ordered File Placement balancer and stagger the Duplicated placement priority with disks inside and outside the enclosure. You could also combine the two methods (e.g. use a super-pool for duplication and then use the OFP balancer for fine control of each of the sub-pools). Since the balancer algorithm can occasionally run into issues, if you just want guaranteed fire-and-forget I'd go with the first method.
  10. Hmm. I vaguely recall this has come up before. Basically the "total" (unduplicated + duplicated + free remaining) in DrivePool's GUI is based on the file sizes rather than the space they take up on the disk, which aren't necessarily the same due to the way NTFS works. DrivePool is still (as you can see from the top left of the tooltip) aware of the actual size of the physical drive. Aha, found it: TL;DR, DrivePool uses "file size" rather than "size on disk" for some of its calculations, and this is reflected in the GUI. If you want DrivePool to recalculate these just in case, you can use the GUI's "Manage Pool" -> "Re-measure..."
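To illustrate the "file size" versus "size on disk" distinction above, here's a minimal Python sketch (not DrivePool's actual code; it uses POSIX `st_blocks`, so the numbers are from the local filesystem, not NTFS, but the principle is the same):

```python
import os
import tempfile

# Write a tiny file, then compare its logical length ("file size") with
# the space the filesystem actually allocates for it ("size on disk").
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")          # 5 bytes of content
    f.flush()
    os.fsync(f.fileno())       # force the filesystem to allocate blocks
    path = f.name

st = os.stat(path)
logical = st.st_size                # exactly 5 bytes
allocated = st.st_blocks * 512      # POSIX reports allocation in 512-byte units

print("file size:   ", logical, "bytes")
print("size on disk:", allocated, "bytes")  # typically one whole cluster, e.g. 4096
os.unlink(path)
```

A 5-byte file still occupies at least one whole allocation unit on disk, which is why totals computed from file sizes can drift away from the space actually consumed.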
  11. If I had to make a wild guess, it's possible that Windows 7 Backup is assuming the destination is a physical drive and attempting to use VSS or special NTFS metadata operations which aren't supported by DrivePool. For whatever it's worth, I'm using Veeam for my own system image backups and haven't encountered that issue.
  12. gtaus is correct:
      • If chkdsk unmounts the drive, DrivePool will notice the drive is missing and change the pool to read-only until the drive is remounted, at which point the drive will merge back into the pool and the pool will become writeable again.
      • It is unlikely, but if chkdsk goes wrong and effectively formats the drive, the pool will need to be told that the drive is permanently gone, and whatever was only on that drive will be lost.
      • It is also unlikely, but if chkdsk goes wrong and mangles the drive contents while the poolpart folder itself remains intact enough for DrivePool to recognise it, DrivePool will attempt to merge it back into the pool, which may result in conflicts.
      Protips:
      • Use Windows Disk Management to take one of the pool's other good drives offline first. That way your pool will stay read-only until you put the good drive back online, so you can check for yourself whether chkdsk did its job properly on the suspect drive before you toggle the good drive back online to make the pool writable again.
      • If you don't already have backups, you can also take advantage of this read-only state to back up the current contents of the hidden poolpart folder on the suspect drive to somewhere else before running chkdsk.
  13. I'd recommend adding the new drives or creating a new pool with them, and then moving the contents inside each old poolpart to the new poolpart - minus any system folders - rather than attempting to move the poolparts themselves. Trying the latter may result in Windows complaining about the hidden system folder ("System Volume Information") that Windows creates within the poolpart folder. Alternatively, if you've got any spare USB ports, you could use USB drive dock(s) to help perform the migration? Preferably USB3, because USB2 would take a lot longer. In particular, don't try to cut and paste poolpart folders on a machine that has DrivePool installed. I just tested that to see what would happen, and ended up having to delete and recreate the physical drive partitions to get DrivePool to recognise them again.
  14. The OS writes files by going (approximately) "create/open entry for file on drive's index, stream data to file from program, write details (e.g. blocks used so far) to drive's index, repeat previous two steps until program says it's done or the program says it's encountered an error or the drive runs out of room or insert-other-condition-here, write final details in the index and close entry for file". Or in even simpler terms: at the system level all files are written one block at a time, no matter how many blocks they'll eventually involve. Now a workaround for programs that deal with fixed file sizes is to ask the OS in advance "how much free space is on drive X" so that they can know whether there's going to be room before they start writing (well, "know" as in "guess" because other programs might also write to the drive and then it becomes a competition). But the catch there is that when the OS in turn asks the drive pool "how much free space do you have", DrivePool reports its total free space rather than the free space of any particular physical drive making up the pool. This is because it can't know why it's being asked how much free space it has (DP: "am I being asked because a user wants to know or because a program wants to write one big file or because a program wants to write multiple small files or because some other reason, and oh also if I've got any placement rules those might affect my answer too?" OS: "I don't know").
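The "ask the OS in advance how much free space there is" workaround described above can be sketched in a few lines of Python. This is a hypothetical helper, not anything DrivePool ships; the point is that the answer is only ever a guess:

```python
import os
import shutil

def can_fit(path, size_bytes, margin=64 * 1024 * 1024):
    """Best-effort check that a file of size_bytes will fit on the
    volume holding path, leaving a safety margin.

    This is only ever a guess: other programs may consume space between
    this check and the actual write, and a pooled volume reports the
    pool's *total* free space rather than that of any one physical disk.
    """
    free = shutil.disk_usage(path).free
    return free >= size_bytes + margin

# Hypothetical usage: test whether a fixed-size 100 GB file should fit
# in the current directory before starting the copy.
print(can_fit(os.getcwd(), 100 * 1024**3))
```

Even with the margin, the check can be stale the moment it returns, which is exactly the "competition" between writers described above.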
  15. I haven't tested whether it will rebalance in that way; hopefully it doesn't, as that would cause the same sort of problem as above, but if it does there is the option to set OFP to only place new files and leave existing files where they are.
  16. Ah, I see. Ordered File Placement overrides the normal "write new files to the disk with the most free space" rule. You've got Ordered File Placement set to "fill each drive until there is only 1% or 5 GB free on it" and you've still got 5.25 GB free on the first disk (and 5.44 GB on the next). So it's going to try to write files to the first disk until it's got less than 5 GB free. And since the file you're trying to write is 100 GB, it can't fit, so DrivePool can't hit the limit that would let it switch to the next drive (which in this case wouldn't be able to fit it either). Note that due to the way Windows (or any OS that allows files of undetermined final size to be written) works, DrivePool can't know the size of any file before it's first (fully) written to the pool. First option: if you know the incoming files will always be 100 GB in size, then set the plugin's "Or this much free space" to 101 GB (or whatever figure will let DrivePool fill the disk past the limit without running out of space on that disk altogether). Second option: try the SSD Optimizer plugin instead? It also supports ordered placement, so you could assign a fast disk that has enough space as an "SSD" to act as a buffer; set a very high fill limit on the SSD and archive drives in the plugin but then set the Automatic balancing trigger size (in the Settings tab) to be low enough, and either un-tick the "Not more often than every" option or make the time period low enough, that it can prevent the SSD from filling up. That way DrivePool can see the size of the file in the SSD and thus be able to figure out whether it's got room on the first (archive) drive or needs to try the next (and so on). So it (should) go: program -> writing files into "SSD" disk -> DrivePool can now see the finished sizes of each file and move them into the "archive" disks in order without getting stuck. Just make sure the "SSD" is fast enough that it can handle being the buffer.
  17. Can you post your Settings tab?
  18. You have to tick "Balancing plug-ins respect file placement rules" and un-tick "Unless a drive is being emptied" in the "File placement settings" category on the "General" tab of the Balancing settings.
  19. It would run out of space before it could finish, and the program would get an error. However, normally the automatic balancing defaults include the Prevent Drive Overfill balancer being set to detect any drive with less than 100 GB free and emptying it into other drives until there is at least 200 GB free. So that situation is unlikely to happen unless you've changed the relevant defaults or you don't have enough space left on every drive in the pool.
  20. If you haven't already, I'd start with turning off automatic balancing and copying each drive to a backup.
  21. That's disturbing. Do you have a separate backup? Can you contact Stablebit and ask them to investigate?
  22. Hmm. I suspect the folder is still actually named PoolPart.guid at the system level but there is now a hidden system file named desktop.ini inside the folder that is telling Windows Explorer to show the name as Contacts instead (because that's a thing in Windows). These hidden system files can sometimes accidentally get brought along with visible files during moves or copies by users (because Windows). If that's the case, deleting or renaming that file should fix the problem (I suggest doing this at the pool drive level rather than the physical drive level). You might need to go into Windows Explorer's View menu -> Options -> View tab and temporarily tick "Show hidden files" and untick "Hide protected operating system files" for it to be visible to you.
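For reference, a desktop.ini that makes Explorer show a different display name for a folder looks something like this (the "Contacts" value here is just an illustration of what could be causing the symptom above; Explorer only honours the file when the folder itself has the read-only or system attribute set):

```ini
[.ShellClassInfo]
LocalizedResourceName=Contacts
```

So if a file like this ended up inside the PoolPart folder, deleting it (or the LocalizedResourceName line) should restore the real name in Explorer.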
  23. It seems good at first glance, though I'm not sure about the scheduled task part. How would you do that? Direct movement between the poolparts, or using File Placement to tell LargePool that any file the task moves to an 'Old' folder is to be balanced into the CloudPool half of the LargePool?
  24. In order:
      • Pools become read-only while a drive is missing (or if a trial version expires) until that drive is returned or formally removed. This may be a problem for your plan (see bullet point four).
      • DrivePool never splits files. It may split folders.
      • If you try to remove a drive from the pool via the GUI, it will attempt to first move the files on that drive to a different drive (if any) in the pool. If duplication is turned on, duplicated folders will be read-only for the duration.
      • If you remove a drive from the pool any other way, the pool becomes read-only until this is resolved. If files are being copied to the pool when this happens, the result will depend on the copy method (e.g. a standard Windows Explorer copy window will request permission to continue and will only be able to do so once the pool is writable again).
      • DrivePool uses standard Windows NTFS format (or ReFS if you want), so removed drives can be read by any matching Windows system.
      • Based on my testing, "dpcmd ignore-poolpart" does not wait for open file handles to be closed. The results are as per bullet point four above.
      • Renaming a poolpart will also 'disconnect' that drive (volume) from the pool, but poolparts cannot be renamed while they contain open file handles, so this may be a partial workaround (partial, because the pool will still become read-only until you formally remove the 'missing' drive via the GUI or via dpcmd).
      Can you pause the copying at all? Alternatively, can you at least throttle the copying speed from the source? Then you could have the source filling the pool while you empty the pool to destination drives that you swap out as the latter fill (i.e. the drives in the pool act as a buffer so the source never needs to stop copying, so long as you can empty the pool faster than the source can fill it)?
Edit: A third option might be to make a feature request for StableBit to add an advanced setting (advanced because it would be a 'use at own risk' setting) to DrivePool that enables forcing pools to remain writable even when a drive is missing. I don't know how feasible that would be.