
Shane

Moderators
  • Posts

    750
  • Joined

  • Last visited

  • Days Won

    68

Everything posted by Shane

  1. If the Windows 10 Fast Startup feature is enabled (the default), a normal Shutdown snapshots an image of the current kernel, drivers, and system state from memory to disk, and the next boot loads from that image. A Restart, by contrast, skips the snapshot and performs a Normal Start, loading the kernel, drivers, and system state component by component from the boot drive. So if it's the Restart that makes the pool drive unavailable, that suggests the issue is occurring during the normal full boot process. I'd try looking in DrivePool's Cog->Troubleshooting->Service Log after a Restart fails to make the drive available, to see if there are any clues there - e.g. it might just be that something is timing out the find-and-mount routine, in which case you can increase the CoveFs_WaitFor***Ms entries as described in https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings until it picks up. If you're still stuck after checking the Log, you can request a support ticket via StableBit's contact form. EDIT: made it clearer that only the CoveFs_WaitFor***Ms entries (i.e. the ones ending in Ms) are to be changed. The wiki mentions CoveFs_WaitForVolumesOnMount without the Ms; that's a deprecated entry that isn't used by newer versions of DrivePool.
  2. It also occurs to me that some users may have the opposite issue: where they use apps that rely on the presence of folders that may be empty at times for whatever reason, so they wouldn't want those folders being automatically removed by DrivePool. So any "cleanup empty folders" function implemented by DrivePool would have to ensure that it only removes "excess" instances of such folders within the poolparts, the same as it does with files. How tricky would it be to extend DrivePool's duplication-checking to check for excess instances of empty folders in addition to excess instances of files (e.g. currently, if duplication is set to x3 and there are x4 instances of a file, DrivePool will remove the 4th instance of that file - can DrivePool be made to apply that to empty folders too)?
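Conceptually, such a check could treat empty folders the way duplication already treats files: count instances of each relative path across the poolparts and flag any beyond the duplication level. Below is a minimal sketch of that idea in Python - the poolpart paths and the function name are hypothetical, and this is only an illustration of the logic, not DrivePool's actual code:

```python
import os

def excess_empty_folders(poolparts, duplication=2):
    """Find relative folder paths that exist as *empty* folders in more
    poolparts than the duplication level allows - mirroring how DrivePool
    trims excess file instances. Hypothetical sketch, not DrivePool code."""
    counts = {}
    for root in poolparts:
        for dirpath, dirnames, filenames in os.walk(root):
            if not dirnames and not filenames:  # empty leaf folder
                rel = os.path.relpath(dirpath, root)
                counts.setdefault(rel, []).append(root)
    # report the "excess" instances beyond the duplication level
    return {rel: roots[duplication:] for rel, roots in counts.items()
            if len(roots) > duplication}
```

With x2 duplication, a folder path present (empty) in three poolparts would have its third instance reported as excess.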
  3. You could request it as a feature via the contact form? @Christopher (Drashna) perhaps there could be an option to schedule it alongside the regular pool consistency checks, and/or as a dpcmd command which could be run manually or scheduled (e.g. via Task Scheduler) by those for whom it's a significant issue?
  4. Hi baChewie, if you'd like Stablebit to take a look at / ask for logs from your affected setup directly, you could submit a support request via the contact form to get a ticket going.
  5. Hmm. I tried adding the entry and experienced the same result as you. I've looked at the change log for DrivePool and I suspect that CoveFs_WaitForVolumesOnMount was actually replaced with CoveFs_WaitForVolumesOnMountMs when the latter was added as of version 2.2.0.891 - so if your version is that or later then I believe you should use the latter entry (and setting a value of 0 for it would be equivalent to setting a value of false for the former entry). @Christopher (Drashna) does the wiki need to be amended to indicate that the first entry has been deprecated in favor of the second entry, or is something else going on?
  6. Hi Sacred, the entry should be CoveFs_WaitForVolumesOnMountMs, and that entry should already be present in the Settings.json file. You might also want to try stopping the service before editing the file, then restarting the service afterwards.
  7. It's not possible; the operating system has to be loaded before DrivePool can start. You could mount a pool as a folder on the boot drive (e.g. have it show up as "c:\pool" instead of "d:\") but not as the boot drive itself.
  8. I know the overhead's more noticeable when moving smaller files, but I haven't kept any hard numbers, sorry. I also try to always keep at least one spare port (internal or external) available so that I can just plug in a new drive and let DrivePool take care of the rest, and everything at least x2 duplicated (and backed up) so that if I'm in a hurry for some reason (e.g. a drive failing in a way that panics the OS) I can just ditch the culprit and let DrivePool handle re-duplication. I'm shifting things around at the moment, but I have a mostly-full 4TB drive (internal SATA) I could test removing both normally and manually afterwards if you're interested (not that such a test would be rigorous, since pool content varies, but the offer's there). Do you use duplication?
  9. Sorry, I'm stumped. Your settings look like they should be resulting in an even distribution. Is there anything active at all in the File Placement tab, or lower down in the Balancers tab? Is "Balancers -> SSD Optimizer -> Ordered placement -> Duplicated" unticked? Does changing to "Equalize by the free space remaining" have a better result?
  10. Sadly no. As I understand it, only "single thread" (so to speak) balancing was implemented due to the considerable extra complexity that would be involved in getting "multiple thread" balancing to work well.
  11. Scanner default settings

    It's every 30 days for all three (edit: except see Christopher's post below). I believe the logic aims for a reasonable balance between drive load/wear and check frequency for most users; you can adjust it to suit your own situation and preferences.
  12. I don't know if it's related, but Stablebit just had a Microsoft-related issue with their cloud infrastructure that lasted several hours. If you're still having problems you might want to get in touch with Stablebit via the contact form.
  13. I cannot work

    If you're having licensing issues, please use the Contact form to request help directly from Stablebit... once their server comes back up. Ouch. P.S. I've rebooted my home server (YOLO) and DrivePool and Scanner are still showing as licensed, and I can reach the stablebit.cloud site, so @PBUK and @Tullerian perhaps try again in case the part affecting you is working again? EDIT: this link https://status.stablebit.cloud/ shows that some services are down and has the following message at the top:
  14. It's come up a few times over the years. This post mentions reasons why DP might leave empty (and "empty") folders, while this post mentions a really neat trick with robocopy that can be used to clean them up if you decide you want them gone anyway - note that if you use it but you've got any empty folders you actually want to keep you'll have to exclude them somehow (e.g. via the /XD switch). Hope that helps.
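For those who'd rather script the cleanup than use robocopy, the same idea - remove empty folders bottom-up while excluding any you want to keep, like robocopy's /XD switch - can be sketched in Python. The function name and folder names are illustrative only; trial it on a copy of your data first:

```python
import os

def remove_empty_folders(root, exclude=()):
    """Remove empty folders under root, bottom-up, skipping any folder
    whose name is in `exclude` (akin to robocopy's /XD switch).
    Sketch only - test on non-critical copies before real use."""
    removed = []
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        if dirpath == root or os.path.basename(dirpath) in exclude:
            continue
        if not os.listdir(dirpath):  # still empty after children pruned
            os.rmdir(dirpath)
            removed.append(dirpath)
    return removed
```

Walking bottom-up means a folder whose only contents were empty subfolders also gets removed once those subfolders are gone.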
  15. Yes, that's correct. If you're using Snapraid to protect your pool then normally you'd want DrivePool's balancing plug-in for Scanner either turned off or at least set to not move files out.
  16. Drives

    If it's happening on a regular schedule, you might want to check whether anything is running in the background (e.g. via Windows Task Scheduler) at those times. Otherwise I'd suggest using the Contact form to request support from Stablebit.
  17. FreeFileSync error

    By default a file will normally be copied into the pool via whichever drive within it has the most free space at the time, so if they're all empty it would presumably use the 10TB drives first (and this is the behaviour I see using FreeFileSync to back up files to my pool). How much RAM do the sending and receiving computers have? Do any of the suggestions at https://www.makeuseof.com/insufficient-system-resources-exist-error-windows/ help? I also found this topic https://freefilesync.org/forum/viewtopic.php?t=8868 in the FFS forums, in which the OP discovered the issue was a faulty RAM stick in their computer, so that's another possibility. Another poster there suggested editing the LanManServer record in the registry; note that if editing the registry doesn't help, I recommend reverting the change in case it causes other issues later.
  18. It would complain every time it does a health check of the pool, and offer to delete all the older file(s) on the assumption that there was an error in duplicating the newer instance of the file across the pool, but it won't delete any automatically unless you tick the box that tells it to do so for future checks. However, note too that if you yourself update the "single" file that shows up in the pool (or move it out of the pool), then only one of the actual files in the poolparts will actually be moved/updated and the rest will be lost. E.g. if you manually put test.txt (containing the word "apple") into d:\poolpart1 and manually put test.txt (containing the word "orange") into e:\poolpart2, then when you open p:\test.txt you might get the one that contains "apple" or you might get the one that contains "orange"; and if you moved p:\test.txt to c:\test.txt or edited it to say "banana", only one would be moved/edited and the other would be lost or overwritten. (At least when read-striping is off; I'm not sure whether something odder might happen if read-striping is on and the files involved are large.) And thank you!
  19. Q: I understand now that these apparently surprising duplicated files in my pie chart were in fact mine from the beginning. Is it then a problem to leave them there?

    If they're actually duplicates, i.e. the exact same file with the same path in different poolparts, then no problem.

    Q: I then don't quite understand the duplication warning that I get during the check: what can the "duplicated files mismatching parts" be? I also noticed that when these duplicate files just have the same name but are not really the same binary file (for example two different videos with the same name), DrivePool just shows one of the two files in the pool. Which one does DrivePool choose? Is this the case seen by DrivePool as a "duplicated files mismatching parts" case during the check?

    Yes, this indeed occurs when different files with the same path and name have been moved into different poolparts. For example, let's say you have a photo of a cat saved as d:\photos\cute24.jpg and a photo of a dog saved as e:\photos\cute24.jpg, and you manually move them into the hidden poolpart folder on the respective drives like so:

    d:\photos\cute24.jpg <- cat -> d:\poolpart1\photos\cute24.jpg --> shows up in the pool as p:\photos\cute24.jpg
    e:\photos\cute24.jpg <- dog -> e:\poolpart2\photos\cute24.jpg --> also shows up in the pool as p:\photos\cute24.jpg

    If you had moved the cat photo into p:\photos the normal way (d -> p) and then moved the dog photo into p:\photos the normal way (e -> p), Windows would pop up a warning that there's already a file there with that name and ask if you wanted to replace it. But by accessing the hidden poolparts directly (d -> d:\poolpart and e -> e:\poolpart) you bypass the normal safety procedures. As to which one DrivePool chooses to show, I believe it would be whichever drive DrivePool accesses first (which would depend on various factors).

    Q: And finally one last question: how do I know the physical path of a file seen in the pool? (i.e. when browsing the pool, how do I know on which physical disk a file is located?)

    There are various ways, for example:

    • Manually check the equivalent path in each hidden poolpart folder.
    • Open a command prompt run as administrator and enter dpcmd get-duplication filepath where filepath is the fully pathed name of the file (e.g. dpcmd get-duplication "p:\photos\cute pets\oscar the turtle.jpg"). Note this shows the volume numbers, not the drive letters, so you'd have to look them up in Windows Disk Management or similar to find the corresponding drive letters (dpcmd does this because DrivePool can pool volumes that don't have a drive letter).
    • Use a tool which can quickly scan lettered NTFS volumes and show all files on all drives that match the search string, e.g. Everything by Voidtools.
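The first method (manually checking the equivalent path in each hidden poolpart folder) is easy to script if you have many drives. A small sketch - you'd supply your own poolpart root paths, and the function name is hypothetical:

```python
import os

def find_poolparts_holding(pool_relative_path, poolparts):
    """Return the poolpart root(s) that actually contain the given
    pool-relative path. `poolparts` is a list of hidden poolpart folder
    paths, e.g. [r"d:\PoolPart.xxxx", r"e:\PoolPart.yyyy"] - sketch only."""
    return [pp for pp in poolparts
            if os.path.exists(os.path.join(pp, pool_relative_path))]
```

For a file duplicated x2 this would return two poolparts; for an unduplicated file, one.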
  20. If you're manually moving new files into the pool via the hidden poolpart folders as per Q4142489, it is up to you to ensure they do not overlap existing folders/files in the pool. This is because DrivePool's duplication works via having the same file exist in the same path on multiple drives in the pool. For example, say you have a pool P consisting of drives D and E, whose contents are as follows:

    d:\poolpart.1\folder1\file1 --> p:\folder1\file1 <-- this is a duplicated file
    d:\poolpart.1\folder1\file2 --> p:\folder1\file2
    d:\poolpart.1\folder1\file3 --> p:\folder1\file3
    e:\poolpart.2\folder1\file1 --> p:\folder1\file1 <-- this is a duplicated file
    e:\poolpart.2\folder1\file4 --> p:\folder1\file4
    e:\poolpart.2\folder2\file1 --> p:\folder2\file1
    e:\poolpart.2\folder2\file2 --> p:\folder2\file2

    If you then had a new drive F you wanted to manually seed into the pool as per Q4142489, with new (i.e. different to the above) content as follows:

    f:\folder1\file2 - - -> f:\poolpart.3\folder1\file2
    f:\folder2\file3 - - -> f:\poolpart.3\folder2\file3

    You would have to first change the name of F's folder1, folder2, file2 and/or file3 before moving \folder1\file2 into any hidden poolpart, as otherwise it would overlap with the existing \folder1\ and \folder2\ as follows:

    d:\poolpart.1\folder1\file1 --> p:\folder1\file1 <-- this is a duplicated file
    d:\poolpart.1\folder1\file2 --> p:\folder1\file2 <-- this existing file is in conflict with a new file
    d:\poolpart.1\folder1\file3 --> p:\folder1\file3
    e:\poolpart.2\folder1\file1 --> p:\folder1\file1 <-- this is a duplicated file
    e:\poolpart.2\folder1\file4 --> p:\folder1\file4
    e:\poolpart.2\folder2\file1 --> p:\folder2\file1
    e:\poolpart.2\folder2\file2 --> p:\folder2\file2
    f:\poolpart.3\folder1\file2 --> p:\folder1\file2 <-- this new file is now in conflict with an existing file
    f:\poolpart.3\folder2\file3 --> p:\folder2\file3 <-- this new file is now in the same folder as two existing files

    @Christopher (Drashna) I recommend that the Q4142489 wiki entry mention this explicitly; e.g. by instructing the user in step 4 to "First, check that the folder structure you intend to move into the pool does not already exist in the pool, unless your goal is to merge the content of those folder structures together."
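The pre-seeding check described above can be automated: walk the new drive and list every pool-relative file path that already exists in the pool. A sketch with hypothetical paths and names:

```python
import os

def seeding_conflicts(new_drive_root, pool_root):
    """List relative file paths under new_drive_root that already exist
    under pool_root - these would collide if moved into a hidden
    poolpart. Illustrative sketch, not a DrivePool feature."""
    conflicts = []
    for dirpath, _, filenames in os.walk(new_drive_root):
        for name in filenames:
            rel = os.path.relpath(os.path.join(dirpath, name), new_drive_root)
            if os.path.exists(os.path.join(pool_root, rel)):
                conflicts.append(rel)
    return conflicts
```

If this returns an empty list, the new content can be seeded without overlapping any existing pool file; anything it does return needs renaming first (or a deliberate decision to merge).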
  21. iDrive e2 error

    Perhaps you could ask Stablebit, via the contact form, for an extension of the trial so you can test CloudDrive with iDrive again?
  22. The method would work, though to avoid any potential issues should the old drive(s) later be reconnected to the drivepool computer, you'd want to do one of the following:

    • stop the DrivePool service and rename the hidden poolpart folder (e.g. just put an 'x' in front of 'poolpart') on the drive you plan to swap out, before shutting the computer down to physically swap out that drive, or
    • rename/remove the hidden poolpart folder on the old drive via another computer after the drive has been taken out of the drivepool computer.
  23. Have you tried ticking "Force damaged drive removal" when clicking Remove presents you with the options? I would also tick "Duplicate files later"; it will still copy files that aren't already on the remaining disks.
  24. This thread by MitchC goes into the file indexing issues with DrivePool. TLDR: DrivePool currently generates its own file index table for files opened from the pool, held in RAM, with each file being assigned an ID incrementing from zero as it is opened (edit: the first time after each boot), renamed or moved within the pool. That is not how NTFS does it - a file should retain its FileID regardless of being renamed or moved within a volume, and do so across reboots - which means that until this is fixed, any program that assumes normal FileID behaviour from a pool drive (because DrivePool presents the pool as an NTFS volume) may behave in an unplanned manner.
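To illustrate the expected behaviour on a well-behaved volume: a file's ID should survive a rename within the same volume. A short check using Python's os.stat, whose st_ino field surfaces the file index on NTFS (and the inode number on Unix filesystems):

```python
import os

def file_id_stable_across_rename(directory):
    """Create a file, record its ID, rename it within the same volume,
    and report whether the ID was preserved. On NTFS/ext4 this should be
    True; per the thread above, a DrivePool pool may not behave this way."""
    src = os.path.join(directory, "before.txt")
    dst = os.path.join(directory, "after.txt")
    open(src, "w").close()
    id_before = os.stat(src).st_ino
    os.rename(src, dst)
    return id_before == os.stat(dst).st_ino
```

Programs that cache st_ino-style IDs (backup tools, sync tools, media indexers) rely on exactly this invariant.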
  25. It's the same for both local and cloud drives being removed from a pool: "Normally when you remove a drive from the pool the removal process will duplicate protected files before completion. But this can be time consuming so you can instruct it to duplicate your files later in the background."

    So normally: for each file on the drive being removed, it ensures the duplication level is maintained on the remaining drives by making copies as necessary, and only then deletes the file from the drive being removed. E.g. if you've got 2x duplication normally, any file that was on the removed drive will still have 2x duplication on your remaining drives (assuming you have at least 2 remaining drives).

    With duplicate files later: for each file on the drive being removed, it only makes a copy on the remaining drives if none already exist, then deletes the file from the drive being removed. DrivePool will later perform a background duplication pass after removal completes. E.g. if you've got 2x duplication normally, any file that was on the removed drive will only be on one of your remaining drives until that background pass happens.

    In short, DFL means "if at least one copy exists on the remaining drives, don't spend any time making more before deleting the file from the drive being removed."

    Note #1: DFL will have no effect if files are not duplicated in the first place.

    Note #2: if you don't have enough time to Remove a drive from your pool normally (even with "duplicate files later" ticked), it is possible to manually 'split' the drive off from your pool (by stopping the DrivePool service, renaming the hidden poolpart folder on the drive to be removed - e.g. from poolpart.identifier to xpoolpart.identifier - then restarting the DrivePool service) so that you should then be able to set a cloud drive read-only. This will have the side-effect of making your pool read-only as well, as the cloud drive becomes "missing" from the pool, but you could then manually copy the remaining files in the cloud poolpart into a remaining connected poolpart and then - once you're sure you've gotten everything - fix the pool by forcing removal of the missing drive. Ugly but doable if you're careful.
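The normal-versus-DFL difference can be modelled as a small decision function. This is a hypothetical sketch of the behaviour described above (names and structure are mine, not DrivePool's implementation):

```python
def plan_removal(files_on_removed_drive, copies_on_remaining,
                 target, duplicate_later):
    """For each file on the drive being removed, decide how many extra
    copies to make on the remaining drives before deleting it.
    `copies_on_remaining` maps file -> existing copy count elsewhere;
    `target` is the duplication level. Hypothetical model only."""
    plan = {}
    for f in files_on_removed_drive:
        have = copies_on_remaining.get(f, 0)
        if duplicate_later:
            plan[f] = 0 if have >= 1 else 1  # just ensure one copy survives
        else:
            plan[f] = max(0, target - have)  # restore full duplication now
    return plan
```

With x2 duplication, a file that already has one surviving copy needs zero copies under DFL (the background pass restores x2 later) but one copy under normal removal.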