
Beaker1024

Everything posted by Beaker1024

  1. Awesome! I'll try this tomorrow after two more questions are answered. So I do this and completely delete the $Recycle.bin folder on my DrivePool (G:\), and that deletes it on all the PoolParts too? Does DrivePool just magically regenerate a fresh Recycle Bin? I guess I'm asking if it'll all go back to normal on its own. Thanks!
  2. Chris - I'm hoping your experience with file servers will help me out. With hidden files shown, the DrivePool "folder duplication" GUI settings clearly show a folder in the Recycle Bin. The folder's name is a SID that, from a quick search, means it's from the "Administrator" user account on WHS2011 (SID S-1-5-21-______ -500). So it's likely from a time when I tried changing a folder's security permissions or something (the date stamp is many months old and the size is a tiny 128 B). My question to you is how to delete it. When I Remote Desktop into WHS2011 and turn on viewing hidden folders/files in the OS, I can navigate to all the hard drives' Recycle Bins and not find this at all. I also checked, and the WHS2011 main Recycle Bin on the desktop is already empty. I don't think it's causing any issue, or is a sign of any issue; I'm just looking to clean it up. Can I use this command line: rd /s /q C:\$Recycle.bin on all the drives (the hidden DrivePool PoolParts' bins)? [Found the command here: http://www.winhelponline.com/blog/fix-corrupted-recycle-bin-windows-7-vista/ ] I found the SID information here: [ https://support.microsoft.com/en-us/kb/243330 (Well-known security identifiers) and https://helgeklein.com/blog/2012/07/finding-removing-orphaned-sids-in-file-permissions-or-busting-the-ghosts-built-into-windows-7/?PageSpeed=noscript (Finding and Removing Orphaned SIDs...) ] NOTE: Since the DrivePool is on WHS2011 for shares, I never have anything go to the Recycle Bin, so I'm not sure how to work with the DrivePool Recycle Bin.
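A sketch of what that cleanup could look like, assuming the pool is G: and the pooled disks are D: through F: (the drive letters here are placeholders for your actual setup). Windows normally recreates a fresh $Recycle.bin automatically the next time something is sent to the bin, but run this only from an elevated prompt and at your own risk:

```batch
@echo off
rem Hypothetical cleanup: remove the hidden $Recycle.bin folder from the
rem pool drive and from each pooled disk (adjust the letters to your setup).
for %%D in (G: D: E: F:) do (
    rd /s /q %%D\$Recycle.bin
)
```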
  3. On a hunch I forced StableBit Scanner to do a "File System Check" on all volumes. It found one volume as "damaged." I did the "Forced Windows Repair" and it said it was successful. I'm hoping that this damaged volume is what caused my folder duplication issues (repaired to all 3x).
  4. OK, I set DrivePool duplication to 2x and it didn't change anything at the folder levels. I'm thinking it would be good to not have anything at 3x anymore (so it can't "repair" to 3x, just 2x), but the "MetaData" folder doesn't let you change its duplication level anymore. Another thought would be to go through the effort of making a second DrivePool (I've only ever had one pool) and have it be only for RecordedTV at 1x, leaving the original pool for the other shares with whatever duplication I want. Do you think going back to an earlier beta might help? I had used *.612 for a long time without issue.
  5. Thinking of changing over from Folder Level to Pool level duplication (2x). I assume that would eliminate the possibility of this bug causing me this issue. Any advice on converting over?
  6. It happened again last night; it's been about 2 weeks. This time I've been running Beta *.659 (the first time was the latest stable beta). The exact same thing happened: 3x applied to all folders, and simply going to the folder duplication settings and applying 2x put all the correct 1x, 2x, and 3x settings back on all folders. What I posted in another thread (I didn't think I'd post in this thread again) is quoted in full in the next item.
  7. Thanks for sharing the link to Alex's details. From his post on repairs: "If it finds an inconsistency, such as a duplication tag that's missing or incorrect, it will automatically repair all of the tags falling back to the highest duplication level detected in the case of a conflict." From this I'm assuming what happened in my case was that the Recorded TV share was 1x, but because I had a very tiny document share at 3x, the TV share was set to 3x on a "repair." This flooded my drives. Before bumping the folder duplication settings up to the highest level, could a math check of free space vs. needed space (for the new duplication levels) be done and cause the "increase" to halt (with user notification)? I can understand the intent behind writing it this way, but would it be possible to put in a request to Alex to modify how this repair is handled? Could the current behavior be an option in a GUI or config-file setting, with the "repair" behavior as the default and an alternative option where the user receives an email alert, or something similar, to check and reset the folder duplication levels manually? It just seems dangerous to let the folder duplication change on its own without the user knowing first, or without an intelligent free-space check.
  8. agentwalker - Can you share the code for the batch file (a generic version of it)? Do you have it write to a text file on a share in your DrivePool? Would that work, or would it be an "access" conflict (making you write the output file to a server drive and then do a copy command at the end of the batch)? Thanks!
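For reference, a generic sketch of such a batch file (paths and names are placeholders, not agentwalker's actual script). It uses robocopy's /L switch, which lists what *would* be copied without copying anything, writes the log to a local disk first, and only afterwards copies it onto the pool:

```batch
@echo off
rem List-only audit: compare the pool share against the offline backup.
rem /L = list only (no copying), /E = include subfolders (even empty ones),
rem /NJH /NP = no job header and no per-file progress, for a cleaner log.
set LOG=C:\Logs\pool_audit.txt
robocopy "G:\ServerFolders" "E:\Backup\ServerFolders" /L /E /NJH /NP /LOG:"%LOG%"
rem Copy the finished log onto a pool share afterwards, so the log file is
rem not held open on the pool while the audit is still running.
copy /Y "%LOG%" "G:\ServerFolders\Logs\"
```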
  9. You are not alone. Around the middle of December 2015, I and at least one other user reported a very similar issue with duplication settings (per-folder settings) changing on their own. In my case I had mostly 2x, some 3x, and one 1x as the settings I wanted. DrivePool changed them all to 3x and I got messages about drives being full. I was using the latest stable beta at the time; since then I've upgraded to the newest beta. I hope the cause of this issue can be tracked down. Maybe put some protection in place to prevent changing of folder duplication without user confirmation (or a similar tactic)?
  10. Replying point by point:
      "If you haven't already, try 'remeasuring' the pool (Pool Options -> Remeasure). This will forcibly refresh the information and may fix the problem." - I have already tried this, and the "Other" data size (and the rest of the stats) remains the same. So no assistance there.
      "However, if you are seeing 'other' data, that could definitely generate 'unusable for duplication', depending on the exact setup." - I've upgraded to the newest beta (*.659), and a remeasure has made the "unusable for duplication" go away. I still have unexplained "Other" data, though. But yes, I was thinking the same thing.
      "Any data outside of the 'PoolPart.GUID' folder is considered other, as well as 'still in use' data, such as downloading/seeding torrents." - These hard drives have never had anything outside the DrivePool PoolPart.GUID folder, period. Not sure about "still in use"; these are all mostly archived files, nothing actively in use (no one is watching a *.wtv file at the moment).
      "You can see about deleting the System Volume Information folders, as that *may* help (you'll need to take ownership of the folder first)." - The System Volume Information folders are all reading 0 in size but are also access denied, as you guessed.
      "Also, running something like 'windirstat' may be helpful here." - Yes, I'll have to take a moment to install it on my WHS2011 machine.
      "Additionally, if you could grab the logs from the system, just in case? http://wiki.covecube.com/StableBit_DrivePool_2.x_Log_Collection"
      "If you used the 'Force damaged disk' removal or the 'duplicate later' options, it will actively leave data on the drive." - I used the "default" drive-removal options, so neither of the two options was checked. I assumed that meant it would take its time and remove all the "PoolPart.xxxx" folders and files. See the attached image for a screen capture.
      "Otherwise, it may leave the PoolPart.GUID folder on the drive, and empty folders in it. That is normal. If there is actually data here, that may be a problem, and could be a symptom of whatever is causing the issue you're seeing." - The PoolPart folder dated 12/17 is from the removal and still has 75 files and 18 folders (~63 MB of data).
      Looking at the DrivePool GUI, when you hover over a drive's bar graph it pops up the Duplicated, Unduplicated, Other, etc. data. I've been keeping an eye on things, and the "Other" data that doesn't belong or have any reason to be there is slowly going away over time. I had about 2.8 GB of "Other" on a drive that is now reporting only 854 MB. So the bottom line right now is that things are getting better, and I'm looking forward to more beta releases! BTW, your Scanner software saved my bacon the other day (for an external drive on a different machine, not this WHS2011), and it inspired me to buy another license for another machine! Thanks for everything!
  11. I had a smaller fifth hard-drive partition (a partition from the OS physical disk) that I removed (default removal settings) from the same DrivePool (I only use/have one pool) to simplify my hunting this down. It's been completely removed for about 12 hours now, and I see that DrivePool has left some folders and files in a "PoolPart.xxxxxx" folder on the removed partition. Could the act of orphaning files and folders also have occurred during unduplication? Could they have become "other" data outside of pool data? Or maybe this symptom is separate from whatever generated the "other" data on my main 4 drives.
  12. OK, I've looked at this a bit more and I'm getting more and more convinced that this is a case of "Other" data being generated by DrivePool on my 4 pooled disks. I see they are uneven, which would result in the "unusable for duplication" space. My goal now is to see if I can remove all the junk generated by the massive file duplication and unduplication (possibly in the PoolPart.GUID or MetaData folders?) that is now "Other" data. The DrivePool GUI calculates and sees this other data, but I don't see it when I explore the folders/files on the pooled disks (with hidden and system folders/files shown). Is it possible to get rid of these excess "other" files? Reset the metadata of System Volume Information, or remove the extras? BTW, the total of new "Other" data on the 4 drives that was not there before is 4.49 GB. Just to be clear, I have never had any data on these 4 disks that wasn't part of the DrivePool. I am certain it is DrivePool itself that made this "other" data.
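One way to cross-check the GUI's numbers, assuming E: is one of the pooled disks (a placeholder letter): dump a recursive directory listing that includes hidden and system entries, and compare the byte totals at the end of the listing against what the GUI reports for that drive.

```batch
rem /a = include hidden/system entries, /s = recurse into subdirectories.
rem The summary at the end of the listing gives the total bytes found,
rem which can be compared against the GUI's Duplicated + Unduplicated + Other.
dir E:\ /a /s > C:\Logs\e_drive_contents.txt
```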
  13. Edited after finding more details: see the second post for the question / help needed. Thank you. I've used my own workaround for auditing my files and folders (RoboCopy in "list" mode, compared against a backup on an offline external hard drive). I'm fairly confident I have not lost any important files or gained any junk files in my share folders. Before the error I had only Unduplicated and Duplicated files on the 4 hard drives in the pool. Now I have all sorts of "Other" data that I can't find/identify on the disks. Can you help me get those drives back to having no "other" and no "unusable for duplication" space?
  14. Regarding the new auditing features in the beta: do you see them getting GUI settings for generating a TXT file in a selected directory, with options to have the scheduled task created within the DrivePool GUI? Or has a detailed command-line example already been written up to help make a batch file for creating the scheduled task manually?
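Until (or unless) a GUI option exists, the scheduled task could be created by hand with schtasks; the script path, task name, and schedule below are placeholders, not anything DrivePool ships:

```batch
rem Run a hypothetical audit batch file every Sunday at 3:00 AM as SYSTEM.
rem /TN = task name, /TR = command to run, /SC + /D + /ST = schedule.
schtasks /Create /TN "DrivePool Audit" /TR "C:\Scripts\pool_audit.cmd" /SC WEEKLY /D SUN /ST 03:00 /RU SYSTEM
```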
  15. Thank you for moving this forward! One thing I find interesting enough to mention again (as I think it could be a clue as to what went wrong) is that changing the top-level folder to 2x again didn't set all subfolders to 2x; it actually put _every_ duplication value back to what it was before the error! It's as if something just "masked" my settings (some 1x, some 2x and 3x) with 3x for all, and changing the top folder removed the improper 3x-for-all setting.
  16. DrivePool & Scanner are installed on WHS2011, fully updated/patched; no antivirus installed except the on-demand free version of MalwareBytes (which I had not loaded or run a scan with in 2 weeks). DrivePool is 2.2.0.651; StableBit Scanner is 2.5.3062. [All disks are checked every 30 days and are completely healthy.] In Event Viewer on the WHS2011 computer, in the "System" section, I see the "srv" warning entries for hard-drive letters becoming full (~9pm 12/14). I see no entries for NTFS, disk, or controller errors earlier on 12/14 or the previous day, 12/13. In the "Application" section I see an ERROR for VSS (but this seems to have been going on for a long time, many previous days): VssAdmin: Unable to create a shadow copy: Either the specified volume was not found or it is not a local volume. Command-line: 'C:\Windows\system32\vssadmin.exe Create Shadow /AutoRetry=15 /For=\\?\Volume{xxxxx-xxx-xxxx-xxxx-xxxxxxxx}\'. [NOTE: I replaced the volume ID with "x".] The next entry is VSS shutting down due to idle timeout. This morning I moved the Duplication Space Optimizer up one and did a rebalance. I still have "unusable for duplication" space that wasn't there before. It took a long time to do a nice balance this morning, but it's done. I also had it re-measure afterwards.
  17. I just had a similar issue. Tonight at 8pm EST I got an email from my WHS2011 that I was completely out of hard-drive space! I've had folder-level duplication on for over a year without issue (some 3x (small shares), some 2x, one 1x (Recorded TV, not enough room to duplicate)). Well, DrivePool lost its mind and set all the share folders to be 3x. Not cool. It seems we see a trend that server OSes might have been given an MS update/patch or something that reset DrivePool's duplication settings? Just speculation, but from this thread it's servers, and within the last week or so. BTW, I have the latest pushed beta of DrivePool running (2.2.0.651). Also, when I went in and saw all the folders set to 3x, I did a quick duplication change at the highest level to 2x (just to get things started back to normal). But the strangest thing happened: all my shares went back to the correct duplication settings. My 1x Recorded TV share, my 2x folders, and some 3x folders. All from doing one change at the highest level from 3x (which was applied to all) to 2x. Also, before this mess I had a clean and tidy duplication/distribution of files/folders on the hard drives. Now I have about 0.5 TB less free space, and I even have some "unusable for duplication" space, which I never had before. Can I get things back to the nice clean distribution again?
  18. How are the beta builds doing? I currently have version 2.2.0.612 x64 installed on WHS2011 and it's doing OK. But it's from April, and a beta, so I'm thinking of upgrading to a newer build. I couldn't find a true "beta build" thread, so I checked here and found the last comments on HDD clicking, etc., which is not something I'd like to have happen after upgrading. Does 651 (man, that's a few updates past 612!) have a fix for this BitLocker check issue? Would you say 636 is the newest "safe/most stable" beta? If not, what build is?
  19. I put in bold the part of the quote I'd like to expand on with a question. Could there be an option (in GUI settings) for DrivePool to NOT use any CloudDrive drives (that are in the pool, obviously) for the "Read Striping" feature of DrivePool? I'd like to ensure that the "Read Striping" option in DrivePool will completely ignore any file (file location) that is not on a real, physically local drive. I apologize if this is already there; I haven't been brave enough to install CloudDrive, but I've read all the blog postings and most of the online manual, along with all the forum posts. Thanks!
  20. Hmm... From my point of view, I only really want software that can do the complete secure, encrypted, trust-no-one part. Honestly, the smallest cache possible would be best as well; the files are already on the same computer, and I only want a backup (and, if needed, retrieve-from-backup) tool. That's why I'd like to use RoboCopy. Unless you have a way of saying "this DrivePool folder needs to be backed up to this CloudDrive (as an encrypted backup)" and I've missed it somewhere? I know CloudDrive is designed for so much more, and I get that. Unfortunately, I'm only looking for a very solid/robust encrypted backup that is "off-site" in the cloud. If CloudDrive had a way of being set up in this mode (by tweaking features/settings), I'd pay for it for sure. You have my trust for my files, and I would prefer to have such a tool from you rather than from anywhere else.
  21. Christopher - Thank you for the post about OneDrive and the reasons. For my use case I was hoping to use CloudDrive only as a fully encrypted way to put a "BACKUP ONLY" mirror of folders into my mostly unused 1 TB MS OneDrive account. Bandwidth doesn't matter, as I will only be sending up mirror duplicates for backup storage and only retrieving if an issue occurs. It's not something that will be accessed by any programs. The most important question: if I enable OneDrive via the registry, will it continue to be available through updates out of the beta? Maybe moved into the GUI settings with a bunch of warnings/prompts? Can you see a way of doing this on a WHS2011 with DrivePool and StableBit Scanner installed? Should the CloudDrive-to-OneDrive setup be kept outside the DrivePool, using Task Scheduler to run a RoboCopy batch file that mirrors the server shares on the DrivePool? Or should CloudDrive be combined into the DrivePool? It would be sweet to have a DrivePool add-in for CloudDrive to easily say: "Pooled CloudDrive = duplicated only; force a full duplicate onto the CloudDrive without messing up the number of duplicates on local." Ideally I could have a share folder in DrivePool set to 2 or 3 duplicates, and then also give CloudDrive an additional duplicate (as a backup) for that folder or share. At the moment I'm thinking keeping the CloudDrive out of the pool and doing RoboCopy is the way I'd like to go; it keeps things working for how I want to use the off-site backup. EDIT: Second most important question: do you think enabling OneDrive and using CloudDrive, but only having the OS do copies/moves via RoboCopy (with a high number of retries (/r), waits (/w), and /z to restart where it last stopped) through the CloudDrive and the throttled OneDrive, will work? Or will CloudDrive be so upset with the throttling that the RoboCopy will just fail? NOTE: I expect the initial upload to take days and days, being so slow. If it really looks broken (API stability, etc.), I might have to consider setting up a new provider that you fully support.
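The RoboCopy mirror described above might look like the following sketch, with placeholder paths (X: standing in for the mounted CloudDrive volume). /MIR makes the destination an exact mirror, /Z uses restartable mode so interrupted file transfers resume, and /R and /W set the retry count and the wait between retries to ride out the throttling mentioned:

```batch
rem One-way backup mirror of a pool share onto the CloudDrive volume.
rem /MIR = mirror (copies new/changed files, deletes removed ones),
rem /Z = restartable mode, /R:50 /W:30 = retry up to 50 times, 30 s apart.
robocopy "G:\ServerFolders\Documents" "X:\Backup\Documents" /MIR /Z /R:50 /W:30 /LOG:"C:\Logs\clouddrive_mirror.txt"
```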
  22. (In the robocopy thread:) To complete the thread for my part, I can report that BETA 612 now has the Dashboard working correctly. NOTE: Reboot client computers too before using the Dashboard after updating DrivePool.
  23. OK, thanks for the detailed info. Yes, you covered the "ready" part of my questions. I did a read-only chkdsk on all hard drives on the WHS2011 computer and kept the output as a log, but they all say no errors found / no issues. Unless it happens again (which I doubt), I'm going to chalk this up as a fluke and be done with this issue. Thanks again!
  24. Thanks for the suggestions. I'm a bit concerned about running chkdsk on my server HDDs in the pool. Do I have to dismount them or remove them from the pool first? Anything else to be ready for when running chkdsk on an HDD/volume that's in the DrivePool? I'm also concerned about how long chkdsk can take, and whether it takes the HDDs offline from the server. I've done chkdsk plenty on small HDDs on client machines, but never on a server with DrivePool (with 2 TB volumes instead of 500 GB). I assume that running chkdsk without the "/r /f" switches will be faster; I'm just looking for a check to see if an issue is there at this time. When this happened I was using the current stable 2.x DrivePool; I'm now on BETA 602. I have had no issues with duplication settings since this first time, and have since gone to 2x for all folders (after adding another 4 TB drive), with a couple of small ones at 3x. So far it's solid. I'll get you the log on the Dashboard crashing for BETA 602 today or early tomorrow.
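For the record, a read-only pass is simply chkdsk with no /f or /r switches; it only reports problems and does not lock or dismount the volume, so the pooled drives stay online while it runs (the drive letters below are placeholders):

```batch
rem Read-only check of each pooled volume; append all output to one log.
rem Without /f or /r, chkdsk makes no repairs and never takes the volume offline.
for %%D in (D: E: F: G:) do chkdsk %%D >> C:\Logs\chkdsk_pool.txt
```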
  25. (In the robocopy thread:) FYI - I just emailed it directly. I'm hoping that the screen capture of the "Dashboard.exe - Application Error" message will help, as it shows the exception number and the application location of the error.