
580guy
Members
Posts: 19
Joined: -
Last visited: -
Days Won: 1

580guy last won the day on December 12, 2023
580guy had the most liked content!

Profile Information
Gender: Not Telling

Recent Profile Visitors
802 profile views

580guy's Achievements
Member (2/3)
2 Reputation
-
Hard Disk Sentinel & Stablebit Drivepool issue
580guy replied to ImperialDog1999's question in General
"Problems occurred between the communication of the disk and the host 56 times" would seem to indicate a cabling/connection/enclosure problem to me. For my setup, a reinstall would be days of work, so it's an absolute last resort. (That's why daily image backups are invaluable.) I hope your problem stays resolved!!
-
580guy started following Does Rules under File Placement apply to folders or files alone?, Hard Disk Sentinel & Stablebit Drivepool issue, drivepool access and 7 others
-
Hard Disk Sentinel & Stablebit Drivepool issue
580guy replied to ImperialDog1999's question in General
One other thing, since you run HD Sentinel: check the log for the drive and look for errors, particularly the Ultra ATA CRC Error count, which indicates a possible cabling/connection/enclosure problem. I usually see a bunch of these before a drive starts acting up to the point of dropping out of the pool. Yes, I also run StableBit Scanner. I use CrystalDiskInfo as well, but only on demand (performance tests), not in the background. I also use a fan-control program called Argus, which accesses HD SMART information too, but I prefer HD Sentinel for that.
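If anyone wants to check that same counter outside HD Sentinel, smartmontools can dump the raw SMART table from a command prompt. A minimal sketch, assuming smartctl is installed and the drive is the first physical disk (adjust the device name for your setup):

    smartctl -A /dev/sda     (dumps the full SMART attribute table for the first disk)

Look for attribute 199 (UDMA_CRC_Error_Count); a climbing raw value usually points at the cable, connection, or enclosure rather than the disk itself.
-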
Hard Disk Sentinel & Stablebit Drivepool issue
580guy replied to ImperialDog1999's question in General
I run both DrivePool and HD Sentinel with 29 drives. The only time I have had behavior like that was due to a disk failing or becoming corrupted, or even a cable/connection going bad. The system would freeze when trying to access that particular drive, and then unfreeze. Sometimes the drive would disappear from the pool too. Can you remove the drive and test it in another system (one of the advantages of DrivePool)? I would first suspect the disk that is dropping out before trying a bunch of software troubleshooting/fixes. Just my $.02.
-
Great info. I just upgraded the OS on my server and was thinking about this very thing while I was de-activating and re-activating on the new drive! You read my mind!
-
Shane reacted to an answer to a question: Stablebit Drivepool with Snapraid, evacuating a failing drive?
-
I also use snapshot RAID with DrivePool. If a drive fails, I physically remove the drive, THEN remove it in DrivePool. Install a new drive, put it in the pool, and restore.
-
That's what I do. I just pull the drive out of the pool. Sometimes I copy as much as I can to the new drive first (it's faster), then restore the rest. I also use snapshot RAID on my server (FlexRAID). It still works marvelously well.
-
I know this is an ancient thread, but the information still applies to those of us still running WHS 2011 (like me!), so instead of posting a new thread I am replying in this one.

Running WHS 2011 for almost 10 years now has resulted in the system partition filling up slowly, mainly because of the winsxs folder. I have already tried many options (Disk Cleanup, DISM, etc.) and can't get it any smaller. I have around 14GB of space on the C: drive now. So, I was thinking about resizing the system partition to get more space. From what I have gathered, it is possible to resize the system partition using various methods to increase the size beyond 60GB. Server Manager's Disk Management also seems to allow shrinking the D: partition and increasing the size of the system partition, but I haven't tried it yet.

But will you have a problem during a bare-metal restore to a larger system partition? It's been a while since I've done one, but I remember WHS 2011 restore can be pretty finicky at times. I think I remember having to initialize only the partition, and not format it, for the restore to work reliably. If you create, say, a 100GB partition first and then select it for restore, I wonder if WHS restore would balk at the partition size, or maybe re-create it at the 60GB size, in which case your restore data wouldn't fit and you would be stuck? Anyone know offhand? I hate to be the guinea pig!!
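For what it's worth, the resize itself can be done with the built-in diskpart tool, but only by deleting and re-creating D:, because Disk Management will only extend a volume into unallocated space sitting immediately after it, and shrinking D: frees space at its far end, not next to C:. A rough sketch, with the volume numbers and sizes as placeholders, assuming everything on D: has been backed up first; I haven't tested this against a bare-metal restore, so treat it accordingly:

    diskpart
    list volume                  (note the volume numbers for C: and D:)
    select volume 2              (the D: data volume; back it up first!)
    delete volume                (releases all of D: as unallocated space)
    select volume 1              (the C: system volume)
    extend size=40960            (grow C: by ~40GB into the freed space)
    create partition primary     (re-create D: from the remaining space)
    format fs=ntfs quick label=ServerData
    assign letter=D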
-
Thanks for all the help, Chris. I tried out the beta of 2.x for a week or so, and guess what? I ended up going back to 1.x, even with the possibility of the inaccessible-folder problem showing up occasionally again. My reasons:

1. Better performance. With 2.x, I noticed that at times activity on the pool would just stop for 5-10 seconds. Even while I was running things like a WinRAR extraction and watching pool activity, it would just pause and then finally resume. That never happens with 1.x.

2. Purely manual folder duplication. I need this. I run a drive defragger that triggers on an activity threshold, and when 2.x decides to do a duplication check, the disk activity shuts the defragger down. I have tried changing the threshold levels in the defragger, but that's not really what I wish to do. Additionally, I burn discs with ImgBurn, and even though it has buffer-underrun protection, the disk activity will again pause it or delay its resumption while it waits for the activity threshold to decrease.

I am sorry this is the case, as I know I won't be on WHS 2011 forever and will have to go to DrivePool 2.x sometime in the future. I wish folder duplication could be put in a strictly manual mode. I prefer to either manually tell DrivePool to do a duplication check, as in 1.x, or know that duplication will run once per day. The data in my duplicated folders doesn't change without my knowledge, except for client backups, which are done just prior to the once-a-day duplication time, so that takes care of it for me. Otherwise, if I'm not sure, I just run a duplication pass manually. Maybe take this as a suggestion for future versions of 2.x? Maybe I am just an oddity in the way I do things!! Again, thanks for your help.
-
Thanks, Chris. I will try the 2.x beta. The inaccessible-folder problem cropped up again last night, even with DrivePool 2.x stable. I am wondering if it has something to do with the number of drives in my pool (21) and the wacky Windows permissions (WHS 2011). I may have to upgrade the server to 2012 Essentials down the road; I need more memory and am already at 8GB (the max for WHS 2011). I never had this problem until the last few months. Still, I believe I like the DrivePool 2.x version better than 1.x. It's nice to be able to run the interface without waiting for the Dashboard to load in order to access/check it. I will post results on using the new beta.
-
I run WHS 2011 and have been running DrivePool 1.x for some years. Lately I have had some folder permission problems with folder moves between shares: the folder that is "moved" is left empty on its share, and I cannot access the folder without a server reboot. I cannot take ownership, etc., even with the Administrator account. Anyhow, after much troubleshooting I decided to try DrivePool 2.x, and so far so good. Still not sure where the system was glitching; maybe a large pool of 21 drives?

But... I always preferred to run manual duplication on DrivePool 1.x, and also without background I/O priority, so when I chose to duplicate, it would run at full speed. I did this with the settings, and also edited the config file for background priority. Worked great! I noticed that DrivePool 2.x does it differently, even when NOT selecting real-time duplication under Performance. I read in the manual that duplication will be performed later anyway; apparently from time to time DrivePool 2.x does a check and duplicates regardless. Then I found this setting in the DrivePool 2.x wiki: FileDuplication_AlwaysRunImmediate, which I edited to FALSE. Is this setting still valid in DrivePool 2.x? I ask because it wasn't in the default config file, so I copied it from the wiki and edited the entry in my config file. But it doesn't seem to keep duplication from sensing a file that needs to be duplicated and running a check.

The reason I do not want this is that I duplicate some folders which receive downloads of archive sets. These sit in the folders temporarily until they are checked and extracted, then they are deleted, but DrivePool is still trying to duplicate them in the meantime, wasting disk access and possibly resources. Is there any way to make duplication strictly manual, as in DrivePool 1.x?
-
Christopher (Drashna) reacted to an answer to a question: Will this work? Drivepool & Flexraid upgrading with new Drive
-
I know my post above is from a couple of years ago, but I just wanted to confirm that it DOES INDEED work, for anyone out there using this combination. I had my first drive failure in my server of 21 drives. I know I have been extremely lucky, as I have drives with over 6 1/2 years of 24/7 operation that are still going strong. Anyhow...

The drive that failed didn't fail completely. I had been receiving warnings from StableBit Scanner about remapped sectors for the last several months, and I decided to wait and see how long the drive would go before it failed completely. This drive had about 4 1/2 years of service. In the end there were 2 files which became unreadable, and I decided to pull the drive and replace it.

Rather than remove it from the pool normally and let DrivePool migrate files to other drives in the pool, I just physically removed the drive from the server, so as not to disturb the array parity I had with FlexRAID. Then I removed the "missing" drive in StableBit DrivePool, put in a new drive, made sure WHS 2011 assigned the same drive letter, and waited for duplication to finish restoring files. I then took the old drive, renamed its PoolPart.xxxx folder to prevent it from rejoining the pool, and copied my unduplicated ServerFolders back to the new drive, with the exception of the 2 unreadable files. Then I deleted the old symbolic link, created a new one with the same name pointing to the new PoolPart.xxxx folder on the new drive, and restored the 2 bad files from FlexRAID. Done! Parity is still unchanged for the new disk, avoiding a lengthy update to get the new disk's files into the parity array.

Of course, had the drive failed totally, without my being able to access any files at all, I would just let FlexRAID restore it completely. But since I was able, I think copying the old drive's contents was slightly faster, so I did it this way. Just wanted to confirm this works great, for any others using this combination of StableBit DrivePool and FlexRAID parity for data protection.
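For anyone wanting to repeat this, the folder shuffle itself is just a rename plus a copy. A rough sketch, with the drive letters and GUIDs as placeholders (X: being the old drive, D: the new pool member):

    ren X:\PoolPart.<old-GUID> PoolPart.<old-GUID>.retired     (keeps it from rejoining the pool)
    robocopy X:\PoolPart.<old-GUID>.retired\ServerFolders D:\PoolPart.<new-GUID>\ServerFolders /E     (copies the unduplicated folders, subfolders included)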
-
580guy reacted to an answer to a question: How to get my Folder Sizes to display again?
-
I migrated my pool from an HP MediaSmart EX495 to my new Lenovo TS440 server. Everything went smoothly, with no problems. I transferred my license for DrivePool, the pool was recognized right off the bat, and it has been running great for 3 weeks now. But, as shown in my screenshot of the WHS 2011 Dashboard, my folder sizes are ZERO except for Client Computer Backups. I have re-measured and re-indexed to my heart's content, but nothing seems to fix this. I have no other problems at all; it's running fine, but I always enjoyed having these stats available. What else can I do to get this to work?
-
Here is what I did to run FlexRAID snapshot parity along with StableBit DrivePool, to help with drive replacements in your pool.

When you replace a drive in your pool, a new PoolPart.xxxx folder is created automatically by DrivePool, but FlexRAID's data config is still pointing to the "old" PoolPart.xxxx folder of the removed drive. While FlexRAID can restore the files to the new drive, they will be restored to the "old" PoolPart.xxxx folder, and its parity will not be current. You can move the files to the new PoolPart.xxxx folder, but FlexRAID is still looking at the old one when updating and maintaining parity. To make it current, you have to REMOVE the DRU from the FlexRAID data configuration (which takes hours of recalculating parity to remove the drive from the snapshot array), then add the new PoolPart.xxxx path back to the DRU (which takes more hours of recalculating parity to add the drive back in), or use the DRU rename command in FlexRAID to rename the DRU to the new folder (which still takes hours of recalculation). I tried renaming the "new" PoolPart.xxxx to the "old" PoolPart.xxxx of the old drive, as suggested by Drashna, but DrivePool will then add the drive to a NEW POOL and you will have TWO drivepools.

So, I was trying to find a way to get FlexRAID to "point" to the new PoolPart.xxxx folder. The solution is to create a directory junction with the MKLINK command from the command line:

    C:\>mklink
    Creates a symbolic link.
    MKLINK [[/D] | [/H] | [/J]] Link Target
        /D      Creates a directory symbolic link. Default is a file symbolic link.
        /H      Creates a hard link instead of a symbolic link.
        /J      Creates a Directory Junction.
        Link    Specifies the new symbolic link name.
        Target  Specifies the path (relative or absolute) that the new link refers to.

I created a directory junction on the C: drive for each DRU's PoolPart.xxxx folder. Example: drive D: (DRU1) has the folder PoolPart.ded39be4-44bd-4122-b772-34e15fd05d40. Open a command line, then create a link to this PoolPart folder:

    C:\>mklink /J DRU1SF-D D:\PoolPart.ded39be4-44bd-4122-b772-34e15fd05d40\ServerFolders

A directory junction called DRU1SF-D will be created on your C: drive. (You can name it anything you want; that is just what FlexRAID will use.) I called mine DRU1SF-D, DRU2SF-E, DRU3SF-F, etc. That tells me it's for DRU1, ServerFolders, and the drive letter. Now go to the FlexRAID data configuration and use C:\DRU1SF-D as the path for this DRU. Do the same for each of the other DRUs you have.

This way, if you have to replace a drive and a new PoolPart.xxxx is created, you just need to delete and re-create the link to point to the new PoolPart.xxxx folder and leave the FlexRAID data configuration unchanged. You could create the link on each DRU instead, but FlexRAID advises against having links on the DRU, as it might think the DRU is larger than it really is. So I prefer to keep the links on the C: drive, which is not in any array or pool.

I hope this helps others who may run into this problem running this configuration. As I type this, I am still creating the new array (going to take over 40 hours). I will advise how it ends up, but I am confident this will work.
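To illustrate the replacement step: when a swapped drive gets a new PoolPart.xxxx folder, re-pointing the junction is just two commands (the GUID here is a placeholder for whatever DrivePool actually generates):

    C:\>rmdir C:\DRU1SF-D                                            (removes only the junction, never the target's contents)
    C:\>mklink /J C:\DRU1SF-D D:\PoolPart.<new-GUID>\ServerFolders   (re-creates it against the new folder)

FlexRAID keeps using C:\DRU1SF-D and never notices the change.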
-
Thanks, Drashna, for your reply. However, nothing much worked. If I rename the PoolPart.xxxx folder, the drive goes into a second pool; I tried this many times, always with the same result. So, I decided to re-create my FlexRAID snapshot array, but this time I am setting up some directory junctions on the C: drive, pointing to the PoolPart.xxxx folders on each drive in the pool, with the MKLINK command from a command prompt. That way I can always change a link if a drive is upgraded/replaced (and a new PoolPart.xxxx folder created) and leave the FlexRAID config the same, since it just uses the link named DRU1, etc., which points to the PoolPart.xxxx folder on that drive. Same for all my other drives. It will take around 40 hours or so to create the new array, but I think it will be better for the future. I was just trying to avoid this for upgrading a single drive. Thanks again for your help.