580guy

Members

  • Posts: 15
  • Joined
  • Last visited
  • Days Won: 1

580guy last won the day on December 12 2023

580guy had the most liked content!

Profile Information

  • Gender: Not Telling

Recent Profile Visitors

516 profile views

580guy's Achievements

Member (2/3)

Reputation: 2
  1. I also use snapshot RAID with DrivePool. If a drive fails, I physically remove the drive, THEN remove it in DrivePool. Install a new drive, add it to the pool, and restore.
  2. That's what I do. I just pull the drive out of the pool. Sometimes I copy as much as I can to the new drive first (it's faster), then restore the rest. I also use snapshot RAID on my server (FlexRAID). It still works marvelously well.
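     A minimal sketch of that "copy as much as I can first" step, assuming the pulled drive is still readable as D: and the new pool drive is E: (the drive letters are examples, and PoolPart.xxxx / PoolPart.yyyy stand for the old and new hidden folder names). Robocopy's /R:0 /W:0 switches skip unreadable files immediately instead of retrying, so the snapshot RAID only has to restore whatever the copy couldn't read:

         C:\>robocopy D:\PoolPart.xxxx\ServerFolders E:\PoolPart.yyyy\ServerFolders /E /R:0 /W:0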
  3. I know this is an ancient thread, but the information still applies to those of us still running WHS 2011 (like me!), so instead of starting a new thread I am replying in this one. Running WHS 2011 for almost 10 years now has resulted in the system partition slowly filling up, mainly because of the winsxs folder. I have already tried many options (disk cleanup, dism, etc.) and can't get it any smaller. I have around 14GB of free space on the C: drive now. So I was thinking about resizing the system partition to get more space. From what I have gathered, it is possible to resize the system partition using various methods to increase the size beyond 60GB. Server Manager's disk management also seems to allow shrinking the D: partition and increasing the size of the system partition, but I haven't tried it yet. But will you have a problem during a bare metal restore to a larger system partition? It's been a while since I've done one, but I remember WHS 2011 restore can be pretty finicky at times. I think I remember having to initialize only the partition, and not format it, for the restore to work reliably. If you create, say, a 100GB partition first and then select it for restore, I wonder whether WHS restore would balk at the partition size, or maybe re-create it at the 60GB size, in which case your restore data wouldn't fit and you would be stuck? Anyone know offhand? I hate to be the guinea pig!!
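     For anyone who wants to try the resize before a definitive answer turns up, a rough diskpart sketch (volume letters assumed to match the post; shrink's desired= value is in MB, so 40960 is roughly 40GB; take a backup first). One caveat I'm fairly sure of: extend only grows a volume into unallocated space that sits immediately after it, and space shrunk off D: ends up after D:'s new end, not next to C:, so Disk Management/diskpart may refuse the extend and you may need a third-party partition tool, or to delete and re-create D::

         C:\>diskpart
         DISKPART> list volume
         DISKPART> select volume D
         DISKPART> shrink desired=40960
         DISKPART> select volume C
         DISKPART> extend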
  4. Thanks for all the help, Chris. I tried out the Beta of 2.x for a week or so, and guess what? I ended up going back to 1.x, even with the possibility of the inaccessible folder problem showing up occasionally again. My reasons are: 1. Better performance. With 2.x, I noticed that activity on the pool would sometimes just stop for 5-10 seconds; even running something like a WinRAR extraction while watching pool activity, it would pause and then finally resume. That never happens with 1.x. 2. Purely manual folder duplication. I need this. I run a drive defragger that runs based on an activity threshold, and when 2.x decides to do a duplication check, it shuts down the defragger due to the disk activity. I have tried changing the threshold levels in the defragger, but it's not really what I wish to do. Additionally, I burn discs with ImgBurn, and even though it has burn-proof, the disk activity will again pause it or delay its resumption while it waits for the activity threshold to decrease. I am sorry this is the case, as I know I won't be on WHS 2011 forever and will have to go to DrivePool 2.x sometime in the future. I wish folder duplication could be put in strictly manual operation. I prefer to either manually tell DrivePool to do a duplication check as in 1.x, or know that duplication will also run once per day. The data in my duplicated folders doesn't change without my knowledge, except for Client Backups, which are done just prior to the once-a-day duplication time, so that takes care of it for me. Otherwise, if not sure, I just run a duplication pass manually. Maybe take this as a suggestion for future versions of 2.x? Maybe I am just an oddity in the way I do things!! Again, thanks for your help.
  5. Thanks, Chris. I will try the 2.x Beta. The inaccessible folder problem cropped up again last night, even with DrivePool 2.x stable. I am wondering if it has something to do with the number of drives in my pool (21) and the wacky Windows permissions (WHS 2011). I may have to upgrade the server to 2012 Essentials down the road; I need more memory and am already at 8GB (the max for WHS 2011). Never had this problem until the last few months. Still, I believe I like DrivePool 2.x better than 1.x. It's nice to be able to run the interface without waiting for the Dashboard to load to access/check it. I will post results on using the new Beta.
  6. I run WHS 2011 and have been running DrivePool 1.x for some years. Lately I have had some folder permission problems with folder moves between shares: the folder that is "moved" is left empty on its share, and I cannot access the folder without a server reboot. Cannot take ownership, etc., even with the Administrator account. Anyhow, after much troubleshooting I decided to try DrivePool 2.x, and so far so good. Still not sure where the system was glitching; maybe a large pool of 21 drives? But... I always preferred to run manual duplication on DrivePool 1.x, and also without background I/O priority, so when I chose to duplicate, it would run at full speed. I did this with settings, and also edited the config file for background priority. Worked great! I noticed that DrivePool 2.x does it differently, even when NOT selecting real-time duplication under performance; I read in the manual that duplication will be performed later anyway. Apparently, from time to time DrivePool 2.x does a check and duplicates anyway. Then I found this setting in the DrivePool 2.x wiki: FileDuplication_AlwaysRunImmediate, which I edited to FALSE. Is this setting still valid in DrivePool 2.x? I ask because it wasn't in the default config file, so I copied it from the wiki and edited an entry in my config file. But it doesn't seem to keep duplication from sensing a file that needs to be duplicated and running a check. The reason I do not want this is that I duplicate some folders which receive downloads of archive sets; these sit in the folders temporarily until they are checked and extracted, then they are deleted. But DrivePool is still trying to duplicate them in the meantime, wasting disk access and possibly resources. Is there any way to make duplication strictly manual, as in DrivePool 1.x?
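     In case it helps anyone searching later, the entry I added looks roughly like this. The setting name is straight from the wiki; the surrounding XML element names are only my recollection of the .NET-style settings format the 2.x config file used, so copy the exact block from the wiki rather than from here:

         <setting name="FileDuplication_AlwaysRunImmediate" serializeAs="String">
             <value>False</value>
         </setting>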
  7. I know my post above is from a couple of years ago, but I just wanted to confirm that it DOES INDEED work, for anyone out there using this combination. I had my first drive failure in my server of 21 drives. I know I have been extremely lucky, as I have drives with over 6 1/2 years of 24/7 operation that are still going strong. Anyhow... the drive that failed didn't fail completely. I had been receiving warnings from StableBit Scanner about remapped sectors for the last several months. I decided to wait and see how long the drive would go before it failed completely. This drive had about 4 1/2 years of service. In the end there were 2 files which became unreadable, and I decided to pull the drive and replace it. Rather than remove it from the pool normally and allow StableBit DrivePool to migrate files to other drives in the pool, I just physically removed the drive from the server. I did this so as not to disturb the array parity I had with FlexRAID. Then I removed the "missing" drive in StableBit DrivePool, put in a new drive, made sure WHS 2011 assigned the same drive letter, and waited for duplication to finish restoring files. I then took the old drive, renamed its PoolPart.xxxx folder to prevent it from rejoining the pool, and copied my unduplicated ServerFolders back to the new drive, with the exception of the 2 unreadable files. Then I deleted the old symbolic link and created a new one with the same name, pointing to the new PoolPart.xxxx folder on the new drive. Restored the 2 bad files from FlexRAID. Done! Parity is still unchanged for the new disk, avoiding a lengthy update to get the new disk's files into the parity array. Of course, had the drive failed totally, without being able to access any files at all, I would just have let FlexRAID restore it completely. But since I was able to, I think copying the old drive's contents was slightly faster, so I did it this way. Just wanted to confirm this works great, for any others using this combination of StableBit DrivePool and FlexRAID parity for data protection.
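     A sketch of that rename step, assuming the pulled disk comes back attached as F: (the letter and PoolPart.xxxx are examples). As described above, a folder that no longer matches the PoolPart naming pattern is ignored by DrivePool, so a simple prefix keeps the old disk out of the pool while you copy from it:

         C:\>ren F:\PoolPart.xxxx _PoolPart.xxxx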
  8. Thanks, Drashna. Actually, what I found there was that my shares weren't selected to be indexed. Now all is fine. Thank you again for pointing the way!
  9. I migrated my pool to my new Lenovo TS440 server from an HP MediaSmart EX495. Everything went smoothly with no problems. Transferred my license for DrivePool, and the pool was recognized right off the bat and has been running great for 3 weeks now. But, as shown in my screenshot of the WHS 2011 Dashboard, my folder sizes are ZERO except for Client Computer Backups. I have re-measured and re-indexed to my heart's content, but nothing seems to fix this. I have no other problems at all; everything is running fine, but I always enjoyed having these stats available. What else can I do to get this to work?
  10. Here is what I did to run FlexRAID snapshot parity along with StableBit DrivePool, to help with drive replacements in your pool. When you replace a drive in your pool, a new PoolPart.xxxx folder is created automatically by DrivePool. FlexRAID's data config is pointing to the "old" PoolPart.xxxx folder of the removed drive. While FlexRAID can restore the files to the new drive, they will be restored to the "old" PoolPart.xxxx folder, and its parity will not be current. You can move the files to the new PoolPart.xxxx folder, but FlexRAID is still looking at the old one when updating and maintaining parity. To make it current, you would have to REMOVE the DRU from the FlexRAID data configuration (which takes hours of recalculating parity to remove the drive from the snapshot array), then add the new PoolPart.xxxx path back to the DRU (which takes more hours of recalculating parity to add the drive back into the snapshot array), or use the DRU rename command in FlexRAID to rename the DRU to the new folder (which still takes hours of recalculating parity). I tried renaming the "new" PoolPart.xxxx to the "old" PoolPart.xxxx of the old drive, as suggested by Drashna, but DrivePool will then add the drive to a NEW POOL and you will have TWO pools. So I was trying to find a way to get FlexRAID to "point" to the new PoolPart.xxxx folder. The solution is to create a directory junction with the MKLINK command from the command line:

          C:\>mklink
          Creates a symbolic link.

          MKLINK [[/D] | [/H] | [/J]] Link Target

              /D      Creates a directory symbolic link. Default is a file symbolic link.
              /H      Creates a hard link instead of a symbolic link.
              /J      Creates a Directory Junction.
              Link    Specifies the new symbolic link name.
              Target  Specifies the path (relative or absolute) that the new link refers to.

      I created a directory junction for each DRU's PoolPart.xxxx folder on the C: drive. Example: drive D: (DRU1) holds PoolPart.ded39be4-44bd-4122-b772-34e15fd05d40. Open a command line and create a link to this PoolPart folder:

          C:\>mklink /J DRU1SF-D D:\PoolPart.ded39be4-44bd-4122-b772-34e15fd05d40\ServerFolders

      A directory junction called DRU1SF-D will be created on your C: drive. (You can name it anything you want, but that name is what FlexRAID will use.) I called mine DRU1SF-D, DRU2SF-E, DRU3SF-F, etc., which tells me it's for DRU1, ServerFolders, drive D:, and so on. Now go to the FlexRAID data configuration and use C:\DRU1SF-D as the path for this DRU, and do the same for each of the other DRUs you have. This way, if you have to replace a drive and a new PoolPart.xxxx is created, you just need to delete and re-create the link to point at the new PoolPart.xxxx folder, and leave the FlexRAID data configuration unchanged. You could create the link on each DRU itself, but FlexRAID advises against having links on a DRU, as it might make the DRU look larger than it really is. So I prefer to keep the links on the C: drive, which is not in any array or pool. I hope this helps others who may run into this problem running this configuration. As I type this, I am still creating the new array (going to take over 40 hours). I will advise how it ends up, but I am confident this will work.
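      With the junctions in place, a later drive replacement only needs the link refreshed, and rmdir on a junction removes just the link, never the target folder's contents (PoolPart.yyyy below stands for whatever new folder name DrivePool creates):

          C:\>rmdir C:\DRU1SF-D
          C:\>mklink /J C:\DRU1SF-D D:\PoolPart.yyyy\ServerFolders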
  11. Thanks, Drashna, for your reply. However, nothing much worked. If I rename the PoolPart.xxxx folder, the drive goes into a second pool; I tried this many times, always with the same result. So I decided to re-create my FlexRAID snapshot array, but this time I am setting up directory junctions on the C: drive, pointing to the PoolPart.xxxx folders on each drive in the pool, with the MKLINK command from a command prompt. That way I can always change a link if a drive is upgraded/replaced (new PoolPart.xxxx folder created) and leave the FlexRAID config the same, as it just uses the link named DRU1SF-D, etc., which points to the PoolPart.xxxx folder on that drive. Same for all my other drives. It will take around 40 hours or so to create the new array, but I think it will be better for the future. I was just trying to avoid all this for upgrading a single drive. Thanks again for your help.
  12. Well, the drive arrived Saturday. So I tried this method, and it worked, sort of. I manually created the PoolPart.xxxx folder with the DrivePool service stopped. I restarted the service, but the drive was not added to the pool. I was only running this one drive and had all my other disks off. Again, running WHS 2011, so this is the system drive C:, with another partition (D:). I rebooted, and the drive (D:) was added to a pool! I noticed the drive letter of the pool was G instead of Z (as I like to use); I changed it to Z through Disk Management. I then shut down, turned on my other 10 drives, and booted up. But now I have 2 pools: one with 10 drives (the previous pool) and one with only the new drive. There doesn't seem to be a way to get the new drive into the previous pool using the same PoolPart.xxxx folder name that is also used in FlexRAID to protect those unduplicated shares in its array. Kinda stumped. Might have to start all over and try something else. If I just add the drive to the pool, letting StableBit create a new PoolPart.xxxx folder, then I have to change it in FlexRAID too. I will have to "remove" the DRU, which will result in many hours recalculating the new parity for the missing files, then add the new PoolPart.xxxx name, which will result in many hours again, recalculating new parity for the "new" files appearing under the new PoolPart.xxxx folder name. If I could get StableBit to just recognize the "old" PoolPart.xxxx folder and place the new drive into my existing pool, I would avoid all this. I didn't foresee this problem using FlexRAID along with StableBit. I haven't had a drive fail yet running these together, but I can see lots of hassles now that I am just trying to upgrade a good drive to a newer, faster one. I know a few here run FlexRAID, although some of the posts are old now, and possibly they have moved on to something else. I would sure appreciate any input or guidance. Hate having the media server down.
  13. I am running a WHS 2011 server using StableBit DrivePool 1.3 and FlexRAID snapshot parity for unduplicated shares. I want to upgrade/replace an existing drive. This is how I would like to do it:

          1. Stop the DrivePool service and remove the drive.
          2. Install the new hard drive.
          3. Manually create the folder PoolPart.7ff9720d-58fa-4081-9ebc-70ae65a7a336 and make it hidden.
          4. Restart the DrivePool service.
          5. If the drive is added to the pool, wait for the duplicate file re-build.
          6. Copy the shares back from the old drive to the new drive.

      My question is: will this new drive be added into the pool? After some research, I read this older blog post on StableBit: http://blog.covecube.com/2011/03/stablebit-drivepool-technical-overview/ It is old, but I am assuming the basic structure and operation of DrivePool hasn't changed since then? It says there are 2 requirements to add a drive to the pool: it must be formatted NTFS and have a PoolPart folder. The reason I want to do this is that FlexRAID requires the exact path of the old drive to protect the underlying shares, and I am trying to avoid a 20+ hour update after I copy all my files back from the old drive to the new drive. If I use the StableBit GUI add-in, it will create the PoolPart.xxxxxxxx folder, but the xxxxxxxx won't be the same as the old number recorded in the FlexRAID data configuration. I would like the new drive to use the exact same PoolPart.xxxxxxxx path as the old drive. Can I manually create the PoolPart folder in this manner? Currently waiting for the new drive to arrive and trying to plan accordingly. I'd appreciate any insight or help with this.
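      For what it's worth, steps 1, 3, and 4 as commands. The quoted service name is my guess at it, so check the actual name in services.msc before trying, and E: stands in for wherever the new drive mounts:

          C:\>net stop "StableBit DrivePool Service"
          C:\>mkdir E:\PoolPart.7ff9720d-58fa-4081-9ebc-70ae65a7a336
          C:\>attrib +h E:\PoolPart.7ff9720d-58fa-4081-9ebc-70ae65a7a336
          C:\>net start "StableBit DrivePool Service"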
  14. Thanks so much for your reply. Just what I needed to know. I may try the "File Placement Limiter" to move the data off the drive first and simplify the removal from the pool. Thanks again.
  15. I haven't been able to ascertain this while searching through the forums and blogs: while removing a drive from the pool, is the pool still usable through the remaining drives? In other words, is my server unusable until DrivePool finishes removing the drive from the pool? As this could take a day or so to perform, I am trying to figure out whether I will still have PC backup, FTP, media streaming, etc. available during this time. Is DrivePool smart enough to allow you to still use the remaining drives in the pool while removing a single drive from it? This will help me plan a little more if I am not going to have the use of my server for a day or so. The drive I am planning to remove is fine, but I just don't want it in the pool any longer, as it's a USB 2.0 drive and way too slow. It could take quite some time to remove a 2TB USB 2.0 drive, so I hate to have the server down totally during this process. But if it's the only way....