sspell last won the day on August 3 2013


  1. +1 on the ReFS support. I'm converting a server to Hyper-V Core 2016; all disks are ReFS, and I'm using the latest DrivePool beta. Storage Spaces from MS is such a convoluted mess. It works once it's set up, but with SSD feeder tiers and parity you have to be prepared for PowerShell. Besides, I like the control of a UI like StableBit's; I just hate PowerShell, can't remember all the commands, and have to reference notes. With Storage Spaces you can't see what's going on or move files to a specific disk the way the DrivePool UI lets you, and of course Scanner emails when there's a problem on an individual disk, plus notifications from DrivePool. You can script Storage Spaces maintenance in PowerShell, but unless you're into a lot of fiddling, well, there you go. I like to leave it alone and have it just work, just me. The KISS approach of DrivePool and Scanner is great. More ReFS support from StableBit DrivePool would be good, because ReFS is the future and in the latest 2016 versions it's pretty good; NTFS is on its way out the door, ha! Also, I wouldn't mind paying a little more for DrivePool with the self-healing from bit rot and corruption that a mirrored pool can have with ReFS.
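For anyone curious what the PowerShell side being compared here looks like, below is a rough sketch of creating a parity Storage Spaces volume formatted ReFS. This is a sketch only, not a tested recipe: the pool and disk names are made-up placeholders, and tiering/parity combinations have version-specific restrictions, so check the cmdlet documentation for your server before running anything.

```powershell
# Sketch only -- "Pool" and "Data" are placeholder names.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool" `
    -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Data" `
    -ResiliencySettingName Parity -UseMaximumSize
Get-VirtualDisk -FriendlyName "Data" | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem ReFS
```

Compare that with clicking "Add" in the DrivePool UI, which is pretty much the whole point of the post above.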
  2. Here are a few things I've done to get SnapRAID and DrivePool working nicely with each other. 1. Allow only one snapraid.content file on the pool. Even though DrivePool doesn't write the file, it sees it as a duplicate and keeps trying to balance it when out of sync. The other snapraid.content files go on the parity disk, the C: drive, or any spare disk not in the pool, with the proper pointers in snapraid.conf. 2. Use the hidden PoolPart folder as the root data directory for each disk. 3. For a failed disk, replace it with a new drive or partition of the same or larger size, using the failed disk's drive letter. 4. Add the new disk to DrivePool and copy its new PoolPart ID into the snapraid.conf entry for the replaced disk. Note: you have to right-click the hidden PoolPart folder and select Properties to be able to copy the folder name from the dialog box. 5. Run the fix command. I'm using the Ordered File Placement plugin; Scanner set to warn only, with no movement of files except on overheat (don't know if that's good, not tested); Drive Limiter with duplication unchecked (don't know if it's needed, hasn't hurt); and the balancer settings as stated by Christopher above. I did a 2 TB restore by shutting down and removing the disk from the server bay. Then I restarted and removed the missing disk from DrivePool, very fast. Installed a 4 TB disk, changed the drive letter to match the removed drive, and added it to the pool. Edited snapraid.conf appropriately with the new PoolPart ID, ran the fix, and everything in the pool was where it should be, all's well. The only thing: the files showed up grey like "other" files until I did a re-measure, then a nice green bar; the pool is synced, great.
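To make the layout above concrete, here's what a snapraid.conf following those rules might look like. This is a sketch under assumptions: the drive letters and the PoolPart GUIDs are made-up placeholders; substitute the real hidden PoolPart folder names from your own disks.

```
# snapraid.conf sketch -- letters and GUIDs are placeholders
parity   P:\snapraid.parity

# one content file on a pooled disk, the rest outside the pool
content  D:\PoolPart.11111111-aaaa-bbbb-cccc-222222222222\snapraid.content
content  C:\snapraid\snapraid.content
content  P:\snapraid.content

# each data entry points at the hidden PoolPart folder, not the drive root
data d1  D:\PoolPart.11111111-aaaa-bbbb-cccc-222222222222\
data d2  E:\PoolPart.33333333-dddd-eeee-ffff-444444444444\
```

After swapping in the replacement disk and updating its PoolPart path in the conf, restoring just that disk would look something like `snapraid -d d1 fix` (the disk name `d1` here matches the placeholder conf above; check the SnapRAID manual for the exact option order in your version).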
  3. Since we're talking about SnapRAID here, a good question, at least to me. Christopher, or anyone with knowledge: do you think the .covefs folder should be synced with each drive in a SnapRAID backup? It's hidden and flagged as a system file, and therefore ignored by SnapRAID; the PoolPart folder is synced even though it's hidden, because only system files are excluded. I can change that if it's needed. If it's written fresh with every new disk added to DrivePool, then I think it's not needed when the pool data is restored, and the default .covefs can be used. The PoolPart folder is unique and can be changed in the snapraid.conf file when restoring to a new disk. What to do with this .covefs file is the question. Thanks
  4. Ha! Thanks, I was having a problem with SnapRAID and the Ordered File Placement plugin respecting file placement rules; that solved it in a jiffy, thanks. I'm trying SnapRAID because my limited server is filling up with duplicate files. I have yet to try a restore with SnapRAID, still trying to get things in order, and this really helps out. I also haven't overfilled a drive yet to reproduce habe's question; I'm sure it won't be long before that happens. I can set the limiter down to see if I can reproduce the problem here if you all would like me to try. I'll be doing a SnapRAID sync soon, so let me know.
  5. The board is a Supermicro H8DGI-F with two AMD Opteron 6320 CPUs (16 cores total), an LSI 9240-8i controller with a 16-port expander, and 32 GB of non-ECC memory.
  6. Finished the install of Server 2012 R2 Essentials on bare metal; working well so far, with all sockets, processors, and previous drives available. StableBit DrivePool is working great as usual, with data intact. One of the issues with running the server in a VM was the 8-processor limit. With WHS 2011 that's not an issue, because it won't handle more than 4 anyway, so I have it happily running in a VM with my mail server and web sites. Seems like the way to go: if you have more than 8 processors and want them all available to Windows Server 2012 R2 Essentials, it needs to be the host on bare metal. Dane, if you ever get that Gen 2 issue figured out, let me know. I'm like you, stumped over why moving an existing disk over from a WHS 2011 Gen 1 VM won't work. Our setups must have been similar.
  7. I've had a little time and rebuilt a new Gen 2 VM with more OS space; same issue. Maybe it's something in the boot order on the existing hard disk, or security issues, or what? Anyway, it looks like a format-and-copy to use Gen 2, and that would take too long, plus there's the 8-processor limit in a VM anyway. So with these issues at hand, wanting to take full advantage of the processors for the server, I decided to put Essentials on bare metal and run WHS 2011 in a VM. Hope this takes care of it; will see soon. I'll let you know how the process goes.
  8. I haven't had time to work on it. Probably something simple, my luck. I'm not an expert either, far from it. I have an LSI MegaRAID 9240-8i flashed to JBOD and a 16-port expander; I think these cards are compatible, so not an issue. Christopher, is your main server OS bare metal? Thinking it may be something going on with the VM server on free Hyper-V Core; mine is running on a 128 GB SSD, so not a lot of expansion room, with 100 GB of that dedicated to the VM. Working like a dream on Gen 1. Dane, I don't think there's much benefit once the Gen 1 is running; my understanding is faster boot times with Gen 2, so I could live without it. I don't think I'll give up just yet; when I have a day to kill I'll try to fix it.
  9. Well, I'd still like to figure it out; I still have that Gen 2 VM. Right now I'm busy getting the Gen 1 set up, and it's working well. When I have time I'll work on it some more and see if I can find what's wrong. Oh yeah, the boot disk was on SCSI 0. It looks like it tries to loop the boot a few times and then shuts down; take the extra disk off and it boots up. Probably nothing to do with DrivePool because, as Dane said, it does the same with any drive. I have had it boot with an extra drive added, so beats me, no rhyme or reason.
  10. Yeah, so I'm not the only one; that's encouraging. I have the Gen 1 VM server up and running; all is great, performs as expected. Added all the processors, and Server 2012 R2 Essentials is nice, much faster now than WHS 2011, happy so far. Question: what's up with Gen 2 and adding drives? Does anyone have a working system and knowledge of what is happening here with Gen 2?
  11. Thought I'd fixed it: added a drive to a new pool in 2012, and now it won't boot again. Don't know what's up; I've tried everything I know. This drive had no previous pool, and after adding it to the new DrivePool the server won't boot. Is there a problem with Gen 2 and DrivePool? Created a new Server 2012 Gen 1 VM to see how that goes.
  12. Well, I seem to have fixed the problem, and it was nothing to do with DrivePool at all. There was a boot file first in the firmware boot order, "bootmgfw.efi"; I moved the hard drive with the 2012 R2 Essentials .vhdx to the top of the boot order, and now it boots fine with all drives. Has anyone had a problem with bootmgfw.efi? Is it needed, should it be edited? Something is amiss with that boot file. Is this file for enabling booting from USB? Hmmm!
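For anyone hitting the same thing: on a Gen 2 (UEFI) VM, the same reorder can be done from the Hyper-V host with PowerShell instead of digging through the firmware menu. A minimal sketch, assuming the VM name and .vhdx filename below (both placeholders):

```powershell
# Sketch from the Hyper-V host -- "2012r2e" and the .vhdx name are placeholders.
$vm = "2012r2e"
$osDisk = Get-VMHardDiskDrive -VMName $vm |
    Where-Object { $_.Path -like "*2012r2e.vhdx" }
# Put the OS disk first in the Gen 2 firmware boot order
Set-VMFirmware -VMName $vm -FirstBootDevice $osDisk
```

That should keep a newly attached data disk from jumping ahead of the OS disk at the next reboot.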
  13. OK, I loved WHS 2011 running in a VM on Hyper-V 2012 R2 Core. Now I'm thinking of moving on to 2012 R2 Essentials as a VM on the core, since the price has come down some. I have a two-CPU Supermicro motherboard and want more memory and CPU processors than 2011 allows. So I set 2012 up as a Gen 2 VM and all is running great. Here's the problem: when I add any hard drive from the DrivePool pool under WHS 2011 to the new 2012 R2 Essentials VM, the new server won't start. Ironically, I can add the drives while 2012 R2 Essentials is running; as soon as I reboot, it won't start again. It will start with a drive added that was not part of the 2011 pool. The new VM's server drive is set to GPT and dedicated to that VM, not MBR like all the other drives; don't know if that's an issue. I'd like to move on to all GPT. Is there some reason the DrivePool disks will not load in 2012 R2 Essentials? I'm only running one VM at a time. The disks are installed as physical drives passed through. I have tried removing them from 2011 while it is shut down, and that doesn't work either; it really shouldn't matter anyway. Put the disks back into WHS 2011 and they work fine. Any help appreciated; I hate to have to transfer all the data and reformat, but that is looking like what I may have to do. Also, should I put Server 2012 Essentials on bare metal? That's another question. I like the versatility of the server running in a VM; that is my preference. Bare metal running just the Hyper-V core is so stable, never mind that the ease of rebuilding server software in a VM makes life simple. Now just to get 2012 R2 Essentials up and running; it looks promising except for this disk migration problem. I could put 2012 R2 Essentials on bare metal and run WHS 2011 virtually, but that is not really what I want. Your suggestions please.
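For reference, the pass-through setup described above is usually done from the host like this. A minimal sketch, assuming a placeholder VM name and disk number; the physical disk has to be offline on the host before it can be attached to a VM:

```powershell
# Sketch -- VM name and disk number are placeholders; pick the
# real disk number from Get-Disk on the host.
Set-Disk -Number 3 -IsOffline $true
Add-VMHardDiskDrive -VMName "2012r2e" -ControllerType SCSI -DiskNumber 3
```

On a Gen 2 VM the disk lands on the SCSI controller, which also puts it into the firmware boot candidate list; that may be related to the reboot failures being described.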
  14. WHS 2011 OEM, $49 with 10 CALs; now that was a steal. It does everything I need it to with DrivePool and Scanner. Running it on Hyper-V 2012 free. Wow, a virtualized home server that is as stable as can be. The only limitations it has are 4 cores, 8 GB of memory, and the 2 TB drive limit. I have 8 cores and 16 GB of memory, so Hyper-V is happy too. And SSDs for the server OS, plus one dedicated to web sites, make the server very snappy. One day I may try Essentials out, but right now I see no need in my small environment as a home and web server. Wish I had another copy to set up a small data server dedicated to MySQL; I'm running out of places to plug in more disks, lol. Looked on eBay, no luck; still can't believe the value for 50 bucks.