Covecube Inc.
  1. I appreciate that you've already looked into it; I figured you might have. A Scanner WinPE disk would still be useful, though, so that is some good news. In theory I think you could do the restore from another machine with the cloud drive mounted and the target disk inside, using the Server Backup console. Is that something you've personally tested? The end goal is to have the backups on a network share on a separate machine rather than on an external HD. My pool is actually full, I have no single spare drive large enough for the server backups, and due to budget constraints I can't buy another drive just yet.
  2. I have been running two copies of Scanner for a while now: Hyper-V on Standard with an Essentials VM and a separate DC, along with a few other VMs, booting from RAID 1 VelociRaptors with a 240GB SSD as primary VM storage. All data drives are passed through to the Essentials VM because of the client backup limitations. I was merely suggesting a software workaround so that when drives are passed through directly to the VM, the host copy of Scanner can communicate the SMART data to the guest, restoring full functionality of drive evacuation, etc. It still won't solve the VM failing to boot when a drive dies in a weird way, lol. I guess this topic got derailed; while PCIe passthrough would be ideal, it has its own issues.
  3. I think I'm gonna try it on my backup server if I can ever get enough time away from work. I'm hoping to be able to stick with Hyper-V as my host. Drashna, I think I mentioned something a while back about implementing a feature for remote control that would allow the host to pass SMART data to the VM in a scenario where the disk is passed through. How hard would that be to implement? That may be a more elegant solution than trying to mess with IOMMU and all that, especially considering that Microsoft will probably never officially support it, so it is likely to be buggy if it works at all.
  4. Has anyone successfully passed an LSI-based controller through to a VM under Hyper-V so that Scanner can get all the SMART data off of each drive? Even though Server 2016 has been out for a while, I'm still finding little documentation on DDA/PCIe passthrough under Hyper-V, and most of it seems GPU-centric. I'm running Essentials 2016 as a VM with each individual drive passed through, and would really like to just pass the entire controller through to the guest to alleviate headaches when a drive dies or drops out.
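  For anyone else hunting for this: Discrete Device Assignment on Server 2016 is driven entirely from PowerShell, and the same steps documented for GPUs apply to a storage controller. A minimal sketch (the `*LSI*` friendly-name filter and the "EssentialsVM" VM name are placeholders, and the controller and platform must actually support DDA for this to work):

  ```powershell
  # Find the controller and read its PCIe location path
  # (assumes an LSI SAS controller; adjust the friendly-name filter)
  $dev = Get-PnpDevice -FriendlyName "*LSI*" | Select-Object -First 1
  $locationPath = ($dev | Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths).Data[0]

  # Disable the device on the host, then dismount it from the host partition
  Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
  Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

  # Hand it to the guest ("EssentialsVM" is a placeholder VM name)
  Add-VMAssignableDevice -LocationPath $locationPath -VMName "EssentialsVM"
  ```

  Reversing it is `Remove-VMAssignableDevice` followed by `Mount-VMHostAssignableDevice` with the same location path. Once the controller is inside the guest, Scanner in the VM should see the drives natively, SMART and all.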
  5. I was just wondering what the recovery scenario would be when using a CloudDrive drive as the server backup target disk under Windows Server 2016 Essentials, particularly with regard to doing a bare-metal restore. My assumption is that the CloudDrive would have to be mounted on another system along with the disk being restored to, the recovery performed there, and then the machine booted off of the restored disk. Could someone please confirm or deny whether my assumption is correct? Furthermore, if that is the recovery scenario, would it theoretically be possible to have a WinPE-based boot disk with a version of CloudDrive baked in that would allow a more seamless recovery? And if so, is that something someone with a little more skill working with images would be willing to make?
  6. Glad to hear you're not having any issues with it. I know the support isn't quite there yet, but the pros seem to outweigh the cons with regard to file integrity. For WSUS I intend to keep the SQL database on a drive outside of the pool; I just want the actual content files on the pool, and that requires a drive that reports as NTFS. As of now DrivePool still reports the pool drive itself as NTFS, so in theory it should work even with the underlying drives formatted as ReFS. I guess you might not have a clear answer on whether it does, as that's something you probably haven't tested. I might either bite the bullet and make the transition, or do a small-scale test in a VM and see the results for myself. Assuming that it works, would it be possible in later versions of DrivePool to implement a way to specify what the pool reports as (NTFS or ReFS) for a pool consisting of ReFS-formatted drives? All that aside, write performance isn't terribly important, as I will have multiple SSD cache drives. What I'm really concerned about is whether there is any read penalty with integrity checking on versus NTFS.
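  As an aside for anyone relocating the content files: WSUS ships a tool for moving the content store, so it can be migrated onto (or back off of) the pool drive after the fact. A minimal sketch from an elevated command prompt, assuming the default install path and a hypothetical P:\WSUS target on the pool:

  ```
  cd "C:\Program Files\Update Services\Tools"
  :: Move the WSUS content store to P:\WSUS (placeholder pool path); the log file argument is required
  wsusutil.exe movecontent P:\WSUS\WsusContent\ P:\WSUS\move.log
  ```

  The tool copies the files and updates the WSUS configuration to point at the new path, which is where the NTFS-reporting question above would come to a head.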
  7. So I have my server migrated to 2016 Essentials and I'm contemplating shuffling my pool around and reformatting my drives as ReFS with integrity streams on. I know I've seen Drashna post somewhere that his pool is using ReFS now, so I'm assuming that development is far enough along to be fairly stable. My server is primarily a media and backup server, and the data integrity features ReFS should provide would be a major plus. I'm just wondering which build of DrivePool I should use, as well as any caveats I should know about before making the move (especially if there are any drastic hits to performance versus 64K NTFS). Also on the same note, one of the few services I haven't set up yet on the server is WSUS. I have run the content store on the pool before with success, but was wondering if that would still be possible with ReFS as the underlying filesystem of the drives, since WSUS requires an NTFS-formatted partition. Thanks in advance to Drashna and anyone else who can provide some input! Edit: I should also note that my network is 10Gb-capable, and although I'm not quite maxing it out, I am taxing it a bit at times, so read performance is quite important; write not so much, since I'll be adding a few SSDs for cache before too long.
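  For anyone attempting the same reshuffle: integrity streams can be switched on at format time and then inspected or toggled per file from PowerShell. A minimal sketch (the X: drive letter and the media path are placeholders; reformatting destroys the data on the volume):

  ```powershell
  # Format a pool member as ReFS with integrity streams on by default (wipes X:)
  Format-Volume -DriveLetter X -FileSystem ReFS -SetIntegrityStreams $true

  # Inspect or toggle integrity on individual files/folders afterwards
  Get-FileIntegrity -FileName "X:\Media\example.mkv"
  Set-FileIntegrity -FileName "X:\Media" -Enable $true
  ```

  This makes it easy to run the kind of small-scale VM test mentioned above, and to exclude hot paths from integrity checking if the read penalty turns out to matter.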
  8. I have a feature suggestion regarding Hyper-V and SMART data. It is well known that Hyper-V emulates the disk controller and cannot pass along SMART data, the workaround being to run two copies of Scanner, one on the VM host and the other in the VM itself. With the remote control feature it is possible to monitor both instances. I was wondering whether it is technically possible to implement deeper integration between multiple copies, in what would effectively be a "VM host mode": it would let you associate the passed-through drives on the host with the corresponding drives inside the VM, so that the SMART data can effectively be passed through to the VM and the drive evacuation feature in DrivePool re-enabled. I hope the description of my concept is clear enough, and I would like to hear feedback from either Alex or Drashna on whether this is possible or would be considered for a future release. Thanks again for these amazing products!