
stuart475898

Members
  • Posts

    17
  • Joined

  • Last visited


stuart475898's Achievements

  1. I don't know, sorry. I'm running the v2.2.3.1012 beta and had no issues, however.
  2. The fix isn't in the latest stable yet, so you need to download a beta version: http://dl.covecube.com/DrivePoolWindows/beta/download/
  3. Correction to my post above: I just needed to put a colon after the volume letter. DrivePool + ReFS + Data Dedupe is working on Server 2019.
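
     For anyone landing here with the same HRESULT 0x80070057, a minimal sketch of the corrected call, assuming the pooled volume is mounted as V: (the drive letter is illustrative):

        # Enable-DedupVolume expects the drive letter with a trailing colon;
        # passing just "V" is what produces "The parameter is incorrect".
        Enable-DedupVolume -Volume "V:" -UsageType HyperV
        Get-DedupVolume -Volume "V:"   # confirm deduplication is now enabled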
  4. I downloaded v2.2.3.1012 yesterday, as this issue appears to have been fixed under 28178. Unfortunately I am still unable to use data deduplication:

        [hyper-v1]: PS C:\Users\stuart.admin\Documents> Enable-DedupVolume -Volume V -UsageType HyperV
        Enable-DedupVolume : MSFT_DedupVolume.Volume='V' - HRESULT 0x80070057, The parameter is incorrect.
            + CategoryInfo          : InvalidArgument: (MSFT_DedupVolume:ROOT/Microsoft/...SFT_DedupVolume) [Enable-DedupVolume], CimException
            + FullyQualifiedErrorId : HRESULT 0x80070057,Enable-DedupVolume

        [hyper-v1]: PS C:\Users\stuart.admin\Documents> (Get-Item 'C:\Program Files\StableBit\DrivePool\DrivePool.Service.exe').VersionInfo

        ProductVersion FileVersion FileName
        -------------- ----------- --------
        2.2.3.1012     2.2.3.1012  C:\Program Files\StableBit\DrivePool\DrivePool.Service.exe

     However, I am no longer getting any errors in the deduplication operational log with the new version:

        [hyper-v1]: PS C:\Users\stuart.admin\Documents> get-date
        22 August 2019 18:31:44

        [hyper-v1]: PS C:\Users\stuart.admin\Documents> Enable-DedupVolume -Volume V -UsageType HyperV
        Enable-DedupVolume : MSFT_DedupVolume.Volume='V' - HRESULT 0x80070057, The parameter is incorrect.
            + CategoryInfo          : InvalidArgument: (MSFT_DedupVolume:ROOT/Microsoft/...SFT_DedupVolume) [Enable-DedupVolume], CimException
            + FullyQualifiedErrorId : HRESULT 0x80070057,Enable-DedupVolume

        [hyper-v1]: PS C:\Users\stuart.admin\Documents> Get-WinEvent -LogName "Microsoft-Windows-Deduplication/Operational" -MaxEvents 1 | fl

        TimeCreated  : 21/08/2019 18:45:01
        ProviderName : Microsoft-Windows-Deduplication
        Id           : 4105
        Message      : Data Deduplication error: Unexpected error.
                       Operation: Starting File Server Deduplication Service.
                       Error-specific details: Error: DeviceIoControl (FSCTL_GET_REFS_VOLUME_DATA), 0x80070001, Incorrect function.

     So it appears something may still be broken; note the most recent 4105 event is from the day before, so today's failed call logged nothing new. For reference, I can successfully reproduce this in a Server 2019 Datacenter VM. I tried a fresh install of 1012 in a new Server 2019 VM, and I get similar results.
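
     A quick way to confirm whether a failed Enable-DedupVolume call wrote anything new to the operational log is to filter by time rather than taking only the latest event. A minimal sketch; the StartTime value is an illustrative assumption, set it to just before the retry:

        # List all Data Deduplication events written since the given time.
        Get-WinEvent -FilterHashtable @{
            LogName   = 'Microsoft-Windows-Deduplication/Operational'
            StartTime = (Get-Date '2019-08-22 18:00')
        } -ErrorAction SilentlyContinue | Format-List TimeCreated, Id, Message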
  5. I was just about to raise this as an issue, as some troubleshooting had led me to believe DrivePool was involved. I've updated to the 1008 beta, but it appears something similar needs doing for ReFS:

        [hyper-v1]: PS C:\Users\stuart.admin\Documents> (Get-Item 'C:\Program Files\StableBit\DrivePool\DrivePool.Service.exe').VersionInfo

        ProductVersion FileVersion FileName
        -------------- ----------- --------
        2.2.3.1008     2.2.3.1008  C:\Program Files\StableBit\DrivePool\DrivePool.Service.exe

        [hyper-v1]: PS C:\Users\stuart.admin\Documents> Get-WinEvent -LogName "Microsoft-Windows-Deduplication/Operational" -MaxEvents 1 | fl

        TimeCreated  : 01/08/2019 08:46:59
        ProviderName : Microsoft-Windows-Deduplication
        Id           : 4105
        Message      : Data Deduplication error: Unexpected error.
                       Operation: Starting File Server Deduplication Service.
                       Error-specific details: Error: DeviceIoControl (FSCTL_GET_REFS_VOLUME_DATA), 0x80070001, Incorrect function.
  6. I'd like to have real-time parity stored on a dedicated parity disk. SnapRAID exists, but it doesn't do real-time parity, and if a disk dies you lose availability of those files until you can get a new disk and rebuild from parity. DrivePool with built-in parity would beat RAID: if a disk fails along with the parity disk, you have still only lost the files on the original disk, and while the rebuild goes on you still have availability of the pool and of the files on the faulty disk.
  7. Hello, I'm having trouble removing a CloudDrive disk from DrivePool, which I believe is largely due to a ~215GB file that DrivePool keeps failing to copy to a local disk. I figure I may have more success with Robocopy, and was wondering if it's possible to Robocopy the file directly from the CloudDrive PoolPart folder to the PoolPart folder on one of the local disks? If not, I'll Robocopy it to a non-pooled disk. Thanks Stuart
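
     For anyone attempting the same thing, a minimal sketch of such a copy; the drive letters, PoolPart folder names, and file name are all hypothetical placeholders, and the file should keep the same relative path inside the destination PoolPart folder so DrivePool recognises it:

        # /J uses unbuffered I/O, which tends to suit very large files;
        # short retry settings avoid long stalls. All paths are illustrative.
        robocopy "D:\PoolPart.aaaa\Media" "E:\PoolPart.bbbb\Media" "bigfile.mkv" /J /R:2 /W:5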
  8. Hello, I need to remove all files from my CloudDrive, as some backups have ended up on there, which is slaughtering my internet allowance. I'm trying to remove the CloudDrive from DrivePool, but keep hitting errors about missing files, and CloudDrive is complaining about issues with the cache. I'd like to move the CloudDrive cache off the spinning disks it's on now and put it onto an SSD. When I tell CloudDrive to detach the disk, however, it states that access is denied once everything queued for upload is complete. Is there a solution to this problem? Thanks Stuart
  9. Having tried a few different types of storage 'systems' for my VMs, the most recent being SS, I've decided to go back to straight-up SSDs + dedupe, as that has given me the best balance of performance and capacity. Normally I would go with a RAID1 or a RAID5, but some people appear to have had success with storing VMs on DP. I've done some searches and found some older posts about VMs + DP and dedupe + DP, but wanted to get an idea of where things stand with it all now. The plan is to start off with 2x SSDs added to a pool, both with NTFS dedupe enabled (see the sketch below). I will then selectively duplicate VMs, although as I grow the storage I will likely duplicate the whole pool. When I want to grow, I'll add another SSD and let re-balancing do its job. Is this viable? Has anyone had any experience with this? Ultimately I'd like to see if I can extend it to a failover cluster using Starwind VSAN, but I'm a while off that yet. Thanks for the help
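
     A minimal sketch of the dedupe half of that plan, assuming the two SSDs backing the pool are mounted as E: and F: (illustrative letters); dedupe is enabled on each underlying NTFS volume, not on the pool drive itself:

        # Requires the Data Deduplication feature (FS-Data-Deduplication) to be installed.
        Enable-DedupVolume -Volume "E:", "F:" -UsageType HyperV   # profile intended for VM storage
        Start-DedupJob -Volume "E:", "F:" -Type Optimization      # run an initial optimisation pass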
  10. Thanks for the reply. Thinking about it, you can more or less ignore the Virtual SAN bit; I am really just asking whether I can present 4 normal file shares to a VM, have CloudDrive 'mount' them as disks, and then add them into DrivePool. Will CloudDrive/DrivePool gracefully handle losing 2 of the file shares in the event a host has to go down? Stuart
  11. Hello, I have recently been playing with a pair of microservers and have put Server Core 2016 on them plus Starwind Virtual SAN, giving me a nice Hyper-V failover cluster. If testing this turns out well, I'd like to use it for both the home lab and the normal home server. The issue is that all of our films, TV series, files etc. come in at around 8TB, which won't be part of the virtual SAN. This means the 4x 3TB disks I currently have will be distributed between the microservers. I'd like to continue to use DrivePool, and it looks like the best way to achieve what I want is to share out each individual hard disk and add the lot into CloudDrive running on a highly available VM that also runs DrivePool. I don't think I will want to use any CloudDrive features such as pre-fetching or local caching; I only want the shares presented as physical disks for DrivePool. What are everyone's thoughts on this? When one of the microservers is shut down, the VM running DrivePool should fail over and continue to provide a service. I take it CloudDrive will gracefully handle the shares going missing from the shutdown server? [Diagram: each microserver's disks (one SSD and two HDDs apiece) are individually SMB-shared to the VM running DrivePool and CloudDrive; the resulting pool is shared back out via SMB, while the two microservers' Virtual SAN provides the CSV for the VMs.] Hope that makes sense. Thanks for the help! Stuart
  12. Hello, so I finally snapped up some 4TB external drives in today's sales, and I want to use them for off-site backups. To do this, I would like to bring them home, plug them into the storage server and then simply robocopy everything on. If I create a new pool solely for these disks, will DrivePool 'gracefully' handle them being disconnected when I take them off site, and then reconnected every couple of weeks when they are brought home? Thanks for the help Stuart
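
     For the periodic refresh when the drives come home, a mirror-style run is one option; P: (the source pool) and X: (the backup pool) are illustrative letters:

        # /MIR mirrors the tree, including deleting destination files removed from
        # the source, so only changes since the last visit are copied each time.
        robocopy P:\ X:\ /MIR /W:5 /R:2 /LOG+:C:\logs\offsite-backup.log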
  13. In "robocopy": Thank you for the reply. Not a problem. For now, I am using the following PowerShell script to move everything. It won't copy files at the top level, only folders, however (a single-command alternative that avoids this is sketched below):

        # Loop over each top-level folder on P: and robocopy it to the server share.
        # '>>' appends to the log; the original '>' overwrote it on every iteration.
        Get-ChildItem p: | ForEach-Object {
            robocopy p:\$_ \\tv-server\p$\$_ /e /W:5 /R:10 /MT:32 >> p:\backup.log
        }
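
     For what it's worth, a single invocation covers the top-level files as well; this one-liner is a sketch using the same hypothetical destination share:

        # /e copies subfolders (including empty ones) plus the files in the root of p:\
        # The log is written outside the source tree so robocopy doesn't copy it mid-write.
        robocopy p:\ \\tv-server\p$ /e /W:5 /R:10 /MT:32 /LOG:C:\backup.log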
  14. In "robocopy": Hello, I'm having exactly the same issue as described here. I take it this still isn't fixed in the current stable release of DrivePool, and I need to use the beta to robocopy from the pool root? Thanks Stuart
  15. Never mind. I ran the Re-Measure function on the pool and it is now reporting correct usage.