Covecube Inc.


About stuart475898

  1. I don't know, sorry. I'm running the v2.2.3.1012 beta and have had no issues, however.
  2. The fix isn't in the latest stable yet, so you need to download a beta version: http://dl.covecube.com/DrivePoolWindows/beta/download/
  3. Correction to my post above: I just needed to put a : in after the volume letter. DrivePool + ReFS + Data Deduplication is working on Server 2019.
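     For anyone hitting the same error, a minimal sketch of the fix described above (assuming the pool volume is mounted as V:, as in the post below; the trailing colon is the change):

     ```powershell
     # Fails with HRESULT 0x80070057, "The parameter is incorrect":
     # Enable-DedupVolume -Volume V -UsageType HyperV

     # Works: give the volume letter with a trailing colon
     Enable-DedupVolume -Volume V: -UsageType HyperV
     ```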
  4. I downloaded v2.2.3.1012 yesterday, as this issue appears to have been fixed under 28178. Unfortunately I am still unable to use data deduplication:

     [hyper-v1]: PS C:\Users\stuart.admin\Documents> Enable-DedupVolume -Volume V -UsageType HyperV
     Enable-DedupVolume : MSFT_DedupVolume.Volume='V' - HRESULT 0x80070057, The parameter is incorrect.
         + CategoryInfo          : InvalidArgument: (MSFT_DedupVolume:ROOT/Microsoft/...SFT_DedupVolume) [Enable-DedupVolume], CimException
         + FullyQualifiedErrorId : HRESULT 0x80070057,Enable-DedupVolume
     [hyper-v1]: PS C:\Users\stuart.admin\Document
  5. I was just about to raise this as an issue, as some troubleshooting had led me to believe DrivePool was involved. I've updated to the 1008 beta, but it appears the same fix is needed for ReFS:

     [hyper-v1]: PS C:\Users\stuart.admin\Documents> (Get-Item 'C:\Program Files\StableBit\DrivePool\DrivePool.Service.exe').VersionInfo
     ProductVersion FileVersion FileName
     -------------- ----------- --------
                                C:\Program Files\StableBit\DrivePool\DrivePool.Service.exe
     [hyper-v1]: PS C:\Users\stuart.admin\Documents> Get-WinEvent -LogName "Micros
  6. I'd like to have real-time parity stored on a dedicated parity disk. SnapRAID exists, but it doesn't do real-time parity, and if a disk goes you lose availability of those files until you can get a new disk and rebuild from parity. DrivePool with built-in parity would beat RAID: if a disk goes along with the parity one, you have still only lost the files on the original disk. While the rebuild goes on, you still have availability of the pool and of the files on the faulty disk.
  7. Hello, I'm having trouble removing a CloudDrive disk from DrivePool, which I believe is largely due to a ~215GB file that DrivePool keeps failing to copy to a local disk. I figure I may have more success with Robocopy, and was wondering if it's possible to Robocopy the file directly from the CloudDrive PoolPart folder to the PoolPart folder on one of the local disks? If not, I'll Robocopy it to a non-pooled disk. Thanks, Stuart
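     A single-file copy between PoolPart folders could be sketched like this — the drive letters, PoolPart GUID folder names, and file name here are placeholders, not from the post:

     ```powershell
     # Source dir, destination dir, then a file filter naming just the one file.
     # /z  = restartable mode, so a ~215GB copy can resume after an interruption
     # /j  = unbuffered I/O, which helps with very large files
     # /r and /w tune retry count and wait between retries
     robocopy "D:\PoolPart.aaaa\Backups" "E:\PoolPart.bbbb\Backups" bigfile.vhdx /z /j /r:5 /w:5
     ```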
  8. Hello, I need to remove all files from my CloudDrive, as some backups have ended up on there, which is slaughtering my internet allowance. I'm trying to remove the CloudDrive from DrivePool, but keep hitting errors about missing files, and CloudDrive is complaining about issues with the cache. I'd like to move the CloudDrive cache off the spinning disks it's on now and put it onto an SSD. When I tell CloudDrive to detach the disk, however, it states "Access is denied" once everything queued for upload is complete. Is there a solution to this problem? Thanks, Stuart
  9. Having tried a few different types of storage 'systems' for my VMs, the most recent being SS, I've decided to go back to straight-up SSDs + dedupe, as it's given me the best performance + capacity. Normally I would go with a RAID1 or a RAID5, but some people appear to have had success with storing VMs on DP. I've done some searches and found some older posts about VMs + DP and dedupe + DP, but wanted to try and get an update or idea of where things are with it all now. The plan is to start off with 2x SSDs added to a pool. Both SSDs will have NTFS dedupe enabled on them. I will then selectively
  10. Thanks for the reply. Thinking about it, you can more or less ignore the Virtual SAN bit actually - I am just asking: can I present 4 normal file shares to a VM, have CloudDrive 'mount' them as disks, and then add them into DrivePool? Will CloudDrive/DrivePool gracefully handle losing 2 of the file shares in the event a host has to go down? Stuart
  11. Hello, I have recently been playing with a pair of microservers, and have put Server Core 2016 on them + Starwind Virtual SAN, giving me a nice Hyper-V failover cluster. If testing this turns out well, I'd like to use it for both the home lab and the normal home server. The issue is that all of our films, TV series, files etc. come in at around 8TB, which won't be part of the virtual SAN. This means the 4x 3TB disks I currently have will be distributed between the microservers. I'd like to continue to use DrivePool, and it looks like the best way to achieve what I want is to share out each individual
  12. Hello, so I finally snapped up some 4TB external drives in today's sales, and I want to use them for off-site backups. To do this, I would like to bring them home, plug them into the storage server and then simply robocopy everything on. If I create a new pool solely for these disks, will DrivePool 'gracefully' handle them being disconnected when I take them off site, and then reconnected every couple of weeks when they are brought home? Thanks for the help, Stuart
  13. Thank you for the reply. Not a problem. For now, I am using the following PowerShell script to move everything. It won't copy files at the top level, only folders, however:

     Get-ChildItem p: | ForEach-Object {
         # >> appends, so each folder's robocopy output is kept in the log
         robocopy p:\$_ \\tv-server\p$\$_ /e /W:5 /R:10 /MT:32 >> p:\backup.log
     }
  14. Hello, I'm having exactly the same issue as described here. I take it this still isn't fixed in the current stable release of DrivePool, and I need to use the beta to robocopy from the pool root? Thanks, Stuart
  15. Never mind. I ran the Re-Measure function on the pool and it is now reporting the correct usage.