Covecube Inc.



  1. I have been thinking about this more over the last month or so. I have looked seriously at FreeNAS (requires new hardware, expensive) and Unraid (fewer new parts, but performance suffers during parity scans, frequent hardware freezes for me, and not virtual). Here is what I have come up with.
     Drive Pool A - new-data landing zone: Balancer = SSD Optimizer, 2x duplication, 2 x SSD as write cache, 2 x HDD as archive (maybe, if SSD space is not enough).
     Drive Pool B - long-term data: Balancer = Prevent Drive Overfill, or none (I do not want data moving often), no duplication, SnapRAID enabled.
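     Since SnapRAID runs alongside DrivePool rather than inside it, each archive disk in Pool B would be addressed directly by its own path. A minimal sketch of what that snapraid.conf could look like (drive letters, paths, and disk names here are illustrative assumptions, not an actual layout):

     ```
     # Parity file on a dedicated disk at least as large as the largest data disk
     parity P:\snapraid.parity

     # Content files (checksums and metadata); keep copies on more than one disk
     content C:\snapraid\snapraid.content
     content F:\snapraid\snapraid.content

     # The individual archive disks that make up Drive Pool B
     data d1 F:\
     data d2 G:\
     data d3 H:\

     # Skip system folders
     exclude \$RECYCLE.BIN
     exclude \System Volume Information
     ```

     After adding files, `snapraid sync` updates the parity, and `snapraid scrub` can verify it periodically.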
  2. @Christopher (Drashna) I understand StableBit's stance on this; you have a great product, and developing parity would be time-consuming and costly. Unfortunately, the parity options out there are just not user friendly and cannot be integrated into DrivePool properly. This is something I have thought about every time I need to add drives to my server; the cost of mirroring data with current large-capacity drives is getting a bit much. I cannot see my data shrinking, but perhaps my only option is less data.
  3. I am not sure real-time parity in an archive/parity pool setup would be an option without killing performance or having a custom ASIC to do the calculations on the fly, like most RAID controllers do. Hence having an active pool for changed files and an archive pool/parity pool for older, non-changing files. The difficult part would be managing an archive file being modified: somehow saving the new file/data to the active pool while keeping the archive pool and parity data from needing to be changed. You would need a way of displaying the active file to users instead of the older archive
  4. Sorry if this makes no sense; I am just typing thoughts. I know this has been talked about before on more than one occasion, but is it time to reanalyze the market and see how people feel about it? DrivePool + parity is an option I would pay for; the savings on storage would be well worth it. What I think would be a huge upgrade or add-in for StableBit is a parity option. The fact that you can have sub-pools and direct data to certain drives/pools allows for so many configuration options. What I am thinking for my own situation is this, btw I have never lost data with StableBit since it
  5. I have been researching building a new server for a while now. Just wondering if anyone is using Hyper-V with replication? What I am looking into: 2 x servers with Hyper-V, possibly Ryzen 5 2400G, 32GB RAM, HPE Smart Array with 1GB read cache (RAID 0 for each disk), 10GbE. 1 x 5GbE NIC per server dedicated to replication via crossover. Identical disks per server: 12TB, 6TB, and 4TB disks. A Windows VM with StableBit to pool the disks with no duplication. Hyper-V Replication to replicate/snapshot/sync the underlying VHD files from one server to the other every 5 minutes. If a
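     For reference, the five-minute cycle described above maps onto Hyper-V's built-in replication cmdlets. A sketch, assuming both hosts run Server 2016 or later with Kerberos authentication (the VM name, server name, port, and storage path are placeholders, not a tested setup):

     ```
     # On the replica host: accept incoming replication
     Set-VMReplicationServer -ReplicationEnabled $true `
         -AllowedAuthenticationType Kerberos `
         -KerberosAuthenticationPort 80 `
         -DefaultStorageLocation "D:\Replica" `
         -ReplicationAllowedFromAnyServer $true

     # On the primary host: replicate the StableBit VM every 5 minutes (300 s)
     Enable-VMReplication -VMName "StableBitVM" `
         -ReplicaServerName "server2" `
         -ReplicaServerPort 80 `
         -AuthenticationType Kerberos `
         -ReplicationFrequencySec 300

     # Seed the initial copy (ideally over the dedicated replication NIC)
     Start-VMInitialReplication -VMName "StableBitVM"
     ```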
  6. Apparently KB4077525 is supposed to fix the ReFS problems. See https://support.microsoft.com/en-us/help/4077525/windows-10-update-kb4077525
  7. I currently have a particular server setup: ESXi on basic AMD desktop hardware, 2 HP Smart Array controllers passed through to a Windows server, 7 disks on each HP controller (14 in total) with 7 RAID 0 arrays per controller. 3 x drive pools: 7 disks on one controller in pool A, 7 disks on the other controller in pool B, and pool C = pool A + pool B with duplication. This allows a controller or a disk to fail while the server keeps working and no data is lost. What I would like is to separate the system into 2 physical servers. A HP Smart Array controller in each server passed t
  8. I believe every time a ReFS system is booted, the file system is checked by some scheduled tasks. You might need to disable them as some others have, but consider that if MS set them up and enabled them, they are important and should be run at some stage.
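     If anyone wants to look at those tasks, they live under the Data Integrity Scan folder in Task Scheduler. A sketch in PowerShell (the exact task names may vary by Windows build, so treat these as assumptions and check the listing first):

     ```
     # List the ReFS integrity-scan tasks
     Get-ScheduledTask -TaskPath "\Microsoft\Windows\Data Integrity Scan\"

     # Disable them (at your own risk - MS enabled them for a reason)
     Disable-ScheduledTask -TaskName "Data Integrity Scan" `
         -TaskPath "\Microsoft\Windows\Data Integrity Scan\"
     Disable-ScheduledTask -TaskName "Data Integrity Scan for Crash Recovery" `
         -TaskPath "\Microsoft\Windows\Data Integrity Scan\"
     ```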
  9. I am not 100% sure, but I am pretty sure ReFS metadata is the problem; it does not show up in Task Manager as used memory. Look at the animated pink metadata map halfway down this page: https://forums.veeam.com/veeam-backup-replication-f2/refs-4k-horror-story-t40629-375.html The MS registry settings relate to flushing this data. Use RAMMap to get an idea of what is actually using your memory.
  10. From my work's perspective (Windows 2016, Veeam, 90+TB, moved to 64k ReFS format, added 64GB RAM), MS has fixed the issue as long as you use the 64k ReFS format, fit as much RAM as possible, and add a registry key or two. https://support.microsoft.com/en-us/help/4016173/fix-heavy-memory-usage-in-refs-on-windows-server-2016-and-windows-10 We are using the Option 1 registry key only.
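     For anyone applying Option 1 from that KB by hand, it is a single DWORD under the FileSystem key; a sketch in PowerShell (the value name is from KB4016173, and a reboot is needed for it to take effect):

     ```
     # Option 1 from KB4016173: trim the ReFS metadata working set more aggressively
     Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
         -Name RefsEnableLargeWorkingSetTrim -Type DWord -Value 1
     ```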
  11. A single-disk HP RAID 0 array can be moved to any system; it will have an 8MB RAID information partition, but the rest of the disk and its data will be readable if it is in NTFS format.
  12. Won't be an issue. I have 2 x P410 with 7 drives attached to each. Each drive has a RAID 0 array created on it, and DrivePool uses these for the pool. The later P-series cards (P44x) have options for running in HBA or RAID mode.
  13. This may be related: https://forums.veeam.com/veeam-backup-replication-f2/refs-4k-horror-story-t40629.html There are a couple of fixes for Server 2016 and ReFS in there, plus some registry settings. Sorry, it's 30+ pages to read through.
  14. Mine does the same. I got 24 notifications from the 24 checks done every hour over the last day.
  15. Confirming that since my last post a couple of days ago I have had no issues.