browned

Members · Posts: 28 · Days Won: 1

Reputation Activity

  1. Like
    browned reacted to Christopher (Drashna) in Hyper-V Replication   
    I should do this with my VM lab...
    But no, I've never had the opportunity to set this up. 
  2. Thanks
    browned reacted to Christopher (Drashna) in Specific CloudDrive usage   
    StableBit CloudDrive could sort of do this. But not well.
    However, it sounds more like you want to sync the contents between the two systems, and in that case, StableBit CloudDrive wouldn't be great.
    It sounds like you want syncing software, something like FreeFileSync, GoodSync, or the like. 
    Another option is to look into DFS Replication, as that may be closer to what you want. 
  3. Like
    browned got a reaction from Christopher (Drashna) in Any update on REFS?   
    I am not 100% sure, but I am pretty sure that ReFS metadata is the problem; it does not show up in Task Manager as used memory. Look at the animated pink metadata map halfway down this page: https://forums.veeam.com/veeam-backup-replication-f2/refs-4k-horror-story-t40629-375.html  The MS registry settings relate to flushing this data. Use RAMMap to get an idea of what is actually using your memory.
  4. Like
    browned got a reaction from jmone in Any update on REFS?   
    From my work's perspective (Windows 2016, Veeam, 90+ TB, moved to the 64K ReFS format, added 64 GB RAM), MS has fixed the issue as long as you use the 64K ReFS format, fit as much RAM as possible, and add a registry key or two.
     
    https://support.microsoft.com/en-us/help/4016173/fix-heavy-memory-usage-in-refs-on-windows-server-2016-and-windows-10
     
    We are using the Option 1 registry key only.
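    For anyone who doesn't want to dig through the KB article: as far as I can tell, Option 1 there is a single DWORD value under the FileSystem key. A .reg sketch of it, from memory, so double-check the value name against the KB before importing:

    ```
    Windows Registry Editor Version 5.00

    ; KB4016173, Option 1: lets Windows trim the ReFS metadata working set
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
    "RefsEnableLargeWorkingSetTrim"=dword:00000001
    ```

    I believe a reboot is needed for it to take effect.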
  5. Like
    browned got a reaction from Christopher (Drashna) in HP P212 Raid card and Drivepool   
    It won't be an issue. I have 2 x P410 cards with 7 drives attached to each. Each drive has a RAID 0 array created on it, and DrivePool uses these for the pool. The later P-series cards (P44x) have options for running in HBA or RAID mode.
  6. Like
    browned got a reaction from Christopher (Drashna) in Beta after 747 have memory issues?   
    Confirming that since my last post a couple of days ago, I have had no issues.
  7. Like
    browned got a reaction from Antoineki in Beta after 747 have memory issues?   
    Just a quick note to say that any beta I have tried after 747 has, after a few hours, caused the server to die a terrible and slow death of memory exhaustion.
     
    I do not have logs and have not bothered to mention it before, as I have been too busy. But having tried most builds after 747 to see if the performance and memory issues are fixed, I thought I had better post since I had some time.
     
    I have a VMware-based Windows 2016 server: 4 cores, 8 GB RAM (now 12 GB), and 2 x HPE P410 Smart Array cards with 7 x RAID 0 arrays each. Then I have a pool within a pool: Pool A consists of 7 x RAID 0 disks (20 TB), Pool B consists of 7 x RAID 0 disks (20 TB), and Pool C consists of Pool A and Pool B, duplicated.
     
    One thing I noticed is that StableBit was checking Pool C, and it sat at 3% for many hours, with the file details changing every 5 to 10 seconds.
     
    Hopefully you have noticed something similar, or can replicate it without logs, as the impact on my server is too drastic for it to run more than a few hours.
  8. Like
    browned got a reaction from Christopher (Drashna) in Subpools or Drive Groups Within a Pool   
    Yes, sorry: one copy of each file on each SATA card. There are two reasons for this. The first is the obvious one, duplication of data on separate cards in case of failure. The second is that sending and receiving data from two cards at the same time should be faster than duplicating on the same card. The duplication seems to have settled down after the balancing, so I will leave the logging. But it seemed to be balancing Pool A or B and duplicating Pool C at the same time; it would be better if those tasks ran separately.
     
    An idea for the future: detect a drive pool within a drive pool and automatically disable features that aren't required, or prioritize the tasks based on user preferences. For example, a disk replacement or addition will start a balancing process; in the case of a replacement disk where data is not duplicated in Pool C, it would be better to duplicate Pool C first, and balance Pool A only after successful duplication. This would also create less balancing work, as the duplication process should use the empty drive in Pool A, therefore requiring less balancing.
     
    At the moment, I have Pool A and Pool B configured with all performance options enabled, balancing for drive overfill and volume equalization, and no rules. Pool C has all performance options enabled except bypass system filters, pool duplication enabled, no balancing, and no rules.