Covecube Inc.

browned

Members
  • Content count

    22
  • Joined

  • Last visited

  • Days Won

    1
  1. Specific CloudDrive usage

    I currently have a particular server setup: ESXi on basic AMD desktop hardware, with 2 HP Smart Array controllers passed through to a Windows server and 7 disks on each controller (14 in total), configured as 7 RAID 0 arrays per controller. There are 3 drive pools: the 7 disks on one controller in Pool A, the 7 disks on the other controller in Pool B, and Pool C = Pool A + Pool B with duplication. This allows a controller or a disk to fail while the server keeps working and no data is lost. What I would like is to separate the system into 2 physical servers, with an HP Smart Array controller in each, passed through to Windows. This would give more resilience, and I would also be able to split the workload between the servers. Again I am thinking of basic AMD desktop parts to keep costs down. My questions:
    - Is this possible, given the two drive pools would now be a 10GbE network link apart?
    - Can StableBit CloudDrive be used in this situation, i.e. to connect to a DrivePool drive on another server?
    - What would happen if a disk failed on the cloud copy?
    - What would happen with re-syncing if a disk failed on the primary server?
    - How long would syncing take between systems on a 10GbE network?
    I am not too interested in other tech like unRAID or vSAN, and I haven't looked into a Windows Cluster or a spanned StableBit drive pool, though that may be an option. I like being able to pull a dead or dying drive, plug it into any Windows machine, and recover data if needed. The wife understands that if things are not working she can press the power button, wait, then press it again and start it all up; if that fails to fix it then it must be something more. Thanks for any help.
  2. Any update on REFS?

    I believe that every time a ReFS system is booted, the file system is checked by some scheduled tasks. You might need to disable them, as some others have, but bear in mind that if MS set them up and enabled them, they are presumably important and should be run at some stage.
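    The scheduled tasks in question are, on my 2016 systems, under the "Data Integrity Scan" task path; a sketch of how to list (and, if you choose, disable) them from an elevated PowerShell prompt. The task path is an assumption from my own machines, so verify it on yours before disabling anything:

```shell
# List the ReFS data-integrity scrubber tasks (elevated PowerShell)
Get-ScheduledTask -TaskPath "\Microsoft\Windows\Data Integrity Scan\"

# Optionally disable them, as some in this thread have done -- at your own risk
Get-ScheduledTask -TaskPath "\Microsoft\Windows\Data Integrity Scan\" | Disable-ScheduledTask
```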
  3. Any update on REFS?

    I am not 100% sure, but I am pretty sure that ReFS metadata is the problem; it does not show up in Task Manager as used memory. Look at the animated pink metadata map halfway down this page: https://forums.veeam.com/veeam-backup-replication-f2/refs-4k-horror-story-t40629-375.html The MS registry settings relate to flushing this data. Use RAMMap to get an idea of what is actually using your memory.
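    If you want quick numbers to compare against RAMMap, the paged and non-paged pool sizes are also exposed as standard performance counters; a one-line diagnostic sketch in PowerShell:

```shell
# Show current paged and non-paged pool usage in bytes (PowerShell)
Get-Counter '\Memory\Pool Paged Bytes','\Memory\Pool Nonpaged Bytes'
```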
  4. Any update on REFS?

    From my work's perspective (Windows 2016, Veeam, 90+ TB, moved to 64k ReFS format, added 64 GB RAM), MS has fixed the issue as long as you use the 64k ReFS format, fit as much RAM as possible, and add a registry key or two. https://support.microsoft.com/en-us/help/4016173/fix-heavy-memory-usage-in-refs-on-windows-server-2016-and-windows-10 We are using the Option 1 registry key only.
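    For reference, the "Option 1" key from the linked Microsoft KB can be applied like this. The value name below is taken from that KB article; double-check it there before applying, and reboot afterwards:

```shell
REM Option 1 from KB4016173: enable aggressive trimming of the ReFS metadata working set
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v RefsEnableLargeWorkingSetTrim /t REG_DWORD /d 1 /f
```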
  5. HP P212 Raid card and Drivepool

    A single-disk HP RAID 0 array can be moved to any system; it will have an 8 MB RAID information partition, but the rest of the disk and its data will be readable if it is in NTFS format.
  6. HP P212 Raid card and Drivepool

    It won't be an issue. I have 2 x P410 with 7 drives attached to each. Each drive has a RAID 0 array created on it, and DrivePool uses these for the pool. The later P-series cards (P44x) have options for running in HBA or RAID mode.
  7. REFS causes memleak?

    This may be related: https://forums.veeam.com/veeam-backup-replication-f2/refs-4k-horror-story-t40629.html There are a couple of fixes for Server 2016 and ReFS in there, as well as some registry settings. Sorry, it's 30+ pages to read through.
  8. Drivepool keeps checking and duplicating

    Mine does the same. I got 24 notifications from the 24 checks done (one every hour) over the last day.
  9. Beta after 747 have memory issues?

    Confirming that since my last post a couple of days ago I have had no issues.
  10. Beta after 747 have memory issues?

    I had some time and grabbed this yesterday. It has been running for almost 24 hours and total system RAM use is 3.2 GB, paged pool 850 MB, non-paged 256 MB. The memory leak seems to be resolved, as my system used to die after a few hours before. Great work, thanks.
  11. Beta after 747 have memory issues?

    Not sure it is a leak; on my system it seems more like a river. As I said in my first post, I had 8 GB RAM and increased it to 12 GB. This was exhausted in a matter of hours. I did note that there was 4.5 GB of paged pool used, and about 4 GB of non-paged pool memory used. Nothing else stood out, as no actual applications listed excessive memory usage. As a comparison, with my server reverted to 747 and running for a day and a half, it is using 3.4 GB of 12 GB RAM; paged pool is 659 MB and non-paged pool is 257 MB. Maybe next time I upgrade (it won't be for a while) I will hopefully have more time to investigate.
  12. Beta after 747 have memory issues?

    Just a quick note to say that any beta I have tried after 747 has, after a few hours, caused the server to die a terrible and slow death of memory exhaustion. I do not have logs and have not bothered to mention it before as I have been too busy, but on trying most builds after 747 to see if the performance and memory issues are fixed, I thought I had better post since I had some time. I have a VMware-based Windows 2016 server: 4 cores, 8 GB RAM (now 12 GB), and 2 x HPE P410 Smart Array cards with 7 x RAID 0 arrays each. Then I have a pool within a pool: Pool A consists of 7 x RAID 0 disks (20 TB), Pool B consists of 7 x RAID 0 disks (20 TB), and Pool C consists of Pool A and Pool B duplicated. One thing I noticed is that StableBit was checking Pool C and it sat at 3% for many hours, with the file details changing every 5 to 10 seconds. Hopefully you have noticed something similar, or can replicate it without logs, as the impact on my server is too drastic for it to run more than a few hours.
  13. Subpools or Drive Groups Within a Pool

    Well, after leaving the server for a few days it seems to have stopped the repeated duplication runs. There is only 4 KB unduplicated now. Does DrivePool support the Windows 2016 long path names feature? They have finally got rid of the 260-character limit. Can I use DPCMD to find the "other" files in a pool or on a pool drive?
  14. Subpools or Drive Groups Within a Pool

    Ok, I am about to upload some more logs. This is the process I have been doing that causes the pool issues. My setup again: Pool A and Pool B, 7 disks each, no duplication; Pool C contains Pool A and B with duplication enabled. There have been files in the pool that, according to DrivePool, have not been duplicated. I thought the directory structure might be too long, so I renamed a folder, say from Stuff to St; note the contents of the folder were around 100k+ small files ranging from 1 KB to 500 KB. This seems to kick off some events. The other time I have seen this process I was deleting a large directory structure full of 500k metadata files (txt, xml, jpg, etc.). My visual monitoring suggests:
    - Pool A and B are fine.
    - Pool C has issues; a duplication run starts.
    - I see the unduplicated space growing.
    - I check the folder duplication; even though I use pool duplication, random folders are set to 1x, not 2x.
    - I wait, and eventually the unduplicated space stops growing.
    - Checking the folder duplication again, I see that all folders are back to 2x.
    - Then duplication starts again after DrivePool checks and says it is not consistent.
    - I have manually checked the files and folder structure and see there are missing files and folders.
    - After a day or so the data is duplicated again.
    So it looks like large operations that delete, rename, or move files cause strange events to happen with the duplication settings. Hopefully someone can reproduce this issue, as I think it has happened 4 or more times now.
  15. Subpools or Drive Groups Within a Pool

    I have just upgraded to 746. I changed the $recycle.bin to 1x duplication and checked that pool and folder duplication was still 2x for everything else. Also, the task failed error has gone, as per the 746 change log. I am doing some logging at the moment, so I will post it if I see any odd behavior. Edit: Actually, something I am seeing a lot is that the real-time duplication doesn't seem to work. This may sound more drastic than it actually is. I am seeing large files copied, moved, or overwritten in the pool, and then the master pool will say, as an example, that 8.81GB is not duplicated, so it kicks off a duplication run; when it is finished I have 5xMB unduplicated. I am wondering if the <random file names>.copytemp from Pool A and Pool B are being picked up when and if those pools balance themselves, and if a balance in Pool A or Pool B will affect the duplication in the master pool. I will post some logs when I have the time to work on this. Edit 2: Logs uploaded for continual duplication runs.