Everything posted by browned

  1. I have been thinking about this more over the last month or so. I have looked seriously at FreeNAS (new hardware, expensive) and Unraid (fewer new parts, but performance issues when parity scanning, frequent hardware freezes for me, and not virtual). Here is what I have come up with.

     Drive Pool A - New data landing zone
     - Balancer = SSD Optimizer
     - 2x duplication
     - 2 x SSD as write cache
     - 2 x HDD as archive (maybe, if SSD space is not enough)

     Drive Pool B - Long term data
     - Balancer = Prevent Drive Overfill or none; I do not want data moving often.
     - No duplication
     - SnapRAID enabled
     - 6 x HDD data drives

     SnapRAID parity (not a DrivePool disk)
     - External to the drive pool, for SnapRAID parity data only
     - 2 x HDD parity drives

     Drive Pool C - Drive presented to clients
     - Balancer = ??? custom/new
     - No duplication
     - Includes Pool A as the initial write dump
     - Includes Pool B, with data being moved here from Pool A after x days/months/years or when drive space runs low on Pool A

     The balancer on Drive Pool C could be aware of SnapRAID, even running the parity scans after files are moved. Maybe it could also help with SnapRAID config, drive replacements, and rebuilds?

     So the questions are:
     1. What are the possibilities of a balancer that would work with SnapRAID?
     2. Would the above configuration work?
     3. Would multiple Pool A files be able to be dumped to different disks on Pool B to speed up archiving?
     4. Would the Prevent Drive Overfill balancer move files between disks on Pool B, causing parity issues?

     The pros with this potential setup:
     - Write speed due to SSD landing.
     - Lower cost for disk space.
     - Integration allows more config options for DrivePool and SnapRAID.
     - Less work for StableBit to enable parity and enhance their product.
     - Still NTFS disks that can be put into any system and the data read.

     The cons with this setup:
     - Difficult first setup.
     - Single-disk performance on data Pool B.
     - Parity calculations always slow the data pool down, especially if they take a day or so.
     - Drive restores could take longer.
     - During a disk failure/restore, archiving is disabled.
     - During a disk failure on Pool B, files from the missing disk are not available.
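     To make the SnapRAID side of Pool B concrete, here is a rough snapraid.conf sketch. The drive letters, paths, and disk names are placeholders for illustration, not a tested config, so treat it as a starting point only.

         # Two parity files, one per dedicated parity drive (outside the pool)
         parity P:\snapraid.parity
         2-parity Q:\snapraid.2-parity

         # Keep several copies of the content file on different disks
         content C:\SnapRAID\snapraid.content
         content D:\snapraid.content
         content E:\snapraid.content

         # The six Pool B data disks (each one would also hold its PoolPart folder)
         disk d1 D:\
         disk d2 E:\
         disk d3 F:\
         disk d4 G:\
         disk d5 H:\
         disk d6 I:\

         # Standard Windows exclusions
         exclude *.unrecoverable
         exclude Thumbs.db
         exclude \$RECYCLE.BIN
         exclude \System Volume Information

     A scheduled "snapraid sync" (and an occasional "snapraid scrub") after the archiving moves would be the part a SnapRAID-aware balancer could trigger automatically.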
  2. @Christopher (Drashna) I understand StableBit's stance on this; you have a great product, and developing parity would be time consuming and costly. Unfortunately the parity options out there are just not user friendly and lack the possibility of integrating into DrivePool properly. This is something I have thought about every time I need to add drives to my server; the cost of mirroring data with current large-capacity drives is getting a bit much. I cannot see my data reducing, but perhaps my only option is less data.
  3. I am not sure realtime parity in an archive/parity pool setup would be an option without killing performance or having some custom ASIC chip to do the calculations on the fly like most RAID controllers. Hence having an active pool for changed files and an archive pool/parity pool for older, non-changing files.

     The difficult part would be managing an archive file being modified: somehow saving the new file/data to the active pool while keeping the archive pool and parity data from needing to be changed. You would need a way of displaying the active file to users instead of the older archive file, until the parity data can be recalculated with the old archive file removed.

     The active pool provides the realtime protection for your active data, as it can be duplicated 2, 3, or more times according to the settings of the pool.
  4. Sorry if this makes no sense, I am just typing thoughts. I know this has been talked about before on more than one occasion, but is it time to reanalyze the market and see how people feel about it? DrivePool + parity is an option I would pay for; the savings on storage would be well worth it.

     What I think would be a huge upgrade or add-in for StableBit is a parity option. The fact that you can have sub-pools and direct data to certain drives/pools allows for so many configuration options. What I am thinking for my own situation is this (by the way, I have never lost data with StableBit since its release):

     - Pool A: 2 x 4TB with 2x duplication, as per my current setup. This will be for recently created or modified files.
     - Pool B: the archive pool, no duplication. 4 x 6TB, 2 x 4TB = 29.2TB usable storage.
     - Pool C: the parity pool, 2x duplication, 2 x 12TB disks or more. The parity pool has to have the largest individual disk size to protect the entire pool, or smaller disks for individual folder protection. The parity could be calculated from the archive pool's individual disks, or from individual folders.

     Now this is where things get smart. Obviously all new and frequently accessed files are stored on Pool A, and these are duplicated against failure. Simple file and folder created and modified dates can be used to track a file's readiness for archiving. Once a file is ready for archiving, StableBit can do this on a schedule out of hours, or based on user settings, the number of files, the amount of free space on Pool A, etc.; the options are endless.

     The benefits of StableBit doing this over products already out there are in its already great user interface:
     - Simple drive removal and addition from any pool.
     - Simple failed drive replacement.
     - Simple rules and options for file placement.
     - The parity pool could be duplicating a single large parity file for all archive disks, or possibly just parity for some folders on the archive drive.
     - Less user interaction required as StableBit does the work: set and forget, notify on problems.
     - Adding an archive drive increases capacity by the size of that drive, with no loss of capacity due to mirroring.
     - Pool B capacity would be 32.7TB with mirroring on 12TB drives vs 65.4TB with archive + parity.

     As most storage experts will tell you, drives over 8TB-12TB should really be in a RAID 6 array with a hot spare, allowing for multiple failures at a time; most will state that the larger the parity rebuild, the more likely a second failure will take place during that time. At what point is mirroring going to be a risk? In my mind we could already be there at 12-14TB. I know I do not want to use larger than 12TB disks without having at least 3 copies. I cannot afford to have 3 copies on large disks, nor can I have endless small disks, as I do not want the heat/power usage and do not have the room.

     I know there are other options out there - Synology do something on their NAS boxes, FlexRAID, SnapRAID, Unraid - but none would have the ease that StableBit could create. Thoughts anyone?
  5. I have been researching building a new server for a while now. Just wondering if anyone is using Hyper-V with replication?

     What I am looking into: 2 x servers with Hyper-V, possibly Ryzen 5 2400G, 32GB RAM, an HPE Smart Array with 1GB read cache (RAID 0 for each disk), 10GbE, and 1 x 5GbE NIC per server dedicated to replication via crossover. Identical disks per server: 12TB, 6TB, and 4TB. A Windows VM with StableBit to pool the disks with no duplication, and Hyper-V Replication to replicate/snapshot/sync the underlying VHD files from one server to the other every 5 minutes.

     If a disk fails I believe I should be able to change the disk, start the replicated server, and resync back to the failed drive. Has anyone else done this before with large, slow storage?
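     For reference, the per-VM setup I have in mind would be roughly the following PowerShell on the primary host. The VM and server names are made-up placeholders, and Kerberos authentication assumes both hosts are domain joined; this is a sketch of the idea, not a tested script.

         # Enable replication of the storage VM to the second host every 5 minutes (300 seconds)
         Enable-VMReplication -VMName "StorageVM" `
             -ReplicaServerName "server2.example.lan" `
             -ReplicaServerPort 80 `
             -AuthenticationType Kerberos `
             -ReplicationFrequencySec 300

         # Seed the first full copy over the dedicated replication link
         Start-VMInitialReplication -VMName "StorageVM"

     The Hyper-V Replica role and firewall rules also have to be enabled on the receiving host first (Set-VMReplicationServer), and with multi-terabyte VHDs the initial replication is the slow part; the 5 minute cycles afterwards only ship changed blocks.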
  6. browned

    Refs?

    Apparently KB4077525 is supposed to fix the ReFS problems. See https://support.microsoft.com/en-us/help/4077525/windows-10-update-kb4077525
  7. I currently have a particular server setup: ESXi on basic AMD desktop hardware, with 2 HP Smart Array controllers passed through to a Windows server, 7 disks on each HP controller (14 in total) and 7 RAID 0 arrays per controller. 3 x drive pools: the 7 disks on one controller in Pool A, the 7 disks on the other controller in Pool B, and Pool C = Pool A + Pool B with duplication. This allows a controller or a disk to fail while the server keeps working and no data is lost.

     What I would like is to separate the system into 2 physical servers, with an HP Smart Array controller in each server passed through to a Windows server. This will allow more resilience, and I will also be able to split the workload between the servers. Again I am thinking of basic AMD desktop parts to keep costs down.

     My questions are whether this is possible, as the two drive pools will now be a 10GbE network link apart:
     - Can StableBit CloudDrive be used in this situation, to connect to a DrivePool drive on another server?
     - What would happen if a disk failed on the cloud copy?
     - What would happen with syncing if a disk failed on the primary server?
     - How long would the syncing take between systems on a 10GbE network?

     I am not too interested in other tech like Unraid or VSAN, and I haven't looked into a Windows cluster on top of a StableBit drive pool, though that may be an option. I like being able to pull a dead or dying drive, plug it into any Windows machine, and recover data if needed. The wife understands that if things are not working she can press the power button, wait, then press it again and start it all up; if that fails to fix it then it must be something more. Thanks for any help.
  8. I believe every time a ReFS system is booted, the file system is checked by some scheduled tasks. You might need to disable them, as some others have, but you have to think that if MS set them up and enabled them, they are important and should be run at some stage.
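     If you want to look at (or disable) those tasks yourself, something along these lines should work on Server 2016; the task path is from memory, so verify it on your own system first:

         # List the built-in ReFS/Data Integrity scan tasks
         Get-ScheduledTask -TaskPath "\Microsoft\Windows\Data Integrity Scan\"

         # Disable them (at your own risk, they exist for a reason)
         Get-ScheduledTask -TaskPath "\Microsoft\Windows\Data Integrity Scan\" | Disable-ScheduledTask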
  9. I am not 100% sure, but I am pretty sure that ReFS metadata is the problem; it does not show up in Task Manager as used memory. Look at the animated pink metadata map halfway down this page: https://forums.veeam.com/veeam-backup-replication-f2/refs-4k-horror-story-t40629-375.html The MS registry settings relate to flushing this data. Use RAMMap to get an idea of what is actually using your memory.
  10. From my work's perspective (Windows 2016, Veeam, 90+TB, moved to the 64K ReFS format, added 64GB RAM), MS has fixed the issue as long as you use the 64K ReFS format, fit as much RAM as possible, and add a registry key or two. https://support.microsoft.com/en-us/help/4016173/fix-heavy-memory-usage-in-refs-on-windows-server-2016-and-windows-10 We are using the Option 1 registry key only.
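     For reference, Option 1 from that KB is a single DWORD value; a rough sketch of setting it from PowerShell would be something like the following (double-check the KB for your build before applying, and reboot afterwards):

         # KB4016173 Option 1: trim the ReFS metadata working set more aggressively
         New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
             -Name "RefsEnableLargeWorkingSetTrim" -PropertyType DWord -Value 1 -Force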
  11. A single-disk HP RAID 0 array can be moved to any system; it will have an 8MB RAID information partition, but the rest of the disk and its data will be readable if it is in NTFS format.
  12. Won't be an issue. I have 2 x P410 with 7 drives attached to each. Each drive has a RAID 0 array created on it, and DrivePool uses these for the pool. The later P-series cards (P44x) have options for running in HBA or RAID mode.
  13. This may be related: https://forums.veeam.com/veeam-backup-replication-f2/refs-4k-horror-story-t40629.html There are a couple of fixes for Server 2016 and ReFS in there, plus some registry settings. Sorry, it's 30+ pages to read through.
  14. Mine does the same. I got 24 notifications from the 24 checks done every hour over the last day.
  15. Confirming that since my last post a couple of days ago I have had no issues.
  16. I had some time and grabbed this yesterday. It has been running for almost 24 hours and total system RAM use is 3.2GB, paged pool 850MB, non-paged 256MB. The memory leak seems to be resolved, as my system used to die after a few hours before. Great work, thanks.
  17. Not sure it is a leak; on my system it seems more like a river. As I said in my first post, I had 8GB RAM and increased it to 12GB. This was exhausted in a matter of hours. I did note that there was 4.5GB of paged pool used and about 4GB of non-paged pool memory used; nothing else stood out, as no actual applications listed excessive memory usage. As a comparison, my server reverted to 747 has been running for a day and a half and is using 3.4GB of 12GB RAM, with paged pool at 659MB and non-paged pool at 257MB. Maybe next time I upgrade (won't be for a while) I will hopefully have more time to investigate.
  18. Just a quick note to say that any beta I have tried after 747 has, after a few hours, caused the server to die a terrible and slow death of memory exhaustion. I do not have logs and have not bothered to mention it before as I have been too busy, but on trying most builds after 747 to see if the performance and memory issues are fixed, I thought I had better post since I had some time.

     I have a VMware-based Windows 2016 server, 4 cores, 8GB RAM (now 12GB), and 2 x HPE P410 Smart Array cards, each with 7 x RAID 0 arrays. Then I have a pool within a pool: Pool A consists of 7 x RAID 0 disks (20TB), Pool B consists of 7 x RAID 0 disks (20TB), and Pool C consists of Pool A and Pool B duplicated.

     One thing I noticed is that StableBit was checking Pool C and it sat at 3% for many hours, with the file details changing every 5 to 10 seconds. Hopefully you have noticed something similar, or can replicate it without logs, as the impact on my server is too drastic for it to run more than a few hours.
  19. Well, after leaving the server for a few days it seems to have stopped the repeated duplication runs. There is only 4KB that is unduplicated now. Does DrivePool support the Windows 2016 long path names feature? They have finally got rid of the 260 character limit. Also, can I use dpcmd to find the "other" files in a pool or on a pool drive?
  20. OK, I am about to upload some more logs. This is the process I have been doing to cause the pool issues.

     My setup again: Pool A and Pool B, 7 disks each, no duplication. Pool C contains Pool A and B with duplication enabled.

     There have been files in the pool that, according to DrivePool, have not been duplicated. I thought the directory structure might be too long, so I renamed a folder, say from Stuff to St; note the folder contained around 100k+ small files ranging from 1KB to 500KB. This seems to kick off some events. The other time I saw this I was deleting a large directory structure full of 500k metadata files: txt, xml, jpg, etc.

     My visual monitoring suggests:
     - Pool A and B are fine.
     - Pool C has issues; a duplication run starts.
     - I see the unduplicated space growing.
     - I check the folder duplication: even though I use pool duplication, random folders are set to 1x, not 2x.
     - I wait, and eventually the unduplicated space stops growing.
     - Checking the folder duplication again, I see that all folders are back to 2x.
     - Then duplication starts again after DrivePool checks and says the pool is not consistent.
     - I have manually checked the files and folder structure and see there are missing files and folders.
     - After a day or so the data is duplicated again.

     So it looks like large delete, rename, or move operations cause strange things to happen with the duplication settings. Hopefully someone can reproduce this issue, as I think it has happened about 4 or more times now.
  21. I have just upgraded to 746. I changed the $Recycle.bin to 1x duplication and checked that pool and folder duplication was still 2x for everything else. Also, the "task failed" error has gone, as per the 746 change log. I am doing some logging at the moment, so I will post it if I see any odd behavior.

     Edit: Actually, something I am seeing a lot is that real-time duplication doesn't seem to work. This may sound more drastic than it actually is. I am seeing large files copied, moved, or overwritten in the pool, and then the master pool will say, as an example, that 8.81GB is not duplicated, so it kicks off a duplication run; when it is finished I have 5xMB unduplicated. I am wondering if the <random file names>.copytemp files from Pool A and Pool B are being picked up when and if those pools balance themselves, and whether a balance in Pool A or Pool B will affect the duplication in the master pool. I will post some logs when I have the time to work on this.

     Edit 2: Logs uploaded for the continual duplication runs.
  22. Hmm, I am monitoring a problem at the moment. Pool duplication is enabled on Pool C. I had a look at folder duplication on Pool C, set the $Recycle.bin to 1x, and then noticed the unduplicated data growing. I rechecked the folder duplication and random folders were set to 1x. I tried changing a single folder back to 2x and got a "task failed" error. I disabled pool duplication, enabled it again, rebooted the server, and checked that folder duplication is all now 2x. It now seems to be duplicating the data again.
  23. Yes, sorry, one copy of each file on each SATA card. Two reasons for this: the obvious one is duplication of data on separate cards in case of failure; the second is that sending and receiving data on 2 cards at the same time should be faster than duplicating on the same card.

     The duplication seems to have settled down after the balancing, so I will leave the logging. But it seemed to be balancing Pool A or B and duplicating Pool C at the same time; it would be better if those tasks ran separately.

     An idea for the future: detect the drive pool within a drive pool and automatically disable features that are not required, or prioritize the tasks based on user preferences. For example, a disk replacement or addition will start a balancing process; in the case of a replacement disk where data is not duplicated in Pool C, it would be better to duplicate Pool C first and then, after successful duplication, balance Pool A. This would also create less balancing work, as the duplication process should use the empty drive in Pool A, therefore requiring less balancing afterwards.

     At the moment I have Pool A and B configured with all performance options enabled, balancing for drive overfill and volume equalization, and no rules. Pool C has all performance options enabled except bypass file system filters, pool duplication enabled, no balancing, and no rules.
  24. Not sure I am using this new feature, but are there any recommendations as to what features should be enabled/disabled? My setup: SATA card 1, 7 disks, Pool A, no duplication; SATA card 2, 7 disks, Pool B, no duplication; Pool A + Pool B = Pool C, with duplication enabled so there are two copies of every file, one on each SATA card. I have changed a disk in Pool B and it is balancing at the moment. I think this is causing issues with duplication on Pool C, as it starts, runs, finishes, and starts again. Should I turn balancing off on Pool A & B, as it really isn't required, or just let it all work itself out over time?
  25. That's what I did moving from WHS 2011 to W2012E, then to W2012R2E. Folder and file security might need a check over as well.