browned

  1. I have been thinking about this more over the last month or so. I have looked seriously at FreeNAS (new hardware, expensive) and Unraid (fewer new parts, but performance issues while parity scanning, frequent hardware freezes for me, and not virtual). Here is what I have come up with.

     Drive Pool A - new data landing zone
     - Balancer = SSD Optimiser
     - 2 x duplication
     - 2 x SSD as write cache
     - 2 x HDD as archive (maybe, if SSD space is not enough)

     Drive Pool B - long term data
     - Balancer = Prevent Drive Overfill, or none; I do not want data moving often.
     - No duplication
     - SnapRAID enabled
     - 6 x HDD data drives

     SnapRAID parity (not a DrivePool disk)
     - External to the drive pool, for SnapRAID parity data only
     - 2 x HDD parity drives

     Drive Pool C - the drive presented to clients
     - Balancer = ??? custom/new
     - No duplication
     - Includes Pool A as the initial write dump
     - Includes Pool B, with data being moved there from Pool A after x days/months/years, or when drive space runs low on Pool A

     The balancer on Drive Pool C could be aware of SnapRAID, even running the parity scans after files are moved. Maybe it could also help with SnapRAID config, drive replacements, and rebuilds? (A rough sketch of the SnapRAID config is below.)

     So the questions are:
     1. What are the possibilities of a balancer that would work with SnapRAID?
     2. Would the above configuration work?
     3. Would multiple Pool A files be able to be dumped to different disks on Pool B to speed up archiving?
     4. Would the Prevent Drive Overfill balancer move files between disks on Pool B, causing parity issues?

     The pros of this potential setup:
     - Write speed, thanks to the SSD landing zone.
     - Lower cost in disk space.
     - Integration allows more config options for DrivePool and SnapRAID.
     - Less work for StableBit to enable parity and enhance their product.
     - Disks are still NTFS, so any of them can be put into another system and the data read.

     The cons of this setup:
     - Difficult first setup.
     - Single-disk performance on data Pool B.
     - Parity calculations always slow the data pool down, especially if they take a day or so.
     - Drive restores could take longer.
     - During a disk failure/restore, archiving is disabled.
     - During a disk failure on Pool B, files from the missing disk are not available.
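     For what it's worth, here is a minimal snapraid.conf sketch of the Pool B layout above. The drive letters (P:/Q: for the two parity drives, D: through I: for the six data drives) and the PoolPart folder names are assumptions for illustration only; the real hidden PoolPart folders DrivePool creates have GUID suffixes that will differ per disk.

         # snapraid.conf - sketch only; drive letters and paths are assumptions
         parity P:\snapraid.parity
         2-parity Q:\snapraid.2-parity

         # content files (keep several copies on different disks)
         content C:\SnapRAID\snapraid.content
         content D:\snapraid.content
         content E:\snapraid.content

         # the six Pool B data drives; pointing at the PoolPart folder
         # means only pooled data is protected
         data d1 D:\PoolPart.xxxx\
         data d2 E:\PoolPart.xxxx\
         data d3 F:\PoolPart.xxxx\
         data d4 G:\PoolPart.xxxx\
         data d5 H:\PoolPart.xxxx\
         data d6 I:\PoolPart.xxxx\

     With two parity files, SnapRAID can survive two simultaneous disk failures, which matches the 2 x HDD parity drives above.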
  2. @Christopher (Drashna) I understand StableBit's stance on this; you have a great product, and developing parity would be time consuming and costly. Unfortunately the parity options out there are just not user friendly and lack the possibility of integrating properly into DrivePool. This is something I have thought about every time I need to add drives to my server; the cost of mirroring data with current large-capacity drives is getting a bit much. I cannot see my data shrinking, but perhaps my only option is less data.
  3. I am not sure realtime parity in an archive/parity pool setup would be an option without killing performance, or without some custom ASIC to do the calculations on the fly like most RAID controllers. Hence having an active pool for changed files and an archive pool/parity pool for older, non-changing files. The difficult part would be managing an archive file being modified: somehow saving the new file/data to the active pool while keeping the archive pool and parity data from needing to be changed. You would need a way of presenting the active file to users instead of the older archive file, until the parity data can be recalculated with the old archive file removed. The active pool effectively provides realtime protection for your active data, as it can be duplicated 2, 3, or more times according to the settings of the pool. (A toy sketch of that lookup order is below.)
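     A toy sketch of that "active copy shadows the archive copy" lookup, in Python. The pool mount points and the function name are made up for illustration; real logic would of course live inside the pooling driver:

         # resolve.py - sketch: prefer the active pool's copy of a file,
         # fall back to the archive pool if no newer version exists.
         from pathlib import Path

         ACTIVE_POOL = Path(r"A:\Pool")     # assumed mount points
         ARCHIVE_POOL = Path(r"B:\Pool")

         def resolve(relative: str) -> Path | None:
             """Return the path a client should see for this file."""
             active = ACTIVE_POOL / relative
             if active.exists():            # modified copy shadows the archive
                 return active
             archive = ARCHIVE_POOL / relative
             return archive if archive.exists() else None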
  4. Sorry if this makes no sense, I am just typing thoughts. I know this has been talked about before on more than one occasion, but is it time to reanalyse the market and see how people feel about it? DrivePool + parity is an option I would pay for; the savings on storage would be well worth it.

     What I think would be a huge upgrade or add-in for StableBit is a parity option. The fact that you can have sub-pools and direct data to certain drives/pools allows for so many configuration options. What I am thinking for my own situation is this (by the way, I have never lost data with StableBit since its release):

     - Pool A: 2 x 4TB with 2 x duplication, as per my current setup. This will be for recently created or modified files.
     - Pool B: the archive pool, no duplication. 4 x 6TB + 2 x 4TB = 29.2TB usable storage.
     - Pool C: the parity pool, 2 x duplication. 2 x 12TB disks or more. The parity pool has to have the largest individual disk size to protect the entire pool, or smaller disks for individual-folder protection. The parity could be calculated from the archive pool's individual disks, or from individual folders.

     Now this is where things get smart. Obviously all new and frequently accessed files are stored on Pool A, where they are duplicated against failure. Simple file and folder created/modified dates can be used to track a file's readiness for archiving (a sketch of that check is below). Once a file is ready, StableBit could archive it on a schedule out of hours, or based on user settings, the number of files, the amount of free storage on Pool A, etc. The options are endless.

     The benefit of StableBit doing this over products already out there lies in its already great user interface:
     - Simple drive removal and addition from any pool.
     - Simple failed-drive replacement.
     - Simple rules and options for file placement.
     - The parity pool could duplicate a single large parity file for all archive disks, or possibly just parity for some folders on the archive drive.
     - Less user interaction required, as StableBit does the work: set and forget, notify on problems.
     - Adding an archive drive increases capacity by the size of that drive, with no loss of capacity to mirroring.
     - Pool B capacity would be 32.7TB with a mirror of 12TB drives vs 65.4TB with archive + parity.

     As most storage experts will tell you, drives over 8-12TB should really be in a RAID 6 array with a hot spare, allowing for multiple failures at a time; most will state that the larger the parity rebuild, the more likely a second failure will occur during it. At what point does mirroring become a risk? In my mind we could already be there at 12-14TB. I know I do not want to use disks larger than 12TB without having at least 3 copies, but I cannot afford 3 copies on large disks, nor can I run endless small disks, as I do not want the heat/power usage and do not have the room.

     I know there are other options out there - Synology do something on their NAS boxes, FlexRAID, SnapRAID, Unraid - but none would have the ease that StableBit could create. Thoughts, anyone?
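     A minimal sketch of the age-based selection, assuming hypothetical pool mount points and a 180-day threshold. The real logic would live inside DrivePool; this just illustrates the idea:

         # archive_sweep.py - sketch: move files untouched for N days
         # from the active pool to the archive pool. Paths are assumptions.
         import shutil
         import time
         from pathlib import Path

         ACTIVE_POOL = Path(r"A:\Pool")
         ARCHIVE_POOL = Path(r"B:\Pool")
         MAX_AGE_DAYS = 180

         def sweep() -> None:
             cutoff = time.time() - MAX_AGE_DAYS * 86400
             for src in ACTIVE_POOL.rglob("*"):
                 if src.is_file() and src.stat().st_mtime < cutoff:
                     dest = ARCHIVE_POOL / src.relative_to(ACTIVE_POOL)
                     dest.parent.mkdir(parents=True, exist_ok=True)
                     shutil.move(str(src), str(dest))  # then rerun the parity sync

         if __name__ == "__main__":
             sweep()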
  5. I have been researching building a new server for a while now. Just wondering if anyone is using Hyper-V with replication?

     What I am looking into: 2 x servers with Hyper-V, possibly Ryzen 5 2400G, 32GB RAM, HPE Smart Array with 1GB read cache (RAID 0 for each disk), 10GbE, and 1 x 5GbE NIC per server dedicated to replication via crossover. Identical disks per server: 12TB, 6TB, and 4TB. A Windows VM with StableBit pools the disks with no duplication, and Hyper-V Replication replicates/snapshots/syncs the underlying VHD files from one server to the other every 5 minutes. If a disk fails, I believe I should be able to change the disk, start the replicated server, and resync back to the failed drive.

     Anyone else done this before with large, slow storage?
  6. Replying to "Refs?" - apparently KB4077525 is supposed to fix the ReFS problems. See https://support.microsoft.com/en-us/help/4077525/windows-10-update-kb4077525
  7. I currently have a particular server setup: ESXi on basic AMD desktop hardware, with 2 HP Smart Array controllers passed through to a Windows server, 7 disks on each HP controller (14 in total), and 7 RAID 0 arrays per controller. 3 x drive pools: 7 disks on one controller in Pool A, 7 disks on the other controller in Pool B, and Pool C = Pool A + Pool B with duplication. This allows a controller or a disk to fail while the server keeps working and no data is lost.

     What I would like is to separate the system into 2 physical servers, with an HP Smart Array controller in each server passed through to a Windows server. This will allow more resilience, and I will also be able to split the workload between the servers. Again I am thinking of basic AMD desktop parts to keep costs down.

     My questions are whether this is possible, as the two drive pools will now be a 10GbE network link apart. Can StableBit CloudDrive be used in this situation to connect to a DrivePool drive on another server? What would happen if a disk failed on the cloud copy? What would happen with re-syncing if a disk failed on the primary server? How long would syncing take between systems on a 10GbE network?

     I am not too interested in other tech like Unraid or VSAN, and I haven't looked into a Windows Cluster with a StableBit drive pool, though that may be an option. I like being able to pull a dead or dying drive, plug it into any Windows machine, and recover data if needed. The wife understands that if things are not working she can press the power button, wait, then press it again and start it all back up; if that fails to fix it, then it must be something more. Thanks for any help.
  8. I believe every time a ReFS system is booted, the file system is checked by some scheduled tasks. You might need to disable them, as some others have, but consider that if MS set them up and enabled them, they are important and should be run at some stage.
  9. I am not 100% sure, but I am pretty confident that ReFS metadata is the problem; it does not show up in Task Manager as used memory. Look at the animated pink metadata map halfway down this page: https://forums.veeam.com/veeam-backup-replication-f2/refs-4k-horror-story-t40629-375.html The MS registry settings relate to flushing this data. Use RAMMap to get an idea of what is actually using your memory.
  10. From my work's perspective (Windows Server 2016, Veeam, 90+TB, moved to 64k ReFS format, added 64GB RAM), MS has fixed the issue as long as you use the 64k ReFS format, fit as much RAM as possible, and add a registry key or two: https://support.microsoft.com/en-us/help/4016173/fix-heavy-memory-usage-in-refs-on-windows-server-2016-and-windows-10 We are using the Option 1 registry key only (a sketch of setting it is below).
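     For reference, a minimal sketch of applying that Option 1 value (RefsEnableLargeWorkingSetTrim, per the linked KB) using Python's winreg module. Run it elevated and reboot afterwards, and double-check the value name against the KB before relying on this:

         # refs_trim.py - sketch: set the KB4016173 "Option 1" value.
         import winreg

         KEY_PATH = r"SYSTEM\CurrentControlSet\Control\FileSystem"

         with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                             winreg.KEY_SET_VALUE) as key:
             winreg.SetValueEx(key, "RefsEnableLargeWorkingSetTrim", 0,
                               winreg.REG_DWORD, 1)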
  11. A single-disk HP RAID 0 array can be moved to any system; it will have an 8MB RAID information partition, but the rest of the disk and its data will be readable if it is in NTFS format.
  12. It won't be an issue. I have 2 x P410 with 7 drives attached to each; each drive has a RAID 0 array created on it, and DrivePool uses these for the pool. The later P-series cards (P44x) have options for running in HBA or RAID mode.
  13. This may be related: https://forums.veeam.com/veeam-backup-replication-f2/refs-4k-horror-story-t40629.html There are a couple of fixes for Server 2016 and ReFS in there, plus some registry settings. Sorry, it's 30+ pages to read through.
  14. Mine does the same: I got 24 notifications from the 24 checks done every hour over the last day.
  15. Confirming that since my last post a couple of days ago I have had no issues.