
flux103

Members
  • Posts: 6
  • Joined
  • Last visited

flux103's Achievements

Newbie (1/3)

Reputation: 0

  1. I'm already way above 1, and there's no duplication and no balancing (all disabled). I've changed about every setting I can in settings.json, and the only thing I've managed to do is get balancing to transfer at the boost rate when it isn't boosted, which is still about 40% of the drive's max. And yes, my drives are mounted with mount points or symlinks. I currently have 30, and I don't have the 60 SSDs needed to use two per symlink, so that's out. Also, performance hasn't degraded since this was a fresh setup on fresh drives, so I wouldn't think the MFT is the problem, but I'll give it a shot.
  2. Yup, I do not want redundancy; it's a waste of space for my workload. I also don't want the risk of a few drive failures taking out the whole array, which is why I'm here. I just want to be able to write to my drives as fast as Windows natively can, not at 40% of that speed.
  3. It's just sad when my simple batch file can far outperform this product (see the first sketch after this list for the kind of round-robin copy I mean). I'm all SAS and was trying this out on a test bench before moving it into production, but it looks like it's not ready for prime time. There is no reason 15 drives with no duplication shouldn't be able to keep up with 1GbE. A single SAS drive in my setup can do almost 150% of 1GbE, but through DrivePool I get about 50% of line rate. Redundancy is of no concern and a waste of drives for my current application, which is why I'm not going with standard RAID. Balancing has to be multithreaded! If I put this into production on 10GbE with an NVMe cache drive and an array of 120 drives, my entire setup would come to a crawl waiting for it to transfer files one at a time at 60MB/s while the cache fills up at 1100MB/s. I understand this isn't what the product is intended for, but my application doesn't want or need data redundancy.
  4. Yes, it should, and if I could get the 150MB/s Windows gives me I would be happy. But I get 62MB/s out of StableBit, and that doesn't cut it. The SSD cache has brought me up to wire speed, but it hasn't fixed the transfer-to-drive issue.
  5. LSI 9207-8e or 9207-8i, depending on which flavor you need. It's an equivalent card, but PCIe 3.0 instead of 2.0, although it's only two-port like your M1015/H310. The LSI 9206-16e is four-port; I haven't seen an HBA/RAID card with much more than four ports.
  6. I have a pool of 15 disks, each with a read/write speed of 150MB/s, and no duplication. I can test each drive and it performs very close to its rated speed. When transferring files to the array over SMB, my write speed is 62MB/s. I have added three SSDs for cache and I am writing to them at 225MB/s over SMB multichannel, but when the balancer runs I only get 40MB/s unless I increase its priority, at which point it jumps up to 60MB/s. So the SSD Optimizer has actually slowed my workflow down, since it's limited to 40MB/s unless I raise the priority. I have changed the priority setting in config.json, but it's still only 60MB/s. Also, why is it only emptying one SSD at a time? I end up with all three SSDs full and waiting on one to empty sequentially, when they could all be dumping to pool drives at the same time (see the second sketch after this list for what I mean by emptying them in parallel). My workflow transfers about half a dozen TB a day, and this is bottlenecking me. Please help! Thanks
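
A minimal sketch of the kind of round-robin distribution the batch file in post 3 describes: each incoming file goes straight to the next drive in the rotation, with no duplication and no rebalancing, so every move runs at whatever the target drive can sustain. All paths here (an incoming folder and 15 mount points under D:\Mounts) are hypothetical examples for illustration, not anything DrivePool itself uses.

```python
# Hypothetical round-robin file distributor: spread incoming files across
# 15 independently mounted drives with no duplication and no rebalancing.
# All paths below are made-up examples.
import itertools
import shutil
from pathlib import Path

INCOMING = Path(r"D:\Incoming")                                    # example landing folder
DRIVES = [Path(rf"D:\Mounts\disk{n:02d}") for n in range(1, 16)]   # example mount points

def distribute() -> None:
    """Move each incoming file to the next drive in the rotation."""
    rotation = itertools.cycle(DRIVES)
    for src in sorted(INCOMING.rglob("*")):     # snapshot the tree before moving
        if not src.is_file():
            continue
        dest_root = next(rotation)
        dest = dest_root / src.relative_to(INCOMING)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src), str(dest))        # one move, at the target drive's native speed

if __name__ == "__main__":
    distribute()
```

Nothing in this loop throttles the copy, which is the point of the comparison: a single sequential move still lands at the drive's full write speed rather than at a fixed fraction of it.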
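And a sketch of what "emptying all three SSDs at the same time" from post 6 could look like: one worker per cache SSD, each draining to its own archive drive so the three SSD-to-HDD streams run concurrently instead of one SSD emptying at a time. Again, every path and helper name here is a hypothetical illustration, not how the SSD Optimizer actually works internally.

```python
# Hypothetical parallel cache drain: one worker thread per cache SSD, each
# moving its files to a different archive drive so the drains overlap.
# All paths below are made-up examples.
import shutil
import threading
from pathlib import Path

CACHE_SSDS = [Path(r"E:\Cache1"), Path(r"F:\Cache2"), Path(r"G:\Cache3")]
ARCHIVE_DRIVES = [Path(r"D:\Mounts\disk01"),
                  Path(r"D:\Mounts\disk02"),
                  Path(r"D:\Mounts\disk03")]

def drain(ssd: Path, target: Path) -> None:
    """Move everything from one cache SSD onto one archive drive."""
    for src in sorted(ssd.rglob("*")):          # snapshot before moving
        if not src.is_file():
            continue
        dest = target / src.relative_to(ssd)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src), str(dest))

if __name__ == "__main__":
    # Start one drain per SSD; each SSD-to-HDD stream runs in parallel.
    workers = [threading.Thread(target=drain, args=(ssd, hdd))
               for ssd, hdd in zip(CACHE_SSDS, ARCHIVE_DRIVES)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

With three independent streams to three different target disks, the aggregate drain rate is roughly the sum of the targets' write speeds instead of being capped by a single sequential move.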