Everything posted by mvd

  1. When files get moved from the feeder disk to the archive disks is determined by the settings on the main balancing configuration page. I have mine set to balance every day at 5:00 AM, and have checked "Or if at least this much data needs to be moved" and set it to 1GB, the lowest setting. Then in the Archive Optimizer plugin's settings I have set 100% for both "Fill Feeder Drives up to" and "Fill Archive Drives up to". I wish it were possible to specify MB instead of GB, because I have many small (<1GB) files that are left sitting around on the feeder disk, and only my archive disks are configured for backup, so I have opened a ticket with the developer asking for a better option to flush the feeder disk completely.
  2. Nope, it's not the DrivePool service. I have spindown working fine on the 24 hard disks in my DrivePool pools. Of course I'm using a proper RAID controller (Areca) for the disk spindown function, but if DrivePool were constantly querying them they'd constantly be waking back up, which they don't. If you've got disks connected to something like an IBM M1015 or another multiport SAS/SATA controller, then HDD spindown in Windows is a lot more complicated, but if you've just got them attached to SATA ports, the Windows powercfg HDD sleep setting should handle it.
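
     As a minimal sketch of the powercfg route mentioned above, this sets the "Turn off hard disk after" timer from an elevated command prompt (the 20-minute value is just an example, not from the post):

         powercfg /change disk-timeout-ac 20

     This applies to the currently active power plan while on AC power; use disk-timeout-dc for the on-battery setting.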
  3. mvd

    DrivePool and SnapRAID

    Because it's creating unnecessary work and burden on SnapRAID, making it chase a moving target. As soon as files get moved around, the parity is out of sync, and if you've only scheduled syncing for once or twice a day, that's a problem because it's unnecessary risk. Not to mention unnecessary wear and tear on the disks. Bottom line: snapshot parity benefits from files on your disks staying as static as possible, not getting shuffled around constantly to maintain equal free space on all disks (who cares). Better to use Ordered File Placement to fill up one disk at a time.
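
    A minimal sketch of the kind of nightly scheduled sync described above, so the out-of-sync window stays small (the install path and time are assumptions, not from the post):

        schtasks /Create /TN "SnapRAID Sync" /TR "C:\snapraid\snapraid.exe sync" /SC DAILY /ST 03:00

    Running snapraid sync more often shrinks the window during which moved or new files are unprotected; snapraid scrub can be scheduled the same way to verify existing parity.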
  4. Yep, definitely always click Save. Here are a few more idiosyncrasies I've discovered in further testing:

     1) What the orange arrows indicate and what the balancer actually does are two different things. If you change the "do not overfill past" slider from 99% to 10%, the orange arrows will change position (either 0% or 90% as you said), but the balancer won't respond to the change unless the service is restarted. This is bad: let's say I set Disk1 and Disk2 to not overfill past 90%. Disk1 fills up to 90%, then the balancer switches to Disk2. Now let's say I delete 500GB from Disk1 and there's space to fill it back up to 90% -- the balancer doesn't care, it'll keep filling Disk2 until it hits 90%.

     2) The OFP balancer seems to ignore the "Or this much free space (in GB)" value. I can set Disk1 to not overfill past 99%, and set "Or this much free space" to 1TB, and the balancer ignores it even after a service restart.
  5. Nope, no other balancers enabled -- as you can see in my screenshot. I wouldn't be bothering you guys had I not already tried every avenue to get it to work. And if you read post #2, Shane was able to reproduce it as well.
  6. Yes, I tried that a long time ago. Believe me, I've screwed around with different settings for hours trying to make sense of it. It seems hardcoded at 90% no matter what.
  7. Except it doesn't. Still stuck at 90% with or without "or this much free space" checked.
  8. Thanks Shane, glad I'm not the only one. I've noticed the "new disks not being added to the bottom of the OFP drive list" issue as well now that I've played with it a bit.
  9. mvd

    DrivePool and SnapRAID

    SnapRAID + DrivePool is working perfectly here. I don't use any disk balancing plugins, though I'm trying to get the Ordered File Placement plugin to work correctly. Of course the trick is that you have to point SnapRAID at the physical disks and not at the pool drive letter. In my case I have mounted my drives to folders (i.e. C:\MOUNT\Disk01, C:\MOUNT\Disk02, etc.) and then the snapraid.conf file contains the full path including the PoolPart* folder, like so:

        disk d1 C:\MOUNT\Disk01\PoolPart.f261d10c-c5de-46c2-9957-625406f74cc2\
        disk d2 C:\MOUNT\Disk02\PoolPart.aef49183-777f-4efc-8692-8333a700d125\
        disk d3 C:\MOUNT\Disk03\PoolPart.4511417f-2412-4621-b307-64fb0cc176d1\
        ...etc.

    And for the existing data I had on those disks before joining them to the DrivePool, I un-hid the PoolPart folders at the command line with "attrib -h poolpart<TAB to autocomplete foldername>" and moved the existing folders into the PoolPart folder so they show up under the pool drive letter. So the initial setup is a bit of a hassle, but it works.
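
    For context, a minimal sketch of how the rest of such a snapraid.conf might look; the parity and content file locations below are assumptions for illustration, not taken from the post:

        # Parity file on a dedicated disk (it must be at least as large as the biggest data disk)
        parity C:\MOUNT\Parity1\snapraid.parity

        # Keep copies of the content (metadata) file on more than one disk
        content C:\MOUNT\Disk01\snapraid.content
        content C:\MOUNT\Disk02\snapraid.content

        # Data disks point at the hidden PoolPart folders, as shown above
        disk d1 C:\MOUNT\Disk01\PoolPart.f261d10c-c5de-46c2-9957-625406f74cc2\
        disk d2 C:\MOUNT\Disk02\PoolPart.aef49183-777f-4efc-8692-8333a700d125\

        # Skip junk files
        exclude Thumbs.db
        exclude *.tmp

    After editing the config, snapraid sync builds or updates parity, and snapraid status reports the current state of the array.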
  10. I'm having some problems with the Ordered File Placement plugin. The following is based on testing with ONLY the Ordered File Placement plugin enabled, no other plugins.

      1) The OFP plugin seems to ignore the "Or this much free space" GB value under "Do not fill drives above". I've tested this many different ways. If I set Do not fill drives above = 99% and Or this much free space = 500GB, and I have only 100GB free on that disk, the balancer will still place files on that disk even though it shouldn't, because less than 500GB of space is free.

      2) If Disk1 exceeds Do not fill drives above = 90% and OFP begins to place files on Disk2, then if I delete 200GB of files from Disk1 so it is back below the free space threshold, OFP will not switch back to Disk1 dynamically when placing new files - it just keeps filling Disk2. Only if I manually restart the StableBit DrivePool service will it switch back to Disk1.

      3) If I change the "Do not fill drives above" percentage from the default 90% to any other value and click SAVE, the little orange arrows on the dashboard for each pooled disk stay stuck at either 90% or 0%; they don't reflect the saved value. This may or may not be just a GUI/cosmetic issue - I've been trying to figure out whether the balancer uses the new value internally despite what the GUI shows.