Covecube Inc.


About fluffybunnyuk

  1. A good, slightly technical hack to avoid this is to create/use an account with the Delete permission for deleting, and remove the Delete permission from the limited account you use for general use or DrivePool access. All my files across the pool belong to a normal user account, and I have a special account just for deletion, so when I want to delete I have to provide credentials to authorize it. It makes "whoops, where did my data go?" moments a little harder. Naturally it's a custom permission, and I hate custom permissions as they sometimes go bad... and mostly get annoying.
  2. If you're removing a drive from pool A and a drive from pool B, then yes. Same pool, then probably not... chances are, if you remove two drives from the same pool, a certain law dictates you'll pull two with the same data on them and lose the data. Slow is good if the data remains intact. Me, I always yank one drive and see where I stand before pulling another.
  3. That's never going to happen with any software like DrivePool. There's a write/read overhead (balancing) that multiplies with fragmentation. My best recommendation, assuming you're using large-core Xeons or Threadrippers, is to open Settings.json and raise all the thread settings to one per core, to allow all cores to scoop I/O from the SSD. Give that a whirl. If that doesn't shift it, try defragging the MFT and providing a good reserve so the balancing has a decent empty MFT gap to write into. My system reports the SSD balancer is slow as a dog, which is why I uninstalled it. It's…
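A minimal sketch of that Settings.json tweak in Python. The key names below are made-up placeholders, not DrivePool's real setting names — check your actual Settings.json for the thread-count entries before touching anything, and back the file up first:

```python
import json
import os

def raise_thread_counts(path, keys):
    """Bump each listed thread-count setting to one thread per logical core."""
    cores = os.cpu_count() or 1
    with open(path) as f:
        settings = json.load(f)
    for key in keys:
        if key in settings:           # only touch keys that already exist
            settings[key] = cores
    with open(path, "w") as f:
        json.dump(settings, f, indent=2)
    return settings

# Hypothetical key names for illustration only -- not DrivePool's real ones.
THREAD_KEYS = ["Balancing_Threads", "FileMover_Threads"]
```

Changes like this typically only take effect after the service is restarted.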
  4. 8 drives in RAID-6 gets me 1600 MB/s and saturates a 10 GbE link; it allows me to write that out again at 800 MB/s to my backing store (4 drives, so less speed). No write-cache SSD in sight. Granted, I'm not on the same scale as you, but I'm not far off with 24-bay boxes. But then my network and disk I/O are offloaded away from the CPU. With that many drives, SSD just sucks because as soon as it bottlenecks, it bottlenecks badly, usually when it decides to do housekeeping. I'd retest using 8 drives in hardware RAID-0 (I/O offloaded), and use DrivePool to balance to a pool of 4 drives. See wha…
  5. At a 6 TB/day regular workload I'd recommend a 6- or 8-drive hardware RAID with an NVMe cache, with a JBOD drive pool as a mirror, if like me you want to avoid reaching for the tapes. A 9460-8i is pricey (assuming an expander is used), but software pooling solutions don't really cut it under a medium to heavy load. Hardware RAID is just fine so long as SATA disks are avoided; SAS disks have a good chance of correction if T10 protection is used. I've had the odd error once in a blue moon during a rebuild that would have killed a SATA array, whereas in the SAS drive log it just goes in as a corrected read error, and…
  6. UltimateDefrag has a boot-time defrag for drive files like $MFT etc. I go into Tools/Settings and then click the boot-time defrag option. I set the starting cluster to 100,000,000 (12 TB disk); you can set your own location, but 5% in seems reasonable for a 0%–10% track seek on writes (10% into the disk would be optimal for a 0–20% seek), and I leave the order as the default. I change the free space after the reserved zone to a reasonable size (in my case 752 MB, to round it up to a 1 GB MFT). I click the "run the next time the system starts" box and click OK. Naturally a warning needs to be…
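A quick back-of-the-envelope check on that starting-cluster figure, assuming 4 KiB NTFS clusters (the usual default for a volume this size) and a decimal 12 TB capacity:

```python
# Where does a starting cluster of 100,000,000 land on a 12 TB disk,
# assuming the NTFS default 4 KiB cluster size?
disk_bytes = 12 * 10**12
cluster_bytes = 4096
total_clusters = disk_bytes // cluster_bytes   # about 2.93 billion clusters
start_cluster = 100_000_000
percent_in = 100 * start_cluster / total_clusters
print(total_clusters, round(percent_in, 1))    # -> 2929687500 3.4
```

So 100,000,000 actually lands about 3.4% into the platter — in the same ballpark as the 5% rule of thumb, slightly closer to the fast outer tracks.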
  7. For me, a pair of 12 TB drives got me 400 MB/s, which is decent enough for me to dump the SSD cache (I found it killed the speed). 15 disks in a pool should blow away 4 SSD drives in straight-line speed. I write 1 TB/hour doing a tape restore.
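As a sanity check on that last figure (decimal units, as drive vendors quote them):

```python
# What sustained rate does 1 TB/hour imply?
TB = 10**12
seconds_per_hour = 3600
mb_per_sec = TB / seconds_per_hour / 10**6   # megabytes per second
print(round(mb_per_sec))                     # -> 278
```

Roughly 278 MB/s sustained — comfortably inside the 400 MB/s the 12 TB pair delivers.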
  8. A 9207-8(i or e) SAS2 PCIe 3 card is great for a low amount of use. These are cheap (in IT mode). A cheap Intel SAS2 expander gets a system off to a good start. Later on, when usage ramps up: a 9300 or 9305 SAS3 keeps the controller cooler and doesn't bottleneck so easily with a lot of drives. These aren't cheap, but can be found for reasonable prices. Eventually backup to tape (rather than cheap mirrors) gets important: hardware RAID-6 SAS3. These are the definition of very expensive, but do keep a tape drive fed at a sustained 300 MB/s (the RAID-6 part of the pool) while Plex is reading a 50 GB MKV…
  9. Maybe it hates you too, like it hates me. Glad to hear it fixed it. The remeasure should go fine now. You should make a note to check the corrupted file, and maybe restore it from a backup if necessary. I've noticed DrivePool has a preference for the disks used to create a pool. In my case I had two disks, hot and cold. I used hot to create the pool, and it loved hot ever after. Sadly, I needed it to prefer cold. My only solution was to hot-swap them around in my chassis so hot became cold and vice versa, thus conning DrivePool into using the cold one. The short answer is…
  10. Do a check disk instead, or open a cmd prompt and run, for example, "chkdsk D: /F". Regarding removing your drive: you should tread carefully. If you have space in the pool, you can do a normal removal (not forced), wait for the disk to empty, and then it'll remove itself safely. Forcing a removal is likely to lead to data loss unless you're 100% sure your files are duplicated, or the drive being removed is already near enough empty. If you're not sure, don't do it until you are 100% sure. If your normal drive removal works out OK-ish but halts before removal because of the corrupted folde…
  11. Try a reboot first. Sometimes I find DrivePool's tiny brain gets overloaded, and that seems to help. View hidden files, go into the PoolPart folder, then navigate to the folder location, delete it manually, and remeasure. I had a problem moving dozens of files DrivePool kept (incorrectly) flagging as viruses, which stopped my rebalancing. If that doesn't work, chkdsk-fix all the drives, reboot, then try again. I also had a permissions problem. That was too much hassle to fix, so I just did a force remove, formatted the removed drive, and re-added it. It's why I keep my pools with just…
  12. Yes. I think, from reading another post here, that if you fill out the contact form they deactivate it for you at their end (since you cannot), after which you should be able to reactivate it. Wish I could be more help; I haven't had to do it myself.
  13. It's probably keyed to the old hardware, and maybe needs deactivating for the old hardware by sending a contact request to them. After that, I imagine there won't be any conflict with activation on the new hardware. That's assuming you tried pasting in your license code and activating, and it failed. I'm sure someone will be along soon to help.
  14. Solved: moving and defragging the MFT and related $ files to a location 5% into the platter, and a reboot. I extended the MFT to 1 GB (700 MB free) in size, just to be on the safe side. Yay, sustained 200+ MB/s is back.
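For context on that 1 GB figure: NTFS FILE records are 1 KiB each by default, so a 1 GB MFT (treated loosely as 1 GiB here) has room for roughly a million file records before the MFT has to grow and risk fragmenting again:

```python
# How many files fit in a 1 GiB MFT, at the NTFS default record size?
mft_bytes = 2**30        # 1 GiB MFT
record_bytes = 1024      # NTFS default FILE record size
records = mft_bytes // record_bytes
print(records)           # -> 1048576, i.e. ~1 million files
```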
  15. I have a pair of 12 TB drives in a separate pool as a pseudo-mirror (0 file fragments), but am getting 20 MB/s write speeds and 40 MB/s reads. The kick in the teeth is that duplication runs at 80 MB/s. When I test the drives, I get 450 MB/s bus speed per port. When I copy a file to a single drive, bypassing DrivePool, I get 250 MB/s. As soon as I use DrivePool, performance craters. I've tried disabling real-time duplication and disabling file balancers, but nothing seems to shift the performance; it just gets slower and slower the more I add. It started out at 100 MB/s, dropped to…