Covecube Inc.

fluffybunnyuk

Members
  • Posts: 21
  • Joined
  • Days Won: 2

fluffybunnyuk last won the day on May 3 and had the most liked content!


  1. I use dpcmd check-pool-fileparts F:\ 4 > check-pool-fileparts.log to fix it. Then I read the log, and if there are no issues other than with DrivePool's record, I recheck duplication, and it goes back to what I'd expect: 19.6TB duplicated, 8.99MB unusable for duplication for that pool. For me it's more to do with DrivePool's index developing an issue rather than there being an actual duplication issue. But that's why I check the dpcmd log to make sure. I use Notepad++ to open the logfile, and search/count "!", "*", and "**". Usually it reports no inconsistencies at the end. At least I've never had one outside of testing, when I was deliberately trying to trash a couple of test drives.
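    If you'd rather count the markers from the command line instead of Notepad++, plain old find can do it (a sketch; the log name just matches the command above, and note the "*" count will also include the "**" lines):

       dpcmd check-pool-fileparts F:\ 4 > check-pool-fileparts.log
       rem count lines flagged with "!" (and then "*") in the log
       find /C "!" check-pool-fileparts.log
       find /C "*" check-pool-fileparts.log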
  2. Thank you.

    I had an inconsistency in my duplication, and used dpcmd to fix it. This is why I bought the software. It works, and is reliable. DrivePool, that is. Cause: modifying the file tables of all drives involved in duplication at the same time... Really dumb thing to do, I know. But I only have a short maintenance window, and pushed things to fit the time. So I just dropped by to say thanks to the author for the work done to make it reliable.
  3. You could set it to balance every, say, 168 hours. That'd give you a week until the next balance runs.
  4. A good, slightly technical hack to avoid this is to create/use an account with the delete permission for deleting, and remove the delete permission from the limited account you use for general use or DrivePool access. All my files across the DrivePool have the "normal-user" account, and I have a special account just for deletion. So when I want to delete, I have to provide credentials to authorize the delete. It makes "whoops, where did my data go" moments a little harder. Naturally it's a custom permission, and I hate custom permissions, as they sometimes go bad... and mostly get annoying. A rough sketch is below.
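    One way to wire that up with icacls (a sketch, not exactly how mine is set: P:\Data and normal-user are placeholders, and deny ACEs override allows, so test on a scratch folder first):

       rem deny delete (DE) and delete-child (DC) to the everyday account,
       rem inherited by subfolders (CI) and files (OI)
       icacls "P:\Data" /deny "normal-user:(OI)(CI)(DE,DC)"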
  5. If you're removing a drive from pool A and a drive from pool B, then yes. Same pool, then probably no... chances are that if you remove 2 drives from the same pool, a certain law dictates you'll pull 2 with the same data on them, and lose the data. Slow is good if the data remains intact. Me, I always yank one drive and see where I stand before pulling another.
  6. That's never going to happen with any software like DrivePool. There's a write/read overhead (balancing) that multiplies with fragmentation.

    My best recommendation, assuming you're using large-core Xeons or Threadrippers, is to open settings.json and change all the thread settings up to 1 per core, to allow all cores to scoop I/O from the SSD. Give that a whirl. If that doesn't shift it, try defragging the MFT and providing a good reserve, so the balancing has a decent empty MFT gap to write into.

    My system reports the SSD balancer is slow as a dog, which is why I uninstalled it. It's a cheap version of SAS NVMe support, except that instead of the controller handling the NVMe I/O spam, the CPU does (which isn't a good idea the larger the system gets).

    Have you tried using symlinks, with a pair of SSDs per symlink, so your folder structure is drives inside drives, i.e. C:\mounts\mountdata(disk-ssd)\mountdocs(disk-ssd); C:\mounts\mountdata(disk-ssd)\mountarchive(disk-ssd)? That way your data is transparently in one file structure, but chunks are split over separate drive pools, so the dumps from the SSDs run broadly in parallel. There's a sketch below.

    I have to set aside 2 hours/day for housekeeping to keep DrivePool in tip-top condition. Large writes rapidly degrade the system for me from the optimal 200MB/sec/drive+; at the moment it's running at 80MB/sec/drive (the lowest I can afford to go), mostly because I've hammered the MFT with writes and there's a couple of TB I need to defragment. I wish you all the best trying to resolve it.
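    From a cmd prompt, the drives-inside-drives layout looks roughly like this (a sketch; C:\mounts, the link names, and the drive letters E: and F: are placeholders for wherever your SSD volumes actually sit):

       rem parent folder on one volume, directory links pointing into the others
       mkdir C:\mounts\mountdata
       mklink /D C:\mounts\mountdata\mountdocs E:\
       mklink /D C:\mounts\mountdata\mountarchive F:\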
  7. 8 drives in RAID-6 gets me 1600MB/sec and saturates a 10GbE link; it allows me to write that out again at 800MB/sec to my backing store (4 drives, so less speed). No write-cache SSD in sight. Granted, I'm not on the same scale as you, but I'm not far off with 24-bay boxes. But then my network and disk I/O are offloaded away from the CPU. With that many drives, SSD just sucks, because as soon as it bottlenecks, it bottlenecks badly, usually when it decides to do housekeeping. I'd retest using 8 drives in hardware RAID-0 (I/O offloaded), use DrivePool to balance to a pool of 4 drives, and see what happens; a benchmark sketch is below. I think the SSD, with its constant calls to the CPU saying I/O is waiting, is gimping your system. Fun fact: a good way to hang a system is to daisy-chain enough NVMe to keep the CPU in a permanent I/O wait state.
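    If you want numbers on that retest, Microsoft's diskspd is one way to get them (a sketch; the file size, duration, thread count, and target path R:\test.dat are assumptions to adjust for your box):

       rem 60s sequential-write test: 10GB file, 1MB blocks, 8 threads,
       rem 8 outstanding I/Os each, 100% writes, caching disabled (-Sh)
       diskspd -c10G -d60 -w100 -t8 -o8 -b1M -Sh R:\test.dat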
  8. At a 6TB/day regular workload, I'd recommend a 6- or 8-drive hardware RAID with NVMe cache, with a JBOD drive pool as mirror, if, like me, you want to avoid reaching for the tapes. A 9460-8i is pricey (assuming you use an expander), but software pooling solutions don't really cut it under a medium-to-heavy load.

    Hardware RAID is just fine so long as SATA disks are avoided; SAS disks have a good chance of correction if T10 is used. I've had the odd error once in a blue moon during a rebuild that would have killed a SATA array, whereas in the SAS drive log it just goes in as a corrected read error and carries on. I can't afford huge SAS spinning rust, it's very expensive... so I have to compromise... I run SAS drive RAID and back it with a large SATA JBOD in a pool. So it's useful in a specific niche use case; in mine, sustaining a tape drive write over several hours.

    The big problem with DrivePool is the balancing, and packet writes, and fragmentation. All those millisecond delays add up fast. What I've said isn't really any help, but what you wrote is exactly why I had to bite the bullet, suck it up, and go SAS hardware RAID.
  9. UltimateDefrag has a boot-time defrag for drive files like the $MFT. I go into Tools/Settings and click the boot-time defrag option. I set the starting cluster to 100000000 (12TB disk); you can set your own location, but 5% in seems reasonable for a 0-10% track seek on writes (10% into the disk would be optimal, giving a 0-20% seek), and I leave the order as the default. I change the free space after the reserved zone to a reasonable size (in my case 752MB, to round it up to a 1GB MFT). Then I tick the "run the next time the system starts" box and click OK.

    Naturally, a warning needs to be attached to this: you're playing with your disk's file tables... It can go really bad (unlikely, but possible if your system is unstable). So to mitigate that risk, a chkdsk /F MUST be run on the drive BEFORE, and preferably the disk should have as little fragmentation as possible (ideally a complete fragmented-file defrag pass first).

    As can be seen from the pic, most of my data is moved to the rear, so all my writes happen at the start of the disk near the MFT; this means the head doesn't move far when updating the MFT. Contiguous reads I don't care about (I don't need speed other than for pesky tape-drive feeding), but writes are crucial to me and need to be executed asap, since I don't use a cache SSD.
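    To sanity-check a starting cluster for your own disk, look up the cluster size and cluster count first (D: is a placeholder here). With the usual 4KB clusters, a 12TB volume has roughly 2.9 billion clusters, so cluster 100000000 lands about 3-4% into the disk:

       rem reports bytes per cluster, total clusters, and the current MFT zone
       fsutil fsinfo ntfsinfo D: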
  10. For me, a pair of 12TB drives got me 400MB/sec, which was decent enough for me to dump the SSD cache (I found it killed the speed). 15 disks in a pool should blow away 4 SSD drives in straight-line speed. I write 1TB/hour doing a tape restore.
  11. A 9207-8(i or e) SAS2 PCIe 3 card is great for light use. These are cheap (IT mode), and a cheap Intel SAS2 expander gets a system off to a good start. Later on, when usage ramps up: a 9300 or 9305 SAS3 keeps the controller cooler and doesn't bottleneck as easily with a lot of drives. These aren't cheap, but can be found at reasonable prices. Eventually backup to tape (rather than cheap mirrors) gets important: hardware RAID-6 SAS3. These are the definition of very expensive, but they do keep a tape drive fed at a sustained 300MB/sec (the RAID-6 part of the pool) while Plex is reading 50GB MKV files for a chapter-thumbnail task (the JBOD mirror part of the pool). A 9440-8i (SAS drives only, with T10-PI) with a SAS3 expander is a wonderful thing in action. Also avoid cheap Chinese knock-offs. It amuses me how people spend thousands of pounds on cheap drives, yet refuse to pay £300 for the controller.
  12. Maybe it hates you too, like it hates me. Glad to hear that fixed it. The remeasure should go fine now. You should make a note to check the corrupted file, and maybe restore it from a backup if necessary. I've noticed DrivePool has a preference for the disks used to create a pool. In my example, I had 2 disks, hot and cold. I used hot to create the pool, and it loved hot ever after. Sadly I needed it to prefer cold. My only solution was to hotswap them around in my chassis, so hot became cold and vice versa, thus conning DrivePool into using the cold one. The short answer is probably "love".
  13. Do a check disk instead, or open a cmd prompt and run, for example, "chkdsk D: /F".

    Regarding removing your drive: tread carefully. If you have space in the pool, you can do a normal removal (not forced), wait for the disk to empty, and then it'll remove itself safely. Forcing a removal is likely to lead to data loss unless you're 100% sure your files are duplicated, or the drive being removed is near enough empty already. If you're not sure, don't do it until you are 100% sure.

    If your normal drive removal works out OK-ish but halts before removal because of the corrupted folder, you should end up with just the corrupted folder on the disk, and the disk properties will show the disk space used as a few hundred MB. At that point all files should be duplicated elsewhere (check, and be sure), and then, and only then, should it be force-removed; a quick pre-check is sketched below.

    There's nothing like watching data vanish, and having to spend the next week breaking out 50 LTO tapes for a restore. I always like being 100% sure before I do something, and double-checking it.
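    As that pre-check, the same dpcmd log trick from the duplication check works (a sketch; P:\ is a placeholder for your pool letter, and "!" is the marker to look for in the log):

       dpcmd check-pool-fileparts P:\ 4 > before-removal.log
       rem any lines containing "!" flag file parts that need attention
       find /C "!" before-removal.log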
  14. Try a reboot first. Sometimes I find DrivePool's tiny brain gets overloaded, and that seems to help. Then view hidden files, go into the PoolPart folder, navigate to the folder location, delete it manually, and remeasure; there's a sketch below. I had a problem moving dozens of files DrivePool kept (incorrectly) flagging as viruses, which stopped my rebalancing. If that doesn't work, chkdsk-fix all the drives, reboot, then try again.

    I also had a permissions problem. That was too much hassle to fix, so I just did a force remove, formatted the removed drive, and re-added it. It's why I keep my pools as just a simple drive mirror, like 8x8TB hardware RAID-6 pooled with 4x12TB, or a 12TB single drive plus mirror. Adding rules and complexities just makes it more breakable, and more complex in my mind. Big pools sound nice in principle, but I like keeping track of where my data *actually is*. Hope it helps.

    I forgot to add: DrivePool hates me. It has a preference for using certain drives in my system (annoyingly, the hot-temperature drive in a mirror), so I have to hotswap them around to make the hot drive the cool one it loves.
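    From a cmd prompt, the manual delete looks roughly like this (a sketch; the PoolPart name and the folder path are placeholders, and rd /s removes the whole tree, so double-check the path first):

       rem PoolPart folders are hidden, so list hidden entries first
       dir D:\ /a:h
       rd /s "D:\PoolPart.xxxxxxxx\Path\To\StuckFolder"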
  15. Yes. From reading another post here, I think that if you fill out the contact form they deactivate it for you at their end, since you cannot, after which you should be able to reactivate it. Wish I could be more help; I haven't had to do it myself.