
klepp0906

Members
  • Posts

    107
  • Joined

  • Last visited

  • Days Won

    3

Everything posted by klepp0906

  1. I think I answered my own question based on this; it's expected behavior, I guess, to my chagrin. Anything over 16TB and you're getting 8K clusters, and as such losing the compression option: https://support.microsoft.com/en-us/topic/default-cluster-size-for-ntfs-fat-and-exfat-9772e6f1-e31a-00d7-e18f-73169155af95
  2. Okay, so with some digging it turns out ALL of my new 18TB Exos drives were formatted the same way as the rest of my drives: the simple GUI via Disk Management in Windows, with the allocation unit size left at default. That always resulted in 4096-byte (4K) clusters before. Apparently Windows sets 8K as the default for drives over a certain size, because these were all set to that, and when I tried a dry-run format the dialog doesn't even present 4K as an option. Now, assuming I wanted to back up 20TB of data and then move it back again... how would one (if one could) set a 4K cluster size if the GUI doesn't allow it? Would/could you do it in diskpart? Why is it not accepted outside of the command line? My second-largest drives are 12TB, and 4K is possible there and was the default. So somewhere between 12 and 18TB, for whatever reason, it's not a thing? I'm trying to understand whether I have an issue and something went awry, or whether this is expected/normal behavior. If it's the latter I can leave it; if not, I'll likely have to spend days shuffling data around and reformat the drives via whatever avenue gives me a 4K cluster size, if that's even possible. Even outside of the T: pool shown (two 18TBs), I also have two more 18TB disks which are parity drives: no pooling and nothing on them except a parity file each. Those too are set to 8K clusters by default with no option for 4K. What does it all mean? (Screenshots: all drives but the 18TB ones; all the 18TB disks.)
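The 16TB cutoff in the linked Microsoft table falls out of simple arithmetic: NTFS limits a volume to roughly 2^32 clusters, so the cluster size determines the maximum volume size. A minimal sketch of that arithmetic (the 2^32 figure is the commonly documented NTFS implementation limit; verify against the Microsoft table above):

```python
# Sketch: why an 18 TB NTFS volume defaults to 8K clusters.
# NTFS is limited to roughly 2**32 clusters per volume, so the maximum
# volume size is (2**32) * cluster size.

MAX_CLUSTERS = 2**32

def max_volume_bytes(cluster_size: int) -> int:
    """Largest NTFS volume addressable with the given cluster size."""
    return MAX_CLUSTERS * cluster_size

TIB = 1024**4

# 4K clusters top out at 16 TiB...
assert max_volume_bytes(4096) == 16 * TIB

# ...so an 18 TB drive is past the 4K limit, and the format dialog
# only offers 8K (32 TiB max) and larger.
eighteen_tb = 18 * 10**12
print(max_volume_bytes(4096) < eighteen_tb)   # True: 4K can't cover it
print(max_volume_bytes(8192) >= eighteen_tb)  # True: 8K can
```

This also explains why no GUI or command-line tool can produce a 4K-cluster NTFS volume at 18TB: it isn't a dialog restriction, it's a file-system limit.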
  3. I use DrivePool and have had no major issues until this oddity cropped up. All my non-pooled drives seem to support file-based compression fine, and all my pools have supported it fine as well (until now), but I recently purchased two Seagate Exos 18TB drives and made another pool. For some reason the new pool with the 18TB Exos drives does not support file-based compression. I checked with fsutil and via right-click > Properties on folders therein. I'm unsure if this is something expected from enterprise drives (all my others are non-enterprise)? The file system is NTFS, so I have no idea what is going on. Anyone have anything for me? (Screenshots: a non-pooled drive and two pools with compression working, the pool with compression not working, the resulting dialog on the drives and pools working properly (aka all but my T: pool), and the dialog on the problem child.)
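When checking this with fsutil, the number that matters is the "Bytes Per Cluster" line of `fsutil fsinfo ntfsinfo <drive>`: NTFS file compression is only offered when clusters are 4096 bytes or smaller. A small sketch that parses that line and applies the rule; the sample output below is illustrative, not captured from a real drive:

```python
import re

# Sketch: decide whether NTFS compression is available from the
# "Bytes Per Cluster" line of `fsutil fsinfo ntfsinfo <drive>` output.
# SAMPLE_OUTPUT is illustrative, not from a real drive.

SAMPLE_OUTPUT = """\
NTFS Volume Serial Number :       0x1234567890abcdef
Bytes Per Sector  :               512
Bytes Per Cluster :               8192
Bytes Per FileRecord Segment    : 1024
"""

def cluster_size(fsutil_text: str) -> int:
    m = re.search(r"Bytes Per Cluster\s*:\s*(\d+)", fsutil_text)
    if not m:
        raise ValueError("no 'Bytes Per Cluster' line found")
    return int(m.group(1))

def supports_compression(fsutil_text: str) -> bool:
    # NTFS compression requires clusters of 4096 bytes or less.
    return cluster_size(fsutil_text) <= 4096

print(cluster_size(SAMPLE_OUTPUT))          # 8192
print(supports_compression(SAMPLE_OUTPUT))  # False
```

So it isn't an enterprise-drive quirk: the 18TB disks simply formatted with 8K clusters by default, and that cluster size disqualifies compression.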
  4. But why was it not present before? The data on that pool has been unchanged, apart from a few MB, for a loooong time. More to the point, why is duplicated “metadata” showing on that pool but not on the other pool, which holds roughly as many TB across the same number of disks? Thanks for the reply regardless <3
  5. I have 2 pools. Neither uses any type of duplication. Both are the same size, and both are made up of the same number of drives. My Plex media pool shows Unduplicated and Other; my emulation pool shows Unduplicated and Other AND Duplicated, even though I have duplication disabled. It's only 454 KB worth, so it's not like I'm losing meaningful space, but I still want to know how I can tell what is being duplicated and why, and make that not the case. It wasn't like this last I checked, but I did just restore a Macrium image from this morning; I'm unsure if that mucked something up somehow, or if this automagically happened at some point along the way and I'm just now noticing. I'll take any insight I can get.
  6. So this is my first rodeo with any drive pooling solution, with SnapRAID, and with an external chassis; I've been sort of learning as I go. In my time with the new setup/config I've upgraded and replaced several drives. I've been taking the ones that were just upgrades and bagging them labeled with the power-on days and date of purchase, and for the ones I replaced due to age and/or errors, adding the error count to that info. What I have not been doing (perhaps unfortunately?) is formatting them before I pull them. That means the ones that were simply size upgrades, with low power-on hours and no errors, could potentially serve as replacements for failures down the road: backups, if you will. I also have some very small drives (2TB) with no errors, so I could swap in one of my cold 10 or 12TB drives for more space at any point. The problem is that those otherwise-good pulled drives have existing PoolPart folders. It's my understanding those will be picked up by DrivePool the minute I connect them to the PC? This would result in data being added to the pool that is redundant with what is on the drive that replaced it. I have no idea what the consequence of this would be. Files with something appended? Overwrites? Errors? How would I avoid this, or remedy whatever havoc it would wreak? Is my only option, should I decide to use one of them again, either a) connecting it to another PC to erase it first, or b) connecting it to my main PC with the DrivePool service disabled until I format it? Just looking for some insight, so when that day inevitably comes I know what I'm getting into ahead of time.
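One approach often suggested for retired pool-part drives is renaming the hidden PoolPart folder before the drive is ever reconnected, so the service has nothing to re-adopt. A sketch, assuming the folder follows the `PoolPart.<guid>` naming convention (verify that against your own drives; the `Retired.` prefix here is an arbitrary choice, and running this from another PC or with the DrivePool service stopped is still the safe route):

```python
import pathlib

# Sketch: neutralize leftover DrivePool pool parts on a pulled drive by
# renaming the hidden PoolPart.* folder so it won't be re-adopted.
# The "PoolPart.<guid>" naming is an assumption; check your drives.

def neutralize_poolparts(drive_root: str) -> list[str]:
    """Rename every PoolPart.* directory at the drive root; return new names."""
    renamed = []
    for part in pathlib.Path(drive_root).glob("PoolPart.*"):
        if part.is_dir():
            target = part.with_name("Retired." + part.name)
            part.rename(target)
            renamed.append(target.name)
    return renamed

# e.g. neutralize_poolparts("E:\\") before letting the service see the disk
```

The data stays intact and recoverable (rename the folder back), but the drive no longer looks like a pool member.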
  7. Re: WSL 2 support

     Well, that will only half present an issue, I guess. Worst case, I'd have to do some shuffling to and fro. Moving bash scripts off the pool is easy; unfortunately, most of the data I use bash scripts against is also on the pool. I suppose we'll see what comes first: an unrelenting need to use those scripts, or the means and implementation from the StableBit side. Time will tell.
  8. Re: WSL 2 support

     Worst developers ever! Joking aside, what does that mean exactly? I assume when using WSL it will only present an issue if you try to access a virtual disk/pool, and anything else will be business as usual? I still haven't re-installed WSL after moving to 11, but it's on my shortlist. I'm trying to know what I'm getting into, because moving away from DrivePool certainly won't be an option. If I just have to move some bash scripts off my pool to a standalone disk or something, I can live with that.
  9. Re: WSL 2 support

     So... is WSL support in now? I can't quite tell from the banter here, but I was just checking out the most recent beta patch notes and it says no support up top... Does "no support" mean unsupported, or does it mean it does not work? Or is that old info that hasn't been whittled out of the changelog template? :P
  10. Hey, good suggestion. I did find that it was incredibly hot. Holy smokes. I've long since added Kryonaut TIM and mounted a Noctua to it though, lol. That being said, after a whole slew of drive replacements, recreation of the pools, chkdsks, permissions resets, basically everything I could throw at it: so far so good, lol.
  11. So I'm trying to ascertain what is going on with one of my pools. It seems to be functioning more or less, but relative to my other pool, which is in the same chassis, made up of largely the same drives, and connected via the same means to the same controller, it's behaving differently in a few ways: 1) it re-measures on every restart, and 2) it displays as having 0 "duplicated". Now, my line of thinking is that somehow 2 is tied to 1. I do not enable duplication, but metadata is set to 3x duplication by default. This (I assume) accounts for the 680 KB of "duplicated" on my first pool, which also does not re-measure on every restart. My second pool shows 0 duplicated, and as such I can only deduce it has no metadata stored, so there is nothing to duplicate. I'm unsure if this is even possible, as I'm not wholly familiar with which metadata DrivePool stores or uses. Basically, I'm curious whether anyone's pool looks like this first screenshot, or whether they all look like the second? If the former exists for anyone, is it presenting any issue, like re-measuring after restarts? Outside of the constant re-measuring, which I already know is incorrect, I also want to know whether the apparent lack of metadata on my pool is abnormal, and rectify it before I reformat my PC, as I'd like to start problem-free.
  12. Yeah, I'm fairly novice myself, still working my way through getting everything functioning the way I prefer. Mostly there, but a few crucial things are evading me: namely the inability to get drives to spin down whatsoever, and a constant re-measuring of my pool on restart. Other than that, I'm ready for the long haul. I did try disabling the Scanner services entirely, as well as the CloudDrive services entirely; neither had any effect on my drives sleeping. The minute I killed the DrivePool service they immediately started spinning down, so it's exclusively DrivePool and DrivePool alone.
  13. I may or may not have been editing the non-override portion the first time I gave it a go. This time I had more time to go through the Covecube doc and was made aware of the override section; convenient and preferred for sure, and the setting also held after a restart. Unfortunately, changing it to false had no effect on my drives sleeping. I changed it to false, set my Scanner to a 60-minute throttle and to only check during a work window of 12am > 12pm, and 24 hours later: zero spindowns on any of my disks. Disable the DrivePool service in services.msc, and down they go as intended.
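For reference, the override edit described above can be scripted rather than done by hand. The per-key `{"Default": ..., "Override": ...}` layout and the Settings.json path below follow the Covecube advanced-settings documentation, but treat both the path and the schema as assumptions to verify against your own install:

```python
import json
import pathlib

# Sketch: flip the Override value of a DrivePool advanced setting in
# Settings.json. Path and per-key {"Default": ..., "Override": ...}
# layout are assumptions taken from the advanced-settings docs.

SETTINGS = pathlib.Path(r"C:\ProgramData\StableBit DrivePool\Service\Settings.json")

def set_override(path: pathlib.Path, key: str, value) -> None:
    data = json.loads(path.read_text())
    data[key]["Override"] = value   # leave "Default" untouched
    path.write_text(json.dumps(data, indent=2))

# e.g. set_override(SETTINGS, "BitLocker_PoolPartUnlockDetect", False)
# then restart the StableBit DrivePool service for it to take effect.
```

Editing the Override field rather than Default is the point: overrides survive upgrades and make it obvious which settings you changed.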
  14. Re: Drive Sleep

     Gotcha. Thanks anyway. This is one of my larger gripes with DrivePool. I've tested on a brand new, fresh Windows 11 install: added DrivePool and sleep immediately stopped. I throttled SMART scanning via the timer and work window, disabled BitLocker detection in the JSON, and still had no sleep on any of my drives 24 hours later. I disabled the DrivePool service specifically, and they immediately started sleeping. This should be a higher priority for the dev, I'd think. Summer months are coming, and people using DrivePool tend to run lots of drives in small spaces in a non-enterprise environment. It gets ~75°F in my upstairs office, and with 24 7200rpm disks in a chassis, I'd prefer them spun down if nobody is Plex'ing, etc.
  15. I have 2 pools, and one re-measures without fail every time I restart the PC. I've reset permissions for the entire pool; I've also changed CoveFs_WaitForKnownPoolPartsOnMountMs and CoveFs_WaitForVolumesOnMountMs from 10000 to 60000 in an attempt to make it stop. I've set the DrivePool service to delayed start. Without fail it continues; I'm out of ideas. If I manually measure the pool, it takes about 4 seconds, literally, for ~30TB of data on a 75TB pool made up of 11 disks. Does this seem right? Or does it seem like it's not even measuring the pool? Lord knows my other pool, with just as much data, takes a lot longer.
  16. Pretty much universally no, lol. Intel system, and I'm connected via HBA. It does seem tied to high-stress situations, though; I have not seen it happen during general use that I can recall, but it's pretty frequent when accessing everything at once or under really heavy I/O.
  17. I'll be the first to admit this is relatively new ground for me. That being said, SMART is evidently a standard, so you'd think most things would be uniform regardless of medium/platform. If I use SnapRAID and/or smartctl to read my SMART data, it puts one of my drives at a 100% chance of failure within the next year. It has 43 uncorrectable errors, and the SMART wiki itself lists uncorrectable errors as one of the several "critical" attributes. A quick Google and the consensus is "I'd replace a disk with errors ASAP." I decided to check Scanner, as I'd received no warnings or anything of the nature. Scanner sees the 43 errors but apparently determines that to be just fine. Which is right?
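For triaging readings like the one above, a common approach is to look only at the handful of attributes the S.M.A.R.T. Wikipedia table marks as critical. A rough sketch; the attribute IDs below are the conventional ones, but vendors vary in how they populate raw values, so treat this as triage rather than a verdict on whether the disk is failing:

```python
# Sketch: flag SMART attributes commonly treated as "critical"
# (per the S.M.A.R.T. Wikipedia table). IDs are conventional; vendors
# vary, so this is rough triage, not a failure prediction.

CRITICAL_IDS = {
    5:   "Reallocated Sectors Count",
    10:  "Spin Retry Count",
    184: "End-to-End Error",
    187: "Reported Uncorrectable Errors",
    196: "Reallocation Event Count",
    197: "Current Pending Sector Count",
    198: "Offline Uncorrectable Sector Count",
}

def triage(raw_values: dict[int, int]) -> list[str]:
    """Return a warning line for each nonzero critical attribute."""
    return [
        f"attr {attr_id} ({CRITICAL_IDS[attr_id]}): raw={raw}"
        for attr_id, raw in sorted(raw_values.items())
        if attr_id in CRITICAL_IDS and raw > 0
    ]

# Hypothetical readings resembling the drive in the post:
print(triage({187: 43, 5: 0, 197: 0}))
```

The disagreement in the post likely comes down to weighting: tools like SnapRAID's failure-probability model weight nonzero critical attributes heavily, while other tools may only alert on thresholds or on values that are still increasing.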
  18. Does this generally indicate failing disks? I've noticed on two occasions now that I've had 2 separate drives drop from the pool. A restart didn't bring them back; only a shutdown and restart of my JBOD/chassis, which otherwise keeps the disks powered on 24/7, did. They tend to drop during heavy use (during a SnapRAID scrub, in this case). I'm just trying to figure out whether that's likely the issue before I go dropping the exorbitant cost of a few hard drives that may not fix it. I assume it's likely, but with no warnings from Scanner, SMART, or otherwise, I'd love some reinforcement one way or the other.
  19. Mkay, I was suckered in, mostly to support Covecube (they totally need to change the company name to StableBit; it's confusing), but I'd be lying if I didn't say it was also in part to keep remote notifications on my phone, etc., just in case. Now, since I feel like I'm on the board, where's my dark mode? Speak up, fellow shareholders!~
  20. Anyone able to bring me up to date on this? I have two pools, 11 disks each. Every time I restart the computer, one of them re-measures; the other does not. EDIT: I've since removed a few older drives and upgraded them to 12TB Exos, which did not solve the issue. I've also reset the permissions on the entire pool, with no luck. Gonna try that Settings.json edit, even though I can't understand why I'd need it on one pool and not the other. The irony is that the pool that SHOULD be slower (millions of small files) is the one that does not have this issue, while the pool with all large media files does.
  21. I did not. I got sidetracked and just disabled it for now. I'll post back when I get around to revisiting it, should I get it sorted.
  22. I'll try editing it and restarting without restarting the service; perhaps something cached is rewriting it, or it's being synced from Scanner or something. You're right, waiting for spin-up can be annoying; you have to pick your battles, I guess. A lot of disks (27 in my case) means a lot of heat, power, etc. Some drives hold data I use once every few days or even less, so I think you move beyond the tipping point as far as wear and tear from perpetual spinning vs. spinning down. If nothing else, I like things to function as they should, and to know I have the option to go back and forth in the future. Of course, the other angle is what disabling the setting actually does. Apparently it has something to do with detecting encrypted drives, which I don't use, but if I did, what would the result be? An error? A crash?
  23. Mmm, per-device pricing is rough. For a handful of devices I'd prefer xxx a month up to 2 or 3 devices, then 1/m per additional device. I'll have to look into it and mull it over more. While I like the idea of supporting StableBit a bit more, nowadays I'm "subscriptioned" to death; literally, it's insane. A dollar a month I can submit to, but I'm not going to pay simply to cover one device, lol. Thanks for the link.
  24. I cannot figure this out. I get notifications and they pile up: always one pool having been measured/completed, over and over, 4 of them last I checked, and 0 for my other pool. Is this done as a maintenance sort of thing on a schedule, or only when files are added? I can only assume it has to be the latter, right? That's all that would make sense in my scenario, as the pools share the exact same settings through and through; all that differs is that the latter has had some media added, while the former has been static for now.
  25. Ugh... talk about triggering. I have 2 pools. One has very little other data relative to the media present, maybe 3GB against 40TB of actual data. I have another pool that also holds about 40TB, and after a full on-disk equalizer balance pass it removed it all and showed 0 other data, which was great, as I didn't expect there to be any outside of the necessary metadata made by DrivePool. I ended up doing a re-measure, and now it's showing 23.5GB... from zero after balancing to 23.5GB. Should I assume the latter number is correct, and that the first time it didn't calculate it for some reason? And is 23.5GB an unreasonable amount for metadata against this volume of data? Since both pools have the same number of disks, the same total storage, and the same size of disks, the only thing I can come up with is that the metadata is generated in blocks: on the pool with barely any "other" data, the files are far fewer in number because they're big, whereas the pool showing ~24GB of "other" data holds a few million files, I believe, lots of small stuff. Is that the culprit, or do I have something else going on?
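The small-files theory is at least plausible on slack-space grounds: on average each file wastes about half a cluster, so millions of small files cost gigabytes of overhead before any DrivePool metadata is counted. A back-of-the-envelope sketch with a made-up file count (whether DrivePool actually reports slack under "Other" is a separate question worth confirming):

```python
# Sketch: rough estimate of cluster slack on a pool holding millions of
# small files. Each file wastes ~half a cluster on average. The file
# count below is a made-up illustration, not measured from the pool.

def slack_estimate(file_count: int, cluster_size: int = 4096) -> float:
    """Approximate wasted bytes, assuming ~half a cluster of slack per file."""
    return file_count * cluster_size / 2

# A few million small files at 4K clusters already costs gigabytes:
print(round(slack_estimate(3_000_000) / 1024**3, 2))  # 5.72 (GiB)
```

At that rate, tens of gigabytes of overhead on a multi-million-file pool is unsurprising, while a pool of large media files would show almost none.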