klepp0906 last won the day on August 30

  1. Think I answered my own question based on this; it's expected behavior, I guess, to my chagrin. Anything over 16TB and you're getting 8K clusters, and as such losing the compression option: https://support.microsoft.com/en-us/topic/default-cluster-size-for-ntfs-fat-and-exfat-9772e6f1-e31a-00d7-e18f-73169155af95
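The cutoff described in the linked Microsoft article can be sketched like this (a minimal illustration of the default cluster-size table and the 4K compression limit; the function names are mine, not from any API):

```python
# Sketch of the default NTFS cluster-size table from the linked Microsoft
# article, plus the rule that NTFS file compression needs clusters <= 4 KB.
TIB = 2**40

def default_ntfs_cluster(volume_bytes):
    """Default NTFS allocation unit size (bytes) Windows picks at format time."""
    if volume_bytes <= 16 * TIB:
        return 4096
    if volume_bytes <= 32 * TIB:
        return 8192
    if volume_bytes <= 64 * TIB:
        return 16384
    if volume_bytes <= 128 * TIB:
        return 32768
    return 65536

def supports_compression(cluster_bytes):
    # NTFS file-based compression is only available at 4 KB clusters or smaller.
    return cluster_bytes <= 4096

eighteen_tb = 18 * 10**12  # an 18 TB Exos, in decimal bytes (~16.4 TiB)
print(default_ntfs_cluster(eighteen_tb))                         # 8192
print(supports_compression(default_ntfs_cluster(eighteen_tb)))   # False
```

An 18 TB (decimal) drive is already past the 16 TiB line, so it lands in the 8K bracket and loses compression, while a 12 TB drive stays in the 4K bracket.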
  2. Okay, so with some digging it turns out ALL of my new 18TB Exos were formatted the same way as the rest of my drives: the simple GUI via Disk Management in Windows, with the allocation unit size set to default. That always resulted in 4096/4K clusters before, but apparently Windows sets 8K for drives over a certain size, because these were all set to that as the default, and when I tried a dry-run format, the dialogue doesn't even present 4K as an option.

     Now, assuming I wanted to back up 20TB of data then move it back again... how would one (if one could) set a 4K cluster size if the GUI doesn't allow it? Would/could you do it in diskpart? Why is it not accepted outside of the command line? My 2nd-largest drives are 12TB, and 4K is possible and was the default there. So somewhere between 12 and 18TB, for whatever reason, it's not a thing?

     Trying to understand if I have an issue and something went awry, or if this is expected/normal behavior. If it's the latter I can leave it; if it's not, I'll likely have to spend days transferring data around and reformat the things via whatever avenue affords me a 4K cluster size, if that's even possible.

     Even outside of the shown T: pool (2x 18TB), I also have 2 more 18TB disks which are parity drives: no pooling and nothing on them except a parity file each. Those too are set to 8K clusters by default, with no option for 4K. What does it all mean?

     (screenshots: all drives but the 18TB'ers; all the 18TB disks)
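For what it's worth, a likely reason the dialogue can't offer 4K here (a sketch, assuming the usual 32-bit cluster-addressing limit in NTFS): a volume can hold at most roughly 2^32 clusters, so the cluster size caps the volume size.

```python
# Why the format dialog can't offer 4 KB on an 18 TB disk: NTFS addresses
# clusters with a 32-bit number, so a volume holds at most ~2**32 clusters.
MAX_CLUSTERS = 2**32  # approximate upper bound on clusters per NTFS volume

def max_volume_bytes(cluster_bytes):
    return cluster_bytes * MAX_CLUSTERS

TIB = 2**40
print(max_volume_bytes(4096) / TIB)            # 16.0 -> 4 KB clusters top out at 16 TiB
print(18 * 10**12 <= max_volume_bytes(4096))   # False: 18 TB won't fit at 4 KB
print(18 * 10**12 <= max_volume_bytes(8192))   # True: 8 KB clusters cover it
```

Under that assumption it isn't a GUI-vs-diskpart thing: no tool could format an 18TB volume at 4K, which would also explain why the 12TB drives were fine.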
  3. I use DrivePool and have had no major issues until this oddity cropped up. All my non-pooled drives seem to support file-based compression fine, and all my pools have supported it fine as well (until now), but I recently purchased 2 Seagate Exos 18TB drives and made another pool. For some reason the new pool with the 18TB Exos does not support file-based compression. Checked with fsutil, and checked via right-click > Properties on folders therein. I'm unsure if this is something expected from enterprise drives or? (All my others are non-enterprise.) The file system is NTFS, so no idea what is going on. Anyone have anything for me?

     (screenshots: non-pooled drive; pool with compression working; pool with compression working; pool with compression not working)

     Results in this on the drives and pools working properly (aka all but my T: pool), and this on the problem child.
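For checking the underlying drives, `fsutil fsinfo ntfsinfo X:` prints a "Bytes Per Cluster" line; a throwaway parser for that might look like this (the sample line below is illustrative only, and the exact spacing/format may differ by Windows version):

```python
import re

# Hypothetical helper: pull "Bytes Per Cluster" out of captured
# `fsutil fsinfo ntfsinfo X:` output and infer whether NTFS compression
# can be enabled on that volume (requires <= 4 KB clusters).
def cluster_from_fsutil(output: str) -> int:
    m = re.search(r"Bytes Per Cluster\s*:\s*([\d,]+)", output)
    if not m:
        raise ValueError("no 'Bytes Per Cluster' line found")
    return int(m.group(1).replace(",", ""))

sample = "Bytes Per Cluster :                8192"   # illustrative line only
cluster = cluster_from_fsutil(sample)
print(cluster, "compression possible:", cluster <= 4096)
```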
  4. But why was it not present before? The data on that pool has been unchanged, apart from a few MB, for a loooong time. Moreover, why is there duplicated "metadata" showing on that pool but not on the other pool, which holds roughly just as many TB and the same number of disks, etc.? Thanks for the reply regardless <3
  5. I have 2 pools. Neither uses any type of duplication. Both are the same size, and both are made up of the same number of drives. My Plex media pool shows Unduplicated and Other; my emulation pool shows Unduplicated and Other AND Duplicated, even though I have it disabled. It's only 454 KB worth, so it's not like I'm losing meaningful space; I still want to know how I can tell what is being duplicated, and why, and make that not the case. It wasn't like this last I checked, but I did just restore a Macrium image from this morning; unsure if that mucked something up somehow, or if this automagically happened at some point along the way and I'm just now noticing. I'll take any insight I can get.
  6. So this is my first rodeo with any drive-pooling solution, and with SnapRAID, and with an external chassis; I've been sort of learning as I go. In my time with the new setup/config I've upgraded and replaced several drives. I've been taking the ones that were just upgrades and bagging them with the power-on days and date of purchase, and taking the ones I replaced due to age and/or errors and adding the error count to the former info.

     What I have not been doing (perhaps unfortunately?) is formatting them before I pull them. Meaning the ones that were simply size upgrades, with lower power-on hours and no errors, I could potentially use as replacements for failures down the road. Backups, if you will. I also have some very small drives (2TB) with no errors; I could take one of my cold 10 or 12TB drives and decide to add it in for more space at any point.

     The problem is that those drives I pulled that are otherwise good have existing PoolPart folders. It's my understanding those will be picked up by DrivePool the minute I connect them to the PC? This would result in data being added to the pool that is redundant with what is on the drive that replaced it. I have no idea what the consequence of this would be: files with something appended? Overwrites? Errors?

     How would I avoid this, or remedy whatever havoc it would wreak? Is my only option, should I decide to use one of them again, either A) connect it to another PC to erase it first, or B) connect it to my main PC but disable the DrivePool service when I do, until I format it? Just looking for some insight, so when that day inevitably comes I know what I'm getting into ahead of time.
  7. WSL 2 support

     Well, that will only half-present an issue, I guess. Worst case, I'd have to do some shuffling to and fro. Moving bash scripts off the pool is easy; unfortunately, most of the data I use bash scripts against is also on the pool. I suppose we'll see what comes first: an unrelenting need to use those scripts, or the means and implementation from the StableBit side. Time will tell.
  8. WSL 2 support

     Worst developers ever! Joking aside, what does that mean exactly? I assume when using WSL it will only present an issue if you try to access a virtual disk/pool? Anything else will be business as usual? I still haven't re-installed WSL after moving to 11, but it's on my shortlist. Trying to know what I'm getting into, because moving away from DrivePool certainly won't be an option. If I just have to move some bash scripts off my pool to a standalone disk or something, I can live with that.
  9. WSL 2 support

     So... is WSL support in now? I can't quite tell from the banter here, but I was just checking out the most recent beta patch notes and it says no support up top... Does "no support" mean unsupported, or does it mean it doesn't work? Or is that old info that hasn't been whittled from the changelog template? :P
  10. Hey, good suggestion. I did find that it was incredibly hot. Holy smokes. I've long since added Kryonaut TIM and mounted a Noctua to it, though, lol. That being said, after a whole slew of drive replacements, recreation of the pools, chkdsks, permissions resets... basically everything I could throw at it. So far so good, lol.
  11. So I'm trying to ascertain what is going on with one of my pools. It seems to be functioning, more or less, but relative to my other pool, which is in the same chassis, made up of largely the same drives, and connected via the same means to the same controller, it's behaving differently in a few ways: 1) it re-measures on every restart, and 2) it displays as having 0 "duplicated".

      Now, my line of thinking is that somehow 2 is tied to 1. I do not enable duplication, but metadata is set to 3x by default. This (I assume) accounts for the 680KB worth of "duplicated" on my 1st pool, which also does not re-measure on every restart. My second pool shows 0 duplicated, and as such I can only deduce it has no metadata stored, so it's not duplicating it. I'm unsure if this is... possible? As I'm not wholly familiar with which metadata DrivePool stores or uses.

      Basically, I'm curious whether anyone's pool looks like this screenshot, or whether they all look like this? If the former exists for anyone, is it presenting any issue like re-measuring after restarts? Outside of the constant re-measuring, which I already know is incorrect, I also want to know if the apparent lack of metadata on my pool is abnormal, and try to rectify it before I reformat my PC, as I'd like to start problem-free.
  12. Yeah, I'm fairly novice myself. Still working my way through trying to get everything functioning the way I prefer. Mostly there, but a few crucial things are evading me: namely, the inability to get drives to spin down whatsoever, and a constant re-measuring of my pool on restart. Other than that, I'm ready for the long haul. I did try to disable the Scanner services entirely, as well as the CloudDrive services entirely; neither had any effect on my drives sleeping. The minute I killed the DrivePool service they immediately started spinning down, though, so it's exclusively DrivePool and DrivePool alone.
  13. I may or may not have been editing the non-override portion the first time I gave it a go. This time I had more time to go through the Covecube doc and was made aware of the override section. Convenient and preferred, for sure, and the setting also held after a restart. Unfortunately, changing it to false had no effect on my drives sleeping. I changed it to false, set my Scanner to a 60-minute throttle and to only check during a work window of 12am > 12pm, and 24 hours later: 0 spin-downs on any of my disks. Disable the DrivePool service in services.msc, and down they go as intended.
  14. Drive Sleep

      Gotcha, thanks anyway. This is one of my larger gripes with DrivePool. I've tested on a brand-new, fresh Windows 11 install: added DrivePool, and sleep immediately stopped. Throttled SMART scanning through the timer and work window, disabled BitLocker detection in the JSON, and still no sleep on any of my drives 24 hours later. Disabled the DrivePool service specifically, and they immediately started sleeping. This should be a higher priority for the dev, I'd think. Summer months are coming, and people using DrivePool tend to be using lots of drives in small spaces in a non-enterprise environment. It gets ~75°F in my upstairs office, and with 24 7200rpm disks in a chassis, I'd prefer them spun down if nobody is Plexing, etc.
  15. I have 2 pools, and 1 re-measures without fail every time I restart the PC. I've reset permissions for the entire pool, and I've also changed CoveFs_WaitForKnownPoolPartsOnMountMs and CoveFs_WaitForVolumesOnMountMs from 10000 to 60000 in an attempt to make it stop. I've set the DrivePool service to delayed start. Without fail, it continues; I'm out of ideas. If I manually measure the pool, it takes about 4 seconds. Literally. For ~30TB of data on a 75TB pool made up of 11 disks. Does this seem right? Or does it seem like it's not even measuring the pool? Lord knows my other pool with just as much data takes a lot longer.