thepregnantgod

Members
  • Posts: 180
  • Joined
  • Last visited
  • Days Won: 5

Everything posted by thepregnantgod

  1. 38 drives connected to my server. Scanner has been scanning non-stop for almost a week now, and I was one drive away from having scanned every drive for errors. Then the system hung, I had to reboot, and now Scanner has no record of scanning any of the drives and has started all over. Uggh...
  2. I'm rebuilding my Plex server and am curious if there's a reason I should switch my Win10 x64 for a Server platform.
  3. Drashna, I have a 152TB pool (35 drives). Four of those drives are USB, so when I remotely rebooted my system from vacation they didn't come back up and registered as missing. (I thought, that's fine - the pool is triple duplicated, so access will just be locked until I return.) I came home to find my pool running out of space. I have 40TB of data triple duplicated, so give or take I should have about 30TB of space available, since 40x3=120 out of 152TB. Would the system, with four disks missing, simply have checked the remaining disks, seen files that weren't triple duplicated, and duplicated them a third time onto the remaining drives? Otherwise I can't explain where all the space went. In fact, using TreeFileSize in admin mode, I confirmed only 40.2TB of data, but the pool is showing only 8TB free out of 152TB. (A quick space-math sketch follows after the post list.)
  4. Not sure if this is possible or easy... Right now I have my pool 3x duplicated (upgraded from 2x duplicated) and I have 1.51TB marked in a dark blue which is keyed as "unduplicated." Does that mean it's only x1 on the pool, or it's x2 but not yet x3 duplicated? If you had colors for how many copies of a file are on the pool that would be great! (i.e. Red for 1x copy, Blue x2 Copy, Green x3 Copy, etc.) * Of course, not sure how this would work if folks are doing a full pool duplication like I am and instead duplicating some folders only.
  5. Drashna, I'm curious if there is a way to speed up measuring and duplicating. I have 38 drives (over 152TB), mostly greens, in a single pool. The entire pool was duplicated twice, but considering the moving parts I changed the duplication to three times. Then one of my pain-in-the-arse drives decided to go RAW. I don't think I lost any data because everything was already duplicated twice. But now it's taking forever to measure, check duplication, and then start duplicating again. It's not system power, since the drives are attached to a 1950X at 4.1GHz with 96GB of RAM. And by a long time I mean more than 24 hours of measuring time...
  6. Drashna, a question for you: playing with AS SSD to see the read/write speeds of various cluster sizes and ReFS/NTFS, I noticed that the AS SSD benchmark tool shows in red or green whether a drive is aligned correctly. What I've noticed is that my ReFS pool (64K cluster size) and my NTFS pool (2MB cluster size) are both reported as misaligned: 32785K BAD. However, when I add a drive letter to the ReFS 64K drive alone, it is aligned correctly: 132096K OK. When I add a drive letter to the NTFS 2MB drive alone, it is not aligned correctly: 132096K BAD. Am I missing something? I've read that to get the best performance from a drive it should be aligned correctly. I can assume that the 2MB cluster size (being new) is not aligned correctly. But if the ReFS 64K cluster drive is aligned correctly, shouldn't the pool with only ReFS 64K drives also be aligned correctly? Thanks ahead of time for your answer. (See the alignment-check sketch after the post list.)
  7. I installed the newest RC with the SSD plugins, and despite copying files over from my NVMe to an SSD-backed duplicated pool (with two SSDs set), it still shows the files being copied to a slower drive. I then installed the Ordered File Placement plugin and made that one drive specifically the target - still, no matter what, it goes to that one drive. 1. Is it because I'm copying to a specific folder and that folder is located on that drive? 2. Does it matter that the drives I'm trying to have files placed on are newly formatted ReFS drives in a mixed pool?
  8. For some reason, my Disk 11 (4TB) was not in the pool. I attempted to re-add it after I added a drive letter and confirmed all the data was still there (and checked it for errors). It got to 90% then said "can't add the same disk twice." I was going to copy all the data over to the pool, then format and add it that way, but that would take quite some time. Here's my question - I did the following: 1. Went to the hidden "PoolPart..." folder and cut all the folders within. 2. Pasted them to the root of that individual drive. 3. Took ownership of the PoolPart folder, then deleted it. 4. Added the drive back to the pool. Did I mess anything up - some internal database of duplication and such? (A sketch of these manual steps follows after the post list.)
  9. RES2SV240 vs RES2SV240NC - what's the difference? (Edit: the NC model comes with no cables included.) And do I need to do something with MPIO to avoid this error: Disk 8 has the same disk identifiers as one or more disks connected to the system.
  10. Also, do you have any recommendations for a 32-port (SFF-8088 end connection) card? I'm looking - I know the name brands like ARECA and such, but can't find any with 32 ports.
  11. Thanks, Drashna. This server hobby of mine is expensive enough; I went with the cheapest 32-port card I could find. Uggh... It only seems to happen when I flood the bus with hundreds of gigabytes of writes. Once I get things set, perhaps it won't happen anymore.
  12. This isn't a DrivePool question, but some of you who frequent these forums have setups similar to mine, and I'm curious what to do. I recently upgraded to a HighPoint DC7280 32-port card. I'm experiencing what I can only call bus-saturation issues that lock up the entire system. The event is ID 9: "The device, \Device\Scsi\dc72801, did not respond within the timeout period." There are no settings that I know of for configuring the card, and HighPoint is not known for their customer service. Has anyone else had a card time out and soft-lock their system? (See the event-log sketch after the post list.)
  13. To chime in a little late: I wanted to convert to ReFS. I got halfway through and found that when the power pops (kids running the vacuum or something), some of my drives turned RAW, and it was nearly impossible to get any data off them. I converted back to NTFS. (There's a price to being an early adopter.)
  14. I did that, Drashna. Still no balancing. Your thoughts on fully uninstalling - deleting settings, etc. and then starting over?
  15. Not sure why, but my pool isn't balancing. I have two plugins prioritized: the SSD Optimizer with 3 SSDs (same size), since I have the pool triplicated, and the Disk Space Equalizer next, with percent used as the selection. Nevertheless, it's not balancing. I'm using the most recent beta. I tried adding the most recent plugins, but the installer doesn't recognize the beta and refuses to install, saying something like "needs ver 2.0."
  16. Thanks. I'm chalking it up to my horrible fascination with new tech and converting drives to ReFS. All my drives are above 2TB and are now formatted as NTFS with GPT. I'm hoping this problem will go away.
  17. I don't think this is a product of DrivePool, but I know many of you guys are storage fanatics and might have faced this before... I routinely find a drive that loses its partition and goes RAW. This is a huge pain that I'd like to remedy. 1. What, in your experience, can cause this? 2. What is the easiest way to rebuild the partition and have the drive be good again? 3. Do ReFS partitions affect recovery or the probability of this happening?
  18. Just a suggestion for future builds: 1. If a drive has SMART errors, highlight that row in yellow; if it's damaged, highlight the row in red. 2. Provide a column (much like temp, case location, drive location, etc.) for warranty date - I'm currently using Drive Location for this, so I have a quick view of when I need to return a drive that's misbehaving.
  19. I use some older drives that develop bad sectors. I've toyed with switching to ReFS and back again, but I'm curious how ReFS handles a physically bad sector. Does it detect it beforehand and then make sure there's a copy somewhere else? I think this is different from bit rot, no?
  20. Drashna (or others), I am rebuilding a pool and experiencing something weird. A large portion of my pool says "unusable for duplication" - like 4.16TB out of 14TB. Now, 3 of the drives (totaling 1.8TB) are SSD cache drives, so I could understand that much, but why is so much unusable for duplication? (See the unusable-space sketch after the post list.)
  21. Nope, 2048K - it's new; the limit used to be 64K, with the default being 4K.
  22. I was doing some maintenance and noticed a new cluster-size option. Has anyone else tried this yet? Most of my pool is large files (mkvs and such), and I'm curious if this would help performance. I'm not worried about space lost to small files. (See the slack-space sketch after the post list.)
  23. They removed it, again! And I'm halfway through a full conversion of my drives (memleak be damned). Now, short of installing Windows again, is there a way to format the drives as ReFS? I can unplug them and do it via another PC, but that's a pain...
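
A minimal space-math sketch for post 3, in Python: with x3 duplication every terabyte of data occupies three terabytes of pool capacity, so the 40.2TB measured there should leave roughly 31TB free on a 152TB pool, not 8TB. The numbers are the ones quoted in the post; the script only illustrates the arithmetic, not DrivePool's actual accounting (non-pooled "Other" data and overhead are ignored).

```python
# Rough pool-space arithmetic for N-times duplication (illustration only).
pool_capacity_tb = 152      # total pool size quoted in the post
data_tb = 40.2              # unique data the post measured
duplication_factor = 3      # x3 duplication

consumed_tb = data_tb * duplication_factor           # space all copies should use
expected_free_tb = pool_capacity_tb - consumed_tb    # what "free" should roughly read

print(f"Expected consumed: {consumed_tb:.1f} TB")      # 120.6 TB
print(f"Expected free:     {expected_free_tb:.1f} TB")  # 31.4 TB vs. the 8 TB reported
```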
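
For post 6, a small check of what offsets like 32785K and 132096K imply. The assumption here is that "aligned" means the partition starts on a 1 MiB (1024 KiB) boundary, which is the Windows default; AS SSD's exact rule may differ, so treat this as a sketch of the idea rather than a reproduction of the tool.

```python
# Partition-alignment check (assumption: aligned = starts on a 1 MiB boundary).
def is_aligned(offset_kib: int, boundary_kib: int = 1024) -> bool:
    return offset_kib % boundary_kib == 0

for offset in (32785, 132096):   # the two offsets reported in the post
    print(offset, "OK" if is_aligned(offset) else "BAD")
# 32785 BAD  (32785 is odd, so it is not even 4 KiB aligned)
# 132096 OK  (132096 / 1024 = 129, a clean 1 MiB boundary)
```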
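
The manual steps in post 8 (cut everything out of the hidden PoolPart folder, delete it, re-add the disk) roughly correspond to the sketch below. The drive letter is hypothetical, the PoolPart folder name is matched by wildcard, and nothing like this should be run without a verified backup; it illustrates the described procedure, not an officially supported operation.

```python
# Sketch of the manual "re-seed" steps described in the post.
import shutil
from pathlib import Path

drive_root = Path("E:/")                          # hypothetical drive letter
poolpart = next(drive_root.glob("PoolPart.*"))    # the hidden pool folder

for item in poolpart.iterdir():
    # Move each top-level file/folder up to the drive root.
    shutil.move(str(item), str(drive_root / item.name))

# Once the move is verified, the now-empty PoolPart folder can be removed
# (ownership may need to be taken first, as the post notes), and the disk
# re-added to the pool through the DrivePool UI.
poolpart.rmdir()
```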
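
For the Event ID 9 timeouts in post 12, it can help to pull the matching System-log entries and line their timestamps up against heavy write sessions. The snippet shells out to the built-in wevtutil tool; filtering only on EventID=9 is an assumption made for the sketch - adding the provider name of the DC7280 driver would narrow it further.

```python
# List the 20 most recent Event ID 9 entries from the System log (newest first).
import subprocess

query = "*[System[(EventID=9)]]"
result = subprocess.run(
    ["wevtutil", "qe", "System", f"/q:{query}", "/f:text", "/c:20", "/rd:true"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```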
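
Post 20 asks where "unusable for duplication" comes from. One common cause, sketched below for x2 duplication (not DrivePool's actual algorithm), is uneven free space: every copy of a file has to land on a different disk, so free space on the largest disk beyond the combined free space of all the others can never hold a duplicate. The drive sizes in the example are made up.

```python
# Illustration of "unusable for duplication" space under x2 duplication.
def unusable_for_x2_duplication(free_per_disk_tb):
    largest = max(free_per_disk_tb)
    rest = sum(free_per_disk_tb) - largest
    # Space on the biggest disk that has no partner disk to hold the copy.
    return max(0.0, largest - rest)

free_tb = [8.0, 1.0, 1.0, 0.5]   # hypothetical free space per disk
print(f"{unusable_for_x2_duplication(free_tb):.1f} TB unusable for duplication")
# 5.5 TB unusable -- the big drive has no partner left for the second copy
```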
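
On the 2048K cluster question in posts 21-22: the expected waste is roughly half a cluster per file, so with multi-gigabyte mkv files even a 2MB cluster costs a negligible fraction of the data. The file sizes below are made up, purely to show the scale of the slack.

```python
# Rough slack-space estimate for different cluster sizes.
import math

def slack_bytes(file_size: int, cluster_size: int) -> int:
    # Bytes wasted in the final, partially filled cluster of one file.
    return math.ceil(file_size / cluster_size) * cluster_size - file_size

files = [8_000_000_000, 12_500_000_000, 25_000_000_000]   # hypothetical mkv sizes
for cluster in (4 * 1024, 64 * 1024, 2048 * 1024):        # 4K, 64K, 2048K clusters
    wasted = sum(slack_bytes(f, cluster) for f in files)
    share = wasted / sum(files) * 100
    print(f"{cluster // 1024:>5}K clusters: {wasted / 1e6:6.2f} MB slack ({share:.5f}%)")
```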