Covecube Inc.

duelistjp

Members
  • Content Count: 28
  • Joined
  • Last visited
  • Days Won: 1
  1. Might have been because, while I was messing with the move to a new computer, I had balancing disabled. Version 2.1.1.561.
  2. Time to show off my new system. Honestly a bit overboard for me, but I don't want to have to do much with it for a while.

     Case: Supermicro 4U, 36× 3.5" bays (CSE-847E16-R1K28LPB)
     Motherboard: Supermicro X9DRH-iF
     Processors: 2× Intel Xeon E5-2660 V1 @ 2.2 GHz, 8 cores / 16 threads each
     Memory: 128 GB DDR3 ECC
     HBA: LSI 9211-8i
     Power supply: 1280 W redundant (PWS-1K28P-SQ)

     Still breaking her in, but I'm sure my "Linux ISOs" will love their new home. I have a 2 TB WD Red drive for downloads before they're copied to my pool. The pool has SnapRAID two-drive parity using two 8 TB WD Reds I shucked from Best Buy. The data array is four 8 TB Seagate Archive drives, one 4 TB HGST Deskstar NAS drive, two 4 TB WD Reds, and two 3 TB WD Reds. Home internet is 1 Gb/s down, 250 Mb/s up fiber. Running Radarr, Sonarr, Deluge, SABnzbd, and Plex on it. I've got some plans to virtualize a home lab to play around with and pick up some IT skills to complement my Computer Science degree.
  3. Thanks. Currently freeing up 25 TB. I found that after I turned it off, it took a few minutes and then gave me a message down near the yellow bar that file duplication was inconsistent, with an option to fix it. That seems to be working.
  4. I think the problem might be related to this one with the SSD Optimizer; it seems pretty similar. http://community.covecube.com/index.php?/topic/1881-ssd-optimizer-problem/
  5. duelistjp

    ReFS in pool

    Any idea when Server 2016 will support the new version?
  6. I use a PowerShell script to put the tree output into two files: one with just the directories, so it's more easily browsable, and one with full file listings. I use Task Scheduler to run it weekly, basically on the assumption that replacing a week's worth of files with Sonarr/Radarr isn't too bad. You really should have a backup, though. Here is my script for reference; modify it for where your drives are mounted and where you want the output files. I mount mine inside a folder called Drives and store the output in a "Trees" folder under Documents. This produces two .txt files with the date in the filename to make things easy, but you might need to keep an eye on how many you keep; my two files combined are about 275 MB for roughly 25 TB of files. For sorting, I use year-month-day order, so that when sorted alphabetically the files come out in chronological order. Filenames look like trees-2017-07-23.txt and trees_with_files-2017-07-23.txt. Might help in the future.

     $date = (Get-Date -Format "yyyy-MM-dd")
     tree C:\Drives > "C:\Users\Administrator\Documents\Trees\trees-$date.txt"
     tree C:\Drives /F > "C:\Users\Administrator\Documents\Trees\trees_with_files-$date.txt"
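Since the year-month-day date stamps make alphabetical order equal chronological order, pruning old snapshots is easy to automate. Here is a minimal sketch in Python (the folder path, the `keep` count, and the function name are my own assumptions, not from the post above):

```python
from pathlib import Path

def prune_tree_snapshots(tree_dir, keep=8):
    """Delete all but the newest `keep` snapshots of each kind.

    Relies on the yyyy-MM-dd stamp in the filename: sorting the names
    alphabetically sorts them chronologically, exactly as intended by
    the tree script above.
    """
    deleted = []
    for prefix in ("trees-", "trees_with_files-"):
        snapshots = sorted(Path(tree_dir).glob(prefix + "*.txt"))
        for old in snapshots[:-keep] if keep else snapshots:
            old.unlink()              # remove the outdated snapshot
            deleted.append(old.name)
    return deleted
```

You could schedule this with Task Scheduler right after the tree script, e.g. `prune_tree_snapshots(r"C:\Users\Administrator\Documents\Trees", keep=8)` to keep roughly two months of weekly snapshots.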
  7. Yeah, this made the feature a bit less useful to me, since it can't somehow know the size of a file before it writes it. I sometimes have files larger than the spare 32 GB SSD I had lying around. But I can see why you wouldn't always be able to tell the size before you start.
  8. I currently have a pool with duplication. Now that I have a local backup and a remote backup, I'd like to get rid of duplication and free up some space. How do I do this without messing things up? Do I just turn off duplication in settings and run balancing, or what? Remeasuring the pool doesn't seem to do it.
  9. I was wondering if this has NFS support. I also see Windows file sharing listed; is that Samba?
  10. Any new info on this? It has been over 2 months now.
  11. Thanks, that was what I was thinking would happen.
  12. If you had duplication, it should make duplicates again during balancing, in my experience. I actually tried this a few months ago so I could be aware of the process. With duplication it's simple: just put the new drive in and let it rebalance. If you don't have a new drive and there isn't enough space to duplicate everything, I imagine some files will be left unduplicated, but other than that it works nicely.
  13. Well, it'll be quite a bit more than one drive's worth if I'm using backup software to back up to it. Will this cause any problems when the cache drive is full, or will write speed just slow to practically nothing as it waits for data to be uploaded to make room? I just don't want any instability on the system.
  14. I'm looking into using CloudDrive to back up a large 40 TB DrivePool. I only have 50 GB of cache space allocated to the drive. Will backing up to this cause problems as the cache is written to faster than it can upload to the cloud, and if so, what do I have to do to make it work? Also, when I made the cache, CrashPlan reported it as being the same size as the cloud drive, and Windows lists it as huge as well, although space on disk is small. Will this cause huge amounts of data to be uploaded to CrashPlan? If so, does anyone know how to exclude the CloudDrive folders from CrashPlan when other things in the same directory as the cache directories need to be backed up? I think you can do it with regex, but I'm not sure how.
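The exclusion itself is just a path-matching regex, whatever tool applies it. As a sketch of the pattern idea (the folder names and paths below are made-up examples, not the actual CloudDrive cache layout or CrashPlan syntax), this shows a regex that catches only the cache subtree while leaving sibling folders alone:

```python
import re

# Hypothetical layout: the CloudDrive cache lives under a folder named
# "StableBit CloudDrive" next to other data that must still be backed
# up. Adjust the folder name to match your actual cache location.
CACHE_PATTERN = re.compile(r".*\\StableBit CloudDrive\\.*")

paths = [
    r"D:\Cache\StableBit CloudDrive\CloudPart-1234\chunk-0001",  # cache: exclude
    r"D:\Cache\Photos\2017\img_001.jpg",                          # sibling: keep
]
excluded = [p for p in paths if CACHE_PATTERN.match(p)]
```

The double backslash (`\\`) is needed because the backslash is a regex escape character; the leading and trailing `.*` make the pattern match the full path however deep the cache folder sits.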
  15. As far as knowing what was on each drive: I have the individual drives mounted in a Drives folder, and a script scheduled weekly runs tree on that folder and outputs it to a text file, with the date in the filename, in my Google Drive folder. That way, if the unthinkable happens and multiple drives fail, I can still see what was on each drive even if the computer is completely dead.