Everything posted by duelistjp

  1. Might have been because, while I was messing with the move to a new computer, I had balancing disabled.
  2. Time to show off my new system. Honestly a bit overboard for me, but I don't want to have to do much with it for a while.
     Case: Supermicro 4U, 36x 3.5" bays, CSE-847E16-R1K28LPB
     Motherboard: Supermicro X9DRH-iF
     Processors: 2x Intel Xeon E5-2660 V1 @ 2.2GHz, 8 cores / 16 threads each
     Memory: 128GB DDR3 ECC
     HBA: LSI 9211-8i
     Power supply: 1280W redundant PWS-1K28P-SQ
     Still breaking her in, but I'm sure my "Linux ISOs" will love their new home. I have a 2TB WD Red drive for downloads before they're copied to my pool. The pool has SnapRAID two-drive parity using two 8TB WD Reds I shucked from Best Buy. The data array is four 8TB Seagate Archive drives, a 4TB HGST Deskstar NAS drive, two 4TB WD Reds, and two 3TB WD Reds. Home internet is 1Gb/250Mb fiber. Running Radarr, Sonarr, Deluge, SABnzbd, and Plex on it. Got some plans for virtualizing a home lab to play around with and pick up some IT skills to complement my Computer Science degree.
  3. Thanks. Currently freeing up 25TB. I found that after I turned it off, it took a few minutes and then gave me a message, down near the yellow bar, that file duplication was inconsistent, with an option to fix it. That is what seems to be working.
  4. I think the problem might be related to this one with the SSD Optimizer; it seems pretty similar. http://community.covecube.com/index.php?/topic/1881-ssd-optimizer-problem/
  5. Any idea when Server 2016 will support the new version?
  6. I use a PowerShell script to put the tree output into two files: one with just the directories so it is more easily browsable, and one with full file listings. I use Task Scheduler to run it weekly, basically under the assumption that replacing a week's worth with Sonarr/Radarr isn't too bad. You really should have a backup though. Here is my script for reference. You can modify it for where your drives are mounted and where you want the files; I mount mine inside a folder called Drives and store the output in a "Trees" folder under Documents. It produces two txt files with the date in the filename to make things easy, but you might need to keep an eye on how many you keep; my two files combined are about 275MB for roughly 25TB of files. For sorting I use year-month-day, so that when sorted alphabetically the files are in the proper order. Filenames will look like trees-2017-07-23.txt and trees_with_files-2017-07-23.txt. Might help in the future.
     $date = (Get-Date -format "yyyy-MM-dd")
     tree C:\Drives > "C:\Users\Administrator\Documents\Trees\trees-$date.txt"
     tree C:\Drives /F > "C:\Users\Administrator\Documents\Trees\trees_with_files-$date.txt"
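For anyone running a pool on something other than Windows, the same weekly tree-dump idea can be sketched in Python. This is a rough equivalent, not the poster's script: the root path and output names are placeholders, and it reproduces the two-file scheme (directories only, and directories plus files) with the same sortable date stamp.

```python
import os
from datetime import date

def dump_tree(root, out_path, include_files=False):
    """Write an indented listing of `root` to `out_path` (roughly like `tree`)."""
    with open(out_path, "w", encoding="utf-8") as out:
        out.write(root + "\n")
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames.sort()
            rel = os.path.relpath(dirpath, root)
            depth = 0 if rel == "." else rel.count(os.sep) + 1
            if rel != ".":
                out.write("    " * (depth - 1) + os.path.basename(dirpath) + "\n")
            if include_files:
                for name in sorted(filenames):
                    out.write("    " * depth + name + "\n")

# yyyy-mm-dd stamps sort alphabetically into date order, same trick as the script
stamp = date.today().isoformat()
dump_tree("/mnt/drives", f"trees-{stamp}.txt")
dump_tree("/mnt/drives", f"trees_with_files-{stamp}.txt", include_files=True)
```

Schedule it with cron (or Task Scheduler on Windows) and point the output at a synced folder, and you get the same browsable history of what lived on each drive.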
  7. Yeah, this made the feature a bit less useful to me, that it can't somehow know the size of the file before it writes. I sometimes have files larger than the spare 32GB SSD I had lying around. But I can see why you wouldn't always be able to tell the size before you start.
  8. I currently have a pool with duplication. In light of the fact that I now have a local backup and a remote backup, I'd like to get rid of duplication and free up some space. How do I do this without messing anything up? Do I just turn off duplication in the settings and run balancing, or what? Re-measuring the pool doesn't seem to do it.
  9. Was wondering if this has NFS support. I also see Windows file sharing listed; is that Samba?
  10. Any new info on this? It has been over two months now.
  11. Thanks, that was what I was thinking would happen.
  12. You had duplication, so in my experience it should make duplicates again during balancing. I actually tried this a few months ago so I could be aware of the process. With duplication it is simple: just put the new drive in and let it rebalance. If you don't have a new drive and there isn't enough space to duplicate everything, I imagine some data is going to be left unduplicated, but other than that it works nicely.
  13. Well, it'll be quite a bit more than one drive if I'm using backup software to back up to it. Will this cause any problems when the drive is full, or will the write speed just slow to practically nothing as it waits for data to be uploaded to make room? I just don't want any instability on the system.
  14. I'm looking into using CloudDrive to back up a large 40TB DrivePool. I only have 50GB of cache space allocated to the drive. Will backing up to this cause problems as the cache gets written faster than it can upload to the cloud, and if so, what do I have to do to make it work? Also, when I made the cache, CrashPlan reported it as being the same size as the cloud drive, and Windows lists it as huge as well, although the space on disk is small. Will this cause huge amounts of data to be uploaded to CrashPlan? If so, does anyone know how to exclude the CloudDrive folders from CrashPlan when other stuff in the same directory as the cache directories needs to be backed up? I think you can do it with regex, but I'm not sure how.
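CrashPlan's file exclusions do accept regular expressions matched against full paths, so one approach is to exclude just the cache folder while leaving its siblings alone. The folder name below (`CloudDriveCache`) is a placeholder; check what CloudDrive actually names its cache directory on your system. A quick Python check of such a pattern before pasting it into CrashPlan:

```python
import re

# "CloudDriveCache" is a stand-in for whatever the real cache folder is called.
# [/\\] matches either path separator, so the pattern works for both styles.
pattern = re.compile(r".*[/\\]CloudDriveCache[/\\].*")

paths = [
    r"D:\Stuff\CloudDriveCache\chunk-000123",   # cache data: should be excluded
    r"D:\Stuff\Documents\taxes.pdf",            # sibling folder: should be kept
]
excluded = [p for p in paths if pattern.match(p)]
```

Only the cache path ends up in `excluded`, which is the behavior you want: siblings in the same parent directory still get backed up.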
  15. As far as knowing what was on each drive: I have the individual drives mounted in a Drives folder, and a script scheduled weekly to run tree on that folder and output it to a text file, with the date of the backup in the filename, in my Google Drive folder. That way, if the unthinkable happens and multiple drives fail, I can still see what was on each drive even if the computer is completely dead.
  16. Yeah, the best upload I can get here is 25Mbps, on fiber to the home. I'm looking into their business offerings, but that stuff is expensive.
  17. It'll be a few months until CrashPlan finishes its initial backup and I can really start on this, so hopefully it will be available by then. I was just working on setting up my backup strategy and thought I'd see if this was useful.
  18. Nice. Personally I just tree to a txt file in the Google Drive folder, but I script it like yours. Edited because I thought I might as well share what is in my script, in case anyone prefers text files. I have that folder regularly backed up to Google Drive. I happen to like the format, but yeah, the fancier format allows some nice things too. I run it weekly, as an 80MB file is not insignificant. I just do the folders currently, because otherwise it becomes huge. I've been thinking about moving it off the SSD though, and if I do, I will make it output two files at a time, one with the files included, because trying to go through the entire structure with files is a bit intimidating, honestly, and hard to glance through.
      $date = (Get-Date -format "yyyy-MM-dd")
      tree C:\Drives > "C:\Users\Administrator\Documents\Trees\trees-$date.txt"
  19. I have a 30TB pool with duplication on my server locally. I created three 10TB cloud drives on Amazon, then pooled those together into a single 30TB pool. Is there any way I can sync the pools so everything on the server is also on Amazon? I want to keep two copies locally but still have access to the data in case of catastrophic failure while I'm rebuilding the computer from my CrashPlan backup. CrashPlan is already backing up, of course; it just isn't really great at letting me access a movie while I'm restoring a ton of other data. I was thinking maybe there was some way of creating a pool of the two pools, setting that to have duplication, and using the rules to make stuff get written to the local pool first, but I don't see a way to make a pool of pools. So I wanted to ask if there was a way to do this.
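Short of a pool-of-pools, one workaround is a scheduled one-way mirror from the local pool's drive letter to the cloud pool's drive letter (on Windows, robocopy with /MIR does this job). A minimal Python sketch of the idea, with placeholder paths; it copies files that are missing or newer on the source, and deliberately never deletes anything on the destination:

```python
import os
import shutil

def mirror(src, dst):
    """One-way sync: copy anything new or changed from src into dst.
    Never deletes on dst, so a source-side mistake can't wipe the copy."""
    for dirpath, dirnames, filenames in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        target_dir = dst if rel == "." else os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            s = os.path.join(dirpath, name)
            d = os.path.join(target_dir, name)
            # copy if missing on dst, or if the source copy is newer
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 preserves timestamps

# Placeholder drive letters: D: = local pool, X: = cloud pool
# mirror("D:\\", "X:\\")
```

Run weekly or nightly from Task Scheduler; because copy2 preserves mtimes, unchanged files are skipped on later runs, so only new media actually hits the cloud drives.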
  20. If I have duplication enabled on my pool and then decide to disable it, do I need to do anything to reclaim the space used by the duplicated files?
  21. Not missing your other points, but he said other than for databases. I'm hopeful that a media server won't be write-intensive enough for it to be a big deal. I got my second paycheck today, so I have ordered the parts for this server; I'll try the 8TB drives in a future expansion perhaps, but for now I'm happy. A few more family members said they would want access to the Plex server, so I went ahead and got an i5-4590. My father decided that since a few family members wanted access, he would pay the difference to upgrade it so it could handle an additional transcode or two.
  22. Well, I don't know how you do your backups, but for my media I was expecting write once, read a couple of times. Yeah, I agree they're meant for backup, but if you fill the drive and then mostly just read from it, I think it might work in a NAS.
  23. Well, I thought that once I have 8TB of media I can put it on there. It'll pretty much be static then, as I don't intend to delete older media. It will of course depend on what I hear over the next couple of months. If I did use it, I would be putting new data on the Reds, and once I'm sure it's good to keep, go ahead and offload 8TB at once to a Seagate. This would of course be after a couple of full formats and extended self-tests on the Seagate to verify it is solid.
  24. The 4TB I see on sale commonly for about $150-$160; the 5TB always seems to be at least $50 more. And yeah, I know I should probably be going with bigger drives. I'm thinking of using existing drives from around the house for a few months, then possibly moving to Seagate's 8TB if the reviews are good and there don't seem to be huge problems with reliability.
  25. Well, the motherboard had plenty of SATA ports, which I liked; I was hoping to put off dealing with HBAs for a while, and I figure, now that I can afford HDDs, I'll probably be expanding the number of drives pretty quickly. I've never used an HBA, and I'm going to have enough trouble figuring this thing out, setting everything up, and learning Windows Server 2012 R2, which I can get for free from my university. I will consider it though. Are they hard to set up, and are they prone to failure? A quick Google mostly turns them up in relation to RAID. The memory I actually already got a while back at ~$55 after rebate; it was cheaper than the rest I was seeing, and I figured it would run at the slower speed in this computer without issue.