Everything posted by defcon

  1. I live in a large metro area but still don't see anything really cheap. My server is a SuperMicro; it needs ~34" of depth, so most of the cheap racks don't work. e.g. the rack linked above is 29" deep, which I doubt will be enough once you add in clearance, cables, etc. Then you need to add in rails ($50-75). What I'd really like is for someone to build me a noise-isolated rack like the crazy expensive IsoRacks.
  2. You don't need a rack. I live in an apartment with no space for a full rack, so I put it on a small bedside table. I might make one of these - https://wiki.eth0.nl/index.php/LackRack - I looked at smaller racks and they are not cheap at all; I'd rather spend that money on disks.
  3. You don't need to build your own pc; there are many sellers on ebay selling refurb pc's/servers. It all depends on your needs, budget, etc. e.g. you can get a Lenovo ThinkServer/HP workstation for cheaper than you can build your own, and it will have better build quality as well. If you think you'll want a lot of drive bays for future use, there are many other server types, e.g. http://www.ebay.com/itm/HP-DL380E-G8-2x-E5-2420-12-CORE-16GB-NO-HDD-H220-LSI-9205-8i-/172542645101?hash=item282c57d36d For $330 you are getting a modern cpu, 16GB RAM, 12 hot-swap 3.5" drive bays, and industrial build quality; you can't really build anything close to this. The downside is going to be noise, but depending on the server there are ways to quiet these down. Just another option to consider.
  4. Yes, current data is duplicated, but I am close to capacity. So I have 2 choices:
     1. add external storage via an enclosure etc.
     2. add a 2nd server which will be a duplicate of the main one
     #2 gives me a lot more options, as I'd then have 2 servers and room to grow on both. The problem of course is that I can't then use DrivePool for duplication. Or are you saying that if the 2nd server is an iSCSI target, I can mount it as a drive under DrivePool? That would also require both to be on at all times, right?
  5. My main server is running out of drive bays and capacity. I want to have duplication enabled on everything, and I can't upgrade all the drives to larger sizes. I can reuse another pc as a backup server, but in that case I will have to turn off duplication on the shares, since I see no way to have DrivePool duplicate across the network; the backup server would have its own DrivePool. Is there any other possible solution? How do you move disks from the main server? I can turn off duplication, then 'remove disk' so all files are migrated off it. But then I will have to copy all the data back once the disk is in the 2nd server, which is a lot of redundant copying. What I'd ideally like is to have a set of physical disks with duplicated data which I can then remove, move to a 2nd pc, and start a pool with.
  6. I've asked about this before, and the real solution to this is to implement a system where you can specify the level at which folders get split. see here - http://community.covecube.com/index.php?/topic/2302-file-placement-based-on-folder/?view=findpost&p=15946 But this is a big change, I'm also hoping Alex can get around to it sometime.
  7. Thanks - so as I understand it, this cannot be done by just a file balancer plugin and needs core changes, so there's no point in trying to write my own balancer for this. I hope Alex gets the time for this, but there are probably more important things in the pipeline. btw, is there a roadmap for your products?
  8. This is the split level I'm referring to - https://lime-technology.com/wiki/index.php/Un-Official_UnRAID_Manual#Split_level I'm not sure how the current balancer is implemented, but this doesn't seem that hard computationally, since it just needs to look at the path depth and decide if a new disk should be used. The current file placement algorithm must do this as well when it decides how to place files; it just looks at different criteria. I have taken a look at writing my own balancing plugin, but there doesn't seem to be enough info passed to the plugins, and in any case these are called after the files have been placed initially - not when they are being copied to the pool and DrivePool decides where they go, correct? I'm afraid that with the defaults I'll end up with files in the same folder split across multiple disks (and this can also require all of them to be spun up). Other people also seem to have complained about this, e.g. - http://community.covecube.com/index.php?/topic/2153-crashplan-restore-is-a-nightmare-if-using-default-drivepool-placement/ IMO this should be a bit higher priority. If this can be done via a balancer plugin, I'm willing to help out.
  9. unRaid has a concept of 'split level' where you can specify up to what level files/folders should be placed on the same drive. I don't see anything similar in DrivePool, but maybe I'm just not seeing that option. e.g. if I have
     movies\
       movie1
       movie2
     tv\
       show1\
         season1
         season2
       show2\
         season1
         season2
     is there a way to set things up so that files in a season or movie folder are never split across drives? Or, if a season has to split, that the folder is recreated?
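To make the split-level idea concrete: the first N directory levels under the pool root may spread across disks, while anything deeper must stay on whichever disk already holds its parent folder. Here is a minimal Python sketch of that placement rule (all names are hypothetical illustrations, not DrivePool's or unRAID's actual API):

```python
from pathlib import PurePosixPath

def split_anchor(rel_path, split_level):
    """Directory that must stay on one disk: the first `split_level`
    path components. Folders deeper than this never split across disks."""
    dirs = PurePosixPath(rel_path).parts[:-1]  # drop the filename
    return "/".join(dirs[:split_level])

def choose_disk(rel_path, split_level, placement, pick_new_disk):
    """Place a file on the disk that already holds its anchor folder,
    or grab a fresh disk the first time the anchor is seen."""
    anchor = split_anchor(rel_path, split_level)
    if anchor not in placement:
        placement[anchor] = pick_new_disk()
    return placement[anchor]

# Split level 3 keeps each season folder together, but lets
# different seasons of a show land on different disks.
placement = {}
disks = iter(["disk1", "disk2", "disk3"])
d1 = choose_disk("tv/show1/season1/ep1.mkv", 3, placement, lambda: next(disks))
d2 = choose_disk("tv/show1/season1/ep2.mkv", 3, placement, lambda: next(disks))
d3 = choose_disk("tv/show1/season2/ep1.mkv", 3, placement, lambda: next(disks))
print(d1, d2, d3)  # season1's episodes share a disk; season2 gets a new one
```

With split level 2 the anchor would be `tv/show1` instead, keeping a whole show on one disk; the cost of a deeper split level is that a new disk is spun up more often.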
  10. So the only advantage of encryption is that if someone else manages to get hold of your cloud credentials and installs CloudDrive, they still won't be able to mount the drive? That seems like a very niche case, and of course I understand the need for security, but I'm afraid of forgetting my password. I never really considered that, due to its architecture, CloudDrive is secure by its very nature, so it should be possible to skip the encryption.
  11. Even if Storage Spaces were a one-click setup, I still wouldn't use it. It has the same dangers as any other solution that stripes data and doesn't keep individual disks readable in a normal pc. RAID with striping, ZFS, etc. all have the same dangers. The big advantage of DrivePool is that your data is always in good old NTFS. No rebuilds, no dependency on hw, and no need to repair an entire array if a disk goes bad. You can take out a disk at any time. SS/ZFS/RAID were all designed for enterprises and are not meant for home use. Besides the data dangers, they have higher costs - in disks (needing to match disks, replace multiples), hw, and running costs.
  12. Are you talking about Storage Spaces? If so that was introduced in Windows a while back and is a very different tech.
  13. Added my GDrive to CloudDrive with encryption enabled. My upload speed should be ~40Mbps but in CloudDrive I see max speeds of 5Mbps and right now it is crawling along at 800Kbps. I know Amazon is limited right now but I thought Google would have decent speeds. Is this expected?
  14. I've had this problem before after a Windows reinstall; it's rather annoying and common. A faster (well, not really faster, but with less clicking around) way is to run the following in an admin cmd prompt, at the root level of your data drives:
      takeown /R /F *           (takes ownership)
      ICACLS * /T /Q /C /RESET  (resets all permissions to default and inherit)
      Someone made a tool to do this as well - http://lallouslab.net/2013/08/26/resetting-ntfs-files-permission-in-windows-graphical-utility/
  15. Not sure if there's a DP-specific issue here, since I don't have multiple installs, but I use a MS account (a local one should be the same) to sign in, which is also the admin; I wouldn't use the built-in Administrator account.
  16. The LSI cards in HBA mode (like the M1015) don't support drive spin down. If you want this feature - and it's highly desirable, since not having all disks running is a big advantage - here's how to do it - https://forums.servethehome.com/index.php?threads/supermicro-4u-24-bay-chassis-with-sas2-expander.4485/page-15#post-65175
  17. My truly critical data (personal docs etc.) is backed up to multiple cloud services and on a USB key I carry with me. For photos, right now I have a 1TB Google Drive promo (from buying a Chromebook) which is more than enough for all my pics, but it's going to expire soon. Everything else I manage manually using a bunch of internal + external disks. I have about 80% of my files duplicated, keeping the same folder structure on the main and backup disks, but I have to juggle files around as disks fill up. I think migrating to DrivePool will be easy because I just need to seed, and the balancer should hopefully see I have 2 copies of the same folder structure. I'm not sure how I will tell it to leave the files where they are and not move them around; the file placement/balancer plugin rules look complicated. Drashna, you have a massive server! I'm assuming you are using duplication, since your server link says 117TB and you mention 50TB of data. Do you also do backups, and if so, how? I've seen the discussions on ZFS before; I really don't understand why someone would use it for a home server with so many issues. Ransomware is a scary prospect - I don't know how to prevent that, since all files have write access.
  18. Ideally I'd have everything backed up offsite or offline. But like I said, that needs a 2nd server at least, or managing a bunch of externals and keeping track of which one is filled up when it's time to take a backup. It's simply not feasible after a certain size. Most of the time deletion is deliberate - e.g. I delete a movie after I'm done watching it. I am willing to live with the dangers of accidental deletion. Actually, I think if I use snapraid and have it do a nightly sync/scrub, it can reconstruct accidentally deleted files, because parity is not realtime. But to me the main advantage of duplication is that it's actually 2 copies of your data, which is the most secure form of protection since you don't need any reconstruction. And it's effectively a backup, since I can reuse the disks on other pc's. If it doesn't serve as a kind of backup, then am I gaining much vs a parity solution? After all, we are using 2x the disk space.
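The nightly snapraid idea above can be sketched as a scheduled job. Because parity only changes when sync runs, a file deleted by accident during the day stays recoverable until the next sync commits the deletion. A hypothetical crontab entry (the 03:00 schedule and the 5% scrub slice are assumptions; on Windows the same two commands would go in a Task Scheduler task instead):

```shell
# Nightly at 03:00: commit the day's changes to parity, then
# verify a rolling 5% of the array for silent corruption.
0 3 * * * snapraid sync && snapraid scrub -p 5
```

Scrubbing a small percentage each night means the whole array gets checked over a few weeks without keeping every disk busy for hours.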
  19. Yes, I know there's no substitute for backup, but it's not feasible once you get beyond a certain size. And offsite backups would be best, but again I don't think that's feasible unless you have access to a data center where you can colocate a backup server. So if I enable duplication, doesn't that effectively achieve the goal of backups? The other option is to build a 2nd server and use it for backups, which is just added expense. I am not worried about accidental deletions; even with a backup server I'd use rsync. What do people use for backup?
  20. Question for Drashna - your storage server could run Emby (and other stuff) as well, right? I was looking at Dell R210/610 on ebay to use for a dedicated server but I think it may not be needed. I am also getting a storage server with Xeon L5630 and plan to use that to run DrivePool as well as Emby, it should be enough right?
  21. So if I install Essentials, then enable Hyper-V role, I'm not allowed to run any guest Linux vm's on it? Or Windows vm's for which I have a license?
  22. Two options:
      1. Install Hyper-V core, then WSE 2012 on it
      2. Install WSE 2012, then enable the Hyper-V role
      In both cases you end up with a type 1 hypervisor, and any extra VMs run with no performance penalty. With #1 you lose native access to the disks, because Hyper-V is the host and WSE is a guest OS, and you have to manage Hyper-V from a 2nd pc. With #2, WSE is the host with full native access to drivers. The licensing is the same either way, so if you own a Server 2012 license I see no reason to go with #1 if you want to use DrivePool.
  23. I enabled Hyper-V on my Windows 10 Pro pc. From everything that I've read, Hyper-V is a type 1 hypervisor and will always insert itself beneath a Windows host. And Win 10 actually has the latest improvements in Hyper-V from Server 2016. This link talks more about it - https://www.petri.com/windows-10-build-10565-adds-nested-hyper-v So right now I have Hyper-V enabled running 2 different VMs and I have full SMART access. The wmic command still has the same output, but that may be because of some special magic going on. So right now I'm fairly confident this approach will allow me to use my initial idea, but I won't know for sure unless someone can try the above. Sorry, this is a very specific use case that probably no one else cares about.
  24. I wish I had a spare machine to try this simple experiment out; maybe someone else can help?
      1. Get the evaluation ISO of Server 2012 Essentials R2 and install it on a pc
      2. Check SMART using any tool like CrystalDiskInfo
      3. What is the output of 'wmic baseboard get manufacturer,product'?
      4. Enable the Hyper-V role
      5. Does SMART still work? What is the output of 'wmic baseboard get manufacturer,product' now?
  25. I don't know why you are answering questions on a Sunday, but thanks !!! Hyper-V is very tricky - if I install Server 2012 on bare metal and enable the Hyper-V role, then it seems like it virtualizes itself! http://serverfault.com/questions/326844/is-hyper-v-a-real-hypervisor "Hyper-V is a type 1 hypervisor, no matter whether installed as a windows server component or as "Hyper-V server". In the former case, it looks as if it's a type 2 product because you install windows first, but when you install the hyper-V role, it essentially virtualises the windows server installation that's already present." Is this case covered? There's no real host OS in this case, just the hypervisor, and you can't install a 2nd copy of Scanner on it anyway, like some threads here suggest. It seems ESXi can do this - http://community.covecube.com/index.php?/topic/131-how-to-get-smart-data-passed-on-from-esxi-51-host/ - so maybe Proxmox can as well? I want to run VMs for other purposes too, and would prefer to use the built-in capabilities of Windows Server if possible. It'd be a real shame to get server hardware and not make use of it. Not sure what this means - "While the HyperV role is included in the OS, it's for a very specific use case, and is not meant for "general use"." Are you talking about Hyper-V in Essentials not being intended for general use?