defcon


Posts posted by defcon

  1. I live in a large metro area but still don't see anything really cheap. My server is a SuperMicro that needs ~34" of depth, so most of the cheap racks don't work. E.g. the rack linked above is 29" deep, which I doubt will be enough once you add in clearance, cables, etc. Then you need to add in rails ($50-75).

     

    What I'd really like is someone to build me a noise isolated rack like the crazy expensive IsoRacks.

  2. You don't need to build your own PC; there are many sellers on eBay selling refurbished PCs/servers. It all depends on your needs, budget, etc. E.g. you can get a Lenovo ThinkServer/HP workstation for less than you can build your own, and it will have better build quality as well. If you think you'll want a lot of drive bays for future use, there are many other server types too.

     

    e.g. http://www.ebay.com/itm/HP-DL380E-G8-2x-E5-2420-12-CORE-16GB-NO-HDD-H220-LSI-9205-8i-/172542645101?hash=item282c57d36d

     

    For $330 you are getting a modern CPU, 16GB RAM, 12 hot-swap 3.5" bays, and industrial build quality; you can't really build anything close to this yourself. The downside is going to be noise, but depending on the server there are ways to quiet these down.

     

    Just another option to consider.

  3. To clarify, the data is not duplicated already, but you want to enable it.

     

    But the other issue is that you don't have enough room on the pool, and no space in the case to add more disks? 

     

    Correct? 

     

     

    If that's the case, "external" may be the way to go.

     

    There are plenty of USB/eSATA enclosures that support 4-8 drives and can easily expand your storage.

     

    That said... it's a lot more expensive, and may be rather loud.

    As in $300-500 not counting drives. But SAS is an enterprise technology and can get a LOT of disks connected.

    If you're interested in this, let me know.

     

     

    But StableBit CloudDrive is another solution. 

     

    If you have another system, and access to Windows Server, you could set up iSCSI, as well. 

     

    Yes, current data is duplicated, but I am close to capacity. So I have 2 choices -

     

    1. add external storage via enclosure etc

    2. add a 2nd server which will be a duplicate of the main one

     

    #2 gives me a lot more options, as I then have 2 servers and room to grow on both. The problem, of course, is that I can't then use DrivePool for duplication. Or are you saying that if the 2nd server is an iSCSI target, I mount it as a drive under DrivePool? That will also require both to be on at all times, right?
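
    For anyone curious, here is a rough sketch of what I think the iSCSI route would look like, in PowerShell (this assumes the 2nd box runs Windows Server with the iSCSI Target role; the paths, names and IPs below are just placeholders):

    # On the 2nd server (target): install the iSCSI Target role and publish a virtual disk
    Install-WindowsFeature -Name FS-iSCSITarget-Server
    New-IscsiVirtualDisk -Path 'D:\iSCSI\pooldisk1.vhdx' -SizeBytes 4TB   # placeholder path/size
    New-IscsiServerTarget -TargetName 'DrivePoolTarget' -InitiatorIds 'IPAddress:192.168.1.10'
    Add-IscsiVirtualDiskTargetMapping -TargetName 'DrivePoolTarget' -Path 'D:\iSCSI\pooldisk1.vhdx'

    # On the main server (initiator): connect, then the disk appears in Disk Management
    Start-Service MSiSCSI
    New-IscsiTargetPortal -TargetPortalAddress '192.168.1.20'
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
    # Once it's brought online and formatted as NTFS, DrivePool should treat it like any local disk.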

  4. My main server is running out of drive bays and capacity. I want to have duplication enabled on everything, and I can't upgrade all the drives to larger sizes. I can reuse another PC as a backup server, but in that case I will have to turn off duplication on the shares, since I see no way to have DrivePool duplicate across the network; the backup server would have its own DrivePool. Is there any other possible solution?

     

    How do you move disks from the main server? I can turn off duplication, then 'remove disk' so all files are migrated off it. But then I will have to copy all the data back once the disk is in the 2nd server, which is a lot of redundant copying. What I'd ideally like is a set of physical disks holding the duplicated data, which I can then remove, move to the 2nd PC, and use to start a pool.

  5. Thanks. So as I understand it, this cannot be done by just a file balancer plugin and needs core changes, so there's no point in trying to write my own balancer for it. I hope Alex gets the time for this, but there are probably more important things in the pipeline. BTW, is there a roadmap for your products?

  6. To implement something like this would require a complete re-write of the balancing engine, which is by no means a small project. It is possible that we could do something like this in the future and make it much more robust, but that's not something we plan on doing right now.

     

    As for enforcing placement, you can use the "File Placement" rules to force this. But you'd have to micromanage the pool to do this properly.

     

    This is the split level I'm referring to - https://lime-technology.com/wiki/index.php/Un-Official_UnRAID_Manual#Split_level

     

    I'm not sure how the current balancer is implemented, but this doesn't seem that hard computationally, since it just needs to look at the path depth and decide whether a new disk should be used. The current file placement algorithm must do something similar when it decides how to place files; it just looks at different criteria. I have taken a look at writing my own balancing plugin, but there doesn't seem to be enough info passed to the plugins, and in any case these are called after the files have been placed initially, not when they are being copied to the pool and DrivePool decides where they go. Correct?

     

    I'm afraid that with the defaults I'll end up with files in the same folder split across multiple disks (which can also require all of them to be spun up). Other people seem to have complained about this as well.

     

    e.g - http://community.covecube.com/index.php?/topic/2153-crashplan-restore-is-a-nightmare-if-using-default-drivepool-placement/

     

    IMO this should be a bit higher priority. If this can be done via a balancer plugin I'm willing to help out.

  7. unRAID has a concept of 'split level', where you can specify up to what level files/folders should be kept on the same drive. I don't see anything similar in DrivePool, but maybe I'm just missing that option.

     

    e.g. if I have

     

    movies\

         movie1

         movie2

     

    tv\

       show1\

          season1

          season2

       show2\

          season1

          season2

     
    Is there a way to set things up so that files in a season or movie folder are never split across drives? Or, if a season does have to be split, so that the folder is recreated on the new drive?
  8. So the only advantage of encryption is that if someone else manages to get hold of your cloud credentials and installs CloudDrive, they still won't be able to mount the drive? That seems like a very niche case. Of course I understand the need for security, but I'm afraid of forgetting my password :)

     

    I had never really considered that, due to its architecture, CloudDrive is reasonably secure by its very nature, so it should be possible to skip the encryption.

  9. Even if Storage Spaces were a one-click setup, I still wouldn't use it. It has the same dangers as any other solution that stripes data and doesn't keep individual disks readable in a normal PC. RAID with striping, ZFS, etc. all have the same dangers.

     

    The big advantage of DrivePool is that your data is always on good old NTFS. No rebuilds, no dependency on hardware, and no need to repair an entire array if a disk goes bad. You can take out a disk at any time.

     

    SS/ZFS/RAID were all designed for enterprises and are not meant for home use. Besides the data dangers, they have higher costs - in disks (needing to match disks, replace multiple at once), hardware, and running costs.

  10. I added my GDrive to CloudDrive with encryption enabled. My upload speed should be ~40 Mbps, but in CloudDrive I see max speeds of 5 Mbps, and right now it is crawling along at 800 Kbps.

     

    I know Amazon is limited right now but I thought Google would have decent speeds. Is this expected? 

  11. I've had this problem before after a Windows reinstall; it's rather annoying and common. A faster (well, not really faster, but less clicking around) way is to run the following in an admin cmd prompt, at the root level of your data drives -

     

    takeown /R /F *    (takes ownership)

    ICACLS * /T /Q /C /RESET (resets all permissions to default and inherit)

     

    Someone made a tool to do this as well - http://lallouslab.net/2013/08/26/resetting-ntfs-files-permission-in-windows-graphical-utility/
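
    If you have a lot of data drives, a small PowerShell wrapper saves running the two commands by hand on each one (the drive letters below are placeholders - adjust them to your own):

    # Placeholder list of data drive roots
    $dataDrives = @('D:\', 'E:\', 'F:\')

    foreach ($root in $dataDrives) {
        Push-Location $root
        # Take ownership of everything under the root, recursively
        takeown /R /F * | Out-Null
        # Reset all permissions back to the inherited defaults
        icacls * /T /Q /C /RESET
        Pop-Location
    }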

  12. Well, a parity solution takes a long time to recover, I would think. Some argue that it also puts additional stress on the remaining HDDs, increasing the probability of cascading failure (but IMHO that is quite a stretch). I use duplication solely as a means to increase uptime; backups are for loss due to accidental deletions, viruses, fire, theft, etc. And yes, I rotate Server Backups (which include client backups) offsite.

     

    What is your setup?

     

    My truly critical data (personal docs etc.) is backed up to multiple cloud services and to a USB key I carry with me. For photos, right now I have a 1TB Google Drive promo (from buying a Chromebook), which is more than enough for all my pics, but it's going to expire soon.

     

    Everything else I manage manually using a bunch of internal + external disks. I have about 80% of my files duplicated, keeping the same folder structure on the main and backup disks, but I have to juggle files around as disks fill up. I think migrating to DrivePool will be easy because I just need to seed, and the balancer should hopefully see that I have 2 copies of the same folder structure. I'm not sure how I will tell it to leave the files where they are and not move them around; the file placement/balancer plugin rules look complicated.
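
    In case it helps anyone doing the same migration, this is roughly how I expect the seeding to work, from what I've read - move the data into the hidden PoolPart.* folder on the same physical drive, then re-measure the pool (the drive letter and folder names below are placeholders):

    # Find the hidden PoolPart folder that DrivePool created on this drive (placeholder drive letter)
    $poolPart = Get-ChildItem -Path 'D:\' -Directory -Hidden -Filter 'PoolPart.*' | Select-Object -First 1

    # Move an existing folder into the pool in place; /E = include subfolders, /MOVE = delete source after copy
    robocopy 'D:\Media' (Join-Path $poolPart.FullName 'Media') /E /MOVE /DCOPY:T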

     

    Drashna, you have a massive server! I'm assuming you are using duplication, since your server link says it's 117TB and you mention 50TB of data. Do you also do backups, and if so, how? I've seen the discussions on ZFS before; I really don't understand why someone would use it for a home server, with so many issues.

     

    Ransomware is a scary prospect; I don't know how to prevent that, since all files have write access.

  13. Ideally I'd have everything backed up offsite or offline. But like I said, that needs a 2nd server at least, or managing a bunch of externals and keeping track of which one is filled up when it's time to take a backup. It's simply not feasible after a certain size.

     

    Most of the time deletion is deliberate - e.g. I delete a movie after I'm done watching it. I am willing to live with the dangers of accidental deletion. Actually, I think if I use SnapRAID and have it do a nightly sync/scrub, it can reconstruct accidentally deleted files, because parity is not realtime.

     

    But to me the main advantage of duplication is that it's actually 2 full copies of your data, which is the most robust form of protection, as you don't need any reconstruction. And it's effectively a backup, since I can reuse the disks in other PCs. If it doesn't serve as a kind of backup, then am I gaining much vs. a parity solution? After all, we are using 2x the disk space.
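
    (If I did go the SnapRAID route, the nightly sync would presumably just be a scheduled task - something like this sketch, assuming snapraid.exe lives in C:\snapraid:)

    # Register a nightly SnapRAID sync (path and start time are placeholders)
    $action  = New-ScheduledTaskAction -Execute 'C:\snapraid\snapraid.exe' -Argument 'sync'
    $trigger = New-ScheduledTaskTrigger -Daily -At 3am
    Register-ScheduledTask -TaskName 'SnapRAID nightly sync' -Action $action -Trigger $trigger -RunLevel Highest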

  14. Yes, I know there's no substitute for backups, but it's not feasible once you get beyond a certain size. And offsite backups would be best, but again I don't think that's feasible unless you have access to a data center where you can colocate a backup server.

     

    So if I enable duplication, doesn't that effectively achieve the goal of backups? The other option is to build a 2nd server and use it for backups, which is just added expense. I am not worried about accidental deletions; even with a backup server I'd use rsync.

     

    What do people use for backup?

  15. Question for Drashna - your storage server could run Emby (and other stuff) as well, right? I was looking at Dell R210/610s on eBay to use as a dedicated server, but I think it may not be needed. I am also getting a storage server with a Xeon L5630 and plan to use it to run DrivePool as well as Emby; it should be enough, right?

  16. 1. Install Hyper-V core, then WSE 2012 on it

    2. Install WSE 2012, then enable Hyper-V

     

    In both cases you end up with a type 1 hypervisor, and any extra VMs run with no performance penalty.

     

    With #1 you lose native access to the disks, because Hyper-V is the host and WSE is a guest OS. And you have to manage Hyper-V from a 2nd PC.

    With #2, WSE is the host with full native access to the drivers.

     

    In both cases the licensing is the same, so if you own a Server 2012 license I see no reason to go with #1 if you want to use DrivePool.
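
    For what it's worth, #2 is basically a one-liner from an elevated PowerShell prompt on the server (it reboots when done):

    # Add the Hyper-V role plus the management tools, then restart
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

    # Afterwards, confirm the role is installed
    Get-WindowsFeature -Name Hyper-V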

  17. I enabled Hyper-V on my Windows 10 Pro PC. From everything that I've read, Hyper-V is a type 1 hypervisor and will always insert itself beneath a Windows host. And Win 10 actually has the latest improvements to Hyper-V from Server 2016. This link talks more about it - https://www.petri.com/windows-10-build-10565-adds-nested-hyper-v

     

    So right now I have Hyper-V enabled, running 2 different VMs, and I still have full SMART access. The WMI command still has the same output, but that may be because of some special magic going on. So I'm fairly confident that this approach will let me use my initial idea, but I won't know for sure unless someone can try the above.

     

    Sorry, this is a very specific use case that probably no one else cares about :)
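
    For anyone who wants to check on their own box, these should show whether the hypervisor really is sitting underneath the running OS (just a sanity check - as far as I know both work on Win 10 and Server 2012 and up):

    # True when Windows is running on top of a hypervisor (i.e. Hyper-V has slid underneath the host)
    (Get-CimInstance -ClassName Win32_ComputerSystem).HypervisorPresent

    # The boot setting that controls it; no output means it isn't set
    bcdedit /enum "{current}" | Select-String hypervisorlaunchtype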

  18. I wish I had a spare machine to try this simple experiment out on; maybe someone else can help?

     

    Get an evaluation ISO of Server 2012 R2 Essentials and install it on a PC.

    Check SMART using any tool, like CrystalDiskInfo.

    What is the output of 'wmic baseboard get manufacturer,product'?

    Enable the Hyper-V role.

    Does SMART still work? What is the output of 'wmic baseboard get manufacturer,product'?
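
    If it's easier than installing a separate tool, I think the same checks can be done from PowerShell (rough equivalents only - Scanner obviously reads far more than this):

    # Basic SMART failure-prediction flag exposed through WMI, per physical disk
    Get-WmiObject -Namespace root\wmi -Class MSStorageDriver_FailurePredictStatus |
        Select-Object InstanceName, PredictFailure

    # Same baseboard query as the wmic one-liner above
    Get-WmiObject -Class Win32_BaseBoard | Select-Object Manufacturer, Product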

  19.  

    • There is a slight performance hit when passing through disks. There always will be. But for the most part, this is negligible. 

      Also, Hyper-V does not pass on the SMART data to VMs. That means tools like StableBit Scanner won't pick it up on the VM. 

    • You absolutely can do that. However, I run everything on my Essentials system. It may not be the "best idea", but it works best, IMO. 
    • Yes and no.  

      While the HyperV role is included in the OS, it's for a very specific use case, and is not meant for "general use". 

     

    To be honest, depending on what you're doing, and how "locked down" you want to be, it may be simplest to just run WSE on "bare metal" (no hypervisor), and run the other stuff on WSE directly.  You can create service accounts to lock down access, if needed.  

    But this is the best bet, IMO (as it is simpler than virtualization). 

     

     

    I don't know why you are answering questions on a Sunday, but thanks!!!

     

    Hyper-V is very tricky - if I install Server 2012 on bare metal and enable the Hyper-V role, then it seems like it virtualizes itself!!

     

    http://serverfault.com/questions/326844/is-hyper-v-a-real-hypervisor

    "Hyper-V is a type 1 hypervisor, no matter whether installed as a windows server component or as "Hyper-V server".

    In the former case, it looks as if it's a type 2 product because you install windows first, but when you install the hyper-V role, it essentially virtualises the windows server installation that's already present."

     

    Is this case covered? There's no real host OS in this case, just the hypervisor. And you can't install a 2nd copy of Scanner on it anyway, like some threads here suggest.

    It seems ESXi can do this - http://community.covecube.com/index.php?/topic/131-how-to-get-smart-data-passed-on-from-esxi-51-host/, so maybe Proxmox can do it as well?

     

    I want to run VMs for other purposes too, and would want to use the built-in capabilities of Windows Server if possible. It'd be a real shame to get server hardware and not make use of it. I'm not sure what this means - "While the HyperV role is included in the OS, it's for a very specific use case, and is not meant for "general use"." Are you talking about Hyper-V in Essentials not being intended for general use?
