Covecube Inc.

defcon

  1. defcon

    Basic backup system

    I live in a large metro area but still don't see anything really cheap. My server is a SuperMicro; it needs ~34" of depth, so most of the cheap racks don't work. E.g. the rack linked above is 29" deep, which I doubt will be enough once you add in clearance, cables, etc. Then you need to add in rails ($50-75). What I'd really like is someone to build me a noise-isolated rack like the crazy expensive IsoRacks.
  2. defcon

    Basic backup system

    You don't need a rack. I live in an apartment with no space for a full rack, so I put it on a small bedside table. I might make one of these - https://wiki.eth0.nl/index.php/LackRack. I looked at smaller racks and they are not cheap at all; I'd rather spend that money on disks.
  3. defcon

    Basic backup system

    You don't need to build your own PC; there are many sellers on eBay selling refurb PCs/servers. It all depends on your needs, budget, etc. E.g. you can get a Lenovo ThinkServer/HP workstation for cheaper than you can build your own, and it will have better build quality as well. If you think you want a lot of drive bays for future use, there are many other server types. E.g. http://www.ebay.com/itm/HP-DL380E-G8-2x-E5-2420-12-CORE-16GB-NO-HDD-H220-LSI-9205-8i-/172542645101?hash=item282c57d36d For $330 you are getting a modern CPU, 16GB RAM, 12 hot-swap 3.5" drive bays, and industrial build quality; you can't really build anything close to this. The downside is going to be noise, but depending on the server there are ways to quiet these down. Just another option to consider.
  4. Yes, current data is duplicated, but I am close to capacity. So I have 2 choices: 1. add external storage via an enclosure, etc., or 2. add a 2nd server which will be a duplicate of the main one. #2 gives me a lot more options, as I'd have 2 servers and room to grow on both. The problem of course is that I can't then use DrivePool for duplication. Or are you saying that if the 2nd server is an iSCSI target, I mount it as a drive under DrivePool? That will also require both to be on at all times, right?
  5. My main server is running out of drive bays and capacity. I want to have duplication enabled on everything, but I can't upgrade all the drives to larger sizes. I can reuse another PC as a backup server, but in that case I will have to turn off duplication on the shares, since I see no way to have DrivePool duplicate across the network, and the backup server would have its own DrivePool. Is there any other possible solution? Also, how do you move disks from the main server? I can turn off duplication, then 'remove disk' so all files are migrated off it, but then I will have to copy all the data back once the disk is in the 2nd server, which is a lot of redundant copying. What I'd ideally like is a set of physical disks with duplicated data which I can then remove, move to the 2nd PC, and start a pool with.
  6. I've asked about this before, and the real solution is to implement a system where you can specify the level at which folders get split. See here - http://community.covecube.com/index.php?/topic/2302-file-placement-based-on-folder/?view=findpost&p=15946 But this is a big change; I'm also hoping Alex can get around to it sometime.
  7. Thanks. As I understand it, this cannot be done by just a file balancer plugin and needs core changes, so there's no point in trying to write my own balancer for this. I hope Alex gets the time for this, but there are probably more important things in the pipeline. BTW, is there a roadmap for your products?
  8. This is the split level I'm referring to - https://lime-technology.com/wiki/index.php/Un-Official_UnRAID_Manual#Split_level I'm not sure how the current balancer is implemented, but this doesn't seem that hard computationally, since it just needs to look at the path depth and decide if a new disk should be used. The current file placement algorithm must do this as well when it decides how to place files; it just looks at different criteria. I have taken a look at writing my own balancing plugin, but there doesn't seem to be enough info passed to the plugins, and in any case these are called after the files have been placed initially, not when they are being copied to the pool and DrivePool decides where they go, correct? I'm afraid that with the defaults I'll end up with files in the same folder split across multiple disks (which can also require all of them to be spun up). Other people seem to have complained about this as well, e.g. http://community.covecube.com/index.php?/topic/2153-crashplan-restore-is-a-nightmare-if-using-default-drivepool-placement/ IMO this should be a bit higher priority. If this can be done via a balancer plugin, I'm willing to help out.
  9. unRAID has a concept of 'split level' where you can specify up to what level files/folders should be placed on the same drive. I don't see anything similar in DrivePool, but maybe I'm just not seeing that option. E.g. if I have

     movies\
       movie1
       movie2
     tv\
       show1\
         season1
         season2
       show2\
         season1
         season2

     is there a way to set things up so that files in a season or movie folder are never split across drives? Or, if a season has to split, the folder is recreated?
  10. So the only advantage of encryption is that if someone else manages to get hold of your cloud credentials and installs CloudDrive, they still won't be able to mount the drive? That seems like a very niche case. Of course I understand the need for security, but I'm afraid of forgetting my password. I hadn't really considered that, due to its architecture, CloudDrive is secure by its very nature, so it should be possible to avoid the encryption.
  11. Even if Storage Spaces were a one-click setup, I still wouldn't use it. It has the same dangers as any other solution that stripes data and doesn't keep individual disks readable in a normal PC; RAID with striping, ZFS, etc. all have the same dangers. The big advantage of DrivePool is that your data is always in good old NTFS: no rebuilds, no dependency on hardware, and no need to repair an entire array if a disk goes bad. You can take out a disk at any time. SS/ZFS/RAID were all designed for enterprises and are not meant for home use. Besides the data dangers, they have higher costs - in disks (needing to match disks, replace multiples), hardware, and running costs.
  12. Are you talking about Storage Spaces? If so that was introduced in Windows a while back and is a very different tech.
  13. Added my GDrive to CloudDrive with encryption enabled. My upload speed should be ~40Mbps, but in CloudDrive I see max speeds of 5Mbps, and right now it is crawling along at 800Kbps. I know Amazon is limited right now, but I thought Google would have decent speeds. Is this expected?
  14. I've had this problem before after a Windows reinstall; it's rather annoying and common. A faster (well, not really, but less clicking around) way is to run the following in an admin cmd prompt at the root level of your data drives:

      takeown /R /F *  (takes ownership)
      ICACLS * /T /Q /C /RESET  (resets all permissions to default and inherit)

      Someone made a tool to do this as well - http://lallouslab.net/2013/08/26/resetting-ntfs-files-permission-in-windows-graphical-utility/
  15. Not sure if there's a DP-specific issue here, since I don't have multiple installs, but I use an MS account (local should be the same) to sign in, which is also the admin; I wouldn't use the built-in Administrator account.
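The unRAID-style "split level" placement asked about in the posts above can be sketched in a few lines. This is only a minimal illustration in Python of the path-depth idea, not DrivePool's actual balancer API; `split_key`, `choose_disk`, and the `placement` map are all hypothetical names, and the "most free space" fallback is simplified away.

```python
from pathlib import PurePosixPath

def split_key(path: str, split_level: int) -> str:
    """Return the pool-relative folder prefix that must stay on one disk.

    Folders at depth <= split_level may be spread across disks; anything
    deeper must land on the disk that already holds this prefix.
    (Hypothetical helper -- DrivePool exposes no such function.)
    """
    parts = PurePosixPath(path).parts
    return "/".join(parts[:split_level])

def choose_disk(path: str, split_level: int, placement: dict, disks: list) -> str:
    """Pick a disk for a new file: reuse the disk that already holds the
    file's split-key prefix; otherwise claim a new disk for that prefix
    (simplified here to the first disk in the list, instead of a real
    most-free-space choice)."""
    key = split_key(path, split_level)
    if key in placement:
        return placement[key]
    disk = disks[0]
    placement[key] = disk
    return disk

# With split level 2, everything under tv/show1 stays together even if
# later calls would prefer a different disk:
placement = {}
d1 = choose_disk("tv/show1/season1/ep1.mkv", 2, placement, ["D:", "E:"])
d2 = choose_disk("tv/show1/season2/ep1.mkv", 2, placement, ["E:", "D:"])
# d1 == d2, because both paths share the prefix tv/show1
```

Files in the same season (or the same show, at level 2) always end up on one disk, which is exactly the "don't split a folder across drives" behavior described above; a real implementation would also need to handle the disk-full case, which is where unRAID either errors out or recreates the folder on a new disk.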