Covecube Inc.

Showing results for tags 'CrashPlan'.



Found 2 results

  1. Hi, I'm setting up a new home server and I just downloaded the DrivePool and Scanner apps to test out. I'm trying to figure out the best setup to maximize my storage space while also keeping solid "real" backups, not just duplicated files. Duplication is great, but it isn't a real backup: if I accidentally delete a file, the duplicate is automatically deleted too, and duplication doesn't allow for file versioning. I have six 8TB drives and was thinking of creating two 24TB pools: a "Data" pool for live storage and a "Backup" pool to store backups of everything on the Data pool. I was also thinking of using the free version of CrashPlan to handle the backups. My question is this: would it be better to have CrashPlan back up the individual drives in the Data pool, or back up the entire shared Data pool as a whole? My thinking was that if I backed up the individual drives separately and one of the drives in the Data pool failed, I could just replace it with a new drive and restore the old drive's backup onto it to reconstitute the Data pool. However, not knowing all the technical details of how DrivePool works, I wasn't sure whether that would actually work. Any thoughts or comments on this?
  2. Hey all. Forgive me if this post is long, but I'm having a hard time making a decision here. I have about 9TB of data that I keep on a small Windows 10 home server. It's mostly media, video projects and emulation stuff (I run a retro gaming YouTube channel and have tons of emulation sets). It's about 485,000 files in total. Until recently, it was all kept on a Windows Storage Spaces JBOD volume in a 4-bay MediaSonic ProBox connected directly to my main PC. The problem is, because CrashPlan has such a horribly written client, backing all this up to the cloud was taking over 3GB of memory at any given time. So I built this home server and moved things there. But some of the data still resided on my desktop PC, and again because of CrashPlan's crap client, it was constantly rescanning terabytes of data off my main machine over the network, which was also no good. So I decided to move everything to my home server. I bought 4 new 3TB WD Red drives and decided to try FlexRAID RAID-F. I copied everything off the old Storage Spaces volume onto a bunch of other drives I had lying around, installed the new drives and created a FlexRAID volume: 3 data drives, 1 parity drive. This was far more effort than it should have been, as FlexRAID's UI isn't great, the documentation is terrible (many pages on their wiki are outdated), their forums are useless, and the developer won't even talk to you unless you pay a ridiculous amount of money. Still, I got it working. Then I spent the better part of 2 days copying all the data back. Then I discovered that because of the way CrashPlan reads the file system, it cannot do real-time backups with FlexRAID RAID-F and can only discover changes when everything is rescanned. Because of the size and number of files in my backup, this takes literally hours each time. That's no good; I pay for CrashPlan in part for real-time backup.
I'm fed up with FlexRAID and am ready to copy everything back off it and dump it before the trial ends. I've been looking at alternatives and am intrigued by DrivePool. What you offer isn't RAID per se; it's storage pooling, with the option to have some or all of the data duplicated across multiple drives to protect against failure. As I understand it, this can't heal itself after a failure the way RAID can, but a drive can die without taking out the whole pool. Truth be told, I think having full RAID plus CrashPlan is probably overkill for my scenario. I have 12TB of storage available with this pool of Red drives, but right now only have access to 9TB of it because 1 drive is used for parity. If I convert the whole thing to DrivePool, I'll have an extra 3TB I can use to duplicate the most important stuff while entrusting the rest to CrashPlan. Hopefully that explains what I'm looking for. So, before I take the plunge and copy all this data twice yet again, here are my questions for confirmation:
  • If a drive fails, can I remove/replace that drive without it taking out the rest of the pool?
  • If I pool all 4 of these 3TB drives together, will I get a combined pool of roughly 12TB?
  • Will DrivePool appear as a normal NTFS volume so that CrashPlan can back it up in real time? FlexRAID RAID-F does not, but their T-RAID option apparently does.
  • I read a recent thread where someone described a nightmare restoring FROM CrashPlan after a drive failure, largely because of CrashPlan's crap client again. Should I run into a failure with unduplicated data, is that what I can expect to deal with? It's not a deal breaker; I just want to know.
I hate CrashPlan's client, but unfortunately they're the only truly unlimited option available to me. Again, sorry for the long post, but after wasting so many hours on FlexRAID, I really want to make sure what I choose next will do the job.
As I said, I think with CrashPlan, I don't necessarily need full RAID with this data. But I want to make sure I'm not missing anything. Thank you very much!
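For what it's worth, the capacity trade-off described in the post above can be sanity-checked in a few lines. This is a rough sketch using nominal drive sizes (formatting overhead ignored); the variable names and the "headroom for duplication" framing are my own illustration, not DrivePool or FlexRAID behavior.

```python
# Usable-capacity comparison for four 3 TB drives (nominal TB):
# single-parity RAID-F vs. a plain pool with selective 2x duplication.

n_drives, size_tb = 4, 3
raw_tb = n_drives * size_tb                 # 12 TB combined pool size

# FlexRAID RAID-F with one parity drive: that drive's capacity is
# unavailable for data.
raid_usable_tb = (n_drives - 1) * size_tb   # 9 TB usable

# Plain pooling (no parity): all 12 TB are usable. With ~9 TB of data,
# the leftover headroom could hold a second copy of the most important files.
data_tb = 9
headroom_tb = raw_tb - data_tb              # 3 TB free for duplication

print(raw_tb, raid_usable_tb, headroom_tb)  # 12 9 3
```

So pooling trades parity's automatic rebuild for roughly 3 TB of extra space, which matches the poster's plan to duplicate only critical data and lean on CrashPlan for the rest.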