
Everything posted by backup

  1. https://kopia.io/ https://kopia.io/docs/ I'm slowly squeezing "sniglets" out of our NUC-based storage system and wish to learn from others using Kopia. I believe it uses the BLAKE2 hash for everything. I have successfully restored drives with Kopia by checking the "skip any errors" box, but have run into some unwelcome problems when performing hash verifications of existing backups. So far at least, those problems have been traced to equipment deficiencies, which I am addressing. Deduplication saves a lot of space, so I'm using Kopia for backups more and more. Our previous system used a hash-based equivalent of rsync (https://bvckup2.com/kb/delta-copying). I trust Bvckup more, but Kopia's design allows deduplication across multiple drives and pools. (A rough sketch of what file-level hash verification looks like follows this list.)
  2. Well, now you have me concerned, having just installed Scanner for evaluation. You would think that after 40 years *someone* would have released an accurate test for disks, WITH the option to move marginal blocks into the $BadClus file. We all know drive manufacturers don't want returns, so they skew their SMART reporting to prevent them. My situation is mostly similar to yours: I'm getting notices of file corruption from our Kopia repository (Kopia stores BLAKE2 hashes for every file). At first I feared the problem was with the DrivePool software (used as RAID 0 here). chkdsk /r found no problems... but then I dusted off an old friend (https://panthema.net/2013/disk-filltest/). Sure enough, it quickly failed the drive (perhaps absolving DrivePool's software). Now I'm trying Scanner to see whether it spots anything amiss on this drive. Sure hope it isn't foo-foo dust like the others. (The write-then-verify idea behind disk-filltest is sketched after this list.)
  3. Has anyone tried this with DrivePool for old data you might still wish to maintain? Given the state of built-in drive defect management, I would think this is doable. Lots of copies keeps stuff safe; with three copies, I doubt one would need to worry about the condition of used drives. An example: let's say you still had 2TB drives in your personal inventory, but they weren't being used because drive sales allowed you to upgrade to the 14TB level. You don't wish to saddle yourself with the upkeep requirements of an active drive pool, so you mount all those drives to folders, assign them to a separate pool, and fill them up with archives. Once they are filled, you run dpcmd as a final check (a generic version of that check is sketched after this list), wait 24 hrs for that pool to completely stabilize, then disconnect those drives from your system. Not much different from manually storing copies to old drives, just a bit more disciplined, because you automatically store three copies with each group of drives... What are the potential problems?
  4. I've dedicated an old tower, a separate high-grade power supply, and ~30 worn, end-of-life disk drives to an experiment running in my garage. It would be enormously useful if I weren't required to spin up all those drives before loading test backup files into this array - i.e., if DrivePool could just remember whether files are duplicated and where they are (a standalone catalog along those lines is sketched after this list). The objective here is simply to use up old drives for a good cause. These backup files are almost always error-protected RAR archives anyway, so getting a 90% read is enough to recover the data they carry. And all the disks are pruned of bad platter blocks *before* they go into that array, so the storage there isn't really as bad as you might imagine. Used-up disks can be fine for backups; we just need better tools to use them effectively.
  5. Please explain your reservations; I will listen. But I must tell you that I find balancing quite annoying and problematic on this Spectre-free Atom box I use to manage 50TB of backup storage. If all disks are reasonably filled... why must I bother with automatic balancing? I would like to know whether DrivePool "looks" for open space before writing new data into the pool (the kind of placement I mean is sketched after this list).
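On the hash verification mentioned in post 1: below is a minimal sketch, in Python, of what file-level hash checking looks like in principle. It is not Kopia's repository format or CLI; the manifest file name and the backup root path are made up for illustration.

# Generic illustration of hash-based backup verification (not Kopia's
# internal format): compute a BLAKE2b digest per file and compare it
# against a previously stored manifest of expected digests.
import hashlib
import json
from pathlib import Path

MANIFEST = Path("manifest.json")  # hypothetical: {relative_path: hex_digest}

def blake2_of(path: Path, chunk_size: int = 1 << 20) -> str:
    h = hashlib.blake2b()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(root: Path) -> list[str]:
    """Return relative paths whose current digest no longer matches the manifest."""
    stored = json.loads(MANIFEST.read_text())
    mismatches = []
    for rel, expected in stored.items():
        if blake2_of(root / rel) != expected:
            mismatches.append(rel)
    return mismatches

if __name__ == "__main__":
    bad = verify(Path("D:/backups"))  # hypothetical backup root
    print("corrupt or changed files:", bad or "none")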
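On the fill test in post 2: this is a rough sketch of the write-then-verify idea behind disk-filltest, not the tool itself. The target directory, file count, and sizes are arbitrary; seeding the pseudo-random generator lets the read pass regenerate the exact bytes it expects.

# Minimal write-then-verify fill test: write seeded pseudo-random files,
# then read them back and compare against the regenerated stream.
import os
import random

FILE_SIZE = 64 * 1024 * 1024   # 64 MiB per test file
CHUNK = 1 << 20                # 1 MiB

def fill_and_verify(target_dir: str, files: int = 4, seed: int = 1234) -> bool:
    ok = True
    # Write pass: deterministic pseudo-random data, one stream per file.
    for i in range(files):
        rng = random.Random(seed + i)
        with open(os.path.join(target_dir, f"fill-{i}.bin"), "wb") as f:
            for _ in range(FILE_SIZE // CHUNK):
                f.write(rng.randbytes(CHUNK))
    # Read pass: regenerate the same stream and compare chunk by chunk.
    for i in range(files):
        rng = random.Random(seed + i)
        with open(os.path.join(target_dir, f"fill-{i}.bin"), "rb") as f:
            for _ in range(FILE_SIZE // CHUNK):
                if f.read(CHUNK) != rng.randbytes(CHUNK):
                    print(f"mismatch in fill-{i}.bin")
                    ok = False
                    break
    return ok

if __name__ == "__main__":
    print("disk OK" if fill_and_verify("E:/filltest") else "disk FAILED")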
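On the final check in post 3: dpcmd is DrivePool's own tool, and this is not it. The sketch below is a generic stand-in that walks a set of hypothetical per-drive mount folders and confirms each archive is present on at least three drives with matching hashes.

# Generic stand-in for a "final check" on an archive pool (not dpcmd):
# confirm every archive exists on at least REQUIRED_COPIES drives and
# that all copies hash identically.
import hashlib
from collections import defaultdict
from pathlib import Path

REQUIRED_COPIES = 3
DRIVE_MOUNTS = [Path(r"C:\Mounts\Drive1"),   # hypothetical mount folders
                Path(r"C:\Mounts\Drive2"),
                Path(r"C:\Mounts\Drive3")]

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def check_pool() -> list[str]:
    """Return a list of problems found across the mounted drives."""
    copies = defaultdict(list)  # relative path -> list of (mount, digest)
    for mount in DRIVE_MOUNTS:
        for file in mount.rglob("*.rar"):
            copies[file.relative_to(mount)].append((mount, sha256_of(file)))
    problems = []
    for rel, found in copies.items():
        if len(found) < REQUIRED_COPIES:
            problems.append(f"{rel}: only {len(found)} copies")
        if len({digest for _, digest in found}) > 1:
            problems.append(f"{rel}: copies differ between drives")
    return problems

if __name__ == "__main__":
    for problem in check_pool():
        print(problem)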
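On post 4's wish to avoid spinning up every drive: one workaround is a standalone catalog built once while the drives are awake. The sketch assumes hypothetical mount folders and a catalog file name; it records which mount holds which relative path, then answers lookups from that record without touching the other drives.

# Sketch of a file-location catalog for a cold archive array: scan the
# drives once, save the index, and look up file locations from it later.
import json
from pathlib import Path

CATALOG = Path("archive-catalog.json")        # hypothetical catalog file
DRIVE_MOUNTS = [Path(r"C:\Mounts\Old01"),     # hypothetical mount folders
                Path(r"C:\Mounts\Old02")]

def build_catalog() -> None:
    """Walk every mounted drive and record where each file lives."""
    index: dict[str, list[str]] = {}
    for mount in DRIVE_MOUNTS:
        for file in mount.rglob("*"):
            if file.is_file():
                index.setdefault(str(file.relative_to(mount)), []).append(str(mount))
    CATALOG.write_text(json.dumps(index, indent=2))

def locate(relative_path: str) -> list[str]:
    """Return the mount folders known to hold a copy of the given file."""
    index = json.loads(CATALOG.read_text())
    return index.get(relative_path, [])

if __name__ == "__main__":
    build_catalog()
    print(locate(r"2021\photos.rar"))  # hypothetical archive name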
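On post 5's question about whether DrivePool "looks" for open space: I don't know its internal placement rules, but the behavior being asked about - most-free-space placement - looks roughly like this sketch. Disk paths are hypothetical.

# Sketch of "most free space" placement (illustrating the question, not
# DrivePool's actual logic): before writing a new file, pick whichever
# pooled disk currently has the most free bytes.
import shutil
from pathlib import Path

POOL_DISKS = [Path(r"C:\Mounts\Disk1"),   # hypothetical mount folders
              Path(r"C:\Mounts\Disk2"),
              Path(r"C:\Mounts\Disk3")]

def pick_target(disks: list[Path]) -> Path:
    """Return the disk with the most free space right now."""
    return max(disks, key=lambda d: shutil.disk_usage(d).free)

def place(data: bytes, relative_path: str) -> Path:
    """Write the data onto the emptiest disk and return the final path."""
    target = pick_target(POOL_DISKS) / relative_path
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)
    return target

if __name__ == "__main__":
    print("would write to:", pick_target(POOL_DISKS))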