Covecube Inc.

xliv

Members
  • Content Count: 7
  • Joined
  • Last visited

  1. The more I think about it, the more I think I should have configured Snapraid to point at the content of the PoolPart folders. That way, moving the content of one drive to a new one would have kept the parity untouched. Of course, it protects only the content of the pool (meaning nothing outside of the PoolPart folders is protected), but in my case I don't mind, as my drives are purely for use by the pool. Would there be a problem with that? For example, the snapraid.content files are at the root, so outside of the PoolPart folder, and therefore not part of the area protected by parity. Are those f…
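     If a sketch helps, this is roughly what such a snapraid.conf could look like (drive letters and PoolPart names below are placeholders):

         # data disks point inside the hidden PoolPart folders, not the drive roots
         parity P:\snapraid.parity
         data d1 D:\PoolPart.aaaa-guid\
         data d2 E:\PoolPart.bbbb-guid\
         # snapraid never protects its own content files with parity anyway;
         # it only needs several copies of them spread over different disks
         content C:\snapraid\snapraid.content
         content D:\snapraid.content
         content E:\snapraid.content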
  2. I tried the following:
       • Add the new drive
       • Robocopy the data from the PoolPart folder of the old drive to the new drive
       • Insert the drive in DrivePool, then stop the service to move the files into the new PoolPart folder, restart the service and remeasure the pool
       • Edit the Snapraid config to replace the old drive with the new one
       • Diff command => massive "copy" and "remove" of all files
       • Sync (with option --force-empty, as otherwise sync does not work) => takes 20h for 5TB
     There's got to be a better way... I was thinking about pointing Snapraid at the PoolPart folder instead of the root of the drives…
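     For the copy step, a robocopy sketch that keeps the file metadata identical (paths are examples; /COPYALL wants an elevated prompt):

         rem mirror the old PoolPart into the new drive's PoolPart, keeping
         rem data, attributes, timestamps and ACLs so the files look unchanged
         robocopy D:\PoolPart.old-guid E:\PoolPart.new-guid /MIR /COPYALL /DCOPY:DAT /R:1 /W:1

         rem then, after editing snapraid.conf to point at the new drive:
         snapraid diff
         snapraid sync --force-empty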
  3. Many thanks! Ok, I'll do this next time, and will report here on whether (and how much) the fact that all paths have changed (PoolPartA to PoolPartB) impacts Snapraid. It's actually probably very similar to what I've done (except that instead of using robocopy, I let DrivePool handle the move, since my config was set to make sure that the data would be moved only to the new drive). But maybe robocopy does a better job at keeping the file attributes the same.
  4. Hi, I was looking for a specific topic or wiki page with tips & tricks on how best to use DrivePool + Snapraid together. Today, my question is: how to replace a drive with a larger one. "DrivePool way": you can simply add the new drive and remove the old drive from the pool; the rest is done automatically. The problem with Snapraid is that, if by chance (or by the right balancing config) the content of the removed drive ends up on the new drive, the paths to each file will have changed, so Snapraid will report a massive deletion / creation of files, and the sync command may take 12…
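     As I understand it, Snapraid associates parity with the disk name in the config, so one way to swap drives is to copy the files over, keep the name and change only the path (placeholder paths below):

         # snapraid.conf, before the swap:
         #   data d2 D:\PoolPart.old-guid\
         # after copying the files to the new, larger drive:
         data d2 F:\PoolPart.new-guid\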
  5. Hi, I've just finished setting up my pool with a 500GB SSD as a cache, and I have to say, it's pretty awesome! I was wondering though about the best configuration - there are many parameters, and it's not obvious how each one works and how they interact with each other. Apologies if this has already been answered somewhere, or seems obvious from the user manual; I'm still a DrivePool newbie. I want the balancer to work only for the SSD cache - moving files out as quickly as possible without impacting other activities on the pool. I however do not want the balancer to move files f…
  6. So I had my first issue with DrivePool. I noticed this morning that one of my HDDs had a SMART health of 53% according to Hard Disk Sentinel (StableBit Scanner did not detect it, since the drive is connected through a SAS controller, and as I understand it, Scanner cannot get SMART data through SAS). So I did what I thought was recommended: remove the drive from the pool. Attempt 1: I started the process, ticking all 3 options because I urgently needed it to complete without blocking on the first error (btw, should that be the default, I guess?). It started, but somehow after a few minutes I decided to empty the recy…
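     Not a Scanner substitute, but a quick cross-check from an admin prompt when the controller blocks SMART passthrough (it only reports the predicted status, not raw attributes):

         wmic diskdrive get model,serialnumber,status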
  7. Hi, following my tests, I went ahead and acquired a license. I followed the seeding guide to migrate data from my FlexRAID pool, and got it running without much trouble. I have a question on file duplication. I did not enable any protection on my pool, but initially had warnings about duplicated files which were different. What happened is that, while I was running FlexRAID, a drive went missing, and one of my applications re-created some of the missing files (media description files coming from automated scraping). When I reconnected the missing drive, there were suddenly 2 copies of t…
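     In case someone hits the same situation: before deleting one of two conflicting copies, a binary compare shows whether they really differ (paths are just examples):

         fc /B "D:\PoolPart.aaaa\Media\show.nfo" "E:\PoolPart.bbbb\Media\show.nfo"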
  8. Hi all, I'm new here - currently testing DrivePool as an alternative to FlexRAID, which unfortunately is not supported anymore. I was about to ask the most basic question, about why my pool was not showing the data that was already on the drives, but found the answer here. So I guess I'll follow the recommendation about seeding the pool. I'm moving off FlexRAID with a pool of 58TB, made of 10 local hard drives (3 to 10TB), 48TB already full. I was not using FlexRAID parity, but would like to at some point. I have a couple of questions: The drive letter for the…
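     For reference, a sketch of the per-drive seeding move; the service name is an assumption (check services.msc), and the PoolPart name is a placeholder:

         rem stop the DrivePool service so it does not react mid-move
         rem (service name assumed; verify with: sc query | findstr /i drivepool)
         net stop DrivePoolService
         rem a move within the same volume is instant, no data is copied
         move "D:\Media" "D:\PoolPart.cccc-guid\Media"
         net start DrivePoolService
         rem then remeasure the pool from the DrivePool UI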