Covecube Inc.

Umfriend

Members
  • Content Count

    788
  • Joined

  • Last visited

  • Days Won

    41

Umfriend last won the day on March 10

Umfriend had the most liked content!

About Umfriend

  • Rank
    Advanced Member

Recent Profile Visitors

1033 profile views
  1. I don't know. I don't, but I have duplication and backups. I've never noticed anything going wrong with having all ServerFolders on the Pool.
  2. Has the removal of the drive (through the GUI) been successful (as in, it is not listed as part of the Pool or missing)? In that case, have you tried a reboot?
  3. No, that is just fine. There is no issue with adding a disk to a Pool and then placing data on that disk besides it (i.e. outside the hidden PoolPart.* folder on that drive).
  4. Yeah, so it might have been better to stop the DrivePool service before starting, but really the behavior you describe is unexpected to me.
  5. Ah, you would write files to G:\. DP will save two duplicates, one to E and one to F. Each of E and F would then store its single instance across A+B and C+D respectively. Do not write to E or F directly. You can, but those files will only be part of Pool E (or F) and not be duplicated. Much like writing to a drive directly outside of the hidden PoolPart.* folders, those files would not be part of any Pool.
  6. I would think that you could simply format the relevant drives. DP will then say they are missing and if you remove the drives in the GUI then it should be done instantly.
  7. Umfriend

    Simple Question

    Yeah, so now you have:
    Drive A + B = Pool X (with x1 duplication)
    Pool X + Drive C = Pool Y (with x2 duplication)
    You can choose to store files in Pool X (and then they will not be duplicated, present on Drives A and B), or store them in Pool Y (and then they will be duplicated, present on Drive A/B for one instance and Drive C for the other instance). So you need to move the files from Pool X to Pool Y. The best way to do this is to "seed" them. You would:
    1. Stop the DrivePool service.
    2. On drive A, locate the hidden PoolPart.* folder. The top level shows (part of) Pool X. Locate the hidden PoolPart.* folder within the first; that is (part of) Pool Y. Move files from the outer/first PoolPart.* folder to the inner/second PoolPart.* folder.
    3. Do the same for drive B.
    4. Start the DrivePool service. DP will now, for Pool Y, find a shitload of unduplicated files and start a duplication run, in this case copying such files from drives A/B to drive C.
    Not sure what OS you use, but you may need to keep in mind that shares do not automatically follow (i.e. in WSE2016 I would move ServerFolders through the WSE2016 dashboard, even if that takes a long time). As an aside, I would actually create three Pools: Pool X = Drives A/B, Pool Y = Drive C and Pool Z = Pool X + Pool Y. I think that makes it easier to manage when adding/replacing drives. Only Pool Z would have x2 duplication. The procedure above still holds.
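    The move in the seeding steps above could be sketched in Python (a rough illustration only, run while the service is stopped; the PoolPart folder names below are made up — the real hidden folders are named PoolPart.<GUID> and differ per drive):

```python
import shutil
from pathlib import Path

def seed_pool(outer: Path, inner: Path) -> None:
    """Move everything from the outer PoolPart folder (Pool X)
    into the nested PoolPart folder (Pool Y), skipping the
    nested folder itself."""
    for item in outer.iterdir():
        if item == inner:
            continue  # don't move Pool Y's own folder into itself
        shutil.move(str(item), str(inner / item.name))

# Hypothetical paths for illustration only:
# seed_pool(Path(r"A:\PoolPart.xxxx"), Path(r"A:\PoolPart.xxxx\PoolPart.yyyy"))
```

    After this runs on each drive and the service is restarted, DP sees the files inside Pool Y and handles the duplication pass itself.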
  8. I have WSE2016 with all ServerFolders on an x2 Pool (and I do backup both OS and one instance of the ServerFolder files as well). No issues. x2 duplication is nice but no replacement for a backup (let alone an offsite rotating backup).
  9. I think I understand now. As you say, ECC can correct some corruption in memory, say a bit erroneously flipping. Scanner does not correct the same type of corruption on a HDD. Scanner, aside from polling SMART data, periodically checks whether it is able to read an entire disk. If a bit on a disk is simply flipped (aka bit-rot, I think), then Scanner won't pick that up. Scanner will pick up unreadable sectors (and file system issues, as it also does a CHKDSK AFAIK), even if they are presently not used to store data. Unreadable sectors anywhere on a drive are an issue IMHO. But if you force snapraid to read everything periodically and have ECC memory to avoid the (remote, IMHO) risk that a file's representation becomes corrupted in memory (while OK on the drive), then I guess you could say it gets close enough.
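  To make the bit-rot point concrete, here is a small Python sketch (purely illustrative): a single flipped bit leaves the data perfectly readable, so a surface scan sees nothing wrong, but comparing against a previously stored checksum catches it — which is, as I understand it, roughly what a snapraid scrub does.

```python
import hashlib

data = bytearray(b"contents of some file on disk")
stored_checksum = hashlib.sha256(data).hexdigest()  # recorded earlier

data[5] ^= 0x01  # simulate bit-rot: flip one bit

# The "read" still succeeds, so a read-only surface scan passes,
# but the stored checksum exposes the corruption.
corrupted = hashlib.sha256(data).hexdigest() != stored_checksum
print(corrupted)  # True
```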
  10. I don't see how ECC memory comes into play in all this. The benefit is that Scanner may be able to warn you of a drive that is starting to fail based on SMART data. Also, if sectors become bad, it would pick that up during a scan (I don't know snapraid; if it reads *every* file every night then that should be close, although Scanner also checks sectors not currently in use). Scanner checks drives depending on a schedule that you can set (and you can force scanning of individual drives at any time through the GUI).
  11. Just install DP on the new machine. It will recognise the Pool. There is only one caveat I know of, and that applies to the situation where you use balancers not installed by default. In such a case you would only connect the Pooled drives once DP + plug-ins are installed.
  12. That made me laugh. Thanks.
  13. You could have just one Pool with some files duplicated and others not. I understand you want good protection, but against what exactly? One drive failure? Accidental deletions? Fire/theft? For one drive failure and continuity, DP with x2 duplication is, IMHO, better (by far) but at the cost of needing more storage capacity. Just some thoughts.
  14. I really wouldn't know, but I would wonder whether you'd be better off just using DP with x2 and a CloudDrive or a real backup solution. I am not familiar with SnapRAID, but in your setup, as far as I can tell, you'd have data stored in Pool 1 and Pool 2 and then have both duplicates protected by parity. So you'd use more storage capacity than either a simple x2 duplication or an x1 duplication + parity setup. You have single files in excess of 1TB? Anyway, it would then simply write to the archive things. BTW, if it is a NAS, what kind of network do you have? If it is "just" a 1Gb network, then there really is no need for SSD caches. On streaming and SnapRAID I can't be sure, but for the storage/DP part, even a simple Celeron with 4GB would do the job. Oh, and did you have a look at Scanner?
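  The 1Gb remark is just back-of-the-envelope arithmetic (rough numbers, ignoring protocol overhead): a 1 Gbit/s link tops out around 125 MB/s, which a single modern HDD can already saturate sequentially, so an SSD cache gains you little over the network.

```python
# Back-of-the-envelope throughput check (rough, illustrative numbers).
link_bits_per_s = 1_000_000_000          # 1 Gbit/s network
link_mb_per_s = link_bits_per_s / 8 / 1_000_000
hdd_seq_mb_per_s = 150                   # typical modern HDD, sequential reads

print(link_mb_per_s)                     # 125.0
print(hdd_seq_mb_per_s > link_mb_per_s)  # True: the HDD outruns the link
```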
  15. Within Scanner you can do: Settings -> Scanner Settings -> Open Advanced settings and troubleshooting -> Advanced Settings -> Scanner - ScanMaximumConcurrent and set to a high(er) number. I think that would allow you to scan many disks at the same time. Having said that, I set Scanner to scan every 14 days and have ensured that the drives are all scanned at different days. Other than for a quick first scan, I would prefer scanning my collection to be distributed over time.