Covecube Inc.

All Activity


  1. Yesterday
  2. I'm setting up SnapRAID to use along with DrivePool - I've seen lots of threads on here about that. However, I'm running up against a couple of things I haven't seen mentioned. Unlike some others, I AM using some duplication and some DrivePool balancing. I realize that SnapRAID won't be able to complete parity until the balancing settles down - that's fine. Once it's all rebalanced, the drives I'm using with SnapRAID will hold static media files. Question 1: I have some files that will work better outside of the pool - such as image backups. I've seen people recommend setting the base folder for SnapRAID to the PoolPart folder rather than the root of the drive - though I've never understood the reason for this. Doing that would presumably exclude files outside the pool. Would I be better off just setting the data drive to the root rather than the hidden PoolPart folder? (Maybe if I understood the reason for doing the latter, this would be clear.) Question 2: As SnapRAID does the sync, it reports files that it thinks are copies but that have different file data than the "other file" with the same date and size. Those files appear to be backups of a database that gets saved frequently into dated folders. SnapRAID says that if it's a "false positive" I can rerun sync with a "nocopy" switch. What's unclear to me is what that switch actually does: is SnapRAID saying it won't count these as distinct files unless I use that option, or is "nocopy" the way to tell it NOT to treat them as copies? Thanks!
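For reference, the two layouts discussed in the post above differ only in which path the `data` directive in snapraid.conf points at. This is a minimal hedged sketch, not taken from the thread - the drive letters and PoolPart folder name are placeholders, and the `--force-nocopy` flag is SnapRAID's documented way to disable copy detection for a run:

```shell
# snapraid.conf -- hypothetical Windows layout (placeholders throughout)
#
# Option A: protect only pooled files (base folder = hidden PoolPart folder)
#   data d1 E:\PoolPart.xxxxxxxx-xxxx\
#
# Option B: protect everything on the drive, pooled or not (base = drive root)
#   data d1 E:\
#
# parity  F:\snapraid.parity
# content C:\snapraid\snapraid.content

# If SnapRAID's copy detection flags false positives during a sync,
# copy detection can be turned off for that run:
snapraid --force-nocopy sync
```

Note that with option A, any balancing move between PoolPart folders looks to SnapRAID like a delete on one disk and a new file on another, which is why parity can't settle until balancing does.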
  3. I've been reading through the forum and the Hierarchical Pooling posts, but it's still not clear to me how I should proceed. Right now my local pool setup is one 500GB SSD and one 6TB HDD, with another HDD to come in the future. What would be the best way of having a VM on the SSD of the local pool (for performance) while also using the SSD as a balancer with the SSD Optimizer plugin? Maybe two partitions on the SSD, one for the VM and one for balancing? If I set a file placement rule for the VM folder to be placed only on the SSD, but I also activate duplication for it, would it be copied to the HDD, or not, because of the file placement rule? If later on I want to add a cloud pool and activate duplication of the VM to the cloud, should I create a Hybrid Pool from my local pool and the cloud pool and there again set a duplication rule for the VM folder? Thank you, and sorry for my English.
  4. Last week
  5. Thanks! What is "the predefined order on the archive disks" though? Is that just the order they appear in the 'drives' section? Essentially, I want DrivePool to fill my drives sequentially (as per the Ordered File Placement balancer), but to use an SSD as the temporary storage space for newly created files. I.e.: new file in pool (copy from network, etc.) ---> SSD ---> archive disks, filled in a set order
  6. To update you, Alex is already planning on implementing such a link. It's half there, as he's still working on that specific feature.
  7. I was trying to do something similar. Using an SSD as balancer and storage for VM folder (for performance). But a question came to my mind. If I add a file placement rule to have the VM folder always in the SSD... what happens if I also activate folder duplication for it?
  8. Interesting. I also use Macrium and will look at this with a view of moving my image files out of the pool.
  9. Got an old 640GB drive from a system being tossed (WD Blue) - manufactured 2009. Obviously it's pretty small, so it's not a big concern, but I was surprised to find that Scanner started marking every sector as bad - I stopped it after something like 30,000 bad sectors. I ran chkdsk with the full scan and fix, and it reported no problems at all. (I also did a full format and again ran it through Scanner - same thing. And again chkdsk says all good.) The drive was plugged into a USB dock that I've had no previous problems with. It also said no SMART data was available - but I assume that's just due to its ancient vintage. Are there known differences in older drives that would cause Scanner to think everything was bad?
  10. So I've been having trouble getting a potentially damaged drive evacuated and removed. I've posted other places about that so I won't go into those issues. However, after manually removing unduplicated files, I physically took the drive out of the system. When DP said there was a Missing Drive, I clicked Remove, and since it was absent, it pretty quickly got removed. I then plugged the drive back in and it immediately appeared back in the pool - rather than in unpooled drives. This is not what I would have expected - I'm wondering if this is intentional or a problem. My assumption is that once I click Remove and DP says it was successful, the drive will not re-appear in the pool unless it's deliberately added. I'm guessing that this is related to having used the Remove command when the drive was missing - but still, I wouldn't expect this. I couldn't use the Remove command while the drive was still present as there was no progress even after a couple days, despite there being only a few hundred GB of files on the drive. (Note my pool has been measuring for about a week, with minimal progress).
  11. I've been puzzled why unduplicated content on a drive with SMART warnings and damage was not being removed, despite setting various balancers to do so. I discovered today part of the reason - possibly all of it. I believe all the remaining unduplicated content were backup images from Macrium Reflect. Reflect has an anti-ransomware function that can prevent any changes to backup files. This was preventing drivepool from removing them. I realized this when I shut down drivepool service and tried to manually move those files to another drive. I'd have expected drivepool to report that files were inaccessible - but apparently it did not know other software was actually blocking it - which brings me to the next issue. Stablebit Scanner reported 20 unreadable sectors on this drive and 3 damaged files. SMART had indicated 5 pending sectors. I decided to re-run the scan after disabling the Macrium Image Guard - so far, it appears the unreadable sectors may have been caused by Image Guard and may not be bad. Remains to be seen whether the 5 pending sectors will end up being reallocated or become readable. The damaged files however were NOT image backups, so it's unclear if there was any connection. Bottom line: don't use Macrium Image Guard (or any other similar software) with pooled files. I may just move my image files out of the pool to avoid the issue.
  12. I've resolved a part of this, but I'm going to start a couple of new threads so people can find them in the future.
  13. The bottom third or so of your screenshot contains all of the ordered placement settings under the SSD optimizer.
  14. Thank you for the reply! I would like to try this, but I don't see "ordered placement" in the SSD optimizer... am I just overlooking it? I've pulled the SSD for now, that's why it doesn't show up in the list ATM. Thanks again!
  15. I'm assuming you're referring to unlinking the hard drive from the drive pool - yes, I understand that. What I'm wondering is: after unlinking the hard drive, how does DrivePool handle the data that was on that specific drive? Does it automatically start transferring the data to other available hard drives that are currently in the pool? I'll try that out tonight and see what happens.
  16. Thank you for the quick reply and suggestion. I am pretty sure it was a WHS2011 backup. I tried both, and got the same error on both.
  17. Ideally, you'd want to remove the disk from the Pool using the normal method and then add the new disk. Anything outside of that is basically not supported, because it is prone to issues and complications.
  18. That may be an issue with the version of the Client Backup database. Are you using this one? http://wiki.covecube.com/WhsDbDataDump_2.0 If so, that may be why, if you were using WHS2011. In that case, try: http://dl.covecube.com/WhsDbDataDump/BETA/WhsDbDataDump-1_0_0_6-BETA.zip
  19. Appreciate it, and love the vision for this cloud dashboard. Very excited to see future new features.
  20. I long ago made the switch from WHS to a Win 10 build with Stablebit DrivePool. I recently found a few files that I need to restore to an older version. I kept the client backup folder intact, and I see the PC and the dates of the backups when I run the whsdbdatadump command. However, each time I run it I get an error: "Unhandled Exception: System.OverflowException: Arithmetic operation resulted in an overflow." Looking for advice or perhaps another tool that might work? Thanks!
  21. My question is more of a how-to, and I was unable to find an answer via search (most likely not using good keywords, sorry). When we need to replace a physical drive for whatever reason (damage or, in my case in the near future, an upgrade), what is the process to ensure we do not lose data on the drive being removed if it's a part of the pool? How do I ensure that data is migrated to another hard drive that isn't being removed?
  22. Fully recognize that the current issue is not mine (but I'm the OP); however, I would highly appreciate it if: 1. Someone could explain how to find out which files on the drives are unduplicated. 2. This thread could be updated with the recommended processes/commands to follow when a problem occurs, or a link to such processes/commands. Cheers, Edward
  23. the disconnects were actually on two other drives - not the damaged ones. I'm not sure I'm understanding the dpcmd ignore -- I don't know which files on this drive are unduplicated and which are duplicates - so it's not clear to me how I should manually remove the files. That's why I set the drive usage limiter to remove only unduplicated files - but that hasn't happened at all. Feel free to respond on my ticket if you prefer.
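As a hedged aside on the "which files are unduplicated?" question above: DrivePool ships a command-line tool, dpcmd, whose check-pool-fileparts command lists each pool file along with how many file parts (copies) it has. The exact detail level and paths below are placeholders, so treat this as a sketch to verify against `dpcmd` ' s own help output rather than an authoritative recipe:

```shell
# Hypothetical usage -- run from an elevated command prompt.
# P:\ is a placeholder for the pool drive letter; the number is a
# detail/verbosity level. Files reported with 1 file part are
# unduplicated; 2 or more means duplicated. Redirect to a file,
# since the output can be very large.
dpcmd check-pool-fileparts P:\ 3 > C:\fileparts.txt
```

This gives a way to manually move only the unduplicated files off a failing disk when the balancers aren't making progress.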
  24. Well, the removal failing is. And unfortunately, because the disk is having issues, the remeasure is worse (since it's I/O intensive). And I don't think the remeasure is normal here, but I'd have to double check. And that there are disk issues .... if the drive disconnects and reconnects during this process, it will trigger a remeasure to occur, which may be what is happening here. As for balancing, yeah, that's intentional. And either way, the "dpcmd ignore-poolpart" command should be pretty instantaneous, and should prevent the issue from occurring (but may trigger a remeasure)
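A hedged sketch of the command referenced in the reply above, for readers searching later. The pool letter and PoolPart folder name are placeholders, and the exact argument form should be confirmed against `dpcmd` ' s built-in help before use:

```shell
# Marks the given poolpart (the hidden PoolPart.* folder on the
# failing disk) as ignored, so the pool stops using that disk
# immediately without waiting for a full Remove pass to finish.
# May trigger a remeasure, as noted above.
dpcmd ignore-poolpart P:\ D:\PoolPart.xxxxxxxx-xxxx-xxxx
```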
  25. Noted, and will pass that feedback along.
  26. Also, as I've explained in our private messages on my ticket, I tried the forced damaged-disk removal - it's not that it's getting stuck on damaged files - it's that it hasn't even begun removing files from this drive, because it's been measuring the pool for days. It did at one point start to balance, but it only moved a few files on other drives and barely touched this one.