vfsrecycle_kid last won the day on April 4 2017


vfsrecycle_kid's Achievements

  1. Just noticed this. I've now added a new drive after the previous lost-disk rebalance completed: the old drives show a negative number (space being freed up) and the new drive shows a big number for how much it will grow. This is probably adequate for most purposes.
  2. Maybe not immediately, hence why I said "can it calculate" - I'm aware it wouldn't instantaneously know and that re-measuring is necessary.
     - Losing a disk: can the existing drives hold all the data and retain the replication property? (pool was not full)
     - Losing a disk: how big a drive do you need as a replacement to retain the replication property? (pool was full)
     My point is that DP should be able to figure out both BEFORE duplication simply fails from lack of disk space. Right now all you get is a percentage of duplication progress, and for rebuilds that take multiple days you don't want it to randomly fail at 32% and go unnoticed for days (yes, some people actually use this on their servers and don't check it daily). Also, regarding the math, I've edited my post. This proves I can't even trust myself, and I'd much rather trust what DP tells me. Thanks for the correction. Email notifications (which I use and adore) augmented with such information would be fantastic.
  3. Correct - in that case you don't know until the files are added to the pool, and that is a different beast altogether since you're talking in the realm of usable vs. true disk space. However, this is about re-satisfying existing content replication rules. DP knows what was lost, how it needs to be replaced, and ultimately knows it needs X disk space to satisfy it. To further your point, I concede now that a "free space" counter does not make sense; however, a "disk space needed to re-satisfy replication" figure seems deterministic to me in the case of a dead drive being replaced. A super simple example:
     1. DrivePool - 4x4TB drives with global 2x replication: 8TB usable disk space, reported as 16TB to the OS (no problems here)
     2. DrivePool is 100% full, meaning 16/16TB is being used (8TB of content replicated 2x)
     3. DrivePool loses a drive due to hardware failure. No data loss, but the replication rules are now unsatisfied
     4. At this point, 3x4TB leaves 12TB of pooled space with approx. 4TB* of unreplicated but not lost data
     With the above example, it seems fairly obvious and deterministic that DrivePool should be able to recommend "please insert another drive at least 4TB* in size". Replace the above example with a myriad of replication rules, folders, etc., and the math should still check out - and maybe DP could recommend that to the end user? Leading us back to my initial topic: if it's easy to show how much space is needed to rebalance properly, you can also say how much space will be left post-balance... as in, if a 4TB* drive is needed but you insert an 8TB one, you already know you'll have 4TB free in the end.
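The arithmetic in the example above can be sketched in a few lines. This is a toy model, not how DrivePool actually computes anything, and it assumes a single global duplication factor while ignoring per-drive placement constraints (copies of a file must live on different disks, so the result is a lower bound):

```python
def replacement_needed(surviving_tb, unique_tb, duplication):
    """Minimum replacement-drive size (TB) needed to re-satisfy replication
    after a drive is lost. surviving_tb: sizes of the remaining drives;
    unique_tb: pre-duplication data in the pool; duplication: global factor."""
    required_raw = unique_tb * duplication      # total space all copies need
    deficit = required_raw - sum(surviving_tb)  # raw space the pool is short
    return max(0, deficit)

def free_after(surviving_tb, unique_tb, duplication, new_drive_tb):
    """Pool space left over once replication is satisfied again."""
    return sum(surviving_tb) + new_drive_tb - unique_tb * duplication

# The 4x4TB example from the post: pool was 100% full, one drive lost.
print(replacement_needed([4, 4, 4], unique_tb=8, duplication=2))          # 4
print(free_after([4, 4, 4], unique_tb=8, duplication=2, new_drive_tb=8))  # 4
```

With varied per-folder rules the same idea applies per rule, summed over the affected folders.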
  4. Well, that's the part I feel could be pre-calculated. Balancing with the rules you have in place should be deterministic, no? At least in my mind, balancing follows a plan - hence its ability to display a percent-completed figure.
  5. That's why it's so complex: it depends on what the replication rules for the lost data were set to. For example, the balancing finished and I was left with 1TB free (and with my replication rules in place, that realistically means 0.5TB usable). I guess the ultimate question is whether there's a way to calculate the estimated actual free disk space.
  6. Hello, hopefully this question makes sense. I have a big DrivePool and one of my 8TB drives just died. While I wait for the Ironwolf 16TB that will replace it, re-balancing across the other drives has begun. However, it will be a multi-day rebuild, and drive space will definitely be tight - leading to my questions:
     1. Is DrivePool smart enough to know whether there's enough space to rebuild with my replication rules (2x globally, 3x for special folders)?
     2. Does DrivePool know how much free space will be available once rebuilding is finished?
     3. Is that presented to the user somehow? I see metrics like "Free Space" and "Unduplicated", and while Free Space > Unduplicated, I assume my duplication factor determines whether there's enough space or not.
     Cheers
  7. My father encountered similar issues when trying to access our pool before: reads were fine, but writes weren't allowed. At the time, a restart fixed the problem and we all had permission to access the pool again. From there we just considered it a fluke and updated to the latest beta build. Haven't encountered it since.
  8. That sounds fantastic. And just to confirm: I can get this sort of functionality without relying on their Cloud (that is to say, on their free plan)?
  9. I definitely agree with you both that I should employ some sort of cloud solution, at least to get rid of the 'single point of catastrophic failure' (quite honestly, it had slipped my mind; I'll look into it later in the week - Amazon has a $60/yr "Unlimited" Photos/Videos storage plan, but I'll dig deeper). However, that alone won't really address my issue. As you mentioned yourself, it only helps "as long as you noticed before the deletion goes out of rotation" - and I can't necessarily trust that we'll happen to find out a file is missing. Things could disappear and we wouldn't even know they are gone. I know Dropbox keeps a detailed log of deleted files and sometimes offers the ability to recover them, but I think in my case I really want a robust, highly documented/customizable solution. (Example: on Dropbox you can't say "keep deleted files for X period" - but can you on Amazon? I'm not sure, I'll have to check.) Shane makes a fantastic point that something with explicit versioning would most likely help me out here. I've looked at Syncthing in the past (mostly because I helped set my neighbors up on a Syncthing cluster), as well as Crashplan. The former doesn't have email notifications, however. But Syncthing does offer an interesting feature: versioning (see https://docs.syncthing.net/users/versioning.html - most notably the Simple File Versioning mechanism). Namely, I can keep X copies of deleted files (in my case, probably 1 will suffice). Granted, I know close to nothing about Crashplan, so I will look into it. I'm a little biased toward the open-source Syncthing as it has treated my neighbors well (they have a 3-machine, 2-laptop cluster that keeps their ~50GB of essential data safe). What I think I will do is a test run of Simple File Versioning on my neighbors' machine to see how this versioning mechanism works.
It would be minimal effort for me to code a small script that sends out an email whenever files are moved into Syncthing's versioning folder (as mentioned in the Syncthing docs above, a ".stversions" folder is created). This way I don't have to worry about the problem of 'noticing deleted files before they are rotated': with Syncthing I can simply set it to move deleted files to .stversions and never remove them from there unless I do so manually. And of course there is the added benefit of having a node in the cluster running outside of the house, in case of fire. --- My mind is all over the place, but I am not necessarily in a hurry to implement all of this. Thank you for the great ideas so far, especially since they never occurred to me.
  10. Hi folks, got a question I figure some people in here may have thought about. There's the age-old debate of duplication vs. backup - and I get it. I've got family photos in a DrivePool with 3x global duplication (around 1TB total), so it's all replicated within the pool. While this protects against hard drive failures, it does not protect against mistakes. My father could delete the Photo directory and it's effectively game over (yes, there are undelete tools, but let's ignore those for the sake of the argument). What I am wondering is whether people have any tried-and-tested, minimal-effort solutions to this kind of problem vector? My initial ideas:
      1. Remove deletion privileges from the main user account used to access the NAS. Destructive actions can only be performed via a special "Deletion" Windows user with a different login and password.
      2. Alternatively: create a second DrivePool for data I want to designate as "mistake proof", and use some form of incremental/differential backup tool to routinely (every week?) mirror from the original DrivePool to the secondary pool. The idea being that if files are ever deleted on Pool A, their "ghost" will always live on Pool B (i.e. Pool B should be forever growing, never shrinking).
      3. Or: some sort of hybrid solution between 1 and 2?
      The second solution assumes the data rarely changes - which is the case for my family photos. Now I'm not saying any of my solutions are right; I am still very much in the brainstorming process. I want my family to have confidence that I can keep their data safe (the DrivePool is around 40TB now, but for this post only 1TB applies to my 'problem') - and that effectively means I need to protect them... from themselves. And who knows, maybe I'll type the wrong command in the CLI one day and accidentally nuke everything... Thanks folks!
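Idea 2 above (a mirror that only ever adds) can be sketched in a few lines. This is a hedged illustration, not a recommendation of a specific tool; the pool paths at the bottom are hypothetical:

```python
# Sketch of an additive mirror: copy new or changed files from Pool A to
# Pool B, but NEVER delete anything on B, so deletions on A leave a
# "ghost" behind on B. Intended to run from a weekly scheduled task.
import shutil
from pathlib import Path

def additive_mirror(src: Path, dst: Path) -> list:
    """Copy files from src into dst without ever removing anything from
    dst. Returns the relative paths copied on this run."""
    copied = []
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        rel = f.relative_to(src)
        target = dst / rel
        # Copy if missing on B, or if A's copy is newer or differs in size.
        if (not target.exists()
                or f.stat().st_mtime > target.stat().st_mtime
                or f.stat().st_size != target.stat().st_size):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied.append(str(rel))
    return copied

# Hypothetical pool drive letters:
# additive_mirror(Path("P:/Photos"), Path("Q:/PhotosMirror"))
```

On Windows, `robocopy /E` (without `/PURGE` or `/MIR`) gives the same additive behavior, since robocopy only deletes destination files when explicitly told to.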
  11. Thanks for the clarification - just didn't want to make a mistake. My DrivePool (minus the one drive) is now balancing; it's around 40% done, so I will let that finish overnight. I'm running chkdsk /B on the potentially problematic drive now; that will also run overnight. It could very well be that there's nothing wrong with the drive and it was simply some weird state the OS was in. Fingers crossed. Edit: I've got chkdsk /B running on the drive inside another machine.
  12. You'll want to open diskmgmt.msc and, from there, right-click the drives in an order that does not produce conflicts. See: http://wiki.covecube.com/StableBit_DrivePool_Q6811286 If you don't want W, X, Y, Z to be seen at all, then you do not need to give them drive letters (DrivePool will still be able to pool them together); instead of clicking "Change", simply click "Remove" when dealing with the drive letters. Probably the easiest order if you want to keep every drive lettered:
      1. Change F to D
      2. Change E, G, H, I to W, X, Y, Z
      3. Change J to E (now that E has been freed up)
      Probably the easiest order if you want the pooled drives to have no drive letter:
      1. Change F to D
      2. Remove the drive letters from E, G, H, I (see my picture for the Remove button)
      3. Change J to E
      You should be able to keep DrivePool running during this whole transition (you don't need to remove drives from the pools). Personally I'd go with option 1. While there are folder mounts you could use, I think it would be easiest to keep everything accessible the "normal" way. Plus, without drive letters you won't be able to add non-pooled content to the pooled drives (just in case you wanted to do that). Hope it helps.
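Both orderings above follow one general rule: only apply a letter change whose target letter is currently unoccupied, and if every pending change is blocked (a swap cycle), park one drive on a spare letter first. A small sketch of that rule, using the letters from the post (this just computes an order; it doesn't touch the disks):

```python
# Compute a conflict-free order for drive-letter changes.
# mapping: current letter -> desired letter.

def safe_rename_order(mapping: dict) -> list:
    """Return (from, to) steps such that no step targets an occupied letter."""
    pending = dict(mapping)
    occupied = set(pending)          # letters currently in use
    steps = []
    while pending:
        for src, dst in pending.items():
            if dst not in occupied:  # target letter is free: safe to apply
                steps.append((src, dst))
                occupied.discard(src)
                occupied.add(dst)
                del pending[src]
                break
        else:
            # Every pending change is blocked (a cycle, e.g. swapping two
            # letters): park one drive on a spare letter to break it.
            src = next(iter(pending))
            spare = next(c for c in "ZYXWVUTSRQPONMLKJIHGFED"
                         if c not in occupied and c not in pending.values())
            steps.append((src, spare))
            occupied.discard(src)
            occupied.add(spare)
            pending[spare] = pending.pop(src)
    return steps

# The "keep every drive lettered" scenario from the post:
print(safe_rename_order({"F": "D", "E": "W", "G": "X",
                         "H": "Y", "I": "Z", "J": "E"}))
```

This reproduces the post's order: F to D first, then E/G/H/I out of the way, and J to E last once E is free.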
  13. Hi folks, I've got a drive in a pool that has unreadable sectors (global 2x duplication). As of right now, Scanner is reporting 4,838,830 unreadable sectors (2.31GB) after finishing its scan. I have the option to start a file scan, but I have not started it yet. --- Meanwhile, on the DrivePool side, the affected disk has been limited:
      - There is a red arrow offering: "New file placement limit: 0.00GB"
      - There is a blue arrow offering: "Duplicated target for re-balancing (-2.04TB)"
      - A few other drives in the pool have a blue arrow: "Duplicated target for re-balancing (XXGB)"
      BETA x64, Win10. I'm a little confused as to the order I should be doing things in... DrivePool reports "Duplicating..." and then returns an error related to 2 files, giving me the option to Duplicate. This process seems to repeat. Looking at this thread: http://community.covecube.com/index.php?/topic/829-pool-going-to-hell/ I'm still a little confused as to the order of operations:
      1. Drive has unreadable sectors
      2. DrivePool sets a new 0.00GB file placement limit
      3. DrivePool attempts to evacuate data (is this done in my case? How do I interpret that negative number?)
      4. Run a file scan to "repair data"
      5. Pull out the failing drive
      6. RMA the failing drive
      Basically, the heart of my question is: what do I do now? And when is it safe to pull out the drive and begin the RMA? Thanks
      edit: I clicked file scan and it just says the MBR is damaged. Regardless of the issue (I suppose I can see whether this is a real drive failure or something a reboot will fix) - has my data been evacuated?
      edit 1.5: Running chkdsk complained about the disk being RAW; rebooting the machine fixed that issue, so I wonder if this is just a false positive or some weird state everything was in.
      edit 2 (newest): If I can trust that my 2x duplication works, is it fair to assume that I can simply pull out the drive and let DrivePool re-balance everything across the remaining drives in the pool (there is enough space), implying that I am temporarily in a state of no duplication for the files on the drive I just pulled out? If that is the case, I'm fine with just formatting the pulled-out drive and investigating further whether I need to RMA it - or just throwing it back into the pool as a fresh drive.
  14. Sounds good. Thanks for the background info. Successfully pulled out and formatted the two removed drives, and replaced them with 2 new 8TBs. No issues to report. Thanks again for the great product. Made a potentially scary problem very easy to handle.
  15. Hello, thanks for the feedback. You are correct: while the first drive was mid-removal (let's say 1% or so), I queued up the second drive to be removed. I was not actively removing anything from the pool myself, though that is not to say nothing automated/service-like was accessing the system. And also to clarify: the lingering 400GB on both removed drives was equivalent, and matches the CRC of the data on the pool post-removal. My initial issue was that I didn't want to delete the PoolPart folders on the 2 removed drives unless I was 100% sure their contents still persisted within the pool. After my own check it is clear that everything is good, and I can clean up the two newly removed drives. I'm currently out of the country, and since removals are rare, I'm not going to attempt to upgrade to the beta version remotely. So hopefully all is well. Thanks for the insight.