Covecube Inc.


Thronic last won the day on September 8 2018

Thronic had the most liked content!

About Thronic

  • Rank
    Advanced Member
  1. Just did some spring cleaning in my movie folder and removed over 200 movies: folders containing the movie file and srt files. When I selected all of the folders and pressed delete, about 20 of them came back really fast. I marked those for deletion, pressed delete, and about 5 bounced back. Then 2. After deleting those 2, no more folders "bounced" back. I've probably had these files for 2-3 years, across a couple of iterations of DP. Perhaps outdated ADS properties are causing it? Not a huge deal, more of an awkward observation; it didn't hurt my data, but it required a watchful eye to avoid leaving a mess. I was about to turn off duplication for my media, as I feel the Scanner read scans are a good enough safety net to discover reallocated/pending sectors early, and with the evacuation balancer I feel pretty safe. I have a strong feeling this was a duplication-related event, maybe race-condition based: internal OS/NTFS functions running side by side with what DP wants to do, each a little ignorant of the other. Not thinking about turning off duplication anymore, though, after a 3TB Seagate just died suddenly in my desktop. It only hosted backups that would regenerate, but it's worse if it happens in the pool; that would render evacuation useless. So, just curious about the rubber-band dynamics of folders coming back from the dead. Not a huge deal, mainly just curious. Thanks.
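The re-delete-until-stable routine described above can be scripted rather than babysat. A minimal Python sketch, nothing DrivePool-specific; the function name, pass count and folder list are made up for illustration:

```python
import shutil
from pathlib import Path

def delete_until_stable(folders, max_passes=5):
    """Repeatedly try to remove folders; some may 'bounce back'
    while background duplication is still touching them.
    Returns whatever still exists after max_passes (empty = done)."""
    remaining = [Path(f) for f in folders]
    for _ in range(max_passes):
        for folder in remaining:
            shutil.rmtree(folder, ignore_errors=True)
        # Keep only folders that reappeared (or never went away).
        remaining = [f for f in remaining if f.exists()]
        if not remaining:
            break
    return remaining
```

Running it once replaces the mark-delete-watch cycle; a non-empty return value tells you which folders kept coming back.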
  2. Thronic

    Big Veeam files

    Yeah. It seemed to accumulate a little more data than it should over time, though not much; it worked without errors and test recovery was OK too. I ran it for a few months. EDIT: oops, I thought this was my old DP post. I've never done it with CD. I ended up with rclone and raw gdrive@gsuite instead.
  3. I'd be interested in hearing the outcome of this. Tagging to follow.
  4. Could it be the 90% default balancing rule in the Prevent Drive Overfill plugin that starts overloading your USB array when trying to move things around continuously (will it actually do this, Christopher?)? USB is a very fickle thing, and hubs/backplanes/controllers differ enormously in stability and in actual bandwidth and multiple-device handling. Slow drives, using the system drive, and/or overfilling too much could have given you similar symptoms on SATA.
  5. Thronic

    Disk duplication

    If you're using the technology as designed, you should never need (or want) to know at all; simplicity is the point. DrivePool will handle duplication and the drives in the pool safely for you. Use file placement rules if you want certain files on certain drives. It really doesn't matter where the duplicates live as long as they're somewhere in the pool.
  6. Thronic

    Changing HDD's

    By design, you'd add the new drive and press Remove on the old one, which will safely move its contents out. Alternatively, you can copy the hidden PoolPart folder to the new drive and then swap them while the machine is off or the service is stopped, but that's not clean if there's activity on the pool while you do it.
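For the manual route, here is a rough Python sketch of that offline PoolPart copy; the function name and paths are hypothetical, and plain Explorer copy or robocopy would work just as well, as long as the DrivePool service is stopped first:

```python
import shutil
from pathlib import Path

def copy_poolpart(old_poolpart: str, new_poolpart: str) -> int:
    """Copy the contents of the old hidden PoolPart folder into the
    new drive's PoolPart folder (DrivePool service stopped first!).
    Returns the number of files copied."""
    src, dst = Path(old_poolpart), Path(new_poolpart)
    count = 0
    for entry in src.rglob("*"):
        target = dst / entry.relative_to(src)
        if entry.is_dir():
            target.mkdir(parents=True, exist_ok=True)
        else:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(entry, target)  # preserves timestamps
            count += 1
    return count
```

Copying into the new drive's own PoolPart folder (rather than alongside it) is the key point: the pool then picks the files up on the next remeasure.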
  7. Thronic

    how to delete a pool?

    I would probably just shut it down (unless you have hot/blind-swap support), or take the disks offline via diskmgmt.msc so they all show as missing, then remove them from the pool in that state, like swapping out a bad drive that can't be removed normally.
  8. Thronic

    File change limit?

    For rooms without people this is actually a very good concept; it wouldn't be bad at all. In reality, moving all the valuable data from one building to another is already done in major enterprise computing via time-based geo-replication, e.g. every 5 minutes. I compare that analogy more to SMART and internal drive ECC. Like you say, mitigation is good even if it's not the entire solution.

    A pure system backup is perhaps 20-60GB max. The rest can be backed up as single files or through an atomic/browsable solution. I think this specific situation would be better served by you researching backup options that would make you happy rather than having StableBit create something half-assed. I can't believe Alex would want to implement only partial protection if ever moving in that direction, and behavioral/heuristic algorithms can quickly become a fickle beast to deal with.

    Think about it. If you only have a few files you want to protect, then your own solution won't even work. If you have many, your solution still means accepting SOME damage while not being overrun by it, which is a horrible way to protect anything, and a lot of users would rage against losing anything if it was even an option that had the word "protection" in it. Why have that fire door when you can easily protect everything with a scheduled separate copy and have no performance or usability penalties? Having to manually consider how much an application will modify your files (turning it on/off) is advanced/expert-level computing, and at that level there are much better options, simple, obvious ones, to protect your data.

    I personally don't see why anyone would want something like this when 100% is so easy to achieve, and more often than not the stance is that you either accept loss or you don't. DP is loved over RAID because you'll rarely lose everything, thanks to individual drives keeping their own intact filesystems; that's a forward development from typical RAID, where you would lose everything. What you're proposing is a backwards approach: rarely losing everything, instead of losing nothing if you just bother to do basic separate copies in any of the multiple ways available, even with native tools.

    If anything, I would prefer an option to have some files copied to a read-only location with a retention of X days or something; that would be 100% protection with an easier implementation. The very LAST thing I would like is for DP to try being intelligent and decide what's malware and what's not during normal operations. I don't want that to be its job at all; I just want GOOD pooling with GOOD integrity, and I don't want anything unrelated to even remotely possibly affect that. I'm a fan of tools that mainly do one job and do it well, instead of ad-hoc'ing anything and everything that comes to mind, especially partial solutions to data protection.
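The "separate copy with X days retention" idea could look roughly like the following Python sketch; the function name, paths and retention window are hypothetical placeholders, not an actual DrivePool feature:

```python
import shutil
import datetime as dt
from pathlib import Path

def snapshot_with_retention(src, dest_root, keep_days=14, today=None):
    """Copy src into a dated folder under dest_root, then prune
    snapshots older than keep_days. No malware heuristics -- just a
    dumb, reliable separate copy. Returns the new snapshot path."""
    today = today or dt.date.today()
    snap = Path(dest_root) / today.isoformat()
    if not snap.exists():
        shutil.copytree(src, snap)
    # Prune old snapshots (snapshot folder names are ISO dates).
    cutoff = today - dt.timedelta(days=keep_days)
    for entry in Path(dest_root).iterdir():
        try:
            if dt.date.fromisoformat(entry.name) < cutoff:
                shutil.rmtree(entry)
        except ValueError:
            pass  # ignore folders that aren't dated snapshots
    return snap
```

Run on a schedule, with dest_root on a non-shared location, this gives the full protection argued for above without DP having to guess what is malware.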
  9. Thronic

    File change limit?

    This is a horrible idea. First of all, there would always be SOME damage done, and there are lots of situations where you legitimately write to a large number of files at once. Second, ultimately backups are the only real data protection; DrivePool offers redundancy, which is not a backup. Look into the 3-2-1 rule. That said, you can ransomware-protect your most important files by simply scheduling a separate copy of them, locally on the server, to a non-shared folder. I'd look into the free Veeam agent, available for both GNU/Linux and Windows. It works great with local and network destinations and supports incremental backups plus individual file browsing and recovery. I personally also use rclone to the cloud for offsite copies.
  10. I think they must be cleaned up at some point, or my drives, going all the way back to '14, would be incredibly messy by now; instead there are just the odd empty ones here and there.
  11. Oh, I've been over everything backwards and forwards and then some; it was just this little detail I was missing. I actually wrote a guide on SnapRAID itself in 2016 after testing it extensively, but then left it. Pretty excited to finally have everything covered, as it's been hell deciding which way to go. Storage is one of the few hobby activities I have left, and budget-wise I'm being challenged, but I wasn't gonna give up. I've considered every available storage setup and variation there is to cover my needs, almost complete days and the better part of the nights for over a week, maybe more. It's taken its toll, but it's finally coming together. I'm a perfectionist, so pretty much everything is an obsessive struggle, but this situation was in the extreme range for sure, weighing cost vs need vs want vs possibility.
  12. So, drives in a pool have a PoolPart.N folder. What I'm thinking is pointing snapraid.conf at these. Will that restore the parent PoolPart.N folder as well, or just the sub-content, if leaving out \ at the end? Will it go something like this? A new drive is different, so I have to remove/add it in DrivePool regardless, because of the serial/GUID or whatever. This will create a new PoolPart folder. If I run a SnapRAID fix against that drive after mounting it to the right folder, I'm guessing the old PoolPart folder gets restored next to the new one. Now it should have 2 PoolPart folders. Is the most correct thing here to move the old content into the new PoolPart folder and delete the old one before I remeasure the pool? Or will the old PoolPart still work as long as the .N number is unique... I can't see that happening, as there's no logical reason for DrivePool to handle 2 PoolPart folders on a single drive. Thanks. EDIT: Never mind, I didn't think it through. When SnapRAID is aimed at the new PoolPart folder, it will fill it with the contents of the old. So tired of thinking about different storage setups the last few days that some parts of my brain are getting blurry, I think... Sorry.
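For reference, pointing SnapRAID at the hidden PoolPart folders looks roughly like this in snapraid.conf; the drive letters and PoolPart GUID suffixes below are made-up examples, not values from any real pool:

```
# Parity on a drive outside the pool
parity F:\snapraid.parity

# Content files on at least two separate drives
content C:\snapraid\snapraid.content
content E:\snapraid.content

# Data disks: point at each pooled drive's hidden PoolPart folder
data d1 D:\PoolPart.11111111-1111-1111-1111-111111111111\
data d2 E:\PoolPart.22222222-2222-2222-2222-222222222222\
```

With the data directives aimed inside the PoolPart folders, a fix restores the files relative to whatever PoolPart folder the disk is mounted as, which is why the new folder gets filled with the old contents.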
  13. This thread over at Spiceworks made me curious. The resident storage guru writes: Anything real to this in a DP-type setup?
  14. I'm rebuilding the servers soon and gonna split my current bare-metal box into 2, where one of them is dedicated fully to DP, Scanner and perhaps SnapRAID, nothing else beyond sharing the pool to other computers and servers on the network. What is the best-supported OS for DrivePool? I'm thinking:
      • 2012 R2 for tested and polished support. Clean, simple, runs on the very stable 8.1 kernel from before 10 was its own thing.
      • 2016 for general OS improvements overall. Sceptical, as it shouldn't be more stable than 10, and I have practically no OEM drivers for it.
      • 10 Pro/Ent for the best hardware OEM support, as I'm using largely consumer parts, with GPO-adjusted WU, AV and FW. (Doing this currently, but had a recent crash, so sceptical about solid 10 support.)
    Before this I ran 2012 R2 for 3 years without a hitch. The reason I'm running 10 now is to take full advantage of the AMD Vega 11 drivers, which won't be necessary anymore. My VM testing in HV (VMResourceMetering) also indicates that 2012 R2 has barely any more IOPS activity than Hyper-V Server has, while 2016 has a bunch, even more than CentOS, with Debian falling in between. Based on just 4-8 hour clean installs though, and it won't matter too much, as the system will be on its own dedicated SSD backed up by Veeam. Gotta say, I'm leaning towards 2012 R2 a little... Input welcome.
  15. Further investigation shows a failure from winsrv.dll telling the event log that ProcessID 872(???) tried to read memory at address X, which could not be read. Obviously this was bad enough to cause Win 10 Pro to reboot hardcore style, and I guess this is also what caused Scanner to suddenly develop Alzheimer's. I wonder if it could have been Scanner itself doing something weird... A few minutes or so before, I did a test for Drashna to collect read-striping information regarding a bug when syncing with rclone. Maybe something from this got stuck, or worse, "under the hood". I'd love to hear from Drashna when Alex or whoever gets around to looking at those logs. If not, I'm not sure wth happened, and those are the system errors I hate the most: the ones where I have no idea how they happened.