Everything posted by Thronic

  1. Oh, I've been over everything backwards and forwards and then some; it was just this little detail I was missing. I actually wrote a guide on SnapRAID itself in 2016 after testing it extensively, but then left it. Pretty excited to finally have everything covered, as it's been hell deciding which way to go. Storage is one of the few hobby activities I have left, and budget-wise I'm being challenged, but I wasn't going to give up. I've considered every available storage setup and variation there is to cover my needs, for nearly complete days and the better part of the nights, for over a week, maybe more. It's taken its toll, but it's finally coming together. I'm a perfectionist, so pretty much everything is an obsessive struggle, but this situation was in the extreme range for sure, weighing cost vs. need vs. want vs. possibility.
  2. So, drives in a pool each have a PoolPart.N folder. What I'm thinking is pointing snapraid.conf to these (rough sketch below). By the way, will that restore the parent PoolPart.N folder as well, or just its contents, if I leave out the trailing \? Will it go something like this? A new drive is different, so I have to remove/add it to DrivePool regardless because of the serial/GUID or whatever. This will create a new PoolPart folder. If I then run 'snapraid fix -m' against that drive after mounting it to the right folder, I'm guessing the old PoolPart folder gets restored next to the new one, so the drive would have two PoolPart folders. Is the most correct thing here to move the old content into the new PoolPart folder and delete the old one before I re-measure the pool? Or will the old PoolPart still work as long as the .N number is unique... I can't see that happening, as there's no logical reason for DrivePool to handle two PoolPart folders on a single drive. Thanks. EDIT: Never mind, I didn't think it through. When SnapRAID is aimed at the new PoolPart folder, it will fill it with the contents of the old one. I'm so tired of thinking about different storage setups these last few days that some parts of my brain are getting blurry, I think... Sorry.
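     A rough sketch of the snapraid.conf side of this (all drive letters, mount folders and PoolPart names below are placeholders, not my real ones):

       # point SnapRAID at the hidden PoolPart folders inside each pooled drive
       parity   P:\snapraid.parity
       content  C:\snapraid\snapraid.content
       content  D:\Mounts\Disk1\snapraid.content
       data d1  D:\Mounts\Disk1\PoolPart.XXXX-1\
       data d2  D:\Mounts\Disk2\PoolPart.XXXX-2\

       # after replacing a failed disk, mounting it at the same folder and letting
       # DrivePool create its new PoolPart folder there (and updating the d1 path
       # to that new folder name), something like:
       snapraid -m -d d1 -l fix.log fix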
  3. This thread over at Spiceworks made me curious. The resident storage guru over there writes: Anything real to this in a DP type setup?
  4. I'm rebuilding the servers soon and going to split my current bare metal into two, where one of them is dedicated fully to DP, Scanner and perhaps SnapRAID. Nothing else beyond sharing the pool to other computers and servers on the network. What is the very best supported OS for DrivePool? I'm thinking:
     - 2012 R2 for tested and polished support. Clean, simple, runs on the very stable 8.1 kernel from before 10 became its own thing.
     - 2016 for general OS improvements overall. Sceptical, as it shouldn't be more stable than 10, and I have practically no OEM drivers for it.
     - 10 Pro/Ent for the best hardware OEM support, as I'm using largely consumer parts, with GPO-adjusted WU, AV and FW. (Doing this currently, but I had a recent crash, so I'm sceptical about solid 10 support.)
     Before this I ran 2012 R2 for 3 years without a hitch. The reason I'm running 10 now is to take full advantage of the AMD Vega 11 drivers, which won't be necessary anymore. My VM testing in Hyper-V (VMResourceMetering) also indicates that 2012 R2 has barely any more IOPS activity than Hyper-V Server, while 2016 has a bunch, even more than CentOS, with Debian falling in between. That's based on just 4-8 hour clean installs though, and it won't matter too much as the system will be on its own dedicated SSD backed up by Veeam. Gotta say, I'm leaning towards 2012 R2 a little... Input welcome.
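     For reference, the metering I mention is just the built-in Hyper-V cmdlets; a minimal sketch, assuming a test VM named "srv-test" (placeholder):

       # enable resource metering, let the VM run for a while, then read the report back
       Enable-VMResourceMetering -VMName "srv-test"
       # ...hours later...
       Measure-VM -VMName "srv-test" |
           Select-Object VMName, AggregatedAverageNormalizedIOPS, AvgCPU, AvgRAM
       # Reset-VMResourceMetering -VMName "srv-test" clears the counters for the next run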
  5. Further investigation shows a failure from winsrv.dll telling the event log that ProcessID 872 (???) tried to read memory at address X, which could not be read. Obviously this was bad enough to cause Win 10 Pro to reboot hardcore style, and I guess this is also what caused Scanner to suddenly develop Alzheimer's. I wonder if it could have been Scanner itself doing something weird... A few minutes or so before, I did a test for Drashna to collect read-striping information regarding a bug when syncing with rclone. Maybe something from this got stuck, or worse, "under the hood". I'd love to hear from Drashna when Alex or whoever gets around to looking at those logs. If not, I'm not sure wth happened, and those are the system errors I hate the most, the ones where I have no idea how they happened.
  6. I had about 10 of these happening last night in my event log. Disk 11 is DrivePool. This followed directly after Veeam (the free agent for Windows) started, so it may be related to that. I'll update and keep an eye on it.
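     If anyone wants to look for the same thing, this is roughly how I pull disk warnings/errors out of the System log (PowerShell; the provider-name filter is an assumption, adjust it to whatever your events actually come from):

       # recent error/warning events from the "disk" provider in the System log
       Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'disk'; Level = 2,3 } -MaxEvents 50 |
           Format-Table TimeCreated, Id, Message -AutoSize -Wrap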
  7. When I do that and try to mark them as Good, they jump back to being unchecked. Kinda weird. Should perhaps reinstall it.
  8. I'm not even allowed to mark them as healthy, even though I know they are; the option is greyed out. That's great.
  9. Well, their status is "Last scanned: Never", which is very wrong. And the heat limit I had set on the C drive was reset as well. After a closer look, it seems the server had rebooted. A look in Event Viewer indicates an unexpected shutdown... something with volmgr failing to create a dump file right after that, relating to HarddiskVolume24. Still working it out...
  10. I was watching Plex just now when it suddenly started to buffer, and it never does that, ever... I checked the server next: loads of activity going on and it was acting all kinds of sluggish. I started Scanner and noticed it had reset all settings, with all 10 drives suddenly marked as never checked.
  11. Thronic

    Big Veeam files

     From Veeam docs: I wonder, if I aim Veeam at CD, whether it would re-upload the entire VBK file to the cloud (all 960GB of it) under the hood, or just the changed blocks. I'm considering offsite backup of my local backups and I'm not sure what path to take. I'm not sure if rclone does this or not; I've got to find a solution that won't re-upload the entire file "just" because of a 1-2GB change. Perhaps a tall order, but I'm thinking CD may work here due to the abstraction its FS layer offers through its chunks of data; GSuite won't see the big files. I'd just like to confirm it first, if Alex or Christopher or anyone else who knows could provide some input. Also, what upload cache challenges may I run into here, if any, when uploading 1TB+ files? As well as download cache when retrieving. Thanks.
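     For comparison, the plain rclone route I'm weighing would look something like this (remote name and paths are placeholders, and as far as I know rclone re-uploads a whole file to most remotes when any part of it changes, which is exactly the problem):

       # one-way sync of the local Veeam repository to a Google Drive remote
       rclone sync "D:\VeeamBackups" gdrive:offsite/veeam --transfers 2 --drive-chunk-size 64M --progress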
  12. It's just exaggerated. The average URE rates of 10^14/10^15 are taken literally in those articles, while in reality most drives can survive a LOT longer. It's also implied that a URE will kill a resilver/rebuild without exception. That's only partly true, as e.g. some HW controllers and older SW have a very small tolerance for it. Modern and updated RAID algorithms can continue a rebuild with that particular area reported as a reallocated area to the upper FS, IIRC, and you'll likely just get a pre-fail SMART attribute status, as if you had experienced the same thing on a single drive, which will act slower and hang on that area in much the same manner as a rebuild will. I'd still take striped mirrors for max performance and reliability, and parity only where max storage vs cost is important, albeit in small arrays striped together.
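     To put numbers on why those articles sound so dire, here is the naive arithmetic they rest on, using a hypothetical 12 TB rebuild and taking the 10^14 spec literally as a per-bit probability (which real drives don't behave like):

       bits read during rebuild   = 12 * 10^12 * 8          = 9.6 * 10^13
       spec'd URE rate            = 1 error per 10^14 bits
       "expected" UREs            = 9.6 * 10^13 / 10^14     = ~0.96
       P(no URE during rebuild)   = (1 - 10^-14)^(9.6*10^13) = ~38%

     Read literally, that says most large rebuilds should hit a URE, which is exactly the assumption I'm saying is exaggerated in practice.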
  13. Parity-based, yes (I dislike parity in general, except parchive/par2 for cold backups) for HDDs (SSDs/NVMe are taking a liking to it as the typical URE danger is nonexistent); striped mirrors are still fine with a proper controller and spares available. Motherboard RAID is just software RAID pretending to be hardware RAID and is a bad lock-in to be tied to. Most cards are not a "HW" option either if they're entirely SW based, with no BBU or writeable flash cache and no RAM. They are, however, excellent HBAs when flashed to the version you suggest; I run a few 9211-based ones myself. While HW RAID still has a place in enterprise, I would not really consider real HW RAID anymore for private projects, mostly because the price doesn't outweigh the power of SW these days when UPS and ECC protected. HW can support blind swapping, which is very practical in big enterprise, but SAS HBAs still have hot swap, which is fine for most.
     While I love the individual and mobile NTFS JBOD aspect of DP, I often ponder the complicated OS layer it deals with and the potential driver/kernel/memory problems that may occur. I'm always careful when using the UI not to challenge it too much; I've experienced some weird FS race conditions etc. since starting to use it in 2014, but most were fixed along the way. I sometimes wonder if I would be better off growing a simple MD RAID in pairs instead, as I run 2x duplication anyway, and depending on the number of mirrors I would be able to actually lose a bunch of drives simultaneously. I wouldn't even want LVM, so it would just be simple ext3/4 on top, with gparted to extend when needed, on a clean Debian install with automated smartmontools monitoring in the background. E.g. a backup server in a workspace running this suddenly dies, and Mr. X suddenly needs to restore his workstation. Getting the array back up should be as simple as moving the drives to any computer, booting off a live CD and running mdadm --assemble --scan to copy out the needed data (rough sketch below). I can't really think of any negatives outside of having to run a GNU/Linux installation for it, as well as a much higher risk of making accidental catastrophic administration mistakes (human error is the #1 cause of complete array failures).
     Maybe I should have already focused in this direction, but I have been lazy (mildly tired of IT) and blinded by the excellent simplicity that StableBit offers, not to mention their support. When I already have the proficiency to do this, and have had for many years, it's hard to view it as someone who doesn't and just wants a simple solution. I can't work out whether that is the real DP market, or whether it makes sense that I use it as well. Then I start thinking of the developers of DP: why did they bother making this when they could have done the same thing? Was it just to fill the demand after the Drive Extender exit? Would they have done it today with Storage Spaces on the rise (even if heavily PowerShell dependent), if staying on Windows is the reason? I'm also curious about the business direction StableBit ultimately wants to move in: just storage-hoarding consumers, or enterprise as well? And whether they're supporting any big enterprises already.
     Time to feed the kids before bed here in Norway... Sorry for the wall of text (and perhaps bad grammar); I've had storage on my mind a lot lately as I'm itching to do another project soon. Feel free to ignore at will.
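     A minimal sketch of the mirror-pair idea, assuming two placeholder disks /dev/sdb and /dev/sdc and plain ext4 on top:

       # create a simple RAID1 pair, put ext4 straight on it, mount it
       mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
       mkfs.ext4 /dev/md0
       mkdir -p /srv/backup && mount /dev/md0 /srv/backup

       # disaster scenario: move the disks to any Linux box or live CD
       mdadm --assemble --scan     # re-assembles from the on-disk metadata
       cat /proc/mdstat            # array may come up as /dev/md127 on a foreign system
       mount /dev/md127 /mnt       # then just copy out whatever Mr. X needs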
  14. I only store media files in the pool. I know how normal degradation works, but I was curious whether the process or events differ with hierarchical pooling. The last question was about caching/cache size for CloudDrive. I'm going to detach it and dedicate a 128GB SSD I have available to expandable caching, so I don't have to worry about it filling up or affecting the system drive.
  15. I'm a bit hesitant on this step: 5. Move all my Files from I: to I:\PoolPart.XXXYYY. Will it start a full cut-and-paste operation, or transparently just move the index since it's on the same DrivePool mount (see the sketch below)? That pool has 10 drives with 18TB of duplicated data on it at the moment. When that's done I was thinking of moving drive letters around, so my services, including Plex, will be none the wiser. Also, just to be clear, if a drive goes bad, how will the master pool react to a degradation of the local part in that pool until the missing or damaged drive is swapped? Will the master pool go into read-only and indicate the local pool as a damaged drive until the drive in that pool has been replaced? I'm guessing and hoping all details of such events have been tested and thought of. And how will the CloudDrive cache behave when it suddenly needs to duplicate massive amounts of data? Will it rotate within the 1 GB default I've set, or eat the entire drive (I'm using C: as cache, an SSD with 150GB available, which I do not want to fill up and crash the system)? Do I have to use a dedicated drive as cache for this, or set it to fixed so it doesn't run wild? Thanks.
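     For step 5 itself, what I'd actually run is just a same-volume move into the hidden folder (PoolPart.XXXYYY standing in for whatever the real folder is called). On a normal NTFS volume that's only a rename; whether the pool's virtual file system treats it the same way is exactly what I'm asking:

       # move everything from the pool root into the new hidden PoolPart folder,
       # skipping the PoolPart folder itself and the usual system folders
       Get-ChildItem -LiteralPath 'I:\' -Force |
           Where-Object { $_.Name -notlike 'PoolPart.*' -and
                          $_.Name -ne 'System Volume Information' -and
                          $_.Name -ne '$RECYCLE.BIN' } |
           Move-Item -Destination 'I:\PoolPart.XXXYYY\' -Force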
  16. Thronic

    Running out of RAM

     Is this a Win 10 / 2016 issue only? I'm still running 2012 R2 with the latest stable release and have no RAM issues.
  17. With the Scanner service running (VDS / Virtual Disk Service?): [screenshot] With the service deactivated: [screenshot]
  18. I actually just reproduced it in both the latest version of Rufus and in Diskpart when cleaning the drive to see if that helped. The screenshots are mainly in Norwegian, but they all claim there's no access to the drive / it's busy with another process. And it went away when I stopped the Scanner process. This was also one of the drives that worked before, an 8GB Kingston DataTraveler. EDIT: This is on Windows 10 Home, Version 10.0.10240, Build 10240.
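     For reference, the Diskpart cleaning I tried was just the usual script (the disk number is a placeholder, and the clean step is where the "no access / in use" error appeared; selecting the wrong disk will wipe it, so check "list disk" output first):

       rem run with: diskpart /s clean-usb.txt  (file name is just an example)
       list disk
       select disk 2
       clean
       exit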
  19. Stopping the service works, and the error comes back when it's started again. But I've done some more tests, and it seems it only happens with the 16GB Kingston DataTraveler 2.0 drive. I've tested a couple of others, a no-name cheapo and an 8GB DataTraveler 3.0, which worked fine. So it's not a big issue really.
  20. When StableBit Scanner is installed, the Easy2Boot batch file MAKE_E2B_USB_DRIVE (run as admin).cmd fails when trying to format a USB drive. The script runs RMPrepUSB for the job, and it claims the drive is not accessible and suggests closing Explorer windows or applications that are using the drive. The only thing that actually worked was removing Scanner. I suppose there's some low-level conflict somewhere with how it occupies or "tags" USB drives. I just wanted to flag it. I checked beforehand whether Scanner was doing anything with the drive, but it hadn't started scanning it at all yet.
  21. Thronic

    2.1.1.561 bug?

     I investigated the possibility of "corrupted" folders from un-duplication further... and it bore fruit. Plex has an internal service called LibraryUpdateManager that watches for changes inside the folders of media files. These folders were duplicated before with pool duplication; I changed that about a week ago to 1x (no duplication) and then turned on folder duplication for only selected folders (nothing relevant to Plex). I just regenerated these library folders and moved the old content into them. Lo and behold, Plex now automatically detects file changes again. I think something happens when removing duplication from folders that makes changes inside of them somewhat invisible to the OS.
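     For anyone wanting to check whether OS-level change notifications fire on a given folder at all, a quick test is a FileSystemWatcher from PowerShell (path is a placeholder; as far as I understand it, this is the same ReadDirectoryChangesW mechanism that watchers like Plex's rely on):

       # watch a test folder and print whatever the OS reports as created
       $watcher = New-Object System.IO.FileSystemWatcher 'D:\Media\TestShow', '*.*'
       $watcher.IncludeSubdirectories = $true
       $watcher.EnableRaisingEvents   = $true
       Register-ObjectEvent -InputObject $watcher -EventName Created -Action {
           Write-Host "Created: $($Event.SourceEventArgs.FullPath)"
       } | Out-Null
       # drop a file into the folder; if nothing prints, change notification is broken there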
  22. Thronic

    2.1.1.561 bug?

     After thinking about this for a few hours now, it comes down to when I changed from pool to folder duplication. Can that have messed up the OS detection of new files in any way in the existing folders that were un-duplicated?
  23. Thronic

    2.1.1.561 bug?

     If it's the race condition bug mentioned in the other thread I've commented on, then I can't see how that affects Plex's folder change detection, and if it only affects deletion of folders then I might survive until it's in a final release rather than temporarily using the BETA. Plex states they use "OS functionality" for it, and it has worked fine on my 2012 R2 server so far... until it suddenly doesn't. /scratch_head
  24. Could you go into more specifics about that? I just want to understand it better. Is it the Windows API clashing with some kind of DrivePool filter when taking care of deletion, causing (in my best attempt to explain it) "multiple access attempts" in a bad sequence?