Posts posted by PocketDemon

  1. Okay, so these are a couple of minidumps from one of the machines - Minidump.rar 

     

    As to the other bit of it, due to the controller not passing SMART commands on, I wasn't 100% sure which drive was faulty on the other machine - so it was easier to kill the pool & test the drives independently... ...well, in theory at least.
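
    (For reference, the sort of per-drive test I mean is trivial to script once the drives are out of the pool - a rough sketch assuming smartmontools is installed & on the PATH, with the device paths being purely illustrative:)

```python
import subprocess

# Illustrative device list - on Windows, smartmontools addresses physical
# drives as /dev/pdN; adjust to whatever the machine actually exposes.
DRIVES = ["/dev/pd1", "/dev/pd2", "/dev/pd3"]

for drive in DRIVES:
    # 'smartctl -H' asks the drive for its overall SMART health verdict;
    # a controller that doesn't pass SMART commands through is exactly
    # what makes this call come back empty or error out.
    result = subprocess.run(
        ["smartctl", "-H", drive],
        capture_output=True, text=True,
    )
    print(drive)
    print(result.stdout or result.stderr)
```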

    & separately, I could obviously see that you'd made a change &, with the explanation, can understand why you've decided to do so - but, whilst this may be positive for some people, if no one comments that they're finding significant downsides then there's no chance that you might either revert it to the way it was historically OR add a menu option for enabling/disabling the new behaviour.

    Anyway, no point in saying any more on that score, as reiterating further won't help either which way. :)

  2. 38 minutes ago, Christopher (Drashna) said:

    Unfortunately, I can't reproduce this behavior, at all. 
    If you could, upload the crash dump: 
    http://wiki.covecube.com/System_Crashes

     

    Yes, this is normal, and expected.  This happens so that duplicated data doesn't get out of sync, causing duplication issues. 
    And even if you're not using duplication, there are some settings stored on the pool itself (namely, the .covefs folder).  

    Reconnecting the drives, or removing the missing drives will fix this. 

     

     

    Right, there's no memory.dmp file, despite it being set up as shown; however the minidump files - which correctly show a PNP_DETECTED_FATAL_ERROR (as that was the BSOD message) - report a 0x000000CA Bug Check Code in partmgr.sys each & every time, which then leads to an error in ntoskrnl.exe...

    ...&, as said, this occurred every time when removing the last drive in a pool, on the 2 different systems - &, including the USB dock, with 4 different controllers.
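
    (For anyone wanting to double-check the same thing without firing up WinDbg, a quick & dirty Python sketch that reads the bug check code straight out of each dump's header - NB the 'PAGEDU64' signature & the 0x38/0x40 offsets follow the publicly documented 64-bit kernel dump header layout, so treat them as an assumption & verify against a dump whose code you already know:)

```python
import struct
import sys

def bugcheck_from_dump(path):
    """Pull the bug check code & parameters out of a 64-bit kernel dump.

    Offsets follow the publicly documented DUMP_HEADER64 layout
    (BugCheckCode at 0x38, parameters at 0x40) - assumed, not verified
    against every Windows version.
    """
    with open(path, "rb") as f:
        header = f.read(0x80)
    if header[:8] != b"PAGEDU64":
        raise ValueError("not a 64-bit kernel dump: %r" % header[:8])
    code = struct.unpack_from("<I", header, 0x38)[0]
    params = struct.unpack_from("<4Q", header, 0x40)
    return code, params

if __name__ == "__main__":
    code, params = bugcheck_from_dump(sys.argv[1])
    # 0x000000CA is PNP_DETECTED_FATAL_ERROR
    print("BugCheckCode: 0x%08X" % code)
    for i, p in enumerate(params, 1):
        print("  Parameter %d: 0x%016X" % (i, p))
```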

    Oh, & as I possibly wasn't entirely clear, I was also blocked from deleting the last drive's partition in Computer Management without going into safe mode - hence going into safe mode.

     

    As to the 2nd part, it was largely just a comment - but it's a really frustrating change, as it significantly slows down the process of replacing a pool with larger drives. Well, previously it's always been possible to set up a few drives for the new pool & copy data across from some of the drives in the old pool... ...which, with half-decent controllers, is vastly more efficient than removing them one by one due to being forced to keep all of the drives in the old pool connected (with one being shifted to the USB dock).
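
    To illustrate the workflow that's been lost - ie seeding several new-pool drives from several old-pool drives simultaneously - a rough sketch of the parallel copy, with entirely made-up paths (DP keeps each drive's share of the pool in a hidden PoolPart.* folder at the drive root):

```python
import shutil
from concurrent.futures import ThreadPoolExecutor

# Hypothetical source/destination pairs - one old-pool drive's hidden
# PoolPart folder feeding one new-pool drive, per physical spindle.
COPY_JOBS = [
    (r"E:\PoolPart.aaaa\Media",   r"X:\Media"),
    (r"F:\PoolPart.bbbb\Media",   r"Y:\Media"),
    (r"G:\PoolPart.cccc\Backups", r"Z:\Backups"),
]

def copy_tree(job):
    src, dst = job
    # dirs_exist_ok lets re-runs resume into an existing destination tree.
    shutil.copytree(src, dst, dirs_exist_ok=True)
    return src

# One worker per drive pair - on a half-decent controller the pairs don't
# contend with each other, which is the whole point vs one-by-one removal.
with ThreadPoolExecutor(max_workers=len(COPY_JOBS)) as pool:
    for done in pool.map(copy_tree, COPY_JOBS):
        print("finished:", done)
```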

    As the slightly lengthy title describes, there are 2 notable issues which weren't there the last time that I needed to do something like this - on an unknown previous build.

    Well, it's only sporadically that I need to do something major with the pools that's better done by creating new ones from scratch.

     

    So firstly, whilst removing other drives from a pool works fine, removing the final one *always* causes a BSOD - this occurring on 2 different machines & with the drives on Intel, Marvell & LSI controllers, as well as if the final drive's connected via USB... ...& irrespective of whether there's any user data on the last drive in the pool.

    (it can be worked around by either booting into safe mode & deleting the partition in Computer Management, or by connecting the drive to a machine without DP installed - however these aren't exactly user-friendly)

    &, secondly, if any drive is missing from the pool, it's making all of the files/folders read-only - again on both machines - which, given that there are finite SATA/SAS ports (& a USB dock), means that the new pool(s) cannot be made with all of the drives that 'should' be there; leading to wasted time, as the new pool has to move data around again once it's possible to add the remaining drives.

    (obviously the data can be 'recovered' from the RO drives using something like R-Studio, however this really isn't a great solution - as if, instead, the issue were a failed drive & someone couldn't afford to replace it instantly, then this significantly limits the ability to use the data)
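
    (a trivial way to confirm it's the pool being flipped read-only, rather than individual file attributes, is just to attempt a write - rough sketch, drive letter made up:)

```python
import os
import uuid

POOL = "P:\\"  # illustrative pool drive letter

# Try to create & delete a throwaway file at the pool root; if the pool
# itself has gone read-only, the open() call is what fails.
probe = os.path.join(POOL, "write-probe-%s.tmp" % uuid.uuid4().hex)
try:
    with open(probe, "w") as f:
        f.write("probe")
    os.remove(probe)
    print("pool is writable")
except OSError as e:
    print("pool rejected the write:", e)
```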

     

    Okay, so having moved most of the data around - 5 new pools created / 4 of the new pools having all of the data on / & 4 of the 5 pools removed - any solution probably won't be in time to help...

    ...so it's simply reporting that these issues have been a b nuisance & they really need sorting.

    Oh, & (obviously) this is using v2.2.0.906 - the 64-bit Windows version.

  4. Hi Christopher

     

    I've just sent the logs via the contact page - so perhaps they can shed some light on things.

     

     

    Otherwise, my apologies for the phrasing of the Scanner thing in the last message... Well, whilst I obviously got across why I was doing what I was doing (or rather why I wasn't bothering & that it wasn't having a negative effect), I neglected to note that your advice, whilst not what I am choosing to do on the HP server, would potentially have been useful if I'd not known about it.

     

    Yeah, re-reading it then it comes across as being slightly too dismissive; which wasn't my intent.

     

    Cheers

     

    TiM.

    With the licensing issue happening on both machines that DP is installed on, it seems unlikely that it's related to the odd corruption issue on this machine.

     

    Hopefully it's just a temp glitch/bit of incompatibility with some other s/w upgrade that'll resolve itself.

     

     

    As to scanning multiple drives on the same controller simultaneously, it's the way I've got the other system set up - well, with 37 drives connected up it's perhaps kind of a necessity... ...& naturally the LSI card can easily handle it.

     

    With the MicroServer only having the 5 drives in & it being on 24/7 though, I simply didn't see it as being time critical enough to warrant turning it on tbh... 

     

    So I've just opened the Scanner GUI up again &, along with moving all of the data about, duplicating it all, the machine running as a torrent box 24/7 & a few hours of media streaming, it's finished scanning all of the drives as well; so clearly it's not taking an inordinate amount of time.

  6. Hi Christopher - many thanks for the reply... Deleting the Service folder in C:\ProgramData\StableBit DrivePool (I was trying one thing at a time - & there naturally wasn't the option via the cog as there's no pool set up) has sorted the first issue completely.
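
    (For anyone else needing to do the same, the manual steps script up easily enough - NB the service name below is an assumption, so check with 'sc query' first, & this obviously nukes the service's stored state, so only do it when, as here, there's no pool worth keeping:)

```python
import shutil
import subprocess

SERVICE = "DrivePool.Service"  # assumed name - confirm via 'sc query' first
FOLDER = r"C:\ProgramData\StableBit DrivePool\Service"

# Stop the service so the folder isn't in use, remove the stale state,
# then restart; the service recreates the folder with fresh defaults.
subprocess.run(["net", "stop", SERVICE], check=True)
shutil.rmtree(FOLDER)
subprocess.run(["net", "start", SERVICE], check=True)
```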

     

     

    As to the licensing issue, from what you're saying, whilst *an* occurrence could potentially be down to a real major h/w change (as I swapped from a 9260 to a 9271 card in my main server on Monday), the only other thing that's altered recently is now having a G-Sync monitor; the G-Sync element of which is re-detected every time the monitor is turned off for longer than the monitor power setting.

     

    Could it be that the s/w is being overly sensitive to h/w changes?

     

    Cheers

     

    TiM.

    For some unknown reason DP is not seeing the 4 HDDs in an HP MicroServer (Gen8), just the boot SSD.

     

    It was working fine under Win 2012 R2, but a separate s/w incompatibility has meant changing to Win10 Pro... Initially the drives were seen, however there was a constant duplication error notice - so, having used alt s/w to compare my data on the pool (which was fine), I uninstalled DP -> formatted 2 of the drives -> copied the data across to them -> formatted the other two -> & reinstalled DP.
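
    (For what it's worth, if anyone wants to do the same comparison without extra s/w, the standard library gets you most of the way there - a rough sketch with illustrative paths:)

```python
import filecmp

# Illustrative paths: the pool drive letter & the reference copy.
POOL = r"P:\Media"
REFERENCE = r"D:\Media"

# Recursively compare the two trees & report anything that differs;
# the default stat-based check is usually enough for a first pass
# (pass shallow=False via cmpfiles for byte-level comparison).
cmp = filecmp.dircmp(POOL, REFERENCE)

def report(node):
    for name in node.left_only:
        print("only on pool:", node.left, name)
    for name in node.right_only:
        print("only on reference:", node.right, name)
    for name in node.diff_files:
        print("differs:", name)
    for sub in node.subdirs.values():
        report(sub)

report(cmp)
```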

     

    Now SB Scanner can see all 5 drives - & I've tried both the 2.2.0.619_x64_BETA (as that's the version I had to hand) & 2.2.0.651_x64_BETA...

     

    [Attached screenshot: post-1830-0-78015200-1452014945_thumb.jpg]

     

    ...so, any ideas?

     

     

    Completely separately, for the last week or so I've been getting regular pop-ups saying that I need to activate both DP & Scanner on both this & my main server - & whilst it accepts the license each time, it keeps on recurring... Is this an issue at your end?

    Damn, you got my hopes up there with the promise of those 8TB drives... Well, whilst I really don't rate their consumer drives, their enterprise ones are pretty great in my experience (I've still got some 15.7Ks that I use for video editing as, with enough of them & separate source & destination arrays, they're still quite nippy & have been hugely reliable) & I'd gotten all excited thinking about what to fill them all with. ;)

     

     

    Yeah, normally I would also test first but, as & when I have the bits, I need to turn the thing round asap - esp as part of the funding for the server needs to come from flogging some of the old kit, & so I need to be able to take it all offline & advertise it as quickly as I can once I've got the server up & running.

     

    Similarly, creating failures (though I'm not entirely sure how I'd make the Scanner think that a disk was failing(?)) with enough data on to work around chance (re)placement & waiting for it to do its thing back & forth would obviously be somewhat time consuming even if there were no other pressures... ...& it's going to be much better to use what time there is available to look at the day to day quirks of the s/w that'd be instantly meaningful than hours/days(?) on emulating failures that, touch wood, would never happen & only need a simple answer as to what the s/w actually does.

     

    It's also not like the kind of issues that could occur with a failed array, controller failure, non-Windows compatible file system, etc, where the data can't be natively read from any of the drives & money's got to be thrown at data recovery to get anything back - R-Studio's come up trumps in the past in those kind of instances btw - & with both a backup pool & a second identical raid card (there will also be at least a third further backup of more critical stuff naturally) then, within the realms of what's affordable, fingers crossed that 'should' be robust enough.

     

     

    SAS expanders are really the way forward imho... Well, a 24 port SAS card is simply all the money, whereas, for this initial server, it'll be an 8 port LSI card (nothing special, just a 9260, as it matched the one I've owned for a few years) paired with one of those HP ones - which tend to be about £100-120 2nd hand over here & work very well with a large array of cards provided they've got a late enough f/w on them (or you've got access to an HP system & controller to flash them on).

     

    Yeah, those Chenbro things are simply a reasonably cost-effective option that I've come across in looking at options for if/when I need to add more drives by connecting up a 2nd 4U case - &, rather than that Norco, in the UK XCase do their own slightly updated version (plus an all-singing & dancing version with SGPIO & stuff), which I'll be going for.

     

     

    Anyway, enough waffling on.

     

    Thanks again.

     

    Tim.

    That's answered exactly what I needed to know - ie that, after any kind of drive failure (catastrophic or otherwise) or after pulling a drive, any placement rules will need setting up again on the replacement drive.

     

    Yeah, it's simply about knowing the limitations/quirks of the s/w beforehand to make sure that there's not a better option - or rather that, as it seems pretty certain that I'll be buying it, in the event of any issue I don't do anything stupid by wrongly assuming that the s/w will be doing something that it can't.

     

     

    Otherwise, if you're offering to buy me some 8TB drives then that's really very kind of you & I'll look forward to using them. ;)

     

    More sensibly, the aim here is to try to rationalise lots of different bits of storage (a couple of 8 bay DASes & a NAS & a load of offline backup drives & some drives in a couple of different machines that I've bought as needed) into something a bit more cohesive & useful - so whilst there will be either 2 or 4 new 4TB drives bought alongside the 4U case (hopefully this week assuming some other h/w turns up & all works as it should), it's unfortunately mostly got to work with what I've got...

     

    ...though I'm sure it'd be vastly cheaper to buy a 2nd 24 bay 4U case plus something like a Chenbro CK13801 + CK23601 combo (+ the 8088 & 8087 cables needed to connect it all) & populate the entire thing with 4TBs, than it would be to replace enough of the drives I own with 8TB ones.

     

     

    Anyway, thanks again for your assistance here - it's both really appreciated & really quite reassuring to see, first hand, that there's such a prompt reply when questions are asked.

  10. Hi - thanks hugely for the prompt reply.

     

    I've obviously not been as clear as I hoped I'd been with my questions/examples, as I was attempting to find out very specific things; whereas you've quite reasonably, given that I'm clearly someone who's never used the s/w, started by explaining some far more general points about it. 

     

     

    So, for example, I fully understood already that, if a drive fails, "If you have unduplicated data that means that you will lose any files that were on the disk" - as this is obviously the same as having 2 non-pooled (or non-R1 or non-storage space'd mirrored or...) drives with the same contents on & one of them dying.

     

    Similarly, I fully understand (with the exception of what's being asked & would be covered by Scenario 2 below) how things would operate with a catastrophic drive failure 'if' I were to be using duplication - however, as mentioned, there are just major pros to having a backup vs any kind of automatic duplication which matter far more to me than having to manually copy data twice onto 2 pools; given that, again, I cannot afford the 50% extra no of drives (or the extra 4U case & h/w to attach them to the raid controller for that matter) that would be required for both a backup & duplication.

     

    Duplication, as with raid, is not a backup naturally.

     

    Instead, I need a solution for using a larger no of drives than there are letters; & the advantages for me of the s/w, vs the alts, are being able to both send some specific data to specific drives & have the drives be independently readable (on an identical raid card, as I have 2)...

     

    ...having already ruled out assigning drives to folders, which would have accomplished pretty much everything that I actually need for free (accepting that auto-balancing data that could be anywhere within a pool & automatically moving data if a drive were starting to fail would be additional advantages), as there's some other s/w that I use that has issues with accessing drives using this approach.

     

    (I know that that s/w limitation isn't going to be an issue with DrivePool from something incidental gleaned from another thread btw - I did search for stuff first)

     

     

    So, to attempt to rephrase what I was trying to find out, it's primarily about establishing how placement rules are defined within two non-duplicated-pool drive failure scenarios - ie are the rules drive-based or pool-based (& drive-independent)?

     

    Scenario 1. So, drive B in an A, B, C non-duplicated pool (there's then a separate D, E, F pool that has the selfsame contents on as a backup) suddenly catastrophically fails without any warning, & there were rules attached to this disk; ie 'send all data in the folder XXXX to drive B'.

     

    [NB Thinking about it, this would also be relevant if I wanted to upgrade drive B's capacity - given that it would be far quicker to pull the drive, stick the new larger one in & copy the data back across manually, than telling the s/w that I was removing the drive & it all needing to be copied to A & C & then back again (either automatically or by setting the rules up again - see Scenario 2).

     

    Well, I'd still have 2 copies in the interim (both a working drive B that I'd pulled & could connect via the raid card in another machine or whatever, & the data somewhere on the D, E & F pool), so that maintains the backup whilst it was being done, without the double copying.]

     

    Obviously (whether through failure or active choice), drive B no longer exists & so is missing from the pool - but would putting a replacement in & adding it to the pool automatically re-apply any pre-established placement rules to the replacement, or would this need to be done manually?

     

     

    Scenario 2. &, slightly differently, drive B in an A, B, C non-duplicated pool is picked up as starting to fail by the Scanner - drive B having some placement rules on.

     

    (again there's a D, E, F backup pool)

     

    As you wrote, & as I understood already from the manual, the s/w will then attempt to move all non-damaged data onto drives A & C - along with warning that there are problems brewing & whatnot - which is great, as it then helps to maintain there being a backup of as much data as possible.

     

    Now, once it's done its thing, I tell the s/w that I'm removing drive B, stick a replacement drive in & add it to the pool - but what does the s/w then do with the data that's been moved onto A & C, given that, again, there were pre-existing placement rules &, obviously, the pool will now be completely unbalanced?

     

     

    Anyway, I hope this better explains what I am trying to find out here.

     

    Thanks again

     

    Tim.

     

    [Edit] Oh, & I do realise that I should really bin drive B before I start, as it's clearly b useless & keeps on failing in different ways, but I like to live dangerously. ;)

    Hi - I'm looking at purchasing DrivePool (& the Scanner) in order to pool drives to reduce the no of drive letters; whilst being able to set the placement of directories & to read the drives individually & whatnot; which is all clear.

     

    However, since the auto duplication process obviously isn't a backup, I'd be looking at running pairs of pools separately (with identical drives & settings) - plus check-summing important data & whatnot... ...& having 3 copies of the data, ie a duplication + a backup, isn't financially viable.
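
    (By way of illustration, the check-summing side is easy enough to roll by hand - a rough sketch that writes a SHA-256 manifest for a tree, with the paths being made up:)

```python
import hashlib
import os

ROOT = r"P:\Important"            # illustrative pool path
MANIFEST = r"D:\manifest.sha256"  # kept off the pool, alongside the backup

def sha256(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in 1MB chunks so large media files don't need to fit in RAM.
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Walk the tree & record one 'hash  relative-path' line per file, in the
# same format 'sha256sum -c' style tools expect for later verification.
with open(MANIFEST, "w") as out:
    for dirpath, _dirs, files in os.walk(ROOT):
        for name in files:
            full = os.path.join(dirpath, name)
            out.write(f"{sha256(full)}  {os.path.relpath(full, ROOT)}\n")
```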

     

     

    Now, within this usage scenario there's a couple of things that aren't clear from the manual...

     

    Well, imagining a drive in a pool suddenly catastrophically failed, obviously I'd have the separate pool to recover any files from - but is it simply the case that swapping in a new drive & assigning it to the pool would recreate the settings, so that I could simply copy the files across... ...or would all of the file placement settings & whatnot for that drive have to be redone manually?

     

    &, similarly, if a drive in a pool were detected as failing by the Scanner, I can see that DrivePool starts automatically moving files to other drives in the pool (which is great for my purposes obviously, as it helps to maintain as much of the backup as possible); however, what then happens when the drive is replaced? Is everything then moved back to the new drive, or does this have to be done manually?

     

    Thanks

     

    Tim.

     

     

     

     
