PocketDemon

Everything posted by PocketDemon

  1. There's no issue with mixing different sizes - within the context of your drives & what you're likely to be doing, you'll be able to use the full capacity. Yeah, the only time there would be an issue is if the duplication level were too high for the mix of drive sizes... ...so, imagining someone were looking at duplicating the entire pool, for example - - with 2x duplication &, say, a 1TB & 8TB drive, they could only actually duplicate 1TB - & with 3x duplication &, say, a 1TB, 4TB & 8TB drive, they could still only actually duplicate 1TB (there's a rough sketch of the capacity maths at the end of this list) ...however, unless you're after >=6x duplication (which is highly unlikely), there's no problem whatsoever. If you are using duplication & your current drives are pretty full already, then after adding the new drive I would suggest pushing the "Duplication Space Optimiser" to the top & forcing a rebalance run just before going to bed... As that should prevent there being any issues moving forward.
  2. I'm not quite sure what you're meaning here. Well, if for example, you had 2x duplication enabled for a folder (so it was stored twice on 2 drives) & then disabled it, it would delete one copy of each file & sub-folder. Whereas if, instead, you're meaning that you chose to copy (not move) a file from one folder to a 2nd folder on the pool - then it would exist in 2 folders x the duplication for those folders... ...& you'd have to manually delete it from one or other folder if you only wanted 1 copy x the duplication.
  3. The hot swap option on the motherboard would only apply to drives connected to the onboard Intel controller - not the HBA card. Now, ttbomk (as I don't see any reason for using it) the Intel hot swap option works the same as 'safe removal' with USB - so it'll treat the drive as being an external drive & you then have the option of ejecting it on the taskbar... ...whereas if you just pulled it out at random you'd stand the chance of data loss, as there's obviously not a BBU cache. In terms of DP though, in my experience having missing drives in a pool prevents writes to that pool until you add a replacement drive, assign it to the pool & remove the missing drive from the pool... ...or you switch the machine off, reinsert the original & turn on again. &, whilst I haven't tried this with hot swapping, that's the way it's worked for me with having a drive from a pool in a USB dock, when (without duplication) swapping one drive for a larger capacity one... ...as, whilst DP can recognise that a drive's suddenly gone missing, it doesn't appear to be able to tell that it's been added again - with that check seemingly only happening at boot. Anyway, irrespective of whether you were looking at hot swapping on the onboard controller or the HBA... ...if you had, say, a couple of drives which you were looking at sporadically backing stuff onto & then storing elsewhere, I would personally look at an external USB3 dock instead.
  4. I'd deleted part of what I'd written... but obviously not quickly enough.
  5. Having just had a quick test, in terms of setting up a rule it's dead simple, & you then get the same options for each rule as you do for directories in terms of drive usage & %ages. However, having checked, what you don't get is anything additional in the duplication options - so each file clearly still inherits the duplication from its parent directory. So that means that with 2x duplication on the parent folders you'd have to limit the rule to using the SSD + one other drive to get the read boost on all of the db files... (& then disable the SSD as a storage option for all of the directories) ...I believe we know from other threads that DP will prioritise reads from the fastest drive(s) with Read Striping enabled.
  6. Thanks for finding this & proving me wrong - as, along with giving the OP proper advice, it's honestly really useful to learn something new. Yeah, I can't think of an application for it for my personal use, but it's much better to be aware of useless (to me) capabilities than believe a load of nonsense.
  7. I'm truly sorry, as it clearly can be done. I won't delete the previous posts, but I will strike through everything that's incorrect so as not to confuse anyone.
  8. I thought I'd made myself clear, but DP also cannot put selected file types within a pool on selected disks. The ONLY thing that you can do is to tell it to put a folder on 1 (or more) drive(s) - which it will carry out until there's not enough space on the drive(s). So - D:\DBfiles\[all of the *.db files] - could be on a specific drive (or drives). But with a structure akin to - D:\MediaFiles\Media File0000001\[bunch of files, inc a *.db file] D:\MediaFiles\Media File0000002\[bunch of files, inc a *.db file]... ...D:\MediaFiles\Media File9999999\[bunch of files, inc a *.db file] - then EVERYTHING in the D:\MediaFiles\ folder hierarchy would follow the same drive limitations. So, again, you would need to look at the documentation for Zer0net, or discuss it on a forum about it, to see if it's feasible to do the former... ...as there is no solution within DP that will move ONLY the *.db files; UNLESS you can set it up to put them in a separate folder (or folders) from the media files. Otherwise, the only other option I can think of that 'might' work for what you're doing is to look at SSD caching s/w to see if any of that can meet your needs. So something where you can set up a SSD to cache the most used data from a specific HDD (or DP pool or array) 'might' work - since they would normally tend to ignore very large files, which would then tend to prioritise your *.db files... ...but if you could find something that was more explicitly controllable then it would obviously be better.
  9. I can't speak for whatever s/w the OP is using, but taking, for example, most of Adobe, then forcing it to place the temp files/scratch disk/media cache on a decent SSD can make a significant difference; irrespective (within reason of course) of where the main data files are stored... …&, naturally, that SSD doesn't need to be the system drive; which would be particularly relevant if budget limited the capacity of the SSDs you could afford to buy. So, for example, with <=1080p editing in Premiere, I've personally seen no benefit in using anything better than short-stroked 4-drive R10 HDD arrays for the main video & audio files for a project - but that certainly isn't true for all of the other files. Then, as another example, with 16-threaded batch audio lossless compression/decompression (& sundry tasks), there was no speed difference whatsoever between using 2 reasonable SATA 250GB SSDs in R0 (repurposed 830 Samsungs that I'd bought when they were the new thing as a system drive) - vs using a 1TB 970 Evo... ...but the R0 setup was noticeably faster than using a single SATA SSD... ...& it's significantly quicker to move in the order of 100-300GB of audio from a DP'd HDD to the R0 array, do all of the batch processing I need to, & then move everything back. Now these are just 2 examples of my experience with my main setup of course... …but it's just about illustrating that whilst you're 100% correct that there's no reason why everything couldn't be on any drive type, that's not to say that different storage options can't be more appropriate for different processes/parts of processes. That said, I now have no idea 'if' the s/w they're using will show a material benefit or not - as I originally assumed that there must be a very good reason for what was being proposed - so they were working with something like massive databases that, for some reason, needed to be in a generic *.db format... ...but based on what's since been written I'm really not certain what the gain would be - which is part of the reason for suggesting looking into the specific s/w they're using. Well, I would imagine that on whatever forums are dedicated to the s/w, people would tell the OP if it was a worthwhile proposition.
  10. Okay, to be clear, DP cannot do this... ...& if whatever s/w you're using *has* to have the db files in the same folder as the data then there's no workaround that I can think of - as either they're all in the respective folders or the db files would be completely useless & your s/w would just create a new one in the respective folders during whatever process it's using. Yeah, with no info given as to what program is creating these files, it's impossible to try to find an answer - so I was simply working on the premise that most s/w allows you to alter the standard directories for specific things... ...however I'd suggest that you look in the documentation &/or ask on a forum dedicated to that s/w to see if it's possible to relocate all of the db files into a single alt directory; as you're then getting the answer from people who have explicit knowledge.
  11. The short answer is - not in the way you're describing within DP itself. Now, the fundamental problem with what you're suggesting is that *.db is not a unique file type - so a load of Windows system files & odd drivers & bits of Adobe & many other pieces of s/w also create & modify them over time - & those need to be in specific folders... ...so if you randomly tried to move all of the new & modified db files to somewhere else then it would break loads of stuff. Anyway, because the group of drives in the DP pool will need to be assigned a drive letter - let's say D, with the system drive being C... ...then if you want something saving onto the pool you'll save it to, for example, D:\DBFiles ...& if you want it on the system drive then C:\DBFiles So this is something that you'll have to change within the 3rd party s/w you're using that's creating the db files - if it's a sensible thing to split those db files from other data of course. Alternatively, given that I'm guessing you're looking at the system drive being a SSD & you want the db files on it for faster read speeds - I suppose you could create an additional partition on the SSD & add that to the pool - again D. With a D:\DBFiles folder, you could tell DP to limit the placement of that folder to the SSD partition (& also tell it not to store anything else on that partition) - but again you'd obviously have to direct the s/w creating the db files to save them in that folder - as opposed to C:\DBFiles However, unless you also wanted to add (real time) duplication - so you've got the db files stored on both the SSD (for read speeds) AND a HDD (for duplication)… (in this situation you'd limit the placement of everything in D:\DBFiles to both the SSD partition & one of the HDDs - & add 2x duplication) ...this would seem pretty pointless; & partitioning the SSD to do this would have the major downside of limiting the flexibility of usage of the SSD's capacity.
  12. That's assuming the OP's in the US, as certainly in the UK the 10TB WD Elements drives have typically been cheaper per TB than the 8TB for a while now. (& the EasyStores are effectively not available) So, whilst not the best price they've been (the price of the 10TBs is pretty volatile), I picked up another 2 on the 5th of March for £174.10 each &, at the time, they were within a couple of pounds of the 8TB… ...atm Amazon have the 8TB for £174.99 (£21.87 per TB) & the 10TB for £194.78 (£19.48 per TB). Obviously the downside to what we're suggesting though is voiding the warranty by shucking them... …& certainly the latest 2 drives I bought needed the pin 3 mod to work with the backplane in my server case; & so I had to buy some Kapton tape... …& whilst only needing a tiny bit for each, it was all of £1.54 delivered for 33 metres of the stupid stuff.
  13. This is just my personal way of doing things, but likewise having 10s of TBs of random archive stuff - where it'd be really quite annoying, but neither critical nor devastating, to lose stuff - then for each of a couple of sets of things I have 2 separate pools with one copy on each instead of letting DP duplicate as one pool. (naturally more important data is treated differently, with it being stored with some combination of multiple drives (or arrays) &/or duplication (or R1) &/or on more than one machine &/or on cloud storage as appropriate) There's then some basic main folder rules so that if a drive were to fail then it'd be very easy to know what's been lost & needs copying from the sister pool onto a replacement... ...though, more usefully, being able to pull a drive & connect it via a USB dock if the capacity needs increasing for a specific subset of data is handy - as it's only moving the data once to remove one drive & add the other... …but it also means that, along with 2 drives with identical data on them having to fail simultaneously, I'd have to manually delete the same thing twice... ...& in your situation then only one of the two pools would be the share. I also rely heavily on checksums, so if there were to be any data corruption on a drive then I could identify which of the 2 versions of a file was correct - though, with the exception of checking things when something's added to a subfolder before updating that folder's checksum, it's maybe twice a year that everything's checked through. (there's a minimal sketch of this kind of checksum sweep at the end of this list) Well, whilst it's not that I have to sit & hand-crank the thing to get it to check 100s or 1000s of checksums & report any issues, it's not worth the effort to do it constantly. Okay, it took a bit of time to get an organisational system that I was happy with, but it works for me for doing my version of what you're talking about.
  14. Yeah, assuming that they're the right diameter for the fins (the gaps have to be a little smaller than the diameter of the screws) then it's just a screwdriver. Naturally though, a bit like fitting a CPU fan/block, you'd want to work around the screws rather than tightening each one to the fullest extent individually - as this significantly helps in terms of overall positioning of all of the screws & the fan. Oh, & it's a 'rest the card on something on a table' job - as it's easy to get the screws off vertical unless you can push straight down.
  15. Now, the manual for the HBA you were talking about states "Minimum airflow: 200 linear feet per minute at 55 °C inlet temperature"... ...which is the same as my RAID card. Beyond that, all I can say is that, even with water cooling the CPU & GPU (& an external rad) - so most of the heat's already taken out of the case & the case fans are primarily cooling the mobo, memory, etc - I've had issues without direct cooling on all of my previous LSI RAID cards; both in terms of drives dropping out & BSODs without there being exceptional disk usage. (it's not that I'm running huge R50 arrays or something - primarily that I simply prefer using a RAID card, vs a HBA, in terms of the cache & BBU options) Similarly, the Chenbro expander I have - which, other than the fans, drives, cables, MOLEX-to-PCIE (to power the card) & PSU, is the only thing in the server case - came with a fan attached which failed; & again I had issues... ...so it's now got one of the Noctua fans on instead. So, whilst you 'could' try it without & see, personally I would always stick a fan on something like this. I couldn't advise you on monitoring for PWM as that's not how I do things - since I'd far rather have the system be stable irrespective of whether or not I was in a particular OS. Well, not that dissimilarly, whilst the rad fans are PWM, for me it's about creating a temp curve within the bios for the CPU (& hence, by default, the GPU), & so it's entirely OS independent. So, whilst I couldn't recommend anything specific, 'if' I were looking for a fan controller then I'd want something which I could connect a thermal sensor to (& attach that to the h/s above the IOC) AND I could set the temp limit solely with the controller.
  16. With the fan, you're not looking at mounting it using the 4 non-return pins that hold the h/s on - but instead using self-tapping screws, of a size that will screw between the fins (cutting a thread into them) on the h/s... …& it's completely irrelevant if the fan is square to the pins or at a 45 degree angle or somewhere in between - solely that it's centred roughly in the middle of the IOC. Obviously this modifies the card - however, from personal experience of re-selling a couple of older cards, buyers have seen a fan already being fitted as a major plus... ...as it's a far more elegant solution than trying to mount a large fan angled toward the h/s - & the pcie slot coolers really aren't man enough for the job in my experience. Now, having a quick check in my previous eBay purchases, the last ones I bought (which were for mounting 2 of the 40x10mm fans on, respectively, a 9271-8i & a Chenbro expander) were - Countersunk Cross Head Self-Tapping Screw M3X15mm... ...but I can't 100% promise that these will be appropriate since I don't know the gap size between the fins... But you can measure this & find out - & obviously if you bought a 40x20mm fan you'd want them to be 25mm long, not 15mm. [Edit] & they need to be countersunk as that's what those fans need. [End Edit] Otherwise, it wouldn't surprise me if adapters like that existed - well, a 2 second google search & there's certainly the components to wire one up yourself... ...however, unless you were looking at buying a really underpowered PSU with insufficient current on the 5V line (for the HDDs), I honestly can't see the point. Well, back before modular PSUs all we had were the world of splitters to connect lots of HDDs - & whilst there'd be no sense in trying to run all 24 drives off of the same cable for no good reason, a reasonable sharing between what's there is fine. [Edit] Yeah, unless something's materially changed in what you're proposing, my honest opinion is that what you're asking here would be finding a solution to something that simply isn't a problem.
  17. All perfectly sensible imho - & I'd agree that Startech would be 'a' reputable supplier for splitters. Yeah, I guess my approach was to show how the OP could calculate things (& the importance of working to either Watts or Amps) - but certainly, if they get back with what they're proposing as a complete build, it'd be feasible to look at finding a more commensurate PSU for them... ...though we'd also need to know where they're located - as comparative pricing can vary by region. Well, it'd be better to have a bit of extra cash towards something else in the build - or beer or whatever. The one other thing I've thought of is, having decided on a PSU, it might be worthwhile contacting the manufacturer to see about getting alt cables to better meet what you need & reduce the number of splitters. Now, whilst YMMV of course, I've had Corsair (at least historically, some of their PSUs were built by decent suppliers - not looked for a while though) in the EU post extras to me a couple of times in the past for nothing.
  18. Whilst you're certainly correct that consumer SSDs don't use the 12V line & have a max 5V power draw that's higher than that for HDDs, just to give a rough idea... ...then you're looking at a peak of around ~5W on the 5V line - which, dividing Watts by Volts, gives a max of ~1A. So yeah, 2-3x higher than a typical 3.5" HDD draw, but that'd still be looking at a safe 22-23 SATA SSDs on that PSU - assuming no other 5V power draw. [Edit - just for clarity, whilst, tbh, I don't imagine you'd have many people trying to run that many consumer SSDs together, the amount of time that you'd actually be running at ~5W/~1A is minimal - & clearly far less time than the same usage on a HDD... ...however, whilst it's very unlikely that you'd ever have all of the SSDs hitting that point at the same time - unless, I guess, you had them all in a single R5 array or something equally weird - it's reasonable to calculate for the worst case scenario... …& this applies equally when looking at all of the components that are being powered.] For the sake of completeness though, the one other time that I'm aware of that there could be a significantly higher 5V draw is with consumer 2.5" drives without a spin up delay. So, whilst the enterprise 2.5" drives use a mix of 5V & 12V, the consumer 2.5" drives also only use 5V; & taking the 5TB Barracuda as an example, the datasheet gives a 1.2A draw on start up - https://www.seagate.com/www-content/product-content/barracuda-fam/barracuda-new/files/barracuda-2-5-final-ds1907-1-1609gb.pdf ...which, using the OP's PSU & no start up delay, would limit things to an absolute max of 20 drives; assuming no other 5V power draw. Now, the key thing to note about this datasheet is that, whilst that figure is in Amps, it then gives the running power usage in Watts... …& 2.1W/5V = 0.42A... ...so, assuming you could stagger the initial spin up of the drives, this takes us back to a stupid number of drives - 57 to be precise - assuming no other 5V power draw. Okay, why someone would want to use a load of consumer 2.5" drives like this would be a bit of a puzzle - but it's just about trying to make the thing as complete as possible. Oh, & obviously if someone were using ancient drives of whatever sort then they'd also need to check the current on the respective lines... ...but in that everything's just example figures, if you're looking to limit the PSU to being something commensurate to your requirements then you should be checking the specs of what you're buying anyway. As to the 8TB Archives, they wouldn't be my choice either. Whilst I've got an array of drives, the best value in the UK has been buying 10TB WD Elements external drives from Amazon & shucking... ...though obviously this voids the warranty & I know that the latest stock they have in (since they dropped below £200) needed the 3.3V pin mod to be detected by the backplane in my 24 bay SAS DAS - but 33 metres of Kapton tape was only £1.54 delivered. (due to my usage, I use external SAS to connect between the computer with the raid card in & the server case with the expander in - hence it's a SAS DAS)
  19. Running through - 1. I've no idea where you're getting a 2A on the 5V rail typical HDD current draw from, as even the couple of 2.5" 10K & 3.5" 15K drives I have here are rated as needing less than 1A on the 5V line. Just thinking aloud, but perhaps you're looking at the wattage instead - as, going back to the WD Reds as an example, the 10TB is rated at 400mA (ie 0.4A) on the 5V line - & multiplying that by 5V would give 2W. 2. So you're meaning the 5x 6pin ports on the PSU... ...however the cables that connect to them go to either SATA &/or MOLEX. Now the standard cable list for the PSU includes - SATA (550mm+100mm+100mm+100mm) x3 = 12 SATA drives SATA (550mm+100mm) / MOLEX (+100mm+100mm) x1 = 2 SATA drives + 2 spare MOLEX MOLEX (550mm+100mm+100mm) x1 = 3 MOLEX - which are the 5 cables that you can connect to the 5x 6pin ports on the PSU of course. So, that obviously gives you 14 SATA drive power connectors to start with... ...& you can then add more by using either - MOLEX-to-SATA splitters... ...or MOLEX-to-MOLEX splitters AND MOLEX-to-SATA splitters coming off them... ...or by using the MOLEX connector (with or without splitters) to connect to a backplane. 3. Okay, so we look at the data sheet for the 8TB Seagate Archive drives - https://www.seagate.com/files/www-content/product-content/hdd-fam/seagate-archive-hdd/en-us/docs/100818689b.pdf ...& at figure 2.7.1 it lists the 5V operating amps at 0.33 - which, given the 24A max, would give a limit on the 5V line of 72 drives all spun up simultaneously; assuming no other 5V draw... ...so again I've no idea where you're getting 2A from. 4. That's one mighty large heatsink on the 9305-24i!!! Right, I can't immediately see anything that will tell me the complete layout under the h/s... ...however, since the official Broadcom page (https://www.broadcom.com/products/storage/host-bus-adapters/sas-9305-24i) explicitly describes it as being a single IOC, & a quick Google image search shows nothing unusual on the back of the pcb, it should only be the area above the main chip that needs a fan. (ie the area within the 4 retaining pins) So I personally have used Noctua NF-A4x10 FLX fans (with some self-tapping screws of the correct size for the respective heatsinks of course) for years - & they're uber reliable... ...but there are variants of the 40mm Noctua fan which might better suit your needs https://noctua.at/en/products/fan Tbh though, unless you've got the facility to add a thermal sensor then that'd write off all of the PWM fans... ...there's no real value in having a 5V one... …& the 10mm one gives more clearance than a 20mm one. Otherwise, as before, imho the PSU you're looking at is complete overkill unless you've got some completely separate major power usage (GFX cards in SLI or something) that you've not listed.
  20. No problem at all... But, as I hope I got across, you're completely correct that the OP should check out the exact specs of any PSU that they're considering - as my example of "a decent 450-550W PSU" was a little vague... ...so imho you've materially added to the thread.
  21. Whilst that's a fair point overall - I had believed that I'd covered it in context by talking about the relative limitations of the specific rails within both spin up delay & non spin up delay environments - where they logically had to be PSU specific. So what I'd done first was to look up the specs of the PSU - https://www.evga.com/products/product.aspx?pn=220-T2-1600-X1 - which states that it can provide 1599.6W on the 12V rail (there was no value in not rounding to 1600W) & 120W on the 5V. Likewise, just to work things the other way using the stated amperage on the 12V line then, using the 10TB WD Red example - 1.79A x 74 drives = 132.46A... …& it's rated for 133.3A - so it works either way around.
  22. Oh, & it might be helpful for me to explain calculating the max wattage - since most HDD specs give the max values in amps. So, taking the WD Red drives as an example using the info from the data sheet - https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/internal-drives/wd-red-hdd/data-sheet-western-digital-wd-red-hdd-2879-800002.pdf This gives a max of 1.85A on the 12V line for the 8TB model - so, multiplying the two together gives a 22.2W max per drive - & with a 1600W max draw from the PSU on the 12V line then that'd be ~72 drives with no spin up delay (& no other 12V draw)... ...with the 10TB having a max of 1.79A - this gives a 21.48W max - & so ~74 drives... …& the others, being 1.75A, give a 21W max - & so ~76 drives. (there's a short sketch of this calculation at the end of this list)
  23. It depends on whether you can add a spin up delay or not - as the big power draw comes from the drives' initial spin up on the 12V line. Now, whilst every drive will have its own power draw (& there's no info on what you're looking at using), it's going to be easiest to use - https://www.45drives.com/wiki/index.php?title=Start_up_Power_Draw - as an example; since they've used a mix of consumer drives & high power draw enterprise drives in their testing. So, with 15 consumer drives spinning up together (no spin up delay), this tested to be in the order of 425W - so, with the 12V line on the PSU being ~1600W, this would limit it to a max of around 55-56 drives; assuming that there's no other draw from the PSU of course. If, on the other hand, you can add a spin up delay between the drives (both on boot & after sleep) & were using high power draw enterprise drives - which, ttbomk, needs the drives, RAID/HBA card(s), expander(s) & backplane(s) (as appropriate to your build) to support it - you're instead looking at the 5V line being the limiting factor... So, with a 120W max on the 5V line from the PSU, & the 15 drives drawing <=8.5W in total on the 5V line (random read from all drives), that'd be in the order of 210 drives being randomly read from simultaneously; again assuming that there's no other power draw. Okay, so these are only example figures - so obviously you'd need to check the 5 & 12V max draws of the drives you're going to be using - but unless you've either got some uber extreme drive requirements or are also powering other components that have a high power draw then the PSU you're looking at would be complete overkill & a total waste of money. Well, for something like a 24 bay media server (so reasonably low total consumption from the other components - & using a RAID/HBA & expander combo with spin up delay; & usually a case with a backplane), the general recommendation is for a decent 450-550W PSU to give plenty of headroom... …&, tbh, choosing a 500-550W one is generally because of there being very few decent 450W ones. As to splitters, all I can suggest is to buy something basic from somewhere fairly respectable, since I have had a couple of cheap eBay ones burn out & kill things over the years; but there's no need for anything bespoke - the normal way to increase the count would be to use a handful of molex-to-molex &/or molex-to-sata splitters... ...however I do wonder what case you're going to be using with stupid numbers of drive bays that doesn't have a backplane - where you'd be powering the drives via molex instead?
  24. As to the pcie lanes, your board spec is - 2 x PCIe 3.0/2.0 x16 (x16 or dual x8) *2 1 x PCIe 2.0 x16 (x4 mode, black) *3 2 x PCIe 2.0 x1 2 x PCI *2: PCIe 3.0 speed is supported by Intel® 3rd generation Core™ processors. *3: The PCIe x16_3 slot shares bandwidth with PCIe x1_2 slot. The default setting is x2 mode. Go to the BIOS setup to change the settings. (https://www.asus.com/Motherboards/P8Z77V_LK/specifications/) - so with an 8x pcie card in as well, the 2080ti will be limited to 8 lanes. That said, the difference in gaming performance is in the order of 2-3% - see https://www.techpowerup.com/reviews/NVIDIA/GeForce_RTX_2080_Ti_PCI-Express_Scaling/6.html as an example. Unless you've got an odd usage, you could also look at the 9260-8i raid card as they're often pretty cheap now on eBay - though I believe that some of the rebrands can have odd limitations, so I'd personally go for the genuine article... ...or a 9272-8i would be another fairly cheap alternative if you're happy getting one from China or HK. +1 to sticking a fan on the h/s - the Noctua NF-A4 40mm range are very good imho, though you'll need some self tapping screws. & I would have recommended the Chenbro CK23601 as an expander - but a quick check shows it's EOL... …& whilst YMMV, I had nothing but hassle with the 24 port HP expander that I tried before.
  25. Touch wood the new version (2.2.1.926 - didn't try the RC as the later version was there) has resolved the issue on both machines. Many thanks for the prompt action.
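A rough sketch of the capacity maths from post 1, assuming a simple "each copy of a file has to sit on a different drive" rule - this is a back-of-an-envelope estimate for mixed drive sizes only, not DrivePool's actual balancing logic, & the function name is made up:

def duplicated_capacity(drive_sizes_tb, duplication):
    # Rough upper bound on how much data can be stored with N-way
    # duplication across drives of mixed sizes (illustrative only -
    # not DrivePool's actual placement algorithm).
    sizes = sorted(drive_sizes_tb, reverse=True)
    total = sum(sizes)
    best = total / duplication
    # A few very large drives can't soak up all N copies of the same
    # file, so also check the bound after setting aside the j largest.
    for j in range(1, duplication):
        best = min(best, (total - sum(sizes[:j])) / (duplication - j))
    return max(best, 0.0)

print(duplicated_capacity([1, 8], 2))     # the 2x example -> 1.0 TB
print(duplicated_capacity([1, 4, 8], 3))  # the 3x example -> 1.0 TB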
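& a minimal sketch of the kind of checksum sweep mentioned in post 13, assuming a simple SHA-256 manifest over a folder tree - the script layout & file names here are purely illustrative, not the actual tooling used:

import hashlib, json, os, sys

def sha256_of(path, chunk=1 << 20):
    # Hash a file in 1MB chunks so large media files don't need to fit in RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def build_manifest(root):
    # Map each file's path (relative to root) to its SHA-256.
    manifest = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            manifest[os.path.relpath(full, root)] = sha256_of(full)
    return manifest

def verify(root, manifest_file):
    # Report anything that's gone missing or no longer matches its stored hash.
    stored = json.load(open(manifest_file))
    current = build_manifest(root)
    for rel, digest in stored.items():
        if current.get(rel) != digest:
            print("MISMATCH or MISSING:", rel)

if __name__ == "__main__":
    # e.g.  python checksums.py build  D:\Archive manifest.json
    #       python checksums.py verify D:\Archive manifest.json
    mode, root, manifest_file = sys.argv[1:4]
    if mode == "build":
        json.dump(build_manifest(root), open(manifest_file, "w"), indent=2)
    else:
        verify(root, manifest_file)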
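Finally, to pull the power arithmetic from posts 18-23 together (watts = amps x volts, then divide the rail budget by the per-drive draw), a quick sketch using the example figures quoted in those posts - obviously substitute the rail & drive numbers from your own datasheets:

def max_drives(rail_watts, drive_amps, volts):
    # How many drives a single rail can feed at a given per-drive current
    # draw, assuming nothing else is pulling from that rail.
    return int(rail_watts // (drive_amps * volts))

# EVGA 1600 T2 example from the posts: ~1600W on the 12V rail, 120W on the 5V rail.
print(max_drives(1600, 1.85, 12))  # 8TB WD Red spin up on 12V -> ~72 drives
print(max_drives(1600, 1.79, 12))  # 10TB WD Red spin up on 12V -> ~74 drives
print(max_drives(120, 0.33, 5))    # 8TB Seagate Archive running on 5V -> ~72 drives
print(max_drives(120, 1.20, 5))    # 2.5" 5TB Barracuda spin up on 5V -> 20 drives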