
gringott

Members
  • Posts: 34
  • Joined
  • Last visited
  • Days Won: 6

Reputation Activity

  1. Like
    gringott reacted to PocketDemon in How many hdds an evga 1600w t2 can support?   
    As noted before, I'm using a RAID controller, not an HBA, so you'd need to explore the f/w, drivers & s/w for your card.
    That said, a quick Google search turns up this -
    - however, as far as I can see, 4&83E10FE&0&00E0 is not necessarily a fixed device ID - so you'd need to look in the registry for the equivalent.
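    As a rough, hypothetical illustration (not part of the original post): 'if' you wanted to list the PCI device instance IDs on your own machine to find the equivalent of 4&83E10FE&0&00E0, a short Python sketch like the one below could walk HKLM\SYSTEM\CurrentControlSet\Enum\PCI with the standard winreg module. That registry path is where Windows keeps these IDs, but treat the whole thing as a starting point for exploring, not a recipe tied to any particular fan-control software.

      # Hypothetical sketch: enumerate PCI hardware IDs and their instance IDs
      # (the "4&...&0&00E0"-style strings) from the registry.
      # Reading parts of this hive may require elevation on some systems.
      import winreg

      BASE = r"SYSTEM\CurrentControlSet\Enum\PCI"

      def enum_subkeys(key):
          """Yield the names of all subkeys of an open registry key."""
          i = 0
          while True:
              try:
                  yield winreg.EnumKey(key, i)
              except OSError:      # no more subkeys
                  return
              i += 1

      with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, BASE) as pci:
          for hwid in enum_subkeys(pci):
              with winreg.OpenKey(pci, hwid) as dev:
                  for instance in enum_subkeys(dev):
                      print(hwid, "->", instance)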
     
  2. Like
    gringott reacted to PocketDemon in How many hdds an evga 1600w t2 can support?   
    Now, the manual for the HBA you were talking about states "Minimum airflow: 200 linear feet per minute at 55 °C inlet temperature"... ...which is the same as my RAID card.
    Beyond that, all I can say is that even with water cooling on the CPU & GPU (& an external rad) - so most of the heat's already taken out of the case and the case fans are primarily cooling the mobo, memory, etc. - I've had issues without direct cooling with all of my previous LSI RAID cards, both in terms of drives dropping out & BSODs, without there being exceptional disk usage.
    (it's not that I'm running huge R50 arrays or something - primarily that I simply prefer using a RAID card, vs an HBA, in terms of the cache & BBU options)
    Similarly, the Chenbro expander I have - which, other than the fans, drives, cables, MOLEX-to-PCIE (to power the card) & PSU, is the only thing in the server case - came with a fan attached which failed; & again I had issues... ...so it's now got one of the Noctua fans on instead.
    So, whilst you 'could' try it without & see, personally I would always stick a fan on something like this.
     
    I couldn't advise you on monitoring for PWM as that's not how I do things - I'd far rather have the system be stable irrespective of whether or not I was in a particular OS.
    Well, not that dissimilarly, whilst the rad fans are PWM, for me it's about creating a temp curve within the bios for the CPU (& hence, by default, the GPU), & so it's entirely OS independent.
    So, whilst I couldn't recommend anything specific, 'if' I were looking for a fan controller then I'd want something which I could connect a thermal sensor to (& attach that to the h/s above the IOC) AND I could set the temp limit solely with the controller.
  3. Thanks
    gringott reacted to JasonC in External Options for DrivePool Disks   
    Well, I've migrated the 2 disks I had in my 4 bay over to the new StarTech. I forgot I re-organized my disk layout a while ago, so I only had 2 SSDs in the 4 bay, which means I can't yet speak to heat.  I don't recall how the StarTech is set up, but I'll say this for the MediaSonics: I thought they had excellent cooling in them.  But I also have my equipment in the basement, where the ambient temperature is always fairly cool.
    Now, the sorta good news, the SSDs are holding virtual disks for VMs, so they are very sensitive to disconnects. It's only been about a day, but no issues so far, and the performance seems good.
    I know what you are saying here, but I've had a different experience: the drivers have become mature, and there aren't updates because there's nothing to update. They are stable and performant enough - that is, you are held up by disk speed or the fundamental bus speed of SATA at this point, and there's nothing left to optimize. I'm not sure what issues you have; I'm running Sil3132-based SATA controllers on Windows Server 2016 with no issues connecting eSATA with port multipliers. I've had pretty good runs without reboots - it's more been external events (power failure) or Windows patching that's made me reboot.
    I had almost done what you did, if port multipliers didn't pan out, I was looking at going to SAS, but boy was it expensive at the time.
     
     
  4. Like
    gringott reacted to GreatScott in OMG I love the activation system   
    VERY IMPRESSED!
    • Didn't need to create an account and password
    • Same activation code covers EVERY product on EVERY computer!
    • Payment information remembered so additional licenses are purchased easily
    • Nice bundle and multi-license discount
    I'm in love with the Drive Pool and Scanner.  Thanks for a great product and a great buying experience.
    -Scott
     
  5. Like
    gringott got a reaction from Christopher (Drashna) in Windows 10 support etc   
    Then I will. Glad I checked in. Happy Holiday Season to all of you.
  6. Like
    gringott reacted to Christopher (Drashna) in Windows 10 support etc   
    If you want.
    I'd recommend it.
  7. Like
    gringott reacted to Christopher (Drashna) in Windows 10 support etc   
    Did you see yet?  
    2.2.0.881 was released as a public beta last night.  
    Barring any major issues, we should have a public release version in a couple of weeks.
  8. Like
    gringott reacted to Christopher (Drashna) in Do I start buying 8TB archive drives or not?   
    I've actually seen where the writes drop off. They hit 0 bytes/sec for a few seconds and then bounce right back up. And do this repeatedly.  So there is definitely a write issue with the drives.
     
    That said, I get ~180-190MB/s reads from these drives after switching the allocation unit size to 64k.  Worth doing (manually) - see the quick check sketched after this post. 
     
     
    And if write performance is an issue, then get a couple of SSDs and use the SSD Optimizer balancer. This way, data is written to the SSDs and then moved off. You'll never see the issue then.
     
     
    (I'm up to 5x 8TB Seagate Archive drives and am very happy with them)
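    Just as a hedged aside (not from the original post): before reformatting, you can check what allocation unit size a volume currently uses. A minimal Python sketch, assuming a Windows machine and a placeholder drive letter, using the Win32 GetDiskFreeSpaceW call via ctypes:

      # Minimal sketch: report the NTFS cluster (allocation unit) size of a volume.
      # "D:\\" is a placeholder - point it at the drive you plan to use in the pool.
      import ctypes

      def cluster_size(root=u"D:\\"):
          sectors_per_cluster = ctypes.c_ulong(0)
          bytes_per_sector = ctypes.c_ulong(0)
          free_clusters = ctypes.c_ulong(0)
          total_clusters = ctypes.c_ulong(0)
          ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
              ctypes.c_wchar_p(root),
              ctypes.byref(sectors_per_cluster),
              ctypes.byref(bytes_per_sector),
              ctypes.byref(free_clusters),
              ctypes.byref(total_clusters))
          if not ok:
              raise ctypes.WinError()
          return sectors_per_cluster.value * bytes_per_sector.value

      size = cluster_size()
      print("Allocation unit size:", size, "bytes")   # 65536 means the 64k size mentioned above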
  9. Like
    gringott reacted to Christopher (Drashna) in SATA card   
    Well, I've used a HighPoint RocketRAID 2720SGL, specifically.  Aside from the quality of disks I was using, it was rock solid. 
     
    But they are just rebranding Marvell chipsets and installing customized firmware (which relies on proprietary commands for stuff like SMART).
     
    But you're right, if you're going to have a lot of drives, getting something like an LSI card is a much, much, much better investment.
  10. Like
    gringott got a reaction from Christopher (Drashna) in The Largest Stablebit Drivepool In The World!!   
    Yes, externals. Seagate externals have a one year warranty, the Samsung [same Seagate drive inside] three years. Why the difference? I suspect the Seagate cases have a higher return rate due to poor cooling, and they aren't willing to take the hit on returns. When they are removed from the case, the cooling is on you. Given proper cooling they should last as long as the desktop versions; it is exactly the same drive.
     
    Is a desktop drive warranty not real? Really?
     
    A NAS drive warranty from the same drive manufacturer is more real?
     
    All your drives listed are desktop drives, with a "NAS" label stuck on them.
     
    I don't want to get into a flame war here but I have problems following your financial and warranty logic, after looking at your temp charts.
     
    Rail 1: The 4TB Seagates at the top of your temp chart are "NAS" - and have the same 3 year warranty as the desktop drives. Certainly not an Enterprise class drive. No real enterprise would use these drives for anything mission critical.
    Rails 2, 3: The 4TB WDs are also called "NAS" - and have the same 3 year warranty as the desktop drives. Certainly not an Enterprise class drive. No real enterprise would use these drives for anything mission critical.
    Rail 4: The 8 TB Seagates also have a "NAS" label stuck on them, and have exactly the same warranty as a desktop drive. Certainly not an Enterprise class drive.
     
    No real enterprise would use these drives for anything mission critical, if at all. Plus the known issues with the technology [8TB] would prevent any enterprise from implementing these drives for the near future, until it is sorted out down the road - keep in mind, you might get a pat on the back for buying cheap "prosumer" drives, but you will be fired for lost data because of it, or for performance hits that have an effect on production. I know, I have seen it. That's "reality" and what "Enterprise" really means. Jobs are at stake. Nobody wants to hear you saved $500 when a plant shuts down for an hour or two. They want you gone. These drives you have are aimed directly at the home market for what we are using them for. I refuse to pay big money for the same warranty because they put a NAS label on it. I am not saying they are BAD drives, I am saying they created this classification to squeeze more money out of consumers.
     
    I won't dig any deeper - maybe you have an "Enterprise" drive not in the chart - but I do know what actual Enterprise drives are and what they cost [I'm sure you do too], and I certainly wouldn't buy one and stick it in a notoriously badly cooled NORCO case with the power supply blowing air into the case. I would buy drives with the same performance [purchase an extended warranty if you want 5 years] and use the savings to get a professional case and proper power supply setup. Or correct the issues with the NORCO and the power supply.
     
    What does it all boil down to? With Enterprise drives, the hope is they will be more robust than plain old ordinary desktop drives. My experience with servers is that they are, meaning they last longer, but how much of that is due to good case engineering and cooling based on years of engineering experience that the major manufacturers have? I suspect that makes the difference. We will see with these so-called NAS drives aimed directly at consumers two or three years down the road. I suspect the failure rate will be the same as desktop drives. The drive manufacturers obviously know this, hence the same warranty as ordinary desktop drives. As for the price premium, what did you gain by paying more for a "NAS" drive when the warranty is the same? If you do have real Enterprise rated drives not on your chart, well, they are again another step up in price from the consumer "NAS" drives - and you get an extra two years of warranty. You also get a performance increase in general over these NAS drives. So you pay a hundred or more extra for speed you don't need to stream and store video - and an extended warranty. Five years down the road the drive will be outdated and practically worthless, so if it fails in year 4.5, you get a replacement drive [with Seagate it may be a "refurbished replacement drive"] that has outlived its usefulness in size and performance.
     
    As for data integrity, you are the same as me - you use DrivePool and Scanner, both great tools for monitoring and finding problems, and duplicate everything. Hence I can save money on drives using these tools and still feel safe.
     
    Suspenders with a belt. Very expensive suspenders. When thousands to millions of $ ride on it, I get it. Suspenders and a belt with a rope backup. Go Enterprise and go real Enterprise casing / cooling. When it is movies and TV shows, I don't get it. It does not compute, as they say.
     
    Or maybe I'm full of "it". But my brain does not allow me to see it your way, using logic. As they say, different strokes for different folks. Please point out where I am going wrong so I can make sense out of this.
  11. Like
    gringott reacted to airjrdn in How I replaced my 8-bay Calvary enclosure on the cheap   
    I ended up biting the bullet on a 3rd cage and SATA card last week.  I had a couple extra 2TB drives and one 1TB drive lying around and wanted to get them added to the pool.  Total usable storage now sits at 19.1TB.  I've since enabled 3x duplication on almost everything and DrivePool is in the process of working its magic.  I love that software!
  12. Like
    gringott reacted to airjrdn in How I replaced my 8-bay Calvary enclosure on the cheap   
    I had maxed out my nearly 4 year old 8-bay Calvary enclosure (EN-CADA8B-SD) with 2TB drives and was beginning to have issues somewhere in the storage chain (eSATA port multiplier card, drivers, eSATA cables, Calvary enclosure, etc.).  After troubleshooting off and on for a couple weeks or longer, I was ready to replace it with something less problematic that wasn't limited to 2TB drives.  I didn't however want to spend half a grand for something that came with RAID functionality I knew I'd never use.  For the time being, I wanted to continue using my existing drives and just replace the enclosure and connection to it.
     
    The enclosure was hooked to a Dell Inspiron 3847 (4th Gen Core i7, 16GB, running Win 8.1).  Duties for the machine include Plex, Subsonic, DNS updating, Crashplan backups, Syncback backups from the web, and of course, storage.  The machine sits out of sight in the basement, so going with something less pleasing to the eye was fine.
     
    I ended up going with a pair of Rosewill 4 drive cages and a couple of IO Crest SATA cards for connecting them to the Dell - which only has 2 PCI-e x1 slots and a single x16 slot.
     
    $45 for each of the Rosewills, and $34 each for the IO Crests and I was almost ready to go.  The Rosewill cages came with SATA cables that were long enough to reach from the inside of the case to the cages sitting right behind the Dell.  Power to the cages was supplied by an extra power supply I had lying around.  A quick short from the green wire to any black one will make the power supply think it's hooked to a motherboard and power on.  I used a paper clip to accomplish that.  If you go this route, keep in mind you'll need about 10w for each drive to be on the safe side for power requirements.
     
    If you're interested in doing something like this, what you get is four drives worth of connectivity for about $80.  There's no port multiplier functionality going on, one drive connects to one port on a card.  If you have extra SATA ports, you can skip the card purchase.  If you have extra molex power connectors for the Rosewill cages, you can go a little less ghetto than I did and skip the power supply lying on the desk.
     
    You are limited in performance to what a PCI-e x1 slot can handle (about 240MB/sec if I remember correctly), which seems fine, but remember, you're running four drives off of that, and 240MB/sec is theoretical.  Real-world performance will be lower - see the quick arithmetic sketch after this post.
     
    All in all, I definitely consider it a win.  For not much out of pocket, I replaced the Calvary, gained the ability to use larger drives, and also the ability to buy another card and cage for a total of 12 drives.  Not bad for an initial outlay of about $160.
     
    Hope this helps someone out there looking to do something similar.
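    A quick back-of-the-envelope Python sketch of the two figures above (the ~10w-per-drive rule of thumb and the ~240MB/sec PCI-e x1 estimate from the post - both treated as assumptions, not measured values), just to make the arithmetic explicit:

      # Rough arithmetic only - the inputs are the estimates quoted in the post above.
      drives_per_cage = 4
      watts_per_drive = 10          # safe-side power figure per drive
      pcie_x1_mb_per_s = 240        # quoted theoretical x1 bandwidth

      print("PSU headroom per cage: ~%d W" % (drives_per_cage * watts_per_drive))
      print("Per-drive share if all four are busy: ~%.0f MB/s"
            % (pcie_x1_mb_per_s / drives_per_cage))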
     
  13. Like
    gringott reacted to YagottaBKiddin in DrivePool FTW vs. Storage Spaces 2012 R2   
    Good evening everyone.  I found this very interesting, fascinating even:
     
    I had just set up a Windows 2012-R2 Storage Server with ~20 TiB of drives to function primarily as a media server running Emby and as a file-server endpoint for various backups of machine images etc.  
     
    CPU: Intel Core i7-4790K
    RAM: 32 GiB DDR3 2666MHz.  
    OS is hosted on an M.2 PCIe 3.0 x4 SSD (128 GiB) *screaming fast*. 
    5 x 4 TiB HGST DeathStar NAS (0S003664) 7200 RPM/64 MB cache SATA-III
     
    Initially, the machine was set up with one large mirrored storage pool (no hot spares) composed from 5 primordial 4 TiB HGST SATA-III disks all connected to SATA-III ports on the motherboard (Z97 based).
     
    The graph below was taken during the process of copying ~199 files in multiple directories of various sizes (15K - 780 GiB) for a total of ~225 GiB.
     
     

    Fig 1: Copy To Storage Spaces 2012-R2 mirrored pool
     
    What isn't apparent from the graph above is the *numerous* stalls where the transfer rate would dwindle down to 0 for 2-5 seconds at a time. The peaks are ~50 MB/s. It took *forever* to copy that batch over.
     
    Compare the above graph with the one below, which is another screenshot of the *exact* same batch of files from the exact same client machine as the above, at almost exactly the same point in the process. The client machine was similarly tasked under both conditions. The target was the exact same server as described above, only I had destroyed the Storage Spaces pool and the associated virtual disk and created a new pool, from the exact same drives as above, using StableBit DrivePool (2.1.1.561), transferring across the exact same NIC, switches and cabling as above also. More succinctly: everything - the hardware, OS, network infrastructure and originating client (Windows 10 x64) - is exactly the same.
     
    The only difference is that the pooled drives are managed by DrivePool instead of Storage Spaces:
     

    Fig 2: Massive file transfer to DrivePool (2.1.1.561) managed pool.
     
    What a huge difference in performance!  I didn't believe it the first time.  So, of course, I repeated it, with the same results.
     
    Has anyone else noticed this enormous performance delta between Storage Spaces and DrivePool?
     
    I think the problem is with the writes to the mirrored storage pool; I had no problems with stalls or inconsistent transfer rates when reading large batches of files from the pool managed by Storage Spaces. The write performance is utterly horrid and unacceptable!
     
    Bottom Line: The combination of drastically improved performance with DrivePool over Storage Spaces, plus the *greatly* enhanced optics into the health of the drives provided by Scanner: DrivePool + Scanner >>>> Storage Spaces.  Hands down, no contest for my purposes. The fact that you can mount a DrivePool managed drive on *any* host that can read NTFS is another major bonus.  You cannot do the same with Storage Spaces managed drives, unless you move all of them en bloc.
     
    Kudos to you guys!  <tips hat>
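    For anyone who wants to reproduce the stall observation rather than eyeball a transfer graph, here is a minimal, hypothetical Python sketch (the paths are placeholders) that copies one file in chunks and prints the observed rate once a second, flagging the multi-second drops to ~0 MB/s described above:

      # Sketch: copy src -> dst in chunks and print MB/s once per second.
      # Seconds where the rate falls below 'stall_mb_s' are flagged as stalls.
      import time

      def copy_with_throughput(src, dst, chunk_mb=4, stall_mb_s=1.0):
          chunk = chunk_mb * 1024 * 1024
          written = 0
          window_start = time.time()
          with open(src, "rb") as fin, open(dst, "wb") as fout:
              while True:
                  buf = fin.read(chunk)
                  if not buf:
                      break
                  fout.write(buf)
                  written += len(buf)
                  elapsed = time.time() - window_start
                  if elapsed >= 1.0:
                      rate = written / elapsed / (1024 * 1024)
                      print("%8.1f MB/s%s" % (rate, "   <-- stall" if rate < stall_mb_s else ""))
                      written = 0
                      window_start = time.time()

      # Placeholder paths - a local source file and a destination on the pooled share.
      # copy_with_throughput(r"C:\test\big.mkv", r"\\server\pool\big.mkv")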
     
  14. Like
    gringott reacted to Christopher (Drashna) in The Largest Stablebit Drivepool In The World!!   
    Very nice!
     
    And there will always be somebody with more storage than you, somewhere.
     
     
    And I'm up to 73TB now.  Had my first DOA drive ever, from Newegg. They paid for the return shipping and replaced it promptly. The new drive works great (my 4th 8TB Seagate Archive, BTW).
  15. Like
    gringott reacted to McFaul in The Largest Stablebit Drivepool In The World!!   
    Lol.. does that mean I only get the silver medal? (plus.. in the true internet style.. photos or didn't happen)
     
    also sadly very full.....

  16. Like
    gringott reacted to McFaul in Stategy for filling Archive drives.   
    Yes Chris, definitely slacking!
     

     
    however I haven't just hijacked the thread for no reason!
     
    I have 12 very fast drives... 12 slowish drives.. and 6 archive drives (deathly slow for writes)
     
    Can I set it up so that new files written to the pool ONLY go onto the 12 very fast disks
     
    and then in the background the balancers make sure that all 30 drives stay the same % full?
     
    I assume I need the SSD plugin, but there seem to be too many options. help please! hehehe
  17. Like
    gringott reacted to McFaul in Stategy for filling Archive drives.   
    Yes, wherever possible they are simply rips
     
    even the 4GB/episode stuff looks blurry and soft to me, compared to the 8-12GB/episode bluray rips
     
    yeah.. in terms of DP plugins I've only got the scanner, SSD, and space equalizer on.
     
    I've told the SSD plugin it can let the drives get to 100% full.. so hopefully it will only let new files get added to the very fast disks
     
    then the space equaliser should move stuff in the background so all the disks are filled to the same percentage
  18. Like
    gringott got a reaction from Christopher (Drashna) in The Largest Stablebit Drivepool In The World!!   
    Couple of thoughts -
    Multiple NICs are great if you know how to use them and don't buy crap. Intel PRO NICs never failed me, in hundreds of servers where I worked. Other brands may work OK, but the off-load from the CPU as implemented by Intel is very impressive. I never regret paying a little more for Intel. I have had some good luck at work with HP & Dell rebranded NICs, but it's easier to source drivers etc. and know what I am getting with Intel at home.
     
    The HP workstation [dual Xeon] I use for my archives has an LSI controller on board with 8 ports, besides 8 standard SATA ports. Great stuff.
     
    I left WHS due to single CPU support, and I was finding it restraining rather than liberating. The main reason I used it was for backups and drive pooling; once DrivePool came out and I found it usable, I ditched WHS.
    For me, the perfect OS is Windows 7; I don't have to fiddle with it, it just works, streaming without issues. No features I don't need. I'm using W7 Ultimate N on my archive server and W7 U on my current server, all x64.
    When the archive server dies of old age I will evaluate the market, but right now I see some impressive small form factor motherboard / CPU combos that can do what very expensive hardware was needed to do five years ago. The electricity savings and lower CPU wattage [heat] mean modernizing could save a lot of money. Fans and air conditioning cost a lot of money to run 24/7.
     
    Just moving a large part of my "rarely accessed" storage to the archive server saved me a ton of money; if I'm not using it I turn it off. I have further sub-divided the archive into two pools on that server, archives and offline. I have offline on an individual surge protector; it holds long term storage that I am not likely to need more than once or twice a month. The creation of an archive server was well worth having two DrivePool and Scanner licenses; they paid for themselves long ago. In a sense, it is a poor man's tiered storage. I got the idea when we were being pitched new SAN units by EMC, Dell etc. The big point was tiering your storage, using fast drives for the current workload and migrating data to slow cheap drives once it wasn't being accessed on a regular basis anymore. The difference is I migrate manually.
     
    Here are my current stats - not the impressive 246+ TB that other guy has, but right now 107TB [formatted] with 14.3 TB of free space across all three pools, if I got my math right.
     

     

     


    As for "management" I just use RDP, built in and just works. If I had to manage hundreds of servers like I did at work, that's another story. The less cables the better for me. I don't live in a McMansion, pretty easy to walk over and see what's up if there is a problem. I don't run my server headless, I have a monitor and keyboard/trackball hooked up [monitor powered off] for local access.
  19. Like
    gringott reacted to RobbieH in The Largest Stablebit Drivepool In The World!!   
    I'd put it this way: if you don't know why you need multiple NICs, then you probably don't need them.
  20. Like
    gringott reacted to RFOneWatt in The Largest Stablebit Drivepool In The World!!   
    You old?
     
    I was 12-13 y/o when I was running my C-64 BBS ... on C-Net!   So don't feel bad.
     
    Those 1581's were a finicky bunch until about their last run, when they finally got most of the bugs out.. by then it was too late though! 
     
    ~RF
  21. Like
    gringott reacted to RFOneWatt in The Largest Stablebit Drivepool In The World!!   
    Don't feel inadequate -- we all started somewhere.  I don't think anybody here is using 8TB drives yet.  Out of all of my drives a 5TB is my largest. The rest are mostly all 4TB HGST and Seagate drives.
     
    For the best bang for your buck I would recommend doing your research and buying (quality) used hardware.  That's how I was able to afford my first Xeon server.  Right now there are a ton of multi-processor Xeon boards/servers out there that are a dime a dozen with the only caveat being electrical usage due to being a bit older.  
     
    I have a hard time recommending Highpoint stuff.
     
    It's finicky to say the least. I've been doing this for almost 30 years and I've never had a brand of controller that gave me as many headaches as the HighPoints. I own, use and rely on two 2740s, a 2720 and a couple of their smaller RocketRAID cards (that I started with), and they all had weird/annoying issues. After some headaches I was able to work through all of the issues, but I had to jump through quite a few hoops to do so. Once they are up and running I have no complaints (so far), but I would not recommend them to someone who is unfamiliar with them, and I'll never purchase another HighPoint product.  If you're willing to play, learn and be frustrated, go for it. Once they are "happy" they are pretty darn fast for the $$.
     
    If you're on a smaller budget I would suggest checking out some of the LSI stuff for your controllers.  Drashna and some of the other guys here have more experience with them but I recently purchased a 9240-8i and I've had no issues and really like it.
     
    If you need a large drive case and are willing to forego a couple of amenities such as hot-swap trays, you can get a much better deal on a Rosewill or something similar.  
     
    I'm by no means a hardware guy but I'm sure  you'll get some better answers from some of the other folks here.
     
    Good luck!!!
     
    ~RF
  22. Like
    gringott reacted to PossumsDad in The Largest Stablebit Drivepool In The World!!   
    I also ran a BBS on a C64 back in the day. I had 2 3.5 inch drives on a 300 baud modem. I ran Mustang BBS software.
     
    I must be one of the old ones around here as I was in my twenties when I ran it. lol
  23. Like
    gringott reacted to Christopher (Drashna) in Minor annoyance (maybe Directory Opus)   
    There is no problem with that. 
     
     
    However, myself and various other members do use these beta builds in production. 
    And they are fairly stable; they're released in response to specific issues, for the most part.  But if you're wary, then please hold off.
  24. Like
    gringott reacted to RobbieH in The Largest Stablebit Drivepool In The World!!   
    That stuff was a lot of money for me back then. I had a couple of grand in my Atari ST, then there was the cost of the hard drives and Adaptec 4000 (close to a grand total), my first USR modem set me back $699 because I didn't order it on the sysop deal, and so on... and back then our household income was just over $18,000 per year. And that was for the system that ran the BBS, never mind the other equipment in the room.
     
    But like I said, look where it got me. I'm paid well, I work for a large, stable company, and I walk from my bedroom to my office to go to work - no commute, no dressing up, nada.
  25. Like
    gringott reacted to RobbieH in The Largest Stablebit Drivepool In The World!!   
    My wife didn't like me buying all that gear either. But the experience I gained in all my tinkering is what led to the IT job I have now.