
Christopher (Drashna)

Administrators
  • Posts

    11583
  • Joined

  • Last visited

  • Days Won

    368

Reputation Activity

  1. Like
    Christopher (Drashna) reacted to iceaura in Time to build a proper server   
    I have installed W7 64 Pro and have not had a single crash! I looked in the event viewer and I am still getting some errors relating to the 3ware driver BUT it has not crashed and it has been 3 days now. Does this mean that I could maybe build my server around W7 and not have any problems? What do you think?
  2. Like
    Christopher (Drashna) reacted to ctopherc in My Rackmount Server   
    Bissquitt,
    It's a custom-built rack made of 3/4" plywood and Raxxess audio-equipment rails. If you want the specs I could look for them, but it was sized to house my APC UPS (the deepest component), with enough room below for cables and a shallower top shelf to help with airflow. Additionally, I mounted caster wheels so I can roll it in and out of my office closet.
  3. Like
    Christopher (Drashna) reacted to pmow in Using a friends house as a cloud provider   
    I had the same issue with BT Sync.  That's why I love Stablebit...can duplicate across vdisks.
  4. Like
    Christopher (Drashna) reacted to otispresley in Using a friends house as a cloud provider   
    I had issues when I last used BitTorrent Sync (When it was in beta) and ended up losing some files and was also looking for a way to use a friend as backup. We settled on AllwaySync, which works very well and does not impact our files at all. It does cost a little, but it is well worth it in my opinion.
  5. Like
    Christopher (Drashna) reacted to Gishi in product packages?   
    Okay, well that's great news. I'll see if I can buy it next month.
  6. Like
    Christopher (Drashna) got a reaction from propergol in M1015 or M1115 dual link to SAS expander RES2SV240 working?   
    Please, definitely do check, and let me know what you find.
     
    Also, a good way to max out the bus speed: use StableBit Scanner's Burst test option (right click on a drive).  This runs the drive at a higher speed than the disk itself is capable of, maxing out the bus.  It's great for testing for connectivity issues, and running it on multiple drives at once may be a good way to test the controller link.
     
     
    And Expander Cards make for excellent cable management. 
  7. Like
    Christopher (Drashna) reacted to kihimcarr in LSI SAS 9201-16i HBA 16-port internal 6Gb/s SATA +SAS PCIe   
    http://www.lsi.com/products/storagecomponents/Pages/LSISAS9201-16i.aspx
     
    Driver version: 2.0.60.2
    Firmware version: 16.00.00.00
    BIOS Version: 08.00.00.00_68.51.34.17_09000000
     
    Using specific method: ScsiPassThrough48

     
    Using specific method: AtaPassThrough

  8. Like
    Christopher (Drashna) reacted to propergol in SSD Optimizer Balancing Plugin   
    Thanks a lot: it instantly worked!
    For now I am still testing the SSD plugin on unduplicated files. I am waiting for a third 240GB SSD to arrive (SanDisk PRO, not the slowest SSD...) to start testing with x3 duplication.
    From what I can see for now, it would work very well!
  9. Like
    Christopher (Drashna) got a reaction from hansolo77 in Building new server from scratch!   
    The controller should handle 64 drives, or maybe it was 256. It depends on the controller firmware, really, and I'm not sure.
    http://www.avagotech.com/products/server-storage/raid-controllers/megaraid-sas-9240-8i#specifications
    http://www.avagotech.com/products/server-storage/host-bus-adapters/sas-9211-8i#specifications
     
    And you can chain Expander cards, IIRC. 
     
    The 34-36 limit is for the number of drives you can scan at the same time, assuming 120MB/s throughput. This is based on the PCI Express 2.0 8 lane (8x) connection.  This would be an issue for RAID arrays, but because DrivePool isn't going to be using every drive at the same time, it shouldn't be an issue for your Pool.
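As a quick sanity check on that 34-36 figure, here's a rough sketch of the arithmetic (my own illustration; the 120MB/s per-drive figure is from the post above, and the per-lane rate is the published PCIe 2.0 number):

```python
# Rough arithmetic behind the "how many drives can you scan at once" limit.
# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, which works out
# to roughly 500 MB/s of usable data per lane.
LANES = 8                  # an x8 slot
MB_PER_LANE = 500          # usable MB/s per PCIe 2.0 lane (after 8b/10b)
DRIVE_MBPS = 120           # assumed sustained throughput per drive (from the post)

link_mbps = LANES * MB_PER_LANE        # total bandwidth of the x8 link
max_drives = link_mbps // DRIVE_MBPS   # drives that would saturate the link

print(link_mbps, max_drives)  # 4000 33
```

That lands in the same ballpark as the 34-36 figure quoted above; the exact number depends on what you assume for protocol overhead and per-drive throughput.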
     
     
     
    And I'm very sorry to hear that you're feeling sick. I hope you feel well soon!
  10. Like
    Christopher (Drashna) got a reaction from propergol in SSD Optimizer Balancing Plugin   
    This depends on the balancing settings. 
     
    Try setting the balancing to occur immediately, and disable the "but not more often than" option. Also, set the ratio slider on the main tab to "100 %" and setting the "or needs to move this much" to "1GB". 
     
    This should help it be more aggressive about moving data out. 
    And this should definitely work, as that's exactly what I'm doing on my system, and it's constantly moving data off of the 2x128GB SSDs I have.
  11. Like
    Christopher (Drashna) got a reaction from propergol in Question regarding Intel SAS expander (ebay auction)   
    lol, I definitely understand that!
     
     
    As for the auction, that's not a bad price at all. As you've said, it's much more expensive for a new card (and more so with cables, at around $350-400, in fact).  So, it's not a bad purchase at all!
    But I definitely understand what you mean.  Just keep in mind that this is enterprise grade tech, and is designed for heavy loads and 24/7 operation. That's why there is such a premium on the price.
  12. Like
    Christopher (Drashna) got a reaction from propergol in My Rackmount Server   
    Please do!  
     
    The 153 event is a "retried" error.  This may be harmless (it's rather normal, actually), and if you're only seeing it during boot, it may just be a timing issue with the drive controller.
     
    Check the disk number it mentions with Disk Management (run "diskmgmt.msc") and check which drive it is.  
  13. Like
    Christopher (Drashna) reacted to propergol in Question regarding Intel SAS expander (ebay auction)   
    Yep. Sounds right. I will offer at $90.
     
    Since I have a very cramped build (9 HDs + 4 SSDs inside a little mid tower), the expander powered with a Molex would offer better cable management than another M1115.
  14. Like
    Christopher (Drashna) reacted to Shane in FAQ - Parity and Duplication and DrivePool   
    The topic of adding RAID-style parity to DrivePool was raised several times on the old forum. I've posted this FAQ because (1) this is a new forum and (2) a new user asked about adding folder-level parity, which - to mangle a phrase - is the same fish but a different kettle.
     
    Since folks have varying levels of familiarity with parity, I'm going to divide this post into three sections: (1) how parity works and the difference between parity and duplication, (2) the difference between drive-level and folder-level parity, and (3) the TLDR conclusion for parity in DrivePool. I intend to update the post if anything changes or needs clarification (or if someone points out any mistakes I've made).
     
    Disclaimer: I do not work for Covecube/Stablebit. These are my own comments. You don't know me from Jack. No warranty, express or implied, in this or any other universe.
     
    Part 1: how parity works and the difference between parity and duplication
     
    Duplication is fast. Every file gets simultaneously written to multiple disks, so as long as all of those disks don't die the file is still there, and by splitting reads amongst the copies you can load files faster. But to fully protect against a given number of disks dying, you need that many times the number of disks. That doesn't just add up fast, it multiplies fast.
     
    Parity relies on the ability to generate one or more "blocks" of a series of reversible checksums equal to the size of the largest protected "block" of content. If you want to protect three disks, each parity block requires its own disk as big as the biggest of those three disks. For every N parity blocks you have, any N data blocks can be recovered if they are corrupted or destroyed. Have twelve data disks and want to be protected against any two of them dying simultaneously? You'll only need two parity disks.
     
    Sounds great, right? Right. But there are tradeoffs.
     
    Whenever the content of any data block is altered, the corresponding checksums must be recalculated within the parity block, and if the content of any data block is corrupted or lost, the corresponding checksums must be combined with the remaining data blocks to rebuild the data. While duplication requires more disks, parity requires more time.
     
    In a xN duplication system, you multiply your disks by N; for each file, it simultaneously writes the same data to N disks. So long as p<=N disks die, where 'p' depends on which disks died, you replace the bad disk(s) and keep going - all of your data is immediately available. The drawback is the geometric increase in required disks and the risk of the wrong N disks dying simultaneously (e.g. with x2 duplication, if two disks die simultaneously and one happens to be storing duplicates of the other's files, those are gone for good).
     
    In a +N parity system, you add N disks; for each file, it writes the data to one disk and calculates the parity checksums, which it then writes to the N parity disks. If any N disks die, you replace the bad disk(s) and wait while the computer recalculates and rebuilds the lost data - some of your data might still be available, but no data can be changed until it's finished (because parity needs to use the data on the good disks for its calculations).
     
    (Sidenote: "snapshot"-style parity systems attempt to reduce the time cost by risking a reduction in the amount of recoverable data; the more dynamic your content, the more you risk not being able to recover.)
     
    Part 2: the difference between drive-level and folder-level parity
     
    Drive-level parity, aside from the math and the effort of writing the software, can be straightforward enough for the end user: you dedicate N drives to parity that are as big as the biggest drive in your data array. If this sounds good to you, some folks (e.g. fellow forum moderator Saitoh183) use DrivePool and the FlexRAID parity module together for this sort of thing. It apparently works very well.
     
    (I'll note here that drive-level parity has two major implementation methods: striped and dedicated. In the dedicated method described above, parity and content are on separate disks, with the advantages of simplicity and readability and the disadvantage of increased wear on the parity disks and the risk that entails. In the striped method, each disk in the array contains both data and parity blocks; this spreads the wear evenly across all disks but makes the disks unreadable on other systems that don't have compatible parity software installed. There are ways to hybridise the two, but it's even more time and effort.)
     
    Folder-level parity is... more complicated. Your parity block has to be as big as the biggest folder in your data array. Move a file into that folder, and your folder is now bigger than your parity block - oops. This is a solvable problem, but 'solvable' does not mean 'easy', sort of how building a skyscraper is not the same as building a two-storey home. For what it's worth, FlexRAID's parity module is (at the time of my writing this post) $39.95, and that's drive-level parity.
     
    Conclusion: the TLDR for parity in DrivePool
     
    As I see it, DrivePool's "architectural imperative" is "elegant, friendly, reliable". This means not saddling the end-user with technical details or vast arrays of options. You pop in disks, tell the pool, done; a disk dies, swap it for a new one, tell the pool, done; a dead disk doesn't bring everything to a halt and size doesn't matter, done.
     
    My impression (again, I don't speak for Covecube/Stablebit) is that parity appears to be in the category of "it would be nice to have for some users but practically it'd be a whole new subsystem, so unless a rabbit gets pulled out of a hat we're not going to see it any time soon and it might end up as a separate product even then (so that folks who just want pooling don't have to pay twice+ as much)".
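To make the "reversible checksum" idea from Part 1 of that FAQ concrete, here's a minimal single-parity sketch in Python (my own illustration of the general technique, not anything DrivePool or FlexRAID actually does): with one XOR parity block, any one lost data block can be rebuilt from the parity plus the survivors.

```python
from functools import reduce

# Three "data blocks" of equal length (real systems pad to the largest block).
data = [b"AAAA", b"BBBB", b"CCCC"]

def xor_blocks(blocks):
    """XOR byte-wise across blocks - the simplest reversible checksum."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

parity = xor_blocks(data)

# Simulate losing block 1: XOR the parity with the surviving blocks
# to rebuild the missing data.
survivors = [data[0], data[2]]
rebuilt = xor_blocks(survivors + [parity])

print(rebuilt == data[1])  # True: the lost block is recovered
```

Recovering N simultaneous failures (as described in the FAQ) needs something stronger than plain XOR, such as Reed-Solomon coding, but the rebuild-from-survivors principle is the same.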
  15. Like
    Christopher (Drashna) got a reaction from propergol in SMART values for SSD SanDisk Extreme II and PRO 240 Gb   
    Thank you guys for grabbing the info. 
     
    I've submitted the issue and info to Alex (the developer), and he'll take a look at it soon (I suspect it may be a simple issue).
    https://stablebit.com/Admin/IssueAnalysis/20806
     
    However, please do install the beta version (2.5.2.3103). There are a few fixes in it in regards to how SMART data is handled, and it may fix the issues you are seeing (specifically). 
     
     
     
    Yeah, sorry to not specify, but the ID is for all the devices in the system. It grabs the SMART data and other information about the disks and submits them to BitFlock. 
  16. Like
    Christopher (Drashna) got a reaction from propergol in Perfect Disk + Drivepool/Scanner?   
    Mostly correct. 3 plus 1: each duplicate, plus the file on the Pool. And if the antivirus is poorly designed, that could impact performance.
     
    If you're experiencing performance issues, either switch antivirus programs, or exclude the Pool (or the PoolPart folders on each pooled disk) from being scanned.
  17. Like
    Christopher (Drashna) got a reaction from deleteme-4217489 in Plans for ReFS support?   
    I was wondering when somebody would pick that up.
     
    This is preliminary support for ReFS.  It is untested, and VERY beta. Since this is a drastically different file system, there can (and likely will be) issues with it. 
     
    We do plan on testing it out, so we can add proper support for it in the future.
     
     
     
     
    As for using it, we'd recommend using all ReFS disks for this, and not placing anything critical on it, for the time being.  But if you want to test it out, by all means do so, and let us know if you run into any issues.
  18. Like
    Christopher (Drashna) reacted to Ryo in Building new server from scratch!   
    I'm running a Xeon 1246 v3 (socket 1150) and it's an awesome chip.
  19. Like
    Christopher (Drashna) got a reaction from propergol in Testing read striping (Win 10)   
    I'm not sure.
     
    Run a burst test just in case. 
    Otherwise, the "nuclear solution" is to reset the settings.
     
    But see below:
     
     
    Try the latest internal beta build:
    http://dl.covecube.com/DrivePoolWindows/beta/download/StableBit.DrivePool_2.2.0.649_x64_BETA.exe
    There have been some serious changes to the read striping and related code. It may fix the issue you're seeing.
     
    Unfriend, you should try this out too, as it may fix the issue for you as well. 
  20. Like
    Christopher (Drashna) reacted to iceaura in Time to build a proper server   
    Here's hoping that with the firmware upgrade and checking the options in the BIOS, I could have this solved tonight. Although I'll still be worried about adding 8TB drives (which I want to do). Maybe I could start by adding the 8TB drives to the mainboard; if I fill those, maybe get a second, newer controller for them.
  21. Like
    Christopher (Drashna) reacted to hansolo77 in Building new server from scratch!   
    UPDATE!!

    Got my 2 Reverse Breakout cables from Chris today, so I'm in the process of installing 8 more drives into my case.  Only have 1 left to do as of this update, and my total drive space is now up to 18.2TB.  Thanks Chris!
     
    I've also made a decision to future-proof my motherboard choice. The motherboard has to be the most important thing in the case, as without it you don't have anything to connect all your parts to. As I mentioned before, I'm not at all worried about things like Hypervisor or ESXi or any of those lab-type requirements. It's just a simple file server for streaming content on the network, and the nightly computer backups. HOWEVER, I don't want some piddly-ass cheap piece of junk either. It needs to be future-proof for upgrades down the line. I still agree with Chris's suggestion of using the ASRock Rack board, and the E3C224 models seem to have what I would need. USB3 (future!), PCIe AND PCI slots (past!). The -4L model has 4 LAN NICs as compared to the original one that only had 2. So I think I'm going to go with that one. I think with the Teaming functionality, it will help improve overall network performance. I also considered the cost, and the -4L is only about $50 or so more, and the extra benefits justify it I think. The only nagging thing I have is future-proofing for the CPU. I looked at the qualified CPU list, and based on what's currently available, it looks like the BEST I can do would be the "Intel Xeon E3-1276 v3 Haswell 3.6GHz 8MB L3 Cache LGA 1150 84W Server Processor BX80646E31276V3". I was kinda bummed about that, since I've been researching processors and like how the E5's are HUGE on their L3 Cache, but they're also HUGE on their prices (like $1.5k for a top of the line). So I think in the end, I can justify saving up and spending $350 for the E3 down the line.
     
    Since I'm still in the "save up" mode for the mobo/cpu/ram combo.. does anybody have any suggestions on something better before I pull the trigger?  Here's the basics of what I think I need:
     
    - ATX/mATX/EBB Form Factor (to fit in the Norco 4224 case)
    - 1 PCIe 8x slot (RAID Controller [Chris's MegaRAID M1015 card])
    - 1 PCIe 8x slot (SAS Expander [will buy after everything else (HP 24-36 drive with external SAS for additional addon case later if needed)])
    - 1 SATA (for OS SSD Drive)
    - 1 SATA (for future proofed OS SSD Mirror?)
    - 1 SATA (for future proofed DrivePool SSD Optimize [still need to learn what that's all about])
    - 2 USB Pin Headers (for front of case keyboard/mouse)
    - 1 IPMI Port (sounds cool, would like to utilize it)
    - 2+ NICs (for enhanced stability and throughput on the LAN)
     
    I think that's it.  Not really a big list.  That's why I think the mobo I'm looking at will be perfect.
     
    EDIT -> Anybody ever dealt with SuperBiiz? When I go to pcpartpicker.com and look at builds other people have done, and check out the prices of their parts, I'm constantly seeing a major price discount with SuperBiiz as compared to the "Big Two" of NewEgg and TigerDirect. They're a Google Trusted Store, so that's gotta mean something. I'm just curious how their company is, and how they would handle DOA shipments and RMAs should an issue come up.
  22. Like
    Christopher (Drashna) reacted to thnz in I/O deadlock?   
    It took several attempts, but I finally reproduced it again after lowering the local cache size to 20MB (though that could just be coincidence). 'Drive tracing' seems to have turned itself off after the hard restart, but hopefully it caught enough to be helpful. I've uploaded it via the form on that log collection page.
     
    Quick summary of how I reproduced it:
    - New 1GB unencrypted drive on DropBox with 20MB cache
    - Copied ~700MB file across
    - Hard reset after the file finished copying (ie. disk activity in resource monitor had finished), but still uploading (was maybe 50MB into the upload)
    - File is now corrupt (has different hash) after drive recovers
     
    Just want to add that it's times like this (constant restarting) that you really appreciate having an SSD. Reboots so fast!
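A simple way to confirm the "different hash" corruption described above is to record a checksum of the source before copying and compare it after the drive recovers. A generic sketch (the file paths are placeholders, not from the post):

```python
import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    """Stream a file through SHA-256 so large files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: hash the source before copying, then the copy on the
# cloud drive after it recovers; a mismatch means the file was corrupted.
# before = sha256_of(r"D:\source\bigfile.bin")
# after  = sha256_of(r"P:\clouddrive\bigfile.bin")
# print("corrupt" if before != after else "intact")
```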
  23. Like
    Christopher (Drashna) reacted to propergol in Disable (or not) TLER on RED drives?   
    OK, thanks. I will go with TLER enabled.
  24. Like
    Christopher (Drashna) reacted to bman in Backup & Windows 10   
    I figured it out.
    I removed my license, installed/wiped new Windows 10, installed DrivePool and added my license and all is well.
  25. Like
    Christopher (Drashna) got a reaction from deleteme-4217489 in Plans for ReFS support?   
    Noted,  and I have talked to Alex about this recently (I keep on pushing it, because ... well ReFS).  
     
    If we do implement it, the entire pool will have to be ReFS (e.g. all the disks in it... for simplicity, as mixing file systems is a nightmare). And it would be Server 2012 R2 and up (due to the file system features we require).