
Christopher (Drashna)

Administrators
  • Posts: 11572
  • Joined
  • Last visited
  • Days Won: 366

Reputation Activity

  1. Like
    Christopher (Drashna) reacted to logbit in dashboard on latest drivepool v2   
    Just migrated both my WHS2011s to 2.0.0.387 beta, and all is working well.
     
    Thanks for doing this, I feel much happier having my 4 drivepool installs all working on the same version of the software.
  2. Like
    Christopher (Drashna) reacted to AMCross in dashboard on latest drivepool v2   
    Just installed, working fine so far.
     
    Thanks Alex,
     
    and Christopher for twisting his arm :-)
  3. Like
    Christopher (Drashna) got a reaction from AMCross in dashboard on latest drivepool v2   
    Hey guys, in case you haven't seen it yet:
    http://blog.covecube.com/2013/08/stablebit-drivepool-2-0-0-387-dashboard-tab/
    Guess what Alex added for us!
  4. Like
    Christopher (Drashna) reacted to logbit in dashboard on latest drivepool v2   
    +1 For this
  5. Like
    Christopher (Drashna) reacted to saitoh183 in dashboard on latest drivepool v2   
    Only reason I haven't moved to v2... but if I migrate to 2012, well, that's another story.
  6. Like
    Christopher (Drashna) reacted to Alex in StableBit Scanner - Identifying Disks Uniquely   
    Keeping this board on topic, I'd like to talk about something pretty technical that is central to how the StableBit Scanner operates, and perhaps get some feedback on the topic.
     
    One of the benefits of running the StableBit Scanner is not just to predict drive failure, but to prevent it. The technical term for what the StableBit Scanner performs on your drives to prevent data loss is called Data Scrubbing (see: http://en.wikipedia.org/wiki/Data_scrubbing). By periodically scanning the entire surface of the drive you are actually causing the drive to inspect its own surface for defects and to recognize those defects before they turn into what is technically called a latent sector error (i.e. a sector that can't be read).
     
    In order to do the periodic surface scan of a disk, the StableBit Scanner needs to know when it scanned a disk last, which means that it needs to identify a disk uniquely and remember which sectors it has scanned last and when. The StableBit Scanner uses sector ranges to remember exactly which parts of which disk were scanned when, but that's a whole other discussion.
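    The bookkeeping described above — remembering which sector ranges were scanned and when, so the next pass can target never-scanned or stalest regions first — can be sketched roughly like this (an illustrative sketch only, not StableBit's actual code; all names are made up):

```python
from dataclasses import dataclass

@dataclass
class ScannedRange:
    start: int          # first sector in the range
    end: int            # last sector (inclusive)
    scanned_at: float   # timestamp of the last scan of this range

def next_range_to_scan(history, total_sectors):
    """Pick the next range to scrub: never-scanned gaps first,
    then the range that was scanned least recently."""
    covered = sorted(history, key=lambda r: r.start)
    cursor = 0
    # Look for a gap in coverage (never scanned) first.
    for r in covered:
        if r.start > cursor:
            return (cursor, r.start - 1)
        cursor = max(cursor, r.end + 1)
    if cursor < total_sectors:
        return (cursor, total_sectors - 1)
    # Everything has been scanned at least once: pick the stalest range.
    stalest = min(covered, key=lambda r: r.scanned_at)
    return (stalest.start, stalest.end)
```

    Note that this only works if the scan history can be matched to the right physical disk, which is exactly the identification problem the rest of the post addresses.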
     
    I would like to focus this post on the issue of identifying a disk uniquely, which is absolutely required for data scrubbing to function properly, and this was overhauled in the latest BETA (2.5).
     
    The original StableBit Scanner (1.0) used a very simple method to identify disks: it relied on the MBR signature to differentiate one disk from another.
     
    For those who don't know what an MBR signature is, I'll explain it briefly here. When you buy a new disk from the store, it's probably completely blank. In other words, the disk contains all zeros throughout. There is absolutely nothing written to it to differentiate it from any other disk (other than the serial number, which may not be accessible from Windows).
     
    When you first connect such a blank disk to a Windows machine it will ask you whether you want to "initialize" it. This "initialization" is actually the writing of the MBR (master boot record, see: https://en.wikipedia.org/wiki/Master_boot_record), or GPT (GUID Partition Table) if you so choose. The MBR and GPT define the header (and perhaps footer) of the disk, kind of like when you write a letter to someone and you have a standard header and footer that always follows the same format.
     
    One of the things that initializing a disk does is write a unique "signature" to it in the MBR or GPT. It's simply a long random number that identifies a disk uniquely. The problem with an MBR signature is that the random number is not large enough, so it is only meant to be unique on one single system. If you connect a disk from a different computer, the signature on the foreign disk has a small but real chance of being the same as a disk already on the system that it's being connected to.
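    For the curious, the MBR disk signature is a 4-byte little-endian value at offset 0x1B8 (440) of the disk's first sector, just before the partition table. A minimal sketch of extracting it from a raw sector buffer (illustrative only, not StableBit's code):

```python
import struct

def mbr_signature(sector0: bytes) -> int:
    """Extract the 4-byte disk signature from a 512-byte MBR sector."""
    if len(sector0) < 512:
        raise ValueError("need the full 512-byte first sector")
    if sector0[510:512] != b"\x55\xaa":
        raise ValueError("missing MBR boot signature (0x55AA)")
    # The disk signature lives at offset 0x1B8 (440), little-endian.
    return struct.unpack_from("<I", sector0, 0x1B8)[0]
```

    Because it's only 32 bits of randomness, two independently initialized disks can collide, which is the crux of the problem described above.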
     
    Well, for the StableBit Scanner 1.0 this would be a problem. It would recognize the new disk as being the old disk, which would cause all sorts of issues. For one, you can't have the same disk connected to the same computer twice. That's simply not possible and we would write out an error report and crash.
     
    StableBit Scanner 2.0 improved things a bit by utilizing the GPT signature, which was guaranteed to be unique across multiple systems. The only problem with using the GPT disk signature to identify disks uniquely is that disk cloning software is capable of placing the same signature on 2 different physical disks which would end up causing the same problem. In addition, many disks still utilize MBR, so we can't solely rely on GPT to resolve this issue.
     
    As you can see, this has not been an easy problem to solve.
     
    In the latest StableBit Scanner 2.5 BETA I've completely overhauled how we associate disk scan history (and other persistent settings) with each disk in the system. This is a major change from how things used to work before.
     
    In 2.5 we now have a brand new Disk ID system. The Disk ID system is heuristic based and picks the best disk scan history that it knows of based on the available information. We no longer rely on a single factor such as a MBR or GPT. Instead, we survey a combination of disk identifiers and pick the disk scan history that fits the available data best.
     
    Here is the list of factors that we use, starting from the highest priority:
      1. Direct I/O disk serial number
      2. GPT signature + WMI serial number + disk size
      3. GPT signature + WMI model + disk size
      4. GPT signature + disk size
      5. MBR signature + WMI serial number + disk size
      6. MBR signature + WMI model + disk size
      7. MBR signature + disk size
    See the change log for more info on what this change entails. I hope that you give the new build a try.
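    The priority-ordered matching described above could be sketched like this (purely illustrative; the field names and dict shapes are my assumptions, not StableBit's actual implementation):

```python
# Identifier combinations, tried in order from most to least reliable.
PRIORITY = [
    ("io_serial",),
    ("gpt_signature", "wmi_serial", "size"),
    ("gpt_signature", "wmi_model", "size"),
    ("gpt_signature", "size"),
    ("mbr_signature", "wmi_serial", "size"),
    ("mbr_signature", "wmi_model", "size"),
    ("mbr_signature", "size"),
]

def find_history(disk, histories):
    """disk: dict of observed identifiers; histories: list of dicts with
    the same identifier fields plus a 'history' payload.
    Returns the stored scan history that best fits the available data."""
    for fields in PRIORITY:
        if not all(disk.get(f) for f in fields):
            continue  # this disk lacks one of the identifiers
        for h in histories:
            if all(h.get(f) == disk[f] for f in fields):
                return h["history"]
    return None  # no match at any priority: treat as a brand-new disk
```

    The key property is that a high-confidence identifier (a serial number read via direct I/O) always wins over a cloneable one (an MBR signature), so cloned signatures can no longer hijack another disk's scan history.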
     
    This post has definitely been on the technical side, but that's what this forum is all about.
     
    Let me know if you have any comments or suggestions (or can find a better way to identify disks uniquely).
  7. Like
    Christopher (Drashna) reacted to Alex in Beta Updater   
    I've been debating how to handle automatic updates while we're in BETA.
     
    There are 2 issues, I think:
      1. Too many updates are annoying (e.g. Flash).
      2. Because this is a BETA, there is the potential for a build to have some issues, and pushing it out to everyone might not be such a good idea.
    So I've come up with a compromise. The BETA automatic updates will not push out every build, only builds that have accumulated some major changes since the last update. Also, I don't want to push the automatic update as soon as a build is out, because I want to give our users the chance to submit feedback in case there are problems with that build.
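    The compromise described above amounts to a simple gating rule, which could be sketched like this (hypothetical sketch only; the field names and the 7-day soak period are my assumptions, not Covecube's actual policy):

```python
from datetime import datetime, timedelta

def should_push(build, now, soak_period=timedelta(days=7)):
    """Offer a beta build for auto-update only if it has accumulated
    major changes since the last pushed update AND has been public
    long enough to collect user feedback.
    build: dict with 'major_changes' (int) and 'published' (datetime)."""
    soaked = now - build["published"] >= soak_period
    return build["major_changes"] > 0 and soaked
```

    Once in final release, the gate effectively disappears: every final build is pushed as soon as it's published.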
     
    Once we go into release final, then every final build will be pushed out at the same time that it's published.
  8. Like
    Christopher (Drashna) reacted to sspell in StableBit DrivePool Per-Folder Duplication   
    mrbiggles, you pretty much shared my thoughts exactly. I always say keep it simple, and that's what makes DrivePool so nice: it's simple and easy to work with, and the per-folder duplication has that in spades.
  9. Like
    Christopher (Drashna) got a reaction from 580guy in Pool Still useable while removing a drive?   
    When you remove the drive, it sets the pool as "read only", so you can't write to it. But you can still access the pool during this time.
    However, because it is read only, anything that relies on altering data will fail, such as client backups. Media streaming should work fine during this time, though. And because one or more drives may be written to during the removal process, the system may get laggy.
     
     
    If you want, though, you could use the "File Placement Limiter" balancer to move the data off the drive in question, and once that's done, remove that drive from the pool immediately. Since there should be no pooled data left on it after the balancing, it should take minutes (if not seconds) to remove it from the pool.
  10. Like
    Christopher (Drashna) reacted to Alex in Beta 320 BSOD   
    I've looked at the dump that was referencing this thread and this has already been fixed and the fix will be available in the next public build.
  11. Like
    Christopher (Drashna) got a reaction from normal in How To Replace Hard Drives?   
    When you remove a disk from the pool, part of the removal process is to move all the files off of that disk. You can skip "duplicated" files, which may be a good idea!
     
    Given that you don't have the space to remove the drives *and* maintain duplication... That would be the bigger issue here.
    If you have the money, opt for the "advanced replacement". It requires a credit card (or debit card), and they put a hold on the card equal to the amount they charge for the HDD. Then they send one out to you immediately and give you a window of time to return the bad drive. Once they get it back, they remove the hold from your card. Otherwise, if you take too long, the charge goes through.
    I'd recommend this method to you, because you can get the two new drives, add them to the pool also, and then remove the bad drives from the pool.  
     
    Otherwise, you'll want to use the "skip duplicate files" option when you remove the disks, and disable duplication temporarily.  Which isn't optimal at all.
  12. Like
    Christopher (Drashna) got a reaction from AMCross in dashboard on latest drivepool v2   
    ... With my dying breath, it will be in there one way or another.
    Alex has said that he is considering it. But to be blunt, he's been very busy with fixing bugs, and trying to get v2 to a "stable" build, so this isn't a priority right now.  In the future possibly, but not at the moment.
     
     
    But yes, I would absolutely love to see the dashboard integration back as well. And Alex knows how I feel about this.
  13. Like
    Christopher (Drashna) reacted to saitoh183 in My Non Rack Server   
    My server started out as my desktop, and then became a dedicated server when I got tired of having to manage data for the entire network from my desktop.
     
    Hardware:
    Case : Thermaltake Armor+
    Motherboard: Asus P5KLP-AM EPU
    CPU: Core 2 Duo E6300
    Ram: 2X2GB OCZ (OCZ2G8002G)
    PSU: Seasonic M12II 620W
    HDD internal Cage: Coolmaster 4-in-3
    HDD External bay: Mediasonic Probox
    Cards: Syba SD-SATA2-2E2I 4 Chnl SATA II Card , SY-PEX40008 4-port SATA RAID Controller , Mediasonic ProBox HP1-SS3 PCI-Express 2.0 x1 SATA III (6.0Gb/s) Controller Card
     
    Storage: 
    64GB Adata SP900 (OS only)
    250GB WD (OS Mirror...Raid1)
    400GB Seagate (Application/script/Downloads)
    1TB Hitachi (DP data)
    5x2TB Seagate (DP data)
    2TB Toshiba (DP data)
    2TB Hitachi (DP data)
    2TB WD Green (Parity drive for Flexraid)
    HDD External: 2TB WD MYBook Live (connect to network) Only stores Backups of machines on network
     
    Software on Server(always running only):
     
    OS: WHS2011
    Drivepool
    SB Scanner
    Goodsync
    VMplayer(XBMC second copy)
    MYSQL
    XBMC
    NZBDrone
    Sabnzbd
    Playon
    Flexraid
    Teamspeak server
    Teamviewer
    Various scripts via Task scheduler
     
    The server is on 24/7. I still have room left for expansion inside the case (3 more drives); then I will have to get a second Mediasonic Probox (probably an 8-bay).
  14. Like
    Christopher (Drashna) reacted to sl4ppy in My Non Rack Server   
    I might as well jump in here too.
     
    Hardware
    Case : CoolerMaster Elite 120
    Motherboard: ASUS P8Z77-I Deluxe 
    CPU: Core i3-2105 3.1GHz
    Ram: CORSAIR 8GB (2 x 4GB) DDR3 1600 (PC3 12800) 
    Optical: Blu-Ray drive.
    Cards: RocketRaid 644L 6Gb/s 4x eSata
    External Enclosure: 2x Sans Digital 8-Bay eSATA RAID 0/1/10/5/JBOD Tower Storage Enclosure
    OS: Windows Server 2012 Essentials
    UPS: CyberPower 1500AVR provides about 40 minutes of up time on battery power for both the server and the 2 arrays
     
    Total storage, using 9 of the 16 total bays, is ~24TB.
     
    This machine functions as the heart of my entire house network. It serves every TV's media PC in the house with DVD/BluRay content (over 700 ISOs, etc.) as well as all our various other network storage needs. Really, its main function is MyMovies and auto-ripping and storing DVDs & BluRays. I can walk up, throw a disc in the drive, and come back later and remove it from the ejected tray. The entire rip — cataloging, adding to the database, etc. — is 100% hands off. I can optionally have it auto-convert the ISO to an mp4 (or other format) automagically (thus the i3 CPU). I simply love it.
     
    The entire house is wired with shielded Cat6. The server sits on a shelf next to the switch in a closet of my office, with no monitor or keyboard attached, completely managed remotely, and it works like a charm.
  15. Like
    Christopher (Drashna) got a reaction from Shane in Scanner Scheduling?   
    Or if you want to keep very organized:
     
       <setting name="Scanner_RunningFile" serializeAs="String">
         <value>C:\ProgramData\StableBit Scanner\RunningScan.txt</value>
       </setting>
  16. Like
    Christopher (Drashna) reacted to Mr_Smartepants in Migrating back to WHS2011, Drivepool v2 or v1.3?   
    Alex fixed it!
    Turns out the virtual (pool) drive was starting in offline mode after restart.  Bringing the drive "online" (in Disk Management) allowed SBDP to detect the pool and re-engage all the pooled drives.
    Woohoo!  Thanks Alex.
  17. Like
    Christopher (Drashna) got a reaction from Shane in Can I set up duplication for specific child folders under a parent share folder?   
    nwtim, 
     
    DrivePool 2.x currently installs on WHS2011 just fine, but it lacks any real integration. It just adds a link to the UI in the Server Folders tab.
    2.x is still in beta, but we are trying to get to a "stable" version as soon as we can.
  18. Like
    Christopher (Drashna) reacted to Shane in Install fails on Windows 8   
    Protip: installing unofficial betas without the developer giving you the go-ahead is a quick way to end up in the scary end of "big red button" territory. 
     
     
    Try this link for DrivePool removal instructions: http://wiki.covecube.com/StableBit_DrivePool_Q8964978
     
    Note that it is for the 1.x version, but it can be applied to the 2.x version as well (just ignore the Remote Desktop and Dashboard references).
  19. Like
    Christopher (Drashna) reacted to sspell in My Non Rack Server   
    Started this thread; seems the other one was for rack mounts.
     
     
    This server also does double duty as a windows media center pvr.
     
     
    LIAN LI Model PC-Q25B Mini Server Case
    ASRock A75M-ITX Motherboard : Upgraded to ASRock FM2A85X-ITX Motherboard (7 SATA ports, wow!!)
    AMD A4-3400 Llano APU : Upgrade to AMD A10-6700 Richland APU
    Intel Pro 1000 PT dual Network N.I.C.
    Corsair XMS3 1333 8gb Memory : Upgrade to Patriot Signature DDR3 1600 8gb Memory
    Agility 4 256gb ssd O.S. & Feeder Drive : Upgraded to Samsung 840 Pro 128gb SSD
    Crucial C300 128gb ssd Feeder Drive
    Seagate Barracuda ST2000DL003 2tb Archive Drive
    SAMSUNG Spinpoint F3 1tb Archive Drive
    Windows 8 Pro & Media Center O.S. : Upgraded to WHS 2011 O.S.
     
     
    I hope to upgrade the two archive hard drives to two 3TB WD Reds soon.
    This has run rock solid so far.
    Note: After a few upgrades, this is running great, with one exception: I have yet to get the USB 3.0 controller working; it seems to be a driver issue.
  20. Like
    Christopher (Drashna) reacted to Shane in Unable to Remove disk from Pool   
    After you unplug the disk, you should tell DrivePool to remove the missing disk from the pool. It will check for and reduplicate any files that are supposed to be duplicated.
  21. Like
    Christopher (Drashna) got a reaction from joe_m in Scanner Scheduling?   
    Again, thanks for the suggestion, I'll pass that on to Alex as well.
  22. Like
    Christopher (Drashna) reacted to Blake in Testing Out DrivePool   
    Thanks for the responses Shane and Dane. I'm slowly taking the 'safer' method of Cut/Pasting everything over. So far, my experiences have been good, and if they still are at the end of the month come payday, you can expect another customer.
  23. Like
    Christopher (Drashna) got a reaction from AMCross in Reinstall Issue after replacing system C disc whs 2011   
    .... Ah, technology.... a fickle mistress...
     
    Well, glad it suddenly decided it wanted to work... but it still looks like you're having other issues. Maybe you should consider doing a fresh install of WHS2011. It sounds like the system has issues (or maybe run "sfc /scannow").
     
    But what's happening specifically with the connector software? And have you tried disabling IPv6 yet?
  24. Like
    Christopher (Drashna) got a reaction from AMCross in Reinstall Issue after replacing system C disc whs 2011   
    Check DrivePool, to make sure it's not reporting any missing disks.
    After that, use the WSS Troubleshooter to "Reset NTFS Permissions on the Pool", and then the "Rebuild DrivePool Shares". But make sure you run the tool as Admin, to make sure it has full privileges.
  25. Like
    Christopher (Drashna) reacted to Alex in Symbolic Link Support   
    Thanks for testing it out.
     
    My initial implementation in build 281 above was based on the premise that we can reuse the reparse functionality that's already present in NTFS.
     
    I've been reading up some more on exactly how this is supposed to work and playing around with some different approaches, it looks like the entire concept of reusing NTFS for this is not going to work.
     
    So don't use build 281.
     
    I'm going to take the current reparse implementation out and rewrite it from scratch using a different approach.
     
    Using this new approach, reparse points (or symbolic links) will appear as regular files or directories on the underlying NTFS volume, but will work like reparse points on the pool. This will also eliminate the burden of accounting for reparse points when duplicating or rebalancing, since they will be regular files on the underlying NTFS volume.
     
    Look for that in a future build. I don't think that it will make it into the next build because that one is concentrating on updating the locking and caching model, which is a big change as it is.