

Posts posted by SantiagoDraco

  1. I don't think you need (or would benefit from) 10G for DrivePool-based drives. Unless you are duplicating more than 2x, the only boost you might see is in read performance from DrivePool's split reads.

     

    I have a Synology 3615xs NAS with 2 10G ports, directly wired to another 10G adapter in my Windows Server 2012 Essentials box.   The Synology runs 10 drives in RAID 6 with 2 SSD cache drives, and I host all of my high-data-rate files (Blu-ray rips, etc.) there.   All of my lower-overhead data resides on a DrivePool.

     

    So, long story short: unless you are running SSDs under DrivePool, I don't see you saturating 1G with spindle drives on a DrivePool "pool".   The cost of a switch is certainly excessive and you will not see the benefit of that expense; the cheapest 10G switch will run you about $750.  Better to run direct with adapters and run another 1G back out to the LAN if you are focused on file transfers between servers.

  2. I'm pretty much completely lost as well.  Where are "drive options"?  I don't see this anywhere.

     

    Here are my issues so far:

     

    1.  Can't figure out how to "reset" a connected profile to point it to another account.  For example, I authorized OneDrive through my Hotmail account but then decided to create my own business Office 365 account, and I cannot for the life of me figure out how to relink.   All I can do is connect over and over again.

    2.  Can't figure out how to view the details of a linked account.  I want to see the account name that was used and other pertinent details (and change them as needed, similar to 1 above).

    3.  I expected to be able to right-click on each provider to bring up available actions (such as those requested above).

  3. First off... great start!   I've been waiting for this for some time since posting about it a while back, and I'm so happy to see it.  Now to get more drives to make room for this to work ;)

     

    So to my questions and suggestions.

     

    In the Balancing > File Placement > Folders section I see the list of folders and then the list of available drives on the right for creation of that relationship...

    1.  Would it be possible to add the actual drive model/size to the list of drives?  

    2.  If I remember correctly you said you do some level of performance measuring in DrivePool (not sure to what extent).   Does this info lend itself to applying a "good/better/best" color flag to the drives in this list, to gauge relative real-world performance?

    3.  If not (or maybe as an enhancement to 2 above), have you thought of adding the ability to "test" all drives, on a schedule or manually, producing a performance index that could feed my suggestion 2 or be used some other way?

    4.  If you are selecting folders/drives in the Folders tab and you hit "Save", the entire dialog closes.   Please add an "Apply" button to create the actual rule, rather than having every click create a rule.   The problem here is that you can easily end up with rules that don't automatically remove themselves if you "uncheck" what you changed.

    5.  When navigating down the folder tree, child folders should always display the drive selection of their parent folder unless explicitly changed.

    6.  Along the lines of the apply/save comment, maybe it could be "Apply to rules" in the Folders tab for the current selection(s), and then a "Save".   Of course I know buttons should usually have a single word, but I'm just stating it for example's sake.

     

     

     

    Have you thought of creating more traditional rules that can be grown/expanded as needed? For example, I might have a rule called "HD movies", and within that rule are all the folders and drives that apply to it.   That way I can have one rule for a given set of folders and assigned drives.  If I delete that rule, I remove all those custom relationships.  If I ever want to assign a new folder to the rule, I can just add the folder to the rule without having to create an entire new set of folder/drive assignments per folder.   Assuming this makes sense, in your Rules pane you'd have something like the structure below (and see the rough sketch after it):

     

    RuleName1

    +Folders

    -- Folder 1

    -- Folder 2

    -- Folder 3

    +Drives

    -- Drive 1

    -- Drive 2

    -- Drive 3

     

    RuleName2

    +Folders

    -- Folder 5

    -- Folder 6

    +Drives

    -- Drive 2

    -- Drive 4

    -- Drive 5
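
     

    To make that concrete, here's a rough sketch of how such a rule might be modeled (purely illustrative, with made-up folder paths -- not DrivePool's actual internals):

    # Sketch: a placement rule as a named set of folders and allowed drives.
    from dataclasses import dataclass, field

    @dataclass
    class PlacementRule:
        name: str
        folders: set[str] = field(default_factory=set)
        drives: set[str] = field(default_factory=set)

        def add_folder(self, folder: str) -> None:
            # A new folder inherits the rule's drive assignments, so there is
            # no per-folder setup; deleting the rule drops every relationship.
            self.folders.add(folder)

    rule = PlacementRule(
        name="HD movies",
        folders={r"P:\Movies\BluRay", r"P:\Movies\Remux"},   # hypothetical paths
        drives={"Drive 1", "Drive 2", "Drive 3"},
    )
    rule.add_folder(r"P:\Movies\4K")   # one call, no new folder/drive mapping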

     

     

    Questions/comments aside, thanks for such a great product.  I don't know what I'd do without it... maybe use RAID again /shudder.

     

  4. Here are a few tips on using 4TB drives, based on what I've had to go through.

     

    1.  If Windows doesn't see the "new" (uninitialized) drive as 4TB (3.6TB usable, or whatever), then most likely the SATA controller you are using does not support it properly; this is usually a driver or controller issue.   As Drashna said, beware of cheap external dock enclosures, as many don't properly support 4TB drives.

    2.  If you run into this issue consistently then you should think about getting a new controller, if feasible.  

    3.  Assuming you DO see the drive as a 4TB drive, you MUST initialize it as GPT, not MBR.   GPT is capable of supporting drives in the petabyte range, I understand ;)

    4.  Let's assume you put the drive in, it was seen as 2TB, you initialized and formatted it, and you are stuck with just 2TB.  No problem.   Assuming you have no data on the drive(!), take the drive out and put it into another system that does see the drive as 4TB.   Once in the new system I would do the following:

     

    a.  In the Windows Disk Management tool, find the drive in the list and verify it sees the drive.   Be sure you have the right drive located!

    b.  Right-click the block in the left column next to the drive, in the lower middle area (you'll see Disk 0, Disk 1, etc.), and in the resulting popup look at the options available for Convert.      If it's GPT you will see "Convert to MBR" as an option.   Go ahead and convert to MBR.  You will now see two partitions on the drive.

    c.  Repeat b above, except this time convert back to GPT.    This process (as far as I understand it) effectively rebuilds the GUID partition table (GPT) correctly for the 4TB drive.  (For a scripted take on the same wipe-and-convert idea, see the sketch after this list.)

     

    5.  Now go ahead and format the drive as NTFS.

    6.  Verify the drive is working and you can copy data to and from the drive without issues.

    7.  Move the drive back to your server and try again.
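
     

    If you'd rather script the wipe-and-convert steps, here's a minimal sketch that drives diskpart's documented /s script mode from Python.  To be clear about my assumptions: it must run from an elevated prompt, it assumes the target is Disk 1 (verify with "list disk" first!), and "clean" erases the partition table, so only use it on a drive holding no data you care about.

    # Sketch: re-initialize an empty drive as GPT and format it, via diskpart /s.
    # WARNING: "clean" wipes all partitions -- double-check the disk number!
    import os
    import subprocess
    import tempfile

    DISK_NUMBER = 1  # assumption: confirm with diskpart's "list disk" first

    commands = [
        f"select disk {DISK_NUMBER}",
        "clean",                      # erases the (empty!) partition table
        "convert gpt",
        "create partition primary",
        "format fs=ntfs quick",
        "assign",
    ]
    script = "\n".join(commands) + "\n"

    # diskpart reads batched commands from a file passed with /s.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(script)
        script_path = f.name
    try:
        subprocess.run(["diskpart", "/s", script_path], check=True)
    finally:
        os.unlink(script_path)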

     

     

    I followed this process on my server since my RAID controller will not present the drive to Windows as a legacy drive unless I initialize it elsewhere first.   However it works flawlessly once prepared.

     

    Also keep in mind that you COULD get an external adapter that you KNOW works and do the same process on the server itself.  I used this adapter:  http://www.amazon.com/gp/product/B000UO6C5S/ref=wms_ohs_product?ie=UTF8&psc=1

     

    Lastly, verify that your controller will support a 4TB drive at all.   The process above is not necessarily a guarantee, as I don't have the knowledge to state that it will work for all systems.   It just worked well for me.

     

    Good luck!   And be sure to double- and triple-check that things are working before committing, as Shane said.

  5. After more investigation, it appears that removing the drive letters may have just been coincidental timing, and that another change (activating DirectIO for Scanner to get S.M.A.R.T. to work) may have caused the issues.   Testing now to see.

     

    Update:  After two hours of testing, it was definitely turning on DirectIO in the Scanner config that caused the errors.   Now to try turning off drive letters again, to prepare for adding more drives to the pool ;)

  6. I just tried removing the drive letters for all of my pooled drives and ended up with fairly chronic NTFS errors: numerous event log entries stating that the NTFS file structure was corrupt, or that errors were found and repaired.

     

    I should add that this is on Windows Server 2012 Essentials.

     

    This is an example of the errors I am now seeing.   They are appearing even after restoring drive letters to all of the drives...

     

    Log Name:      System
    Source:        Ntfs
    Date:          10/23/2013 10:07:19 PM
    Event ID:      55
    Task Category: None
    Level:         Error
    Keywords:      
    User:          SYSTEM
    Computer:      Skeeter-HS1.SKEETERSSPOT.local
    Description:
    A corruption was discovered in the file system structure on volume U:.

    The exact nature of the corruption is unknown.  The file system structures need to be scanned online.

    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Ntfs" Guid="{DD70BC80-EF44-421B-8AC3-CD31DA613A4E}" />
        <EventID>55</EventID>
        <Version>0</Version>
        <Level>2</Level>
        <Task>0</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2013-10-24T03:07:19.551471500Z" />
        <EventRecordID>4721</EventRecordID>
        <Correlation />
        <Execution ProcessID="4" ThreadID="628" />
        <Channel>System</Channel>
        <Computer>Skeeter-HS1.SKEETERSSPOT.local</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="DriveName">U:</Data>
        <Data Name="DeviceName">\Device\HarddiskVolume13</Data>
        <Data Name="CorruptionState">0x1c</Data>
        <Data Name="HeaderFlags">0x802</Data>
        <Data Name="Severity">Critical</Data>
        <Data Name="Origin">File System Driver</Data>
        <Data Name="Verb">Force Proactive Scan</Data>
        <Data Name="Description">The exact nature of the corruption is unknown.  The file system structures need to be scanned online.
    </Data>
        <Data Name="Signature">0xe2b3f0fb</Data>
        <Data Name="Outcome">Pseudo Verb</Data>
        <Data Name="SampleLength">0</Data>
        <Data Name="SampleData">
        </Data>
        <Data Name="SourceFile">0x42</Data>
        <Data Name="SourceLine">1436</Data>
        <Data Name="SourceTag">345</Data>
        <Data Name="CallStack">Ntfs+0x178e59, Ntfs+0xb9ce1, Ntfs+0x178d6b, ntoskrnl+0x3a65d, ntoskrnl+0xe3c80, ntoskrnl+0x1542c6</Data>
      </EventData>
    </Event>

  7. Just saw this thread; I should have read it sooner!    This sounds great.  I posted another thread about creating a performance index via a new performance test that might be added to Scanner, in order to determine real-world drive performance.   That could be used to determine which drives to place which data on.

     

    My suggestion was more of an automatic thing (i.e. flag a folder as needing a high/medium/low performance disk) using the same list we currently use for duplication... that said, after reading the thread, I can also see the benefit of manually associating folders with drives.   I like the Disney example.

     

    One thing I think might need addressing is making sure that users are able to properly identify disks more specifically, i.e. not just showing the drive letter/volume name but also the drive model, if possible.

     

    I can't figure out why no applications really let you see the physical drive's detailed information associated with the Windows drive letter, controller channel, etc.   It drives me batty sometimes trying to identify a disk :)
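
     

    For what it's worth, the letter-to-model mapping can be pulled from WMI.  Here's a rough sketch (an assumption-laden example, not anything from DrivePool or Scanner) using the third-party Python "wmi" package on Windows:

    # Sketch: map Windows drive letters to physical drive models via WMI.
    # Requires the third-party "wmi" package (pip install wmi); Windows only.
    import wmi

    c = wmi.WMI()
    for disk in c.Win32_DiskDrive():
        # Walk the disk -> partition -> logical-drive association chain.
        for partition in disk.associators("Win32_DiskDriveToDiskPartition"):
            for logical in partition.associators("Win32_LogicalDiskToPartition"):
                print(f"{logical.DeviceID} -> {disk.Model}")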

     

    Anyway, this is a great thread and I'm looking forward to some of the proposed enhancements.   I think DrivePool is fantastic and I hope to see it grow and really get noticed by the market.  These kinds of features will do that, I think.

  8. I tried changing the UnsafeDirectIO value to true in the config, and after doing so I received this error when attempting to start the service:

     

    "Error 14001:  The application has failed to start because its side-by-side configuration is incorrect.  Please see the event log...."

     

    This occurs if I make the config file active by renaming it.  If I rename it back to _default then the error does not occur.

     

    Thanks again!

  9. As the title says, I'm trying to get Scanner to read SMART data from drives connected to Sans Digital 8-bay enclosures (TowerRAID TR8M-B).   The controller card is a HighPoint RocketRAID 2522 (hardware RAID with port multiplier support).

     

    So far, no SMART data; all Scanner can do is see the drives and scan them for surface health.

     

    I have tested on both WHS 2011 and Windows Server 2012 R2 Essentials.

     

    Thanks.

  10. Related to another topic I started for DrivePool, I was thinking it might be helpful to add a feature to Scanner that would do the following (and also work in conjunction with DrivePool):

     

    1.  On a schedule (or manually), StableBit Scanner would perform a set of performance tests on a drive and set a Performance Index value.

    2.  The index value could be a single average of certain types of reads/writes, or maybe an index based on file size.   The idea here is to create an index value that can be used intelligently to optimize performance of drives in a pool.

    3.  In DrivePool a user can flag folders as needing "high/medium/low" performance (or high/medium/low data rate, whatever is most intuitive to the user).

    4.  DrivePool's balancer function could then have a "performance optimize" option that can be turned on/off; this optimization would reference the drive performance index values from StableBit Scanner to intelligently move folders around for the best performance.

     

    Hopefully that makes sense.  The reason for suggesting the performance index test is that there should be a consistent way of designating high/medium/low performance drives relative to each other (the index should be relative), rather than using external manual tools and then having to add UI management functions for users to associate folders with drives explicitly.   Automatic is always best ;)  A rough sketch of what I mean by a relative index is below.
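
     

    Something like this, where the weights and numbers are purely illustrative assumptions (this is not Scanner's actual measurement):

    # Sketch of a relative performance index -- illustrative only.
    from dataclasses import dataclass

    @dataclass
    class DriveBench:
        name: str
        seq_read_mbps: float    # measured sequential read throughput
        seq_write_mbps: float   # measured sequential write throughput

    def performance_index(drives: list[DriveBench]) -> dict[str, float]:
        """Score each drive 0-100 relative to the fastest drive in the set."""
        def raw(d: DriveBench) -> float:
            # The 60/40 read/write weighting is an arbitrary assumption.
            return 0.6 * d.seq_read_mbps + 0.4 * d.seq_write_mbps
        best = max(raw(d) for d in drives)
        return {d.name: round(100 * raw(d) / best, 1) for d in drives}

    # The index only compares drives within the same pool, so it stays relative.
    print(performance_index([
        DriveBench("Disk 1", 180.0, 170.0),   # e.g. a newer 7200rpm drive
        DriveBench("Disk 2", 110.0, 100.0),   # e.g. an older 5400rpm drive
    ]))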

     

    Thanks!

     

    I think this would also help sell more licenses of Scanner!    For me, Scanner has been of limited use since SMART doesn't work in my configuration.  A feature like this, a synergy between DrivePool and Scanner, would make me run out and buy a license right away ;D

  11. Hi p3x,

     

    One thing that's important for folks to consider when building a server is future use.   The system you've described above is a decent file server, but people should think about what will happen if they start adding services to their "home server" and pushing the envelope of their hardware, should they go the low-powered route.  For some, cost (initial and over time) is a key consideration; it's just important to know that going low-powered can limit expansion down the road.

     

    For example, I use Plex, Sick Beard, SABnzbd, and a couple of different DLNA-type servers, as well as serving 3 HTPCs in the house.   As those services work, particularly on a server with a lower-performance processor and a non-hardware-assisted SATA controller (RAID or otherwise), you'll see slowdowns that will affect delivery of video streams, for example.

     

    That said, if you aren't playing Blu-rays from your server while the server may be transcoding or moving a lot of data around like I am, you probably won't have an issue, but it's something to keep in mind.   I went the higher-cost route (I went i7, but an i5 would work as well), including an Intel i350-T2 gigabit NIC (teamed to 2Gbps) and a hardware RAID controller... even though I'm using DrivePool... to have some headroom for expansion, both hardware- and software-wise.

     

    A quick note:  for those that don't know, the TPM does not provide hardware encryption assistance; it only provides hardware-level protection of the BitLocker keys during startup.  But having a TPM is important if you want to be able to use BitLocker without having to keep a flash drive "dongle" plugged into your server to unlock BitLocker.

     

    Otherwise your system looks like a good deal, and as long as you are happy with it, that's all that matters.   Some of us spent a lot of time getting to our server "nirvana"; I can tell you it took me ages!

     

    Anyway, why don't you post your system in the systems thread as well?

     

    Cheers,

  12. Hopefully my title is clear and explains what I'm looking for.   Basically, I'd like to be able to assign affinity between folders and drives so that I can keep files with high data-rate requirements (i.e. Blu-ray rips) on faster drives.

     

    This may already be possible, I'm not sure, but if not please consider this a feature request.

     

    Thanks!

  13. Quote: "That hotfix *could* have corrupted the disk counters, which is what we read. That, or the memory leak may be related to that issue. But either way, that's probably at least RELATED to the issue. And that should definitely fix the issue, if that's the case. Just make sure to reboot after running the command lines."

     

    Actually, I applied the hotfix after the problem appeared, so it wasn't the cause.    But it's working now, so that's all that matters :)

  14. Here's my current server setup:

     

    Hardware

    Case: Rosewill Challenger Gaming Case (repurposed) with 5 internal and 5 external bays.   Only 3 used currently.

    Motherboard:  EVGA Sabertooth x58

    CPU:  Intel Core i7 950

    RAM:  6x 2GB Corsair 12800 DDR3 modules (12GB total)

    Network:  Intel i350 Dual Port PRO/1000 adapter teamed to 2Gbps

    Switch:  D-Link DGS-1216T 16 port Gigabit Managed Switch

    HDD Controller 1: RocketRAID 2522 controller with 8x eSATA connectors running to 3 (soon to be 4) Sans Digital 8-bay enclosures (see below)

    HDD Controller 2:  Chipset based.

    External Enclosures: 3x Sans Digital 8 bay eSata enclosures hosting (47TB total):

    • 1x Intel 160GB SSD (OS)
    • 1x Seagate 1TB drive (OS backup)
    • 2x 4TB Seagate
    • 6x 3TB Seagate and Hitachi drives
    • 10x 2TB Seagate and Hitachi drives (9 in a 16TB RAID 5 array, soon to be migrated to DrivePool)
    • 5x 1.5TB Seagate drives (being replaced over time by 4TB drives)

    Storage Config:

    • Array:  16TB RAID 5
    • DrivePool:  30.9TB (100% duplicated)
    • Misc drives:  1x 1.5TB (not in pool)

    UPS:  APC 1500

     

    Software and Services

    OS: Windows Home Server 2011 (may migrate to 2012 Essentials)

    Storage:  DrivePool 2.0

    Media Management:  My Movies 2011 and Plex

    Remote Management:  Splashtop, RDP and Windows Home Server remote access web

    Content Services:  Sick Beard and SABnzbd

    XBMC:  XBMC central user profile store and MySQL db.

  15. Quote: "I know, but it requires a USB key... a bit useless for a server, since it's always plugged in then, and so I could just turn encryption off... which would be the same, since a thief would have the USB key. Otherwise you won't be able to remote-start the server (if the USB key isn't attached), which is also bad, since this is a server, not a workstation..."

     

    Hmm, I haven't tried this myself... but why would you need to leave the key in?  Doesn't BitLocker support removal of the token after BitLocker authentication?    I would expect so (but I am assuming here).

     

    BitLocker is also much faster than TrueCrypt, so I'd think it would be more desirable.

     

    Lastly, just get a TPM-based motherboard /wink.

  16. In the WHS 2011 Dashboard, the DrivePool plugin is not showing any disk activity on the pool at any time.   This includes times of user-generated activity and functions like duplication and balancing.

     

    Also, and possibly related, I have performance issues if a scan starts while I'm trying to stream from the pool, say a Blu-ray rip, which should be well within the performance of the pool, I would think.

     

    I had some issues previously with a bad drive and the pool becoming corrupt, and I was forced to reinstall clean.   I'm going to try clearing again, but I'd like to understand (if this solves the issue) why clearing configuration data would fix performance issues.

     

    Thanks!
