Posts posted by dbailey75

  1. That really brings into perspective what my needs are, and I have actually been eyeing an ASRock Rack mobo http://www.asrockrack.com/general/productdetail.asp?Model=H87WS-DL for quite some time.  It's about $110 and supports the Intel G3220, which is perfect for my Plex/Subsonic needs as I'm only serving up a max of 2 streams at one time.  And it's very upgradeable if I require more power down the road.

     

    Thanks very much for taking the time to respond.  I feel a bit more empowered to make some decisions (finally).

    My two cents: you'll do fine with a desktop board. Server boards are nice, and will give you the benefit of better driver compatibility if you're running a server OS, but for the most part it comes down to budget. I just upgraded an Asus desktop board that I'd been running 24x7 for over two years; nothing wrong with the board, I just needed more SATA ports (it was an ITX). Got a new ATX Asus board (an older Z77 I found open-box real cheap) and case. I had to tweak the Intel NIC driver (yes :) it has an Intel NIC) to run on WHS 2011, but it was easy, since folks figured out the issue a couple of years ago when that NIC came out. It's just a marketing ploy from Intel so you don't buy a desktop board to run as a server.

     

    The proc, G3220: if there is even a remote possibility that you will need on-the-fly video transcoding (Plex streaming video to a mobile device = transcoding), then you really need to consider an i5. You can get by with a high-end i3, but the i5 will only be a few bucks more and gives you four real cores of processing.

  2. Hey Christopher, if a KB is released and causes some strange behavior with other programs, is there a way to report it to someone who will actually look into the problem?  KB2977629 would constantly fight TMT 5 to be the active window, meaning the task bar and any window not minimized would flash on the screen while watching a movie. Quite annoying, and I narrowed it down to KB2977629; uninstalling it fixed the issue.  Here's a link to a YouTube video: https://www.youtube.com/watch?v=0YCeG_YuWF0&feature=youtu.be

  3. So here is what I did; please let me know if this is OK or not.

     

    1. Created the pool (drive letter assigned was G:\)

    2. Moved all files from drive E into the hidden folder on drive E

    3. In Disk Management for Windows 7, changed the drive letter of E to F

    4. In Disk Management, changed the drive letter for G (the Drive Pool) to E:\

     

    Seems to have worked just fine this way.  

     

    I also noticed that it automatically balanced the drives (even though the documentation says that the default is no balancing).

     

    Is there any program out there that will create a list of all files (or folders) on a drive, so that if one drive fails, I know what was on it?

    I'm not going to use the duplication feature quite yet, as I don't think the 50% overhead is worth it right now.

    Create a hash digest of the hidden pool parts of each drive with ExactFile. Or you can create a digest of the share, and when you lose a drive, run the digest check and it will tell you which files are missing.
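The digest idea above can also be rolled by hand. Here's a minimal Python sketch of the same technique, not what ExactFile itself does: walk a folder tree, MD5 each file, and write one "hash  path" line per file.

```python
import hashlib
import os

def build_digest(root, out_path):
    """Walk `root` and write one 'md5  relative/path' line per file."""
    with open(out_path, "w", encoding="utf-8") as out:
        for dirpath, _dirs, files in os.walk(root):
            for name in sorted(files):
                full = os.path.join(dirpath, name)
                h = hashlib.md5()
                with open(full, "rb") as f:
                    # read in 1 MB chunks so big media files don't eat RAM
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                rel = os.path.relpath(full, root)
                out.write(f"{h.hexdigest()}  {rel}\n")
```

Point it at each drive's hidden pool folder to get a per-drive inventory you can keep with your backups (the exact folder name on your drives will differ; check the root of each pooled disk).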

  4. For ease of use, I would definitely recommend using a drive letter somewhere near Z for DrivePool (or work your way backwards if Z is already taken). If you plug in USB drives and add new drives often, Windows may hijack your drive letters.

     

    Also, UNC file paths are easier to manage. With DrivePool your data management is already much easier, but a UNC path is going to make your life easier if you have to change drive letters for any reason.

     

    UNC path; note that the drive letter is irrelevant to which drive the share is stored on.

     

    \\servername\share\file_path

     

    instead of
    E:\share\file_path

     

    Some applications that depend on the drive being a letter may not be compatible; in the end it's up to you and what works.

  5. I just got bit by this exact same thing.  A drive died, I didn't have all data set to at least 2x duplication, and I have no idea what I lost.  I know I didn't lose anything too critical, but the fact that I don't know what I lost is concerning.  It isn't the fault of DrivePool, but if DrivePool isn't going to at least be able to report what was lost, I guess I'll have to come up with something.  It would be nice if that were a DrivePool feature though.

     

    I am curious though, as to why Scanner didn't report anything ahead of time.

     

    Agreed, a file inventory with hashes, plus a way to compare the inventory at various points in time; I'd buy it.  I've brought this up a number of times. For now, your best bet is a file hash tool like ExactFile or FileVerifier++; both will create an exportable inventory of all your files.
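Comparing two inventories over time is a small job once the digests exist. A minimal Python sketch, assuming the "hash  path" line format from the digest example earlier in the thread (real tools' export formats vary, so treat this as an illustration):

```python
def load_digest(path):
    """Parse 'hash  relative/path' lines into a {path: hash} dict."""
    entries = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            digest, _, rel = line.partition("  ")
            entries[rel] = digest
    return entries

def diff_digests(old_path, new_path):
    """Return (missing, changed) file paths between two inventory runs."""
    old = load_digest(old_path)
    new = load_digest(new_path)
    missing = sorted(p for p in old if p not in new)
    changed = sorted(p for p in old if p in new and old[p] != new[p])
    return missing, changed
```

After a drive failure, diffing the last good inventory against a fresh one tells you exactly which files vanished and which no longer match their old hash.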

    IIRC, bad sector info is stored in the HDD's SMART stats, so any new data written to the drive would not be written to the bad sectors. This data is fed to Scanner as soon as the server boots.  Bad sectors are OK, but you need to keep an eye on the drive in case the count starts to increase.

     

    And it could be a coincidence of sorts with the bad cable on the drive with the bad sector.  I ran into an issue a while back with a new controller card that came with new cables, and had a lot of issues with drives dropping, etc. I replaced the SATA cables with some of the ones with the clips, and all was good.  Maybe it was the cable, or the connector itself was not making a good connection with the cable, IDK.

     

    I believe you can ignore the bad sector warnings, but again, make sure the counts are not increasing, which may be why you're still getting the warnings.
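One way to "keep an eye on the counts," as suggested above, is to log the reallocated-sector value at each boot and flag any increase. A rough Python sketch; the count itself would come from whatever SMART tool you use (Scanner, smartctl, CrystalDiskInfo), and the JSON log format here is made up for illustration:

```python
import json
import os
import time

def record_count(log_path, count):
    """Append the current reallocated-sector count to a JSON log.
    Returns True if the count grew since the last recorded value."""
    history = []
    if os.path.exists(log_path):
        with open(log_path) as f:
            history = json.load(f)
    grew = bool(history) and count > history[-1]["count"]
    history.append({"ts": time.time(), "count": count})
    with open(log_path, "w") as f:
        json.dump(history, f)
    return grew
```

A stable count means the drive has mapped out its bad spots and settled down; a growing count is the signal to start migrating data off.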

  7.  

    A quick look on eBay shows I can get a WHS 2011 disc/license for under $100, and I can do Windows 8 Pro easily.  $400 for the 2012 R2 Essentials would blow my budget.  I'll look into the other options.

     

    I had every intention of buying a copy or two for safe keeping prior to it going EOL around the holidays last year, but completely forgot.  Great OS, even better OS for the money; I paid in the $30 range for the two copies I'm running.  What a shame.

     

    I received a coupon in June from Newegg for 20% off all MS server OSes. It would have been a good deal to get WSE 2012 R2, but I didn't really have a need to upgrade.

  8. Thanks for all the help!

    I replaced the disk with another one (same model) and applied the WDidle fix to it (with the /d option).

    I'll monitor it and see if the problem comes back.

     

    Thanks for the great software that helped me notice.

    Without it, the drive would just have failed and I wouldn't have had a clue.

     

    glad the fix worked for you, and good to know that it works on this particular drive.

  9. How do I tell what is missing now?  I only have one folder duplicated (home photos) in my WHS DrivePool settings.  The disk is clicking when I try to hook it up via an external interface and does not show up in Windows... It is one of 11 drives, so I have no idea what was on it. Is there a log or a file I can get? Any ideas?  I remember when drives lasted more than 2 years or so...

    I hope you have a backup set; if so, you should be able to do a restore of only the missing files.

  10. Naw, I powered off, added the new drive, booted up, and the new drive was there. Initialized as GPT, quick format, rebooted, long format, and I left for work. Then I noticed the temp, restarted remotely, and it all looks good.  Yeah, I'm thinking this drive is going to run a little warm at 7200 RPM, but performance has its price ;)

  11. Hey, just installed a new HGST 7.2k NAS drive.  It finished the extended format (took about 9 hours), and Scanner has been running for maybe 30 minutes. The temp in Scanner has read 86 F all day, while the drives on either side of it are currently 102.2 and 105.8, and nothing is really going on with those drives. So I'm fairly certain my new HGST is heating those up, and there's no way the HGST drive itself is at 86 F.

     

    And it got me thinking about my SSD, which has perpetually read 86 degrees for the last several months; maybe the data on these two drives is not being polled correctly.

     

    Forgot I had PerfectDisk installed, it's showing 111F on the HGST 7.2k NAS drive.

     

    Edit: Well, just did a reboot, and all is well. Scratching my head on this one.

  12. Power on time is 131 days and 13 hours.

    The Load Cycle Count is 301757 now (previous value was 300881), so about 900 more than in my first post.

    Is it still safe to use then?

    And how do you know that the tool is compatible with your drive?

    This goes against Christopher's advice, but I have a "newer" Green drive that was not on the WD compatibility list, and it worked fine for me.  You can run the tool as described in the instructions and see if the drive is recognized; that's the only way to tell. As far as making the changes, proceed at your own risk, but the drive will slowly kill itself if you don't/can't change the timeout feature. Just make sure you back it up before you attempt to make any changes.

     

    If this is being used as a data drive (not the OS drive), there's a tool you can download that writes data to the drive every 8 seconds, or whatever you set it to, to prevent the heads from parking:

    http://keepalivehd.codeplex.com/. I've been using this tool for about a year for a second pool I have using two 1TB HGST 2.5" drives.
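The idea behind a keep-alive tool is simple enough to sketch in a few lines of Python: touch a tiny file on the data drive every few seconds so the drive's idle timer never fires. This is only an illustration of the technique, not how KeepAliveHD itself is implemented; the path and interval are yours to choose.

```python
import os
import time

def keep_alive(path, interval=8.0, iterations=None):
    """Write a tiny file every `interval` seconds so the drive never idles
    long enough for its firmware to park the heads.
    `iterations=None` runs forever; a number runs that many writes."""
    n = 0
    while iterations is None or n < iterations:
        with open(path, "w") as f:
            f.write(str(time.time()))  # tiny payload, content doesn't matter
            f.flush()
            os.fsync(f.fileno())       # force the write to actually hit the disk
        n += 1
        if iterations is None or n < iterations:
            time.sleep(interval)
```

The os.fsync call matters: without it the OS may cache the write and the drive still goes idle, defeating the purpose.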

  13. It's not a WD Green drive, it's a WD Black drive (2.5"). The serial number of the drive is WD5000BPKX-75HPJT0. Unfortunately it's not under warranty any more  :( 

     

    I will try to use that tool on my other drives which are Green drives.

    What's the best setting for the idle timer?

    5 minutes is the max, but I believe the value is entered in seconds.  I'm not sure if the tool works with your drive, but it never hurts to check; even if your drive is not listed in their compatibility chart, it may still work.  I was going to get a 2.5" Black drive for my OS drive, and decided against it for this very reason.

  14. Today I got the message that one of my disks has a SMART error.

    The Load Cycle Count is over 300000 (300881 at this moment).

    How serious is this exactly? Do I need to replace the disk immediately, or can I still use it for a while?

     

    Just to be sure I removed it from my drive pool (data was evacuated without a problem).

    WD Green drive, right?  You can ignore it, as it's just a count of how many times the heads have been loaded/parked, and it's a common issue with WD Greens. Make sure you change the idle timer with wdidle3: http://support.wdc.com/product/download.asp?groupid=609&sid=113.  But definitely keep an eye on the drive, and if it happens to be under warranty, I would suggest trying to get a replacement.
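For a rough sense of how fast a drive is burning through load cycles, you can work it out from the numbers earlier in the thread (131 days 13 hours powered on, 300881 cycles). A small Python sketch of the arithmetic, assuming the 300,000-cycle figure commonly quoted for these drives:

```python
def load_cycle_rate(cycles, power_on_hours):
    """Average head load/unload cycles per hour of power-on time."""
    return cycles / power_on_hours

def hours_to_limit(cycles, power_on_hours, rated=300_000):
    """Rough power-on hours left before the rated cycle limit, assuming the
    current rate continues. Negative means the limit is already passed."""
    rate = load_cycle_rate(cycles, power_on_hours)
    return (rated - cycles) / rate
```

With the thread's numbers, the drive averages roughly 95 load cycles per hour, which is why the count blows past the rating so quickly without the wdidle3 fix.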

  15. I just purchased a new board for my Windows Server 2012 install.   I plan on hosting a DrivePool array, and the machine will mainly be used for Plex.

     

    The machine has been running well but I'm wondering if I should install the native chipset / SATA drivers.  I've read a few threads that indicate that the drivers can have a large impact on the performance of the array but I don't really want to add any additional "failure" points. 

     

    So, are the default Microsoft AHCI drivers good enough or should I install the AMD driver package?

     

    I'd be inclined to use the manufacturer's drivers, especially when it comes to the chipset and controllers. They typically offer additional functionality over the standard MS drivers. I can't comment on better performance; performance is relative to your hardware anyway.  Just my 2 cents.

  16. MB, it is an off the shelf Dell XPS Studio 8100 for what it is worth.

     

    As for the disks being Advanced Format (4K sector size), I think I read somewhere not too long ago that external USB HDDs in particular do some kind of sector-size trick to maintain wider compatibility, which would probably explain why Windows didn't complain about the larger disk size despite being mounted as MBR. Anyway, not too sure about this topic. All I can say is, they are off-the-shelf WD My Book external USB 3.0 HDDs connected to my Windows 7 PC via USB 3.0 hubs... One of them is 3 TB and the other 2 are 4 TB.

     

    -Hiranmoy

     


    You're 100% correct: these drives (enclosures) have funky controllers in them.  If you pull a 3 or 4TB drive from the enclosure, you'll see that there are multiple partitions on the drive; while connected to the controller, you only see one.  I've initialized 4 drives in the last year (two 4TB Hitachi, one 4TB Seagate, and one 3TB WD). On all of them, after creating a single GPT partition, formatting, running Scanner, etc. while connected via USB 3 (stressing the drives before pulling them from the case), when connected via SATA I had anywhere from 3-5 separate partitions.

  17. Well, if it makes you feel any better.... while I haven't been bugging him daily about it, I have brought it up at any chance I get. :)

    Because I do think it would make a great product, and a great addition to our "line up".

    Agreed, this feature would make you stand out a bit; not quite ZFS, but close enough.

  18. They were on, though I've checked with them each enabled and disabled with the same result.

     

    Do the logs I uploaded show anything I can try?

    If all your drives are identical, then this is no help, but I have a hodgepodge of drives, some faster than others, and depending on where DrivePool places the data, this often determines the transfer speed. Slow drives are around 60-70 MB/s, faster drives are around 90 and up.

  19. :)

     

    I'm thinking of grabbing some, but Amazon now charges tax to Florida. I'm kind of interested in the HGST 4TB Deskstar Coolspin 3.5" SATA III internal desktop hard drive at $150, but this is a consumer drive, not NAS  :blink:. The difference would end up $21.20 per drive, which is making me think about pulling the trigger on the NAS or desktop....

    I'm running two of the HGST Coolspins right now that I jailbroke from external enclosures. I have 1 year and 150 days on one of them and 297 days on the other; both have been running 24x7. Your mileage may vary.  I'm not convinced the NAS drives are any different mechanically; they may have different firmware. On the other hand, the 4TB NAS drives have come down in price by roughly $50 since they came out last year, so they are more affordable.  Your call.

  20.  

     

    ... Neither was I, actually. I just have a passion. :)

     

    But as I said above, for files that are very important, or irreplaceable, you should definitely use that "3-2-1" strategy. Because what happens if you accidentally delete the files? Sure, there is data recovery, but that may not get it back 100% intact.

     

    And I do think that I really did get Alex to seriously considering creating some sort of integrity checker. Hopefully.

     

    No. Normally, when it checks the duplication, it checks that there are the expected/correct number of copies, and compares the file hash of both. I'm not sure if you've seen it, but the "file mismatch" error? That's what it's complaining about. The contents don't match. If you want, test this out with a simple text file. The notification may not be immediate, but it WILL happen.

     

    I just completed some sales training for Veeam Backup, and the 3-2-1 strategy was on the test, lol.

     

    I was not aware of this feature, this is good to know.

  21. Actually dbailey, if I understand you correctly, as long as I do not reformat ,my backup HDDs and start anew, I should be fine? A file that has not been modified for a long time would only be written to the backup disks _once_ at initial backup? I understand you _actively_ look for changed/corrupted files by comparing against backup but I could simply wait until a file becomes corrupted and then restore that file from the backup, no?

     

    In any case, a bit-by-bit compare of DP-duplicated files would suffice for me.

    I'm not a tech guy by trade, just on the weekends, and maybe I've read too many articles on bit rot, file corruption, etc., but how do you know when this occurs?   I put some checks in place to help flag these issues should they occur. Maybe I'm a little anal, but I'm the CIO and CTO of the family.  I lose those pictures, and I'm in big trouble, lol.

     

    I do believe that StableBit DrivePool does do a hash check of files when it runs a duplication pass.

     

    Also, there are a couple of reasons I mentioned accessing the file via the PoolPart path directly. If it's duplicated, it's possible that only one version is corrupted, and the service hasn't run a pass on it yet. This way, you can verify and resolve the issue yourself.

    Also, it could be an interaction with the pool causing the issue, and the actual files are fine. And by "interaction", I mean with something like Avast or other antivirus software. They install file system filters and could cause complications if they don't handle the Pool drive properly.

     

    So if the original file becomes corrupt, would DrivePool not just duplicate this new, bad file?

     

    Hi all,

    Not sure if it's DrivePool related, but I am having an issue with some stored pictures having the following effect, as per the attached.

    Looks like pixels have moved.

    Now, I'm not sure how DrivePool works, but can it be the cause?

    Running the latest RC and the latest Scanner beta.

    OP, I apologize for hijacking your thread, but at the very least I hope this information is beneficial.  

  22. It does not. With pictures, home movies etc, files may be untouched for years. I have DP x2, 3 Backup Disks of which 1 is offsite at all time. But Server Backup retains backups back to, say 4 months or so (or, it may be I started anew with Server Backups in which case it is my bad), anyway, there is a limit to how far one can go back. As there is no *active* check on file integrity, by the time I'd open an old file, it may be corrupt and I may only have backups of that corrupted file.

     

    Not sure if DP actually does a regular _compare_ of duplicates (so not just meta-data but the actual content) which would, for me, make this all a non-issue, it ends somewhere.

    I had Scanner run weekly until a week ago, then thought I was maybe being a bit, uhm, anal about it? The default was 30 days after all, so I settled for bi-weekly now. But would it sense an accidental/incidental flipping of a bit?

     

    Agreed. Most backup tools, especially the affordable ones for home use, will not touch a file that has not been "modified" since the last backup, which is why I went with SyncBack. As part of the backup profile for my irreplaceable data, it compares the hash of each file on the server to the hash of the file in the backup set; a new hash is generated at each backup from both sources. Again, this is another check to ensure the data does not change over time without me knowing.  It doubles the backup time even when there is no new data, but it gives me that warm and fuzzy feeling.
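The hash-compare step described above can be sketched in Python: re-hash every file in the source tree and its counterpart in the backup tree, and report anything missing or different. This is an illustration of the check, not SyncBack's actual implementation; it assumes the backup mirrors the source layout one-to-one.

```python
import hashlib
import os

def file_md5(path):
    """MD5 of a file's contents, read in 1 MB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_root, backup_root):
    """Re-hash every source file and its backup copy.
    Returns (missing, mismatched) lists of relative paths."""
    missing, mismatched = [], []
    for dirpath, _dirs, files in os.walk(source_root):
        for name in files:
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, source_root)
            dst = os.path.join(backup_root, rel)
            if not os.path.exists(dst):
                missing.append(rel)
            elif file_md5(src) != file_md5(dst):
                mismatched.append(rel)
    return missing, mismatched
```

As the post says, re-hashing both sides every run is slow, but it is the only way to catch a file that silently changed on either side since the last backup.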
