
Jaga

Members
  • Posts: 413
  • Joined
  • Last visited
  • Days Won: 27

Reputation Activity

  1. Like
    Jaga reacted to Umfriend in 5 questions of a 30-day trial user   
    Simple. Assume you have 13 Pooled HDDs. Each contains a hidden PoolPart.* folder. You direct Windows Explorer to P:\ (which I assume is the drive letter you assigned to the Pool). DP will read the PoolPart.* folders on the 13 HDDs and merge the results. Then you select the folder Movies. DP will read the PoolPart.*\Movies folders and merge the results. Etcetera ad infinitum. There is no reason I can think of to have the merged results standing by for the entire folder structure. That would be slow. And even then, with large Pools, if they remeasure/rebalance, a complete list will be necessary for DP to check duplication consistency and construct a file movement strategy, so it would have to read the entire structure (not the actual files), and that is done rather quickly as well (and transparently to the user; you won't even notice it is working on it).
    This may sound slow but the 13 HDDs will be read simultaneously. There are many users here and I have yet to come across one that complains about DP being slow, whether read or write (except perhaps for the read striping not giving a benefit to some, such as me).
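    Umfriend's on-demand merge is easy to picture in code. A rough Python sketch of the idea (drive paths and PoolPart names here are made up; the real merge happens inside the CoveFS driver, not in user code):

```python
from pathlib import Path

def pool_listing(drives, subpath=""):
    """Merge the contents of each drive's hidden PoolPart.* folder,
    the way DrivePool presents one virtual folder on demand."""
    merged = set()
    for drive in drives:
        for poolpart in Path(drive).glob("PoolPart.*"):
            target = poolpart / subpath
            if target.is_dir():
                # Only names are collected; the actual files stay put
                merged.update(p.name for p in target.iterdir())
    return sorted(merged)
```

    Calling `pool_listing(["D:\\", "E:\\"], "Movies")` would list the merged Movies folder across both drives, which is all Explorer needs for the folder you are currently looking at.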
  2. Like
    Jaga reacted to fattipants2016 in Scheduled balancing >24 hours apart?   
    I only have experience with your second question. If you figure out Q1 you could probably just use it to trigger my solution to Q2.
     
    1.) Enable DrivePool's runningfile option
    2.) Use bigteddy's FileSystemWatcher script (available on TechNet) to monitor for the removal of the runningfile you've configured, and write an event
    3.) Use the event-log entry you set up to trigger SnapRAID via a scheduled task
     
    (in a nutshell, DP will create a dummy file while it's balancing and remove it when it's done. You can use the removal of this file to trigger SR)
     
    Some notes:
    1.) The FileSystemWatcher eats up some I/O, so I still recommend you schedule it and define a max runtime
    - If you let it run all the time, it will also trigger SnapRAID every time DP does any routine checks, not just balancing
    2.) I recommend configuring snapraid-helper (from CodePlex) rather than calling SnapRAID from the command line
    - it will check for a user-defined number of missing files prior to sync, and e-mail you so you can decide what to do
    - you can also have it email you with a list of added / removed / updated / restored files after every sync if you so desire.
     
    I'd never touched PowerShell prior to configuring the scenario above, and now I use it for all kinds of cool stuff. It's worth giving it a go.
    I made quite a few posts here while trying to get it working; they might be useful.
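    The same runningfile trick can be done with a plain poll instead of a FileSystemWatcher if event-log wiring feels heavy. A hedged Python sketch (the runningfile path and the SnapRAID invocation are placeholders you would adapt to your own setup):

```python
import time
from pathlib import Path

def wait_for_removal(runningfile, poll_seconds=5.0, timeout=None):
    """Block until DrivePool's runningfile disappears, i.e. balancing
    has finished. Returns False if the timeout expires first."""
    runningfile = Path(runningfile)
    deadline = None if timeout is None else time.monotonic() + timeout
    # Wait for the file to appear first, so we don't fire on an idle pool
    while not runningfile.exists():
        if deadline is not None and time.monotonic() > deadline:
            return False
        time.sleep(poll_seconds)
    # Now wait for DrivePool to delete it when balancing completes
    while runningfile.exists():
        if deadline is not None and time.monotonic() > deadline:
            return False
        time.sleep(poll_seconds)
    return True

# Hypothetical usage -- paths/commands are examples, not DP/SnapRAID defaults:
# if wait_for_removal(r"P:\drivepool.running"):
#     subprocess.run(["snapraid", "sync"], check=True)
```

    Note the same caveat as the watcher approach applies: run this on a schedule rather than permanently, or it will fire on routine checks too.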
  3. Like
    Jaga reacted to Christopher (Drashna) in DrivePool is not working - CoveFS not found?   
    What @Jaga said, actually. 
    And "CoveFS" is the driver for the pool, so it's a critical component. 
     
    And it may be worth doing this: http://wiki.covecube.com/StableBit_DrivePool_Q3017479
    And then reinstall the software. 
    If you continue to have issues, then open a ticket at https://stablebit.com/contact 
    and run the StableBit Troubleshooter: http://wiki.covecube.com/StableBit_Troubleshooter
  4. Like
    Jaga reacted to Sonicmojo in Switching to LSI 2308 SAS Controller from normal SATA ports with DrivePool   
    For the past few days I have been dealing with some frustrating SMBv1 issues with one of my key apps (Desktop Central), which is a critical part of the existing server (and the new one). I had to get some very odd issues straightened out first, as I am planning on moving this DS install from this old box to the new one (which is really the same box). 
    Now that I have solved the issue, I can get back to the hardware fun. But as noted, I am treading very slowly here to ensure I do not compromise my existing pool or render this box "unstartable" by switching the SATA modes etc. I need to bench this box first, remove the existing pool drives and then pop in a spare to get this SAS controller going. That will most likely happen on Saturday.
    I will report back with either success or failure!
    Sonic.
     
  5. Like
    Jaga got a reaction from Christopher (Drashna) in Suggestions for internal PCIe JBOD SATA/SAS controller?   
    I'm simply glad I get to keep using Scanner as my pool monitor.  The thought of no Smart data and no Scanner...  yikes.
  6. Like
    Jaga got a reaction from The_Saver in 150 GB database on system drive to be split with pool SSD's   
    I guess if you wanted to keep it simple, you could just dedicate one of the four SSDs to the database, leaving the other three for Pool use.  You won't get multi-disk-IO speeds, but you'll still get the raw IOPS speed of a single SSD.
    Normally where data stores are concerned I wouldn't even think of suggesting RAID, due to failure rates on rebuilds and so on, but you're not dealing with a RAID 5/6 or large-storage-drive scenario with the SSDs, so it would work well here if you wanted to use it.  Plus with RAID 0+1 or 1 you get the benefit of double read speeds and redundancy.  But I can respect it if you want to go the simpler route.  Just be prepared to keep multiple daily snapshots of the data as backup points.
     
    Wait... I thought you didn't want the DB files on the Pool at all.  
     
  7. Like
    Jaga got a reaction from The_Saver in 150 GB database on system drive to be split with pool SSD's   
    Yes, that's called Hierarchical Pooling, which DrivePool supports.  The problem there is that when you add the DB child pool to the master pool, all its files become visible to anyone with access to that master pool.  I was under the impression from your first post that you wanted to keep the DB files completely out of any pool at all.
     
    Perhaps re-defining your goals and giving a little architecture detail would help us to help you:  You have a DB with a lot of files ranging in size, which you want the 4 SSDs to support in a pool-style fashion (aka software RAID), but which you don't want people to see.  You also have a regular Pool of disks that holds your main non-DB data.  And you're using the 4 SSDs as a front-end cache - are they set up to use the DrivePool SSD Optimizer plugin?
    Are you against having completely separate pools - one for the DB, and one for your main data store?  If not you can accomplish what you need rather easily.  Partition each of the four SSDs into two logical volumes:  the first part holds 1/4 of the DB, the second is used to cache the main data pool.  You'd make a Pool for the DB by combining all the 1/4 volumes together.  You could utilize the second volume slices on all four as your SSD Optimizer front-end.
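    For reference, the per-SSD split could be scripted with diskpart, something like the sketch below run once per drive. Disk numbers, sizes and labels are purely illustrative placeholders, and `clean` wipes the disk, so double-check the selected disk before running anything like this:

```
rem Hypothetical diskpart script for ONE of the four SSDs
rem (disk number, sizes and labels are examples only)
select disk 1
clean
create partition primary size=40960
format fs=ntfs quick label="DB-Slice"
assign
create partition primary
format fs=ntfs quick label="Cache-Slice"
assign
```

    You'd then pool the four "DB-Slice" volumes together and hand the four "Cache-Slice" volumes to the SSD Optimizer.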
     
  8. Haha
    Jaga got a reaction from Christopher (Drashna) in Suggestions for internal PCIe JBOD SATA/SAS controller?   
    Found a way to generate SMART data through the LSI adapter.  Funny thing is - it was lurking on the Covecube forums the entire time.   
    Simply enable advanced settings in Scanner, stop/start the service, then turn on Unsafe DirectIO.  Problem solved.  And it only took 2-3 hours of research to bring me right back here.
    Where's that facepalm emote when you need it?
  9. Like
    Jaga got a reaction from The_Saver in 150 GB database on system drive to be split with pool SSD's   
    You're sorta painting yourself into a corner by wanting to use pool drives, but not the pool, AND split the database across all four of the drives.
    What I'd recommend is making a RAID 0 (RAID 0+1 if you can afford the 50% space drop) stripe out of your 4 SSDs, then deciding if you want a separate volume on them just for the DB, or if it can share space with the files you move on/off and simply live in its own directory.  I'd think sharing a single logical volume would be okay.
    You can have files/folders outside the hidden Pool directory that sit on the drive and behave normally, and which aren't seen by Pool users.  But you can't break up that DB onto separate drives without some type of underlying span mechanism, which in this case would be RAID.
    You could then mount that RAID drive to both a letter, and a folder under the "C:\Users\Admin\AppData\Local" path.  Drivepool could use the letter, and you'd still be compliant using the system path for the DB.
    No matter what happens, you'll want good timely backups running, since you'll be exposed to either a 4x failure rate (RAID 0) or 2x failure rate (RAID 0+1).
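    The "4x failure rate" for RAID 0 is the small-p approximation of 1 - (1 - p)^4, where p is the per-drive failure probability over some period and drives are assumed to fail independently. A quick sketch of the arithmetic:

```python
def raid0_failure_prob(p_drive, n_drives):
    """Probability a RAID 0 stripe loses data in a period where each
    drive independently fails with probability p_drive (any single
    drive failure kills the whole stripe)."""
    return 1 - (1 - p_drive) ** n_drives

# With an assumed 2% per-drive rate, a 4-SSD stripe:
# raid0_failure_prob(0.02, 4) -> ~0.0776, i.e. close to 4 * 0.02
```

    The 2% figure is only an assumption for illustration; the point is that for small p the stripe's risk scales roughly linearly with drive count, which is why backups matter more, not less, on fast SSD stripes.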
  10. Like
    Jaga reacted to MisterDuck in High Interface CRC Error Count   
    Thank you Christopher, P19 it is then.
    As above, I'm a bit surprised that the obviously buggy P20 is still available, especially given that these cards and their variants will be used in enterprise settings.
  11. Thanks
    Jaga reacted to Christopher (Drashna) in High Interface CRC Error Count   
    16i, 16e, "close enough".   
    The big part being the number of ports. 
    And checking the page: https://www.broadcom.com/products/storage/host-bus-adapters/sas-9202-16e#specifications
    Yeah, it's the same controller (LSI SAS 2008).
    So it may be a good idea to use the P19 firmware then.
  12. Like
    Jaga got a reaction from Christopher (Drashna) in Suggestions for internal PCIe JBOD SATA/SAS controller?   
    Quick update:
    I went with the LSI 16e that you suggested @nauip, so thanks for that.  Couldn't find any other options that were as robust or had as much capacity without taking a large leap in cost (more than double).  The card already arrived (OEM version), and was in immaculate condition - clearly never opened or handled.  It's shiny, new, properly flashed and humming along in the server now.  Having seen it myself, I'm less worried about manufacture date.
    Mini-SAS cables are on the way, and I've already picked up the 7x8TB drives (in USB enclosures) for shucking later.  The drives are being tested by Stablebit Scanner as we speak (of course!).  Go figure - the very first unit ended up having controller/sector issues, so that's being returned today.
    Small side note on the USB enclosures:  they have horrible heat dissipation characteristics.  Stablebit Scanner was able to push them to 50C within 20 minutes, at which point it suspended surface tests to throttle heat (as it should).  The drives sat forever and didn't cool off more than 2C, until I put them on top of case fans blowing up.  After that, I was able to resume the surface scans at full speed, with a temp range of ~30-38C thereafter.  I'm absolutely certain this is why they have lower warranty periods: the drives in off-the-shelf USB enclosures are going to run hotter, and even thermally throttle on large jobs (which I never even suspected).  The catch-22 is that they are 8TB drives, and typically would handle larger jobs...  so they are poorly engineered, and should come with internal fans.
    I think I'm going to try to setup the new Pool drives on the old server and transfer data across locally between the two pools, then build the WSE 2016 server on top of a bare-metal Hyper-V 2016 install before adding the new Pool back in.  The thought of transferring >10TB of data on a 1Gbit network makes my skin crawl.  
  13. Thanks
    Jaga reacted to Christopher (Drashna) in High Interface CRC Error Count   
    From what I've seen, no, it hasn't.  Had a ticket about this recently, actually. P19 or P15 seem to be the go to firmware versions for stability.
  14. Like
    Jaga got a reaction from fly in Drivepool SSD + Archive and SnapRAID?   
    Makes sense - it would make the SSD cache drive more like a traditional cache, instead of simply a hot landing zone pool member which is offloaded over time.   +1
  15. Like
    Jaga reacted to fly in Drivepool SSD + Archive and SnapRAID?   
    Pretty please? 
    Just to reiterate my use case: My media server has quite a few users.  As new media comes in, it's likely to have a lot of viewers and therefore a lot of I/O.  As files get older, fewer and fewer people access them.  So I'd love to have an SSD landing zone for all new files, which then get archived off to platters as space requires.  It would be fantastic if I could say, once the SSD drive(s) reaches 80% full, archive files down to 50% using FIFO.
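    fly's watermark idea is essentially FIFO eviction between two thresholds. A minimal sketch of just the selection logic, assuming files are described as (name, size, mtime) tuples (this is an illustration of the requested behavior, not a DrivePool API):

```python
def pick_fifo_victims(files, used, total, high=0.80, low=0.50):
    """Once the landing zone passes `high` usage, pick the oldest
    files (first in, first out) until usage would drop below `low`.
    files: iterable of (name, size_bytes, mtime) tuples."""
    if used / total < high:
        return []          # below the high watermark: archive nothing
    target = low * total
    victims = []
    for name, size, _mtime in sorted(files, key=lambda f: f[2]):
        if used <= target:
            break
        victims.append(name)
        used -= size       # pretend this file has been moved to platters
    return victims
```

    For example, with a 100 GB zone at 90 GB used and three 30 GB files, the two oldest would be selected, bringing usage to 30 GB, i.e. under the 50% floor.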
  16. Like
    Jaga reacted to Christopher (Drashna) in Drivepool SSD + Archive and SnapRAID?   
    Mostly just ask.  
     
  17. Like
    Jaga got a reaction from Christopher (Drashna) in Stablebit Thumbnails   
    Windows 7 shouldn't be natively creating the thumbs.db files though.  The last Windows OS to do that and rely on them was Windows XP.  A bunch of helpful information on thumbnails in Windows is here:  https://www.ghacks.net/2014/03/12/thumbnail-cache-files-windows/  Starting with Vista, they were moved to a central location for Windows management:  %userprofile%\AppData\Local\Microsoft\Windows\Explorer\thumbcache_xxx.db
    If they are being re-created on your system, PetaBytes, I'm not sure why.  I triple-checked my Windows 7 Ultimate x64 media server, and none of the movie/picture folders have the files in them (visible OR hidden).
    This procedure (and a reboot after) might help your issue:  https://www.sitepoint.com/switch-off-thumbs-db-in-windows/
    I still maintain a 3rd party utility like Icaros will help the most.  What it does (in a nutshell) is maintain its own cache of thumbnails, so that *if* Windows loses them for a folder, Icaros will supply them back to Windows instead of it having to re-generate them slowly.
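    For reference, the Group Policy the sitepoint walkthrough toggles has a registry equivalent; a .reg sketch (note this particular value only suppresses thumbs.db on network folders, and whether it applies depends on your Windows edition):

```
Windows Registry Editor Version 5.00

; "Turn off the caching of thumbnails in hidden thumbs.db files"
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\Explorer]
"DisableThumbsDBOnNetworkFolders"=dword:00000001
```

    A reboot or Explorer restart afterwards is still needed, as the procedure above notes.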
  18. Thanks
    Jaga reacted to Christopher (Drashna) in Suggestions for internal PCIe JBOD SATA/SAS controller?   
    SAS2 is EOL, actually.  That's why we're seeing a lot of SAS2 hardware for super cheap.  SAS3 is the current standard, and the SFF connectors are not compatible. 
  19. Thanks
    Jaga reacted to nauip in Suggestions for internal PCIe JBOD SATA/SAS controller?   
    The LSI LOGIC LSI00276 6GB 16PORT adapter is a pretty good deal with the caveat that you will have the cables looping out and then back in to the case. When I got it @ $35 I figured I could deal with my already ugly closet server looking a lil bit funnier. And it's held up quite well.
    Amazon no longer has them @ $35, but Server Supply has them at $60. That plus some cables and that's still under $100.
    Amazon's cables to match: CableCreation Mini SAS 36Pin (SFF-8088) Male to 4 SATA ~ $15 each.
    editing to add: I found another 8 port internal only @ $66 at the same place.
  20. Thanks
    Jaga reacted to Christopher (Drashna) in Suggestions for internal PCIe JBOD SATA/SAS controller?   
    Well, I'd still recommend those cards.   Between one of them and the motherboard, that should get you 10-12 ports that you can use. 
    And while I know that you don't want to dump a bunch of money into it... keep in mind that if you get a cheap consumer card, it may have issues, and you may end up buying something like this anyway. 
    As for SAS Expanders ... wow, they've dropped in price:
    https://www.amazon.com/dp/B0042NLUVE/
     
  21. Like
    Jaga reacted to DotJun in File Duplication Consistency   
    Ok I've changed my settings to match what you suggest. Thank you for your help.
  22. Sad
    Jaga reacted to Rec0n in WHS 2011 to Win 10 home server migration and drive pool   
    Much sadness..... for some reason my OS SSD will not boot now. It is not even recognised as a boot drive. Stoopid HP ProLiant MicroServer perhaps, but either way I have some diagnostics to do. It boots to my test mechanical HDD just fine. Strange...
    On a positive note, my DP files on my data HDDs are just fine. All I need is a platform that will boot and I will be back in business. I tell you, it is awesome knowing that my data is safe and accessible.
     
    edit: Tried everything I could think of and got to the stage where a Windows 10 repair install was next. Plugged in the bootable USB stick and the damned thing booted properly! I swear I am going to shift to Linux for this box!! It is driving me nuts. (Not really, because then I won't have DP....) So frustrating.
  23. Like
    Jaga got a reaction from Rec0n in WHS 2011 to Win 10 home server migration and drive pool   
    Yep, top level are no problem.  DP passes volume commands to the underlying drives, so operations are as "compliant" as possible.
    And..  that's rather humorous.   
  24. Like
    Jaga reacted to Christopher (Drashna) in How to limit disk space of the pool on one specific drive   
    I think that this has been asked for before. 
     
    But just in case:
    https://stablebit.com/Admin/IssueAnalysis/27889
    And extending the "Disk Usage Limiter" balancer would be an easy option, I think. 
     
    Also, are you experienced with C# programming?  If not, no worries.  If so, let me know, as there is source for building balancer plugins. 
  25. Like
    Jaga reacted to Christopher (Drashna) in [Bug?] Prior scan data and settings not preserved on update   
    Also, you may want to check out the newest beta.
    http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3204_BETA.exe
     