Posts posted by bissquitt

  1.  

    28 minutes ago, Thronic said:

    This is a horrible idea. First of all, there would always be SOME damage done, and there are lots of situations where you would write to a large number of files at once. Second, ultimately backups are the only real data protection. DrivePool offers redundancy, which is not a backup. Look into 3-2-1.

    That said, you can ransomware-protect your most important files by simply scheduling a separate local copy of them on the server to a non-shared folder. I'd look into the free Veeam agent, available for both GNU/Linux and Windows. It works well with local and network destinations, and supports incremental backups and individual file browsing and recovery. I personally also use rclone to the cloud for offsite copies.

    I am well aware of 3-2-1 and everything you said. I never intended for this to replace backups. I really don't understand why people get so bent out of shape about something that is helpful even if it is not foolproof. I don't hear people arguing against fire doors in buildings because they only slow a fire down and limit its damage. No, we must completely remove all oxygen from the building to prevent the fire in the first place, and keep a complete second copy of the building in another country.

    Sometimes you have a backup solution, but would rather be forced to download or sneakernet 10 TB of data than the full 100 TB server. Mitigation is important.

    Also, I pretty clearly stated it should be optional and able to be turned off when you need to write a lot of files. Ironically, that's pretty much the exact opposite of the argument you made above: "I'll put my valuables in this safe, but I refuse to lock it in case I want to put something else in the safe later."

    So in some cases you want 0% protection, and in others you want 100% protection, but anywhere in between is unacceptable? Also, is it really that hard to just not enable it?

  2. I feel like with some thought it could be implemented well. Possibly some excluded extensions (like those thumbnails), excluded folders, or an option to ignore new files (i.e., only apply to modifications). You could probably even piggyback on the Windows ACLs. The files I care about are not written very often, and while I might not be in the majority, I'm probably at least not in the minority in that the vast majority of my data is mostly archival. I would gladly accept the inconvenience of having to disable something before making significant changes. Just an idea for an optional feature, since lots of people get scared of ransomware. Not something that I'm actively pushing for.

  3. I know it's not a perfect defense, but I usually have all of my pooled drives unmapped, with just the pool given a letter. I know that when DrivePool loses a disk, it goes into read-only mode until the problem is resolved.

    Would it be feasible to do this if the pool hits a threshold of modified files in a short timeframe? I'm thinking ransomware mitigation mostly. It's not foolproof, but I doubt most ransomware can access a non-mapped disk on a remote server, provided the server stays clean.
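
    Just to illustrate, here is a minimal sketch of the kind of tripwire I mean, as a standalone Python script; this is not anything DrivePool actually exposes, the pool letter, window, and threshold are all made-up values, and stopping the file-sharing service is just one possible reaction. A real implementation would watch change journals instead of rescanning the whole pool.

        import os
        import time

        POOL_ROOT = "P:\\"      # hypothetical pool drive letter
        WINDOW_SECONDS = 60     # how far back to look for modifications
        THRESHOLD = 500         # hypothetical trip point: modified files per window

        def count_recent_modifications(root, since):
            """Walk the pool and count files modified inside the window."""
            count = 0
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    try:
                        if os.path.getmtime(os.path.join(dirpath, name)) >= since:
                            count += 1
                    except OSError:
                        continue  # file vanished mid-walk; ignore it
            return count

        while True:
            modified = count_recent_modifications(POOL_ROOT, time.time() - WINDOW_SECONDS)
            if modified > THRESHOLD:
                # React by cutting off network write access: stopping the
                # Server service drops all SMB shares until an admin steps in.
                os.system("net stop LanmanServer /y")
                break
            time.sleep(WINDOW_SECONDS)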

  4. Well, that's awesome. If you could specify "priority duplication drives" as well, that might solve the issue of having to use a pool within a pool. The most logical place would be under Balancers > Drive Usage Limiter. Currently the options are "Duplicated" and "Unduplicated"; you could add something like "Backup" as another option. When duplicating a file, it would prefer drives labeled "Backup" and guarantee that at most and at least one copy of the file is on a drive labeled "Backup" (so you don't end up in a situation where the copies are ONLY on the backup drives). Have it throw an alert if there's insufficient "Backup" drive space, and then fail over to a normal drive for duplication. You could also give drives labeled "Backup" a lower priority, like you did for USB drives, so that they aren't used as much. If I wanted to use the top row of drive bays as backup drives, I could label them "Backup" and, despite being normal SATA, they would have a lower priority.

     

    The other appropriate place would be either a checkbox in the pool/folder duplication settings, or another text box where you could specify how many copies of a file should exist on a "Backup" drive... or, to make it simpler with the least UI/backend change, when you add a drive, offer the option to designate it as a backup, and possibly a "Convert to backup" option next to "Remove" that would remove the drive gracefully and then re-add it as a backup drive. Then just have duplication work as follows: keep one copy of every file on a drive tagged "Backup" if possible; once every file has a duplicate on a backup, start putting the additional copies on the backups; keep at least X% of cumulative backup space free for new files; and during rebalancing, remove extra duplicates from the backups until that percentage is reached.

     

    Yet a third option would be the ability to link a pool as a mirror of another pool, such that real-time duplication writes one copy to each pool rather than to 2 drives in the same pool. Then you could set duplication within the individual pools to decide whether you want secondary duplicates on the main or backup pool... Essentially this would be the pool-of-pools you described above as far as the backend is concerned; it would just be automated in the UI rather than set up manually.

     

     

    </end rant> Sorry, I have a background in coding and proceeded to word-vomit solutions; a rough sketch of the first option's placement rule is below. Thank you for putting in the feature request and taking care of it.
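
    To make the first option concrete, here is a rough Python sketch of the "at least one, and at most one, copy on a backup drive" placement rule. The Drive record and pick_targets function are hypothetical illustrations, not DrivePool's actual balancer API.

        from dataclasses import dataclass

        @dataclass
        class Drive:
            name: str
            free_bytes: int
            is_backup: bool  # the proposed "Backup" label

        def pick_targets(file_size, drives, copies=2):
            """Choose drives for a file's copies: one copy on a backup-labeled
            drive when space allows, the rest on normal drives, so copies
            never end up ONLY on backup drives."""
            fits = lambda d: d.free_bytes >= file_size
            backups = sorted((d for d in drives if d.is_backup and fits(d)),
                             key=lambda d: d.free_bytes, reverse=True)
            normals = sorted((d for d in drives if not d.is_backup and fits(d)),
                             key=lambda d: d.free_bytes, reverse=True)

            targets = backups[:1]                       # "at least one" on a backup drive
            targets += normals[:copies - len(targets)]  # the rest on normal drives
            if len(targets) < copies:
                # Insufficient space on one side: alert and fail over, per the proposal.
                print("warning: failing over, backup placement rule not fully satisfied")
                targets += [d for d in backups[1:] if d not in targets]
            return targets[:copies]

        # Example: two internal bays plus one backup-labeled USB drive.
        drives = [Drive("bay1", 3_000_000_000_000, False),
                  Drive("bay2", 2_000_000_000_000, False),
                  Drive("usb1", 8_000_000_000_000, True)]
        print([d.name for d in pick_targets(50_000_000, drives)])  # ['usb1', 'bay1']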

  5. Wouldn't read striping do the exact opposite of what I want? It reads from both duplicated drives simultaneously. I guess it would stop things from being slow, since it wouldn't ONLY read from the external, but one of the main points was to not put wear on the external backup unless needed.

  6. Did not know you could do a pool inside a pool. Thanks.

     

    Is there a way to set read priority? As in, if a file on the pool is accessed, read it from the internal pool rather than hammering the externals over USB? Ideally the only access to the external pool would be writing/editing data as it happens, plus the occasional rebalance or verify. I will be running Plex, so if Plex decides it's going to read a file from a USB drive, it's going to be REALLY slow.

  7. I have 29 drives: 24 in a case and 5 sitting on top, attached via USB. The 5 on top have enough capacity to hold all the data on the 24. I want to ensure the 5 connected via USB have a full copy of the data, so that in an emergency I can grab those 5 and run rather than having to hunt for which drives in the server hold which data.

  8. I bought too many 8 TB Easystores when they were on sale, and I've decided I want to use the ones that won't fit in my case as backup drives, just USB-attached and sitting on top of the case. What I want, however, is to make sure those 5 drives hold at least one copy of every file. If there's a fire or other emergency, I want to be able to grab the 5 externals sitting on top and be confident that I have a full copy of everything (assuming working hardware). I know that at one point there was a feature similar to this, but it didn't quite do it. Additionally, I would ideally like all file access NOT to come from these USB drives, both for speed and to reduce wear on them.

     

    My current setup (by accident, due to a migration) is my main pool plus a pool of five 8 TB drives that holds a complete copy of the data. I know it would be easy to set a scheduled task to back up to that pool, but I didn't know if there was a more elegant solution built in.
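
    In case it helps anyone else, the scheduled-task route can be as simple as wrapping robocopy in a small script that Task Scheduler runs nightly. A minimal sketch, assuming the main pool is P: and the backup pool is B: (both letters, and the log path, are placeholders):

        import subprocess
        import sys

        # Mirror the main pool onto the backup pool. /MIR copies changes and
        # deletes files that no longer exist in the source; /R and /W keep
        # robocopy from stalling for long on a flaky file.
        result = subprocess.run([
            "robocopy", "P:\\", "B:\\",
            "/MIR", "/R:2", "/W:5",
            "/LOG:C:\\logs\\pool-backup.log",
        ])

        # robocopy exit codes 0-7 mean success or partial success; 8+ are failures.
        sys.exit(0 if result.returncode < 8 else 1)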

  9. If you want a refund, that isn't a problem.

     

    However, the disks going "write protected" is HIGHLY unusual. At best, they should show as "offline" (the default for new disks in Server OSes), but not write-protected.

     

    But then again, you mentioned a Norco. If you're using Norco backplanes for hotswap... that may actually be the source of your issues.  Norco is known for using sub-par parts, which cause all sorts of bizarre behavior. 

     

     

    As for the redundant power supply, the Norco may be able to take one, depending on the specific configuration. 

     

     

     

     

    Once I stopped using my Norco case, a lot of the weird disk behavior I had went away. And checking out Reddit, I'm far from alone in this, unfortunately.

     

     

    But again, if you really do wish for a refund, head to https://stablebit.com/Contact

    What did you move to after the Norco? (Never mind, found it, but I can't delete this comment.)

  10. The PSU is 750 W; it could be going bad, but the problem isn't consistent with a bad PSU. (I'm currently at 20 drives, FYI.)

    The drives drop seemingly at random, weeks if not months apart from each other, with the exception of this 3-drive incident.

    A bad PSU would simply fail to spin a drive up, and the drive would go missing. Each of these still shows in POST and in Disk Management; it's just an uninitialized disk that you can't initialize. A bad PSU would not do that either.

     

    I finished testing all 3 failed drives. Two are showing bad sectors and the above errors. The third is completely fine in another system; it just won't show up on the server in any bay.

    Connected via a USB enclosure, it shows up fine. As soon as the system is done checking and balancing after the failure, I'm going to remove it gracefully, reformat it, and reinsert it.

     

    I'm not sure why it isn't recognized, but I'm hoping Drashna or someone else at Covecube can shed some light on that. (It's not a power problem, because 2 other drives were just removed and I can feel it spinning up.)

     

    I still feel like something is damaging the drives. One of them was a month old and had been stress-tested before being put into the pool.

     

    For reference, this is the server:

    http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=18563174

  11. Hi

     

    I use both Scanner and DrivePool, and have done so for many years; if there is ever a problem, both have always sent a corresponding email notification. Are you sure that it is working and the emails are getting sent? DrivePool sends a missing-drive email, and Scanner sends the SMART error emails. I know this doesn't help you, but the system does work. I would personally double-check everything, even the spam folder.

     

    Lee

     

    I have checked, and in the 3 years or so that I've had them, I have received exactly one text, and that was with my current configuration. I test the texts as well. I appreciate the response, though. I also manually check Scanner once a day, and it reports no errors (I stay RDPed into the server).

     

    It just seems HIGHLY suspect to me that 3 drives would all go bad on a reboot when they were functioning fine beforehand with no indication of errors. Add to that the fact that ALL of them are uninitialized and marked read-only, and that no diskpart command or anything else will let me access them.

     

    This has happened before with 2 or 3 other drives, always one at a time; I figured they were just bad and replaced them. Three at the exact same time, though, is unlikely, especially when I'm on a commercial UPS protecting against outside surges and the like.

     

    As far as I can tell, either DrivePool, Scanner, or Windows Server 2012 R2 is killing drives.

    FYI, this is the most relevant error message that I can find:

    "The IO operation at logical block address # for Disk # was retried"

  12. I'm going to be equally blunt: I want a refund.

    The product simply does not work. I can't trust your product to alert me when there is an issue; I have encountered many, and your product has not informed me of a single one.

    I just had 3 additional drives go missing at the same time. All are now write-protected and can't be initialized, and I can't remove the write protection in diskpart either.

    I now have to go through the unimaginably long process of figuring out which of my 40 TB of files are missing, because you don't even log which files are on which disk.

     

    Notifications: they are on, email and mobile; they have been tested; they work.

     

    DrivePool: great

    Scanner: absolute horseshit

  13. I am a systems engineer by trade. The disks are indeed missing from the system. My question wasn't so much "Why can't you see something that's not there?" as "Why didn't you see that it was going bad before it was gone?"

     

    In every case, when a test was run on a drive after it went "missing", it reported multiple SMART errors or bad sectors. It seems likely this is the cause of the failures, and something Scanner should have caught.

     

    I offered up the fact that ALL the drives were unable to initialize due to write protection as additional information for troubleshooting since, as an IT professional, I know any and all information is helpful; I'm not familiar enough with how your software works to know what's relevant.

  14. It scans every so often and always comes back healthy, but in that time I have had 4+ drives simply show up as missing in my pool. I always try a reboot to see if the drive will come back; it never does. I then connect the "missing" drive to my other computer and run a scan, and every time there has been some sort of error that made me RMA it. Each time, I can't initialize the disk; it gives me an error that it's write-protected.

     

    Even when a disk is missing in DrivePool, Scanner doesn't show any errors; the disk just isn't there. How is DrivePool more informative about drive problems than the software made specifically to detect them? Why is Scanner not picking up any of these errors, or even telling me that it noticed a drive went missing? I have to get a new drive rush-shipped and shut the server down completely until I can put it in, because Scanner simply isn't reliable, and if another drive crashes I'm toast. At this point it's pretty much just straining my disks and eating up RAM.

     

    It's also really annoying that I can't tell whether I've lost anything. Everything in my pool is duplicated, so I SHOULDN'T have, but I would hate to need a file 4 months from now and find it gone. If everything is duplicated, the software should be able to see that a file was duplicated and verify that a copy still exists; it already notices that files are no longer duplicated when it rebuilds. (A sketch of that check follows this post.)

     

    I'm really tired of losing disks without warning and then wondering whether all my files are OK, and the worry compounds every time it happens.
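
    Since DrivePool keeps the pool's contents in a hidden PoolPart.* folder at the root of each pooled disk, the check can at least be scripted by hand: walk every PoolPart folder, count how many disks hold each pool-relative path, and flag anything with fewer than two copies. A rough sketch, with placeholder drive letters:

        import collections
        import glob
        import os

        POOLED_DRIVES = ["D:\\", "E:\\", "F:\\"]  # placeholder pooled-disk letters
        copies = collections.Counter()

        for drive in POOLED_DRIVES:
            for poolpart in glob.glob(os.path.join(drive, "PoolPart.*")):
                for dirpath, _, files in os.walk(poolpart):
                    rel_dir = os.path.relpath(dirpath, poolpart)
                    for name in files:
                        # Count each pool-relative path once per disk holding it.
                        copies[os.path.normcase(os.path.join(rel_dir, name))] += 1

        missing_duplicates = sorted(p for p, n in copies.items() if n < 2)
        print(f"{len(missing_duplicates)} file(s) with fewer than 2 copies")
        for path in missing_duplicates[:20]:
            print(" ", path)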

  15. The server has since "settled". I was able to download via FTP and everything functions correctly but, as before, it still doesn't look like it's balancing right.

    Here are the results of the re-balance. As you can see, 5C still wasn't touched, and 5D was overfilled. I now have an almost-empty 4A, and most of the drives are still filled past the 90% "limit".

    [two screenshots attached]

  16. Here is my configuration, along with a custom-built rack. [photo attached]

     

    Server

    iStarUSA D-412S3-MATX 4U Rackmount microATX Server Chassis

    Intel Core i7-3770K Ivy Bridge 3.5GHz (3.9GHz Turbo) LGA 1155 77W Quad-Core

    Intel DQ77MK Desktop MB, Intel Q77 Express Chipset, w/TPM

    Corsair Vengeance LP 32GB (4 x 8GB) 240-Pin SDRAM DDR3 1600 (PC3 12800)

    Intel 80GB 330 Series Maple Crest SSD

    Antec EarthWatts Green EA-380D, 380W

    Noctua Ultra Silent CPU Cooler Cooling NH-U9B SE2

    3x Noctua NF-R8 PWM 80mm Case Fan

    LSI 9200-8e SAS Controller Card, 2 x SFF-8088 mini-SAS External Connectors, JBOD only

    Windows Server Essentials 2012 R2

    Stablebit DrivePool v2.1.1.561

    Stablebit Scanner v2.5.1.3062

    iHomeServer v3.1.73.0 (iTunes 12.0.1.26) feeding 3x Apple TV 3

    Crashplan

     

     

    Array

    NORCO DS-24E 24-3.5" JBOD Enclosure

     

    Pooled Drives - 39.1TB

    1x 4.0TB WD Red

    7x 3.0TB WD Red

    5x 2.0TB WD Green

    1x 2.5TB WD Green

    3x 1.5TB Seagate

    1x 1TB WD Green

    Non-Pooled drive

    1x 1TB WD Green - OS Backup drive

     

    UTM

    ARK IPC-1.5U1525 Black 1U Rackmount Server Case

    Intel S1200KP Mini ITX Server Motherboard LGA 1155 Intel C206 BIOS

    Intel Core i3-2120T Sandy Bridge 2.6GHz, 35W Dual-Core

    Stock Intel cooler

    Noctua 60x25mm A-Series Blades with AAO Frame, SSO2 Bearing Premium Fan

    G.SKILL Ripjaws X Series 16GB (2 x 8GB) 240-Pin SDRAM DDR3 1600 (PC3 12800)

    Intel 60GB 330 Series Maple Crest SSD

    100 GB 2.5" HDD

    Intel Dual Port Pro 1000 PCIe NIC

    picoPSU-160-XT, 160w output, 12v input DC-DC Power Supply

    Ubiquiti Networks Unifi UAP-Pro Enterprise Dual Band AP

    Untangle v10

     

    Dell PowerConnect 2824 24port GB managed switch

    Dell PowerConnect 2724 24port GB managed switch

     

    VMware ESXi 5.5 Home Lab server

    ARK 1U125 Black 1U Rackmount server case

    Supermicro A1SAi-2750 mini ITX MB

    Supermicro PWS-203-1h 200W 80 Plus Gold

    Intel Atom C2750 2.4GHz Avoton 8 core CPU

    32 GB Kingston DDR3 1600 ECC memory

    Leef Supra 16 GB USB 3.0 flash drive

    1x 480 GB Intel S3500 SSD

    2x 240 GB Seagate 600 Pro SSD (200 GB

    1x 128 GB Samsung 810 SSD (cache)

    1x 64 GB Intel 330 SSD

     

    APC Smart-UPS SMT1500RM2U 1000W 2U rackmount

     

    *Updated with new hardware/software additions

     

    Ctopher, what rack is that? I've been looking for a nice short 4-post rack for my gear.

  17. I did the above, and after about 16 hours it's still remeasuring like it constantly does; I haven't accessed the data on the server during that time.

    As for enabling file system logging: after installing the beta version, the Troubleshooting menu is gone from the gear icon. It was there in the previous version.

    [two screenshots attached]

  18. My pool doesn't seem to want to balance itself, and it is specifically avoiding 2 drives that have no errors at all.

     

    [screenshot attached]

     

    5C and 5D have been like this for many months, and new data always goes to the other drives. I didn't think too much of it (despite having balancing set to run daily) until I started getting FTP errors that the disk was full, which it clearly isn't.

     

    [screenshot attached]

    5C I do have set to hold only duplicated files, due to concerns about its integrity and prior use.

    5D, however, is a mostly brand-new Red drive. I can't figure out why the pool is avoiding it, or why I would get disk-full errors.

     

    Thanks

     

    EDIT: I know the drives are set with a rebalancing target, but they have been like that for months and never get any data.

    EDIT 2: The server has been rebooted in the past week, and I plan to update DrivePool once it finishes what it's doing.
