bissquitt

Members · 19 posts
  1. I am well aware of 3-2-1 and everything you said. I never intended for this to replace backups. I really don't understand why people get so bent out of shape about something that is helpful even if it is not foolproof. I don't hear people arguing against fire doors in buildings because they only slow down and limit the damage of a fire. No, we need to completely remove all oxygen from the building to prevent the fire in the first place, and keep a complete second copy of the building in another country. Sometimes you have a backup solution, but would rather be forced to download or sneakernet 10 TB of data than the full 100 TB server. Mitigation is important. Also, I pretty clearly stated it should be optional and able to be turned off when you need to write a lot of files. Ironically, that's pretty much the exact opposite of the argument you made above: "I'll put my valuables in this safe, but I refuse to lock it in case I want to put something else in the safe later." So in some cases you want 0% protection, and in others you want 100% protection, but anywhere in between is unacceptable? Also, is it really that hard to just not enable it?
  2. I feel like with some thought it could be implemented well: possibly some excluded extensions (like those thumbnails), excluded folders, or an option to ignore new files and only apply to modifications. You could probably even piggyback off the Windows ACLs. The files I care about are not written very often, and while I might not be in the majority, I'm probably at least not in the minority in that the vast majority of my data is mostly archival. I would gladly accept the inconvenience of having to disable something before making significant changes. Just an idea for an optional feature, since lots of people are scared of ransomware. Not something that I'm actively pushing for.
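No such option exists in DrivePool today, but as a rough manual stand-in for the "piggyback off the Windows ACLs" idea above, an archival folder can be locked and unlocked with a deny ACL using the standard Windows icacls tool. This is only a sketch: the P:\Archive path and the choice of the Everyone principal are illustrative assumptions, and a blanket deny also hits service accounts, so it is a blunt instrument rather than the feature being proposed.

    import subprocess

    ARCHIVE = r"P:\Archive"  # hypothetical folder on the pool holding archival data

    def lock(path):
        # Deny write-data, append, write-attributes, delete and delete-child to Everyone,
        # inherited by subfolders (CI) and files (OI).
        subprocess.run(
            ["icacls", path, "/deny", "Everyone:(OI)(CI)(WD,AD,WA,WEA,D,DC)"],
            check=True,
        )

    def unlock(path):
        # Remove the denied entries again before doing bulk writes or reorganising.
        subprocess.run(["icacls", path, "/remove:d", "Everyone"], check=True)

    if __name__ == "__main__":
        lock(ARCHIVE)  # call unlock(ARCHIVE) when the data actually needs to change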
  3. I know it's not a perfect defense, but usually I have all of my pooled drives unmapped, with just the pool given a letter. I know that when DrivePool loses a disk, it goes into read-only mode until it's resolved. Would it be feasible to do this if the pool hits a threshold of modified files in a short timeframe? I'm thinking ransomware mitigation mostly. Not foolproof, but I doubt most ransomware can access a non-mapped disk on a remote server, provided the server stays clean.
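DrivePool does not expose a trigger like this, but the detection half is easy to prototype outside the product. The sketch below is a minimal, stdlib-only Python loop that counts files whose modification time changed within a sliding window and calls a placeholder handler when a threshold is crossed; the pool path, threshold, interval, and the handler body are all assumptions.

    import os
    import time

    POOL_ROOT = "P:\\"     # hypothetical pool drive letter
    THRESHOLD = 200        # modified files per window that counts as suspicious
    INTERVAL_SECONDS = 60  # how often to rescan

    def snapshot(root):
        """Map each file path under root to its last-modified time."""
        mtimes = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    mtimes[path] = os.path.getmtime(path)
                except OSError:
                    continue  # file vanished mid-scan
        return mtimes

    def on_threshold_exceeded(changed):
        # Placeholder: here you might flip the share to read-only,
        # stop the file-serving service, or fire a notification.
        print(f"WARNING: {len(changed)} files modified in the last interval")

    def monitor(root):
        previous = snapshot(root)
        while True:
            time.sleep(INTERVAL_SECONDS)
            current = snapshot(root)
            changed = [p for p, m in current.items() if previous.get(p) != m]
            if len(changed) >= THRESHOLD:
                on_threshold_exceeded(changed)
            previous = current

    if __name__ == "__main__":
        monitor(POOL_ROOT)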
  4. Well, that's awesome. If you could specify "priority duplication drives" as well, that might solve the issue of having to use a pool within a pool.

     The most logical place would be under Balancers > Drive Usage Limiter. Currently the options are "duplicated" and "unduplicated"; you could add something like "Backup" as another option. When duplicating a file, it would prefer drives labeled "backup" and guarantee that exactly one of the copies of the file lands on a drive labeled backup (so you don't end up in a situation where the copies are ONLY on the backup drives). Have it throw an alert if there's insufficient "backup" drive space and then fail over to a normal drive for duplication. You could also give the drives labeled "backup" a lower priority, like you did for USB drives, so that they aren't used as much. If I wanted to use the top row of drive bays as backup drives, I could label them backup and, despite being normal SATA, they would have a lower priority.

     The other appropriate place would be either a checkbox in the pool/folder duplication settings, or another text box where you could specify how many times you want the file to exist on a "backup" drive. Or, to make it simpler with the least UI/backend change: when you add a drive, add the option to mark it as a backup, plus a "convert to backup" option next to "remove" that removes it gracefully and then re-adds it as a backup drive. Then have duplication function as: keep one copy of every file on a drive tagged backup if possible; once every file has a duplicate on backup, start putting additional copies on the backups; keep at least X% of cumulative backup space free for new files; during rebalancing, remove additional duplicates from backups until that percentage is reached.

     Yet a third option would be the ability to link a pool as a mirror of another pool, such that as real-time duplication happens it writes one copy to each pool rather than to 2 drives in the same pool. Then you could choose duplication within the individual pools to decide whether you want secondary duplicates to exist on the main or backup pool. Essentially this would be the pool-of-pools you described above as far as the backend is concerned; it would just be automated in the UI rather than done manually.

     </end rant> Sorry, I have a background in coding and proceeded to word-vomit solutions. Thank you for putting in the feature request and taking care of it.
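None of this exists in DrivePool's balancer today; it is the poster's proposal. Purely to make the placement rule concrete, here is a small sketch of the "exactly one copy on a backup-tagged drive" selection logic for a 2x-duplicated file. The Drive class, the tags, and the free-space figures are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Drive:
        name: str
        free_bytes: int
        is_backup: bool  # the proposed "backup" tag

    def pick_duplicate_targets(drives, file_size):
        """Pick two drives for a 2x-duplicated file: one backup-tagged, one not."""
        fits = [d for d in drives if d.free_bytes >= file_size]
        backups = sorted((d for d in fits if d.is_backup), key=lambda d: -d.free_bytes)
        normals = sorted((d for d in fits if not d.is_backup), key=lambda d: -d.free_bytes)

        if backups and normals:
            return [backups[0], normals[0]]  # ideal case: exactly one copy on a backup drive
        if len(normals) >= 2:
            # Insufficient backup space: fall back to two normal drives and alert.
            print("ALERT: no backup-tagged drive has room; duplicating on normal drives only")
            return normals[:2]
        raise RuntimeError("not enough drives with free space for duplication")

    # Toy example with made-up capacities
    drives = [
        Drive("Bay01", 4_000_000_000_000, False),
        Drive("Bay02", 2_500_000_000_000, False),
        Drive("USB-A", 6_000_000_000_000, True),
    ]
    print([d.name for d in pick_duplicate_targets(drives, 50_000_000_000)])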
  5. Wouldn't read striping do the exact opposite of what I want? It reads from both duplicated drives simultaneously. I guess it would stop it from being slow if it ONLY read from the external, but one of the main points is to not put wear on the external backup unless needed.
  6. Did not know you could do a pool inside a pool, thanks. Is there a way to set read priority? As in, if a file is accessed on the disk, access it from the internal pool rather than hammering the externals over USB? Ideally the only access to the external pool would be writing/editing data as it happens and then the occasional rebalance or verify. I will be running Plex, so if Plex decides it's going to read the file from the USB drive, it's going to be REALLY slow.
  7. I have 29 drives: 24 in a case, 5 sitting on top via USB. The 5 on top have enough capacity to hold all the data on the 24. I want to ensure the 5 connected via USB have a full copy of the data; in an emergency I can grab those 5 and run, rather than having to hunt for which drives have which data in the server.
  8. I bought too many 8 TB EasyStores when they were on sale, and I've decided to use the ones that won't fit in my case as backup drives, just USB-attached and sitting on top of the case. What I want to do, however, is make sure that across those 5 drives there is at least 1 copy of every file. If there's a fire or emergency or whatnot, I want to be able to grab the 5 externals sitting on top and be confident that I have a full copy of everything (assuming working hardware). I know that at one point there was a feature that was similar to this, but it didn't quite do this. Additionally, I would ideally like all file access to NOT come from these USB drives, both for speed and for wear on the drives. My current setup (by accident, due to a migration) is my main pool plus a pool of five 8 TB drives that holds a complete copy of the data. I know it would be easy to just set a scheduled task to back up to that pool, but I didn't know if there was a more elegant solution built in.
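For the scheduled-task fallback mentioned above, a simple mirror with the built-in robocopy tool is usually enough. The sketch below wraps it in Python so it can be registered with Task Scheduler like any other script; the P: and B: letters for the internal pool and the external pool are assumptions, and note that /MIR also deletes files from the destination that no longer exist in the source.

    import subprocess
    import sys

    MAIN_POOL = "P:\\"    # hypothetical letter of the internal pool
    BACKUP_POOL = "B:\\"  # hypothetical letter of the external (USB) pool

    def mirror():
        # /MIR mirrors the whole tree (including deletions);
        # /R and /W limit retries and wait time on locked files.
        result = subprocess.run(
            ["robocopy", MAIN_POOL, BACKUP_POOL, "/MIR", "/R:2", "/W:5"]
        )
        # Robocopy exit codes below 8 indicate success (files may have been copied or removed).
        return result.returncode < 8

    if __name__ == "__main__":
        sys.exit(0 if mirror() else 1)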
  9. What did you move to after the Norco? (Never mind, found it, but I can't delete this comment.)
  10. It's all in the Norco, and I'm not sure my setup could fit a redundant PSU.
  11. The PSU is 750 W. It could be going bad, but the problem isn't consistent with a bad PSU (I'm currently at 20 drives, FYI). The drives drop seemingly randomly, weeks if not months apart from each other, with the exception of this 3-drive incident. A bad PSU would simply not spin a drive up and it would go missing. Instead the drive still shows in POST and in Disk Management; it's just an uninitialized disk that you can't initialize. A bad PSU would not do that either.

      I finished testing all 3 failed drives. 2 are showing bad sectors and the above errors. The 3rd is completely fine in another system; it just won't show up on the server in any bay. Connected via a USB enclosure, it shows up fine. As soon as the system is done checking and balancing from the failure, I'm going to gracefully remove it, reformat, and reinsert it. Not sure why it isn't recognized, but I'm hoping Drashna or someone else at Covecube can shed some light on that. (It's not a power problem, because 2 other drives were just removed and I can feel it spinning up.) I still feel like something is damaging the drives; one of them was a month old and was stress-tested before being put into the pool. For reference, this is the server: http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=18563174
  12. I have checked, and once in the 3 years or so that I've had them I have received a text, and that was with my current configuration. I also test the texts. I appreciate the response, though. I also manually check Scanner once a day with no errors reported (I stay RDPed into the server). It just seems HIGHLY suspect to me that 3 drives would all go bad on a reboot when they were functioning fine beforehand with no indication of errors. Add to that the fact that ALL are uninitialized, marked read-only, and neither diskpart nor anything else will let me access them. This has happened before with 2 or 3 other drives, always one at a time; I figured they were just bad and replaced them. 3 at the exact same time, though, is unlikely, especially when I'm on a commercial UPS protecting against any outside surges or such. As far as I can tell, either DrivePool, Scanner, or Windows Server 2012 R2 is killing drives. FYI, this is the most relevant error message that I can find: "The IO operation at logical block address # for Disk # was retried"
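That message is the disk-class driver's retry warning in the Windows System event log (Event ID 153 from the "disk" source, if memory serves). As a quick way to see how often it is occurring and against which disk numbers, the snippet below shells out to the built-in wevtutil query tool; the event ID, provider name, and the count of 20 entries are assumptions worth checking against what Event Viewer actually shows.

    import subprocess

    # Pull the 20 most recent disk-retry warnings from the System log, newest first.
    QUERY = "*[System[Provider[@Name='disk'] and (EventID=153)]]"

    output = subprocess.run(
        ["wevtutil", "qe", "System", "/q:" + QUERY, "/f:text", "/c:20", "/rd:true"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(output)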
  13. I'm going to be equally blunt: I want a refund. The product simply does not work. I can't trust your product to alert me when there is an issue; I have encountered many, and your product has not informed me of a single one. I just had 3 additional drives go missing at the same time. All are now write-protected and can't be initialized; I can't remove the write protection in diskpart either. I now have to go through the unimaginably long process of figuring out which of my 40 TB of files are missing, because you don't even log which files are on which disk.

      Notifications: they are on, email and mobile, they have been tested, they work.
      DrivePool: great.
      Scanner: absolute horseshit.
  14. I am a systems engineer by trade. The disks are indeed missing from the system. My question wasn't so much "Why can't you see something that's not there?" as "Why didn't you see that it was going bad before it was gone?" In every case, when a test was run on a drive after it went "missing", it reported multiple SMART errors or bad sectors. That seems like the likely cause of the failure and something that Scanner should have caught. I offered up the fact that ALL the drives were unable to initialize due to write protection as additional troubleshooting information because, as an IT professional, I know any and all information is helpful; I'm not familiar enough with the way your software works to know whether it's relevant.
  15. It scans every so often and always comes back healthy, but in that time I have had 4+ drives just show up as missing in my drive pool. I always try a reboot to see if the drive will come back; it never does. I then connect the "missing" drive to my other computer and run a scan, and every time there has been some sort of error that made me RMA it. Each time I can't initialize the disk; it gives me an error that it's write-protected. Even when the disk is missing in DrivePool, Scanner doesn't show any errors; it just no longer lists the disk. How is DrivePool more informative about drive problems than the software made specifically to detect them? Why is Scanner not picking up any of these errors, or even telling me that it noticed a drive missing?

      I have to get a new drive rush-shipped and shut the server down completely until I can put it in, because Scanner simply isn't reliable, and if another drive crashes I'm toast. At this point it's pretty much just straining my disks and eating up RAM. It's also really annoying that I can't tell whether I've lost anything. Everything in my pool is duplicated, so I SHOULDN'T have, but I would hate to need a file 4 months from now and find it not there. If everything is duplicated, the software should be able to see that a file was duplicated and verify that a copy still exists; it already notices that files are no longer duplicated when it rebuilds. I'm really tired of losing disks without warning and then wondering if all my files are OK each time, and the worry compounds every time it happens.
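The verification asked for in the last post can be approximated from outside the product, because DrivePool stores pooled content in a hidden PoolPart.* folder at the root of each member disk. The sketch below walks those folders and reports any relative path that exists on fewer than two disks; the list of member drive letters and the pool-wide 2x duplication are assumptions, and a file that is mid-rebalance may be flagged transiently.

    import os
    from collections import Counter
    from glob import glob

    # Hypothetical list of the physical disks that belong to the pool.
    MEMBER_DRIVES = ["D:\\", "E:\\", "F:\\", "G:\\"]
    EXPECTED_COPIES = 2  # assumes pool-wide 2x duplication

    def pool_files(drive):
        """Yield paths relative to the drive's hidden PoolPart folder."""
        for poolpart in glob(os.path.join(drive, "PoolPart.*")):
            for dirpath, _dirs, files in os.walk(poolpart):
                for name in files:
                    full = os.path.join(dirpath, name)
                    yield os.path.relpath(full, poolpart)

    copies = Counter()
    for drive in MEMBER_DRIVES:
        copies.update(set(pool_files(drive)))  # count each file once per drive

    for rel_path, count in sorted(copies.items()):
        if count < EXPECTED_COPIES:
            print(f"only {count} copy/copies found: {rel_path}")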