Posts posted by Umfriend

  1. I'd really like such an app/add-in. It may be the absolute best worthless app (worthless in that it may never ever find anything) but the comfort would be so great.

     

     OT: In the early days I was a console operator on a Unisys mainframe (or, as the IBM operators called it, a mini ;-). Late 80s. Everything was doubled and could survive one failure: 2 CPUs, 2x5 hard drives (huge machines), 2x2 tape drives, 2 consoles of course, 2 printers, 2 communication controllers etc. The database was duplexed as well. Every Wednesday evening we would run the Compare, which checked the contents of both databases. That had been done since the start in 1985 and ran until the system was replaced in 1998 or so. Wednesday was overtime day (evening) because the Compare ran for 2 to 3 hours and exceeded the last shift. That paid well, and the system was very reliable: the Compare never ever found a thing. Everyone was happy.

  2. Uhm, I'll await official confirmation from you on this. Still no message from DP on the A/B difference in the two files. I mean, I would not care if it turns out a compare of some sort is done weekly (perhaps even monthly) but I would like it to do it and know when it does it.

     

    I'll instruct a re-measure NOW, see what that does.

  3. 23 hrs and 19 minutes have passed and DP has not signalled any issues. Is there a duplication pass when I have immediate balancing? Something else I could have done/should have done that disables/enables such a check? Running v2.0.0.420.

  4. No. Normally, when it checks the duplication, it checks that there are the expected/correct number of copies, and compares the file hashes of both. I'm not sure if you've seen it, but the "file mismatch" error? That's what it's complaining about: the contents don't match. If you want, test this out with a simple text file. The notification may not be immediate, but it WILL happen.

     I don't consider it hijacking, actually. This is all very related information. And if AMCross didn't ask, it was probably on his mind.

     I did not know about the hash-total check. How often does it check? I have immediate balancing and all other settings just at default. I have set up a test just now to see how that works: a text file with "This is line one with the letter A.", which was stored on both disks. I changed one of them to the letter B.
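
     For what it's worth, the check I'm trying to provoke boils down to something like this rough Python sketch. DP does this internally; the PoolPart paths here are made up for illustration:

        import hashlib

        # Hypothetical locations of the two duplicated copies of the test file;
        # real PoolPart folder names are long random-looking IDs.
        copies = [
            r"D:\PoolPart.example1\Test\test.txt",
            r"E:\PoolPart.example2\Test\test.txt",
        ]

        def sha256(path):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            return h.hexdigest()

        hashes = [sha256(p) for p in copies]
        if len(set(hashes)) > 1:
            print("file mismatch: contents of the copies differ")  # the A/B edit trips this
        else:
            print("copies are identical")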

    "

  5. Agreed. Most backup tools, especially the affordable ones for home use, will not touch a file that has not been "modified" since the last backup, which is why I went with SyncBack. As part of the backup profile for my irreplaceable data, it compares the hash of each file on the server to the hash of the file in the backup set; a new hash is generated at each backup from both sources. Again, this is another check to ensure the data does not change over time without me knowing. It doubles the backup time, even when there is no new data, but it gives me that warm and fuzzy feeling.

     Actually dbailey, if I understand you correctly, as long as I do not reformat my backup HDDs and start anew, I should be fine? A file that has not been modified for a long time would only be written to the backup disks _once_, at the initial backup? I understand you _actively_ look for changed/corrupted files by comparing against the backup, but I could simply wait until a file becomes corrupted and then restore that file from the backup, no?

     

     In any case, a bit-by-bit compare of DP-duplicated files would suffice for me.
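
     If anyone wants the gist of that kind of check, here is a rough Python sketch; the source/backup paths are hypothetical, and SyncBack obviously does all of this internally:

        import hashlib
        import os

        SOURCE = r"D:\ServerFolders\Photos"  # hypothetical source share
        BACKUP = r"F:\Backup\Photos"         # hypothetical backup location

        def sha256(path):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            return h.hexdigest()

        # Re-hash every file on both sides, even "unmodified" ones,
        # so silent corruption on either side shows up.
        for root, _dirs, files in os.walk(SOURCE):
            for name in files:
                src = os.path.join(root, name)
                dst = os.path.join(BACKUP, os.path.relpath(src, SOURCE))
                if not os.path.exists(dst):
                    print("missing in backup:", dst)
                elif sha256(src) != sha256(dst):
                    print("hash mismatch (possible silent corruption):", src)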

  6. AFAIK, it does scan drives simultaneously, to the extent the controller permits/can handle the I/O.

     

    Moreover, Scanner halts/suspends scanning a single disk when other I/O needs that disk, so it can be done while you use the system (I am assuming it runs 24/7 BTW).

     

     Also, and I think this is nice, the next scan starts xx days after the previous scan _ended_, not _started_. So, if it cannot effectively scan all disks at the same time, it works itself out: some scans finish sooner than others, and the next time they'll start some time apart. And then, _if_ you notice (I/O) performance issues during scans, there are all kinds of settings to ensure controllers will not be overloaded. I would not tinker with the default settings unless there is an observed problem.
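
     The difference is easy to see in a trivial sketch (timestamps and interval are made up):

        from datetime import datetime, timedelta

        scan_started = datetime(2014, 5, 1, 2, 0)
        scan_ended   = datetime(2014, 5, 1, 9, 30)  # a long 7.5-hour scan
        interval     = timedelta(days=14)

        # Scanner anchors the next scan to when the previous one ENDED...
        next_scan = scan_ended + interval
        # ...so disks whose scans finish at different times naturally start
        # their next scans some time apart, instead of all piling up at
        # scan_started + interval.
        print(next_scan)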

     

    Personally, I have it set at every 14 days (had it weekly until recently). I have far less storage (1 x 1TB, 2x2TB, 1x750GB and the Server Backup HDD which is 2TB or 4TB depending on the rotation). I never ever noticed whether scanner was running or not. It just tells me it did when I check through the dashboard and I choose to believe her.

  7. ..... I run the surface scan every 7 days..... personally..

     

     

    @dbailey,  that's a very good suggestion and a good link.

    And if you don't mind, what did you not like about the WHS Integrity Checker add-in?

     

    @everyone:

     Backups. If you know me from WGS/HSS, then you've probably heard me say it before, but it's always worth repeating: there is no such thing as too many backups. If the data is important, you should have it stored on multiple media and in multiple locations. The HSS guys suggest a "3-2-1" backup strategy, in fact:

    3 Backup copies of anything you want to keep. 

    2 different storage media. 
    1 offsite storage site.

     

     

    This helps prevent any significant data from being lost or corrupted.

     It does not. With pictures, home movies etc., files may go untouched for years. I have DP x2 and 3 backup disks, of which 1 is offsite at all times. But Server Backup only retains backups going back, say, 4 months or so (or it may be that I started anew with Server Backups, in which case it is my bad); either way, there is a limit to how far back one can go. As there is no *active* check on file integrity, by the time I'd open an old file, it may be corrupt and I may only have backups of that corrupted file.

     

     I am not sure if DP actually does a regular _compare_ of duplicates (so not just meta-data but the actual content), which would, for me, make this all a non-issue; the worrying has to end somewhere.

     I had Scanner run weekly until a week ago, then thought I was maybe being a bit, uhm, anal? about it? The default was 30 days after all, so I have settled for bi-weekly now. But would it sense an accidental/incidental flipping of a bit?

  8. I second that suggestion by dbailey75. Scanner already performs a scan every month and reads the entire disk (I know, sector-based, not file-based), but I'd love a second whole-disk read monthly to ensure no files have changed that have not been saved in the past month. It's fine that I have backups, but the retention policy only keeps backups for so long.

  9. WHS2011 will back up your clients just fine. If you have > 2TB in client backups then I don't see a way you can have these client backups be part of your Server Backup as well. I.e., if you lose a client, you can restore from the server. If you lose the server, you still have the clients and a new backup will be made (once a new server is up and running). But if you lose a client and the server at the same time (and you have server backup disks rotating offsite/safe), then you won't be able to restore clients.

     

     For any data that you do want to have backed up in Server Backups, I would recommend 2TB HDDs (I think 2 partitions on a 4TB HDD would work as well, but I can't be sure). For any other data, the bigger the better.

     

     Last thing, and this is a personal thing: I would never partition a 3TB drive into 3x1TB without a very good reason (e.g., 2x2TB partitions on a 4TB drive so that Server Backup might be able to deal with it).

  10. Christopher will explain, I'm sure; I'd mess it up, especially as they are working on grouping/ordering add-ins that I neither use nor understand. What may be good to know though is that a single file is never scattered over drives and that each disk can be accessed on another computer, where you'll find the files in regular NTFS format, no special drivers/software needed. Each drive has a hidden "poolpart" folder within which you'll find the exact folder structure that Explorer would show you for the entire Pool. But if you are looking to find file "X", you will not know, AFAIK, on which disk(s) it resides.
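
     If you really wanted to locate file "X" by hand, something like this rough Python sketch would do it; the drive letters and the pool-relative path are made up, and real PoolPart folder names are long IDs:

        import glob
        import os

        # Hypothetical pooled disks; each holds one hidden PoolPart.* folder at its root.
        pool_disks = ["D:\\", "E:\\", "F:\\"]
        target = os.path.join("Videos", "holiday.mp4")  # pool-relative path of file "X"

        for disk in pool_disks:
            for poolpart in glob.glob(os.path.join(disk, "PoolPart.*")):
                candidate = os.path.join(poolpart, target)
                if os.path.exists(candidate):
                    print("copy found at", candidate)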

     

     Do you intend to use the regular WHS2011 Server Backup functionality? If so, then realise that you cannot back up a Pool, that is, the huge virtual drive consisting of the actual drives. You'll have to back up the underlying drives. In one sense, this is great as it makes it easier to deal with the 2TB volume limit of WHS2011 Server Backup. It's a bit of a pain though if you use duplication and more than 2 drives, as you'd back up each file twice for no reason.

     

     This is why I have a 2x2TB 2x duplication Pool. I just need to back up 1 of the two drives. Should I need more space at some stage, I would create another 2x2TB 2x duplication Pool and back up 1 disk of each 2x2TB set. WHS2011 Server Backup _can_ back up more than 2TB of data, just not more than 2TB per volume (say, disk); other than that, I think the only limit is the size of the backup drive.

  11. It's not like I would ever use it, but given that I think I've seen a number of questions on feeder disks, I wonder whether it would make sense, even for a Pool with duplication, to specifically allow files not to be duplicated while on a feeder disk. The thinking is (and I'm not entirely sure if this is correct) that files would/should not reside on feeder disks for long as, when I/O is low, they'd be moved to normal drives. Assuming a user is willing to take a risk for a short while, could he/she not have just one copy saved to an (SSD) feeder disk and have it duplicated only when it's moved off of it?

  12. You can also get rights to it, but it is a hassle. I just do it because I begrudge my OS access to a resource that I cannot access myself. Something with Properties -> Security -> Advanced -> Owner -> Edit -> Change owner to user. I had to give OWNER RIGHTS as well first, and then I could give OWNER RIGHTS full access and got in. Something like that at least.
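
     For reference, roughly the same steps from the command line, wrapped in Python; the folder path is hypothetical, takeown and icacls are the standard Windows tools for this, and it needs an elevated prompt:

        import subprocess

        path = r"D:\SomeLockedFolder"  # hypothetical folder you are locked out of

        # Take ownership recursively, then grant the Administrators group full control.
        subprocess.run(["takeown", "/f", path, "/r", "/d", "Y"], check=True)
        subprocess.run(["icacls", path, "/grant", "Administrators:(OI)(CI)F", "/t"], check=True)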

  13. Personally, if the space was sufficient, I would always do a copy and delete.

     

     However, I have seen enough posts from people who know about this to know that copying into the poolpart folders can be done without any problem as well; in fact, Drashna often suggests it.

     

    Will you duplicate files?

     

     Bear in mind that if you use the poolpart route, the files will not be placed evenly over all disks; they will sit wherever you copied them, at least until DP does something about it (I'm not sure what exactly). I don't really know about these things because, well, it simply always seems to work for me.
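
     For what it's worth, the poolpart route amounts to something like this rough Python sketch; the paths are hypothetical, and you'd move (not copy) if the data already sits on the disk being added:

        import os
        import shutil

        SRC = r"E:\OldData"                   # hypothetical data already on the disk
        DST = r"E:\PoolPart.example\OldData"  # hidden poolpart folder on that same disk

        # Recreate the folder structure inside the poolpart folder and move the files.
        for root, _dirs, files in os.walk(SRC):
            dst_dir = os.path.normpath(os.path.join(DST, os.path.relpath(root, SRC)))
            os.makedirs(dst_dir, exist_ok=True)
            for name in files:
                shutil.move(os.path.join(root, name), os.path.join(dst_dir, name))
        # Afterwards, have DP re-measure so duplication/balancing can catch up.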

  14. Yes, and _if_ SYSTEM and Administrators do not have the permissions, set them. If _that_ does not work, _then_ you may need to take ownership as above. That was my issue: I could not even give rights to Administrators...

  15. I am not exactly sure but you may need to check the properties of the drives -> Security -> Edit -> Add -> type "Owner" -> Check Names -> OK

     

     I am not entirely sure what you may have to do next, but I have had similar instances where even being an administrator does not allow you to touch files (aside from, for instance, a WHS 2011 Server Backup disk on another machine, I also had this issue with SQL Server databases restored to a different computer). I *think* this way you can get owner rights and then do what you need to do. This solved it for me, as in a next step I could actually give rights on the files to users, including the administrator IIRC. Memory is a bit iffy on this and your mileage may vary.
