Everything posted by Umfriend

  1. In my case, I was trying to move from 2.X to 1.X without deactivating (my bad I guess, lazy bum!). 1.X then complained about the license _and_ unduplicated everything. I went back to 2.X right after that and that is where I'll stay.
  2. When I tried that, I did have an issue with the license, so it may be worthwhile deactivating the license prior to removing the add-in (and do record the license code somewhere).
  3. IMHO, the perfect match might be a combo of those SMR drives (large capacity at low cost per TB) with SSDs as landing drives. I am just wondering how that would work with databases (say SQL Server DBs) but, if you are looking for performance, you would probably want those on SSDs only anyway (e.g. if x2 duplication then do 2x 120GB SSDs, partition both as 40GB for the landing SSD/Pool and 80GB for databases, perhaps even unduplicated if your backup strategy is sound).
  4. Are you suggesting robocopy as an alternative for Server Backup? I had thought about it a bit. I could even try to shrink the partition on the Server Backup HDD to make room for a second one, and do an OS backup to (hopefully) the one and a robocopy to the other. It might work. But I do not see how I would keep any history. I guess files that were deleted from the Server would remain, which is good, but there would be no versioning. Also, if a Backup HDD is lost, there are .vhds that someone would have to know how to read to get to the data. Perhaps not that hard, but it is a hurdle. With robocopy, the files would all simply be there for anyone (a sketch of what such a pass could look like is below). I am not thrilled about the idea but may consider it. Should there be other (dis)advantages then I'd like to hear them. Thx.
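    For what it's worth, a minimal sketch of such a robocopy pass, driven from Python; the paths, the choice of /E rather than /MIR (so files deleted from the Server stay on the backup drive), and the retry/log settings are all assumptions to adapt:

      import subprocess

      # Hypothetical paths -- substitute the actual share and backup drive.
      SRC = r"D:\ServerFolders\Documents"
      DST = r"F:\RobocopyBackup\Documents"

      # /E copies all subdirectories (including empty ones) but, unlike /MIR,
      # does not purge files deleted from the source, so they remain on the
      # backup drive (no versioning, as noted above).
      # /R:1 /W:5 keeps robocopy from endlessly retrying locked files.
      cmd = ["robocopy", SRC, DST, "/E", "/R:1", "/W:5",
             r"/LOG+:F:\RobocopyBackup\robocopy.log"]

      # robocopy exit codes 0-7 mean success (with various "copied/extra
      # files" flags); 8 and above indicate failures.
      result = subprocess.run(cmd)
      if result.returncode >= 8:
          print("robocopy reported errors, check the log")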
  5. Can I create a Pool consisting of (two) Pools? As an example:

    Pool P: E:\ + F:\, not duplicated
    Pool Q: G:\ + H:\, not duplicated
    Pool R: P:\ + Q:\, duplicated

    so that of a single file there will be a copy on either E or F and a copy on either G or H. I realise my wishes are rather, uhm, exotic, just trying to get the most out of it.
  6. Feared it would be thus. Any chance, in a not too distant future, that implementing something like this could be considered? I have not really thought about it but I guess what I could be looking for is that, if duplication is set to a certain number, I could then assign HDDs to, say, columns/series/groups (2 groups for x2 duplication, 3 for x3, etc.).
  7. Hi, I am running DP 2.x with WHS 2011. I have 2 Pools of 2x2TB HDDs, duplication is set to x2 for everything. I back up everything using WHS 2011 Server Backup. One Pool (392 GB net) contains all the shares except for the Client Backups. The other (1.16 TB net) contains only the Client Backups. Of each Pool, I back up one underlying HDD.

    Given that the Client Backup database changes, the Server Backup .vhd fills up tracking changes and reaches its limit of 2TB. At that time, all Server Backups are deleted and WHS Server Backup starts over again with a full backup. This is fine in principle but it does mean that the history I actually keep is not as long as I would like. Also, should a HDD fail then there is no HDD to which it can migrate the data.

    So I would like to create one Pool of 4 x 2TB, x2 duplication. That way, each HDD would contain about 750GB so that the history I keep is longer (a rough check of that figure is sketched below). The problem though is that files may be placed on the two HDDs that I do not back up. So I am wondering whether it is possible to tell DP to, in a way, group the HDDs in a 2x2 fashion, e.g., keep one duplicate on either E: or F: and the other on either G: or H:? Or, put otherwise, keep an original on E: or F: and the copy on G: or H: (I realise the concept of original/copy is not how DP works, but as an example/explanation of what I want it to do), to the extent possible.

    It would not be possible if, for instance:
    - E: or F: failed; I would still have duplicates after migration but some would be on G: and H:
    - G: or H: failed; I would still have duplicates but some would be on both E: and F:

    I do realise that once either E: or F: fails, my Server Backup will be incomplete. However, that is true for my current setup as well. The intention would be to replace E: or F: and then ensure that the duplication/placement scheme is correct again (and I would hope this works automatically if the new HDD has the appropriate drive letter and gets added to the Pool). I have looked at the File Placement tab but I don't see how I should set up rules to accomplish what I want. Kind rgds, Umf
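    A rough check of the ~750GB-per-disk figure, as a minimal sketch assuming the 392 GB and 1.16 TB net sizes above, x2 duplication and an even spread over the 4 HDDs:

      shares_gb = 392                   # Pool with the shares (net)
      client_backups_gb = 1.16 * 1024   # Pool with the Client Backups (net), ~1188 GB
      total_stored_gb = 2 * (shares_gb + client_backups_gb)   # x2 duplication
      per_disk_gb = total_stored_gb / 4                        # spread over 4 x 2TB HDDs
      print(round(per_disk_gb))         # ~790 GB, in line with the ~750GB estimate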
  8. Have you considered Seagate's 8TB Archive HDDs? They might even be a little bit cheaper than 5x3TB WD Greens. And I think 5 HDDs is what you can fit into that HP N40L (not sure), in which case larger HDDs allow for future expansion. Just a suggestion.
  9. (re: read striping) That is the thing. These are the same drives, connected to the same controller, both on a "SATA Rev 3.1 port (3.0 Gbps of 6.0 Gbps Max)", says Scanner. Also, Scanner reports E:\ as having a Bus speed of 96.5 MB/s and F:\ a Bus speed of 199 MB/s, but it is E:\ that it reads from :confused:
  10. (re: read striping) Sorry if I am stealing this thread but, as it happens, I do not see any read striping at all.

    One Pool consisting of 2 x 2TB Seagate NAS HDDs, both connected to the same SATA 3 controller
    WHS 2011 - DrivePool 2.1.1.561
    x2 Duplication and Read striping is checked.

    DP will show the bar "Read Striping" (next to Fast I/O) as 100%, but in fact Disk Performance only shows I/O on one HDD, and Resource Monitor and Scanner support this (i.e. both show no I/O on the other HDD). Does that sound right?

    Edit: I thought I had noticed this earlier, like when client backups are running, but in this case it was during a robocopy of ten 4GB files from the Pool to an SSD.
  11. acdc, some of this has already been written up in http://community.covecube.com/index.php?/topic/1025-budget-media-server-build/. I use two of these as WHS 2011 Server Backup drives to my fullest satisfaction. Christopher (Drashna) has one or more in a Pool and is satisfied as well. If your data on that drive is mostly static then the possible write penalty should not bother you and, even if you do not care about performance, they read like crazy. There is also a review here: http://www.storagereview.com/seagate_archive_hdd_review_8tb AFAICS, as long as you do not run transactional databases on these, they are excellent drives at a very low cost per TB.
  12. So how do new rules get added? Is that something we users can help with?
  13. Oh yes, it reads like crazy. (for a spinner and, IIRC, at 5900rpm)
  14. Oh yes, I agree, as long as it is writing new tracks. But at some stage, like I see with Server Backups, writes do become slow at times. What I had hoped, and what turned out to be correct, is: "The drive leverages an on-drive cache (roughly 20GB) to handle inbound writes, in addition to internal systems for meta data tables and background processes like garbage collection, not unlike an SSD."
  15. Well, the guys at storage review actually tested it: http://www.storagereview.com/seagate_archive_hdd_review_8tb
  16. Have you ever rebooted during this time? IIRC, when I uninstalled 2.x to go back to 1.x, the de-installation was not successful but still completed after a reboot (and the same happened going back to 2.x). Also, you could check which services are running to determine which version is actually installed. But try reboots first after each attempt to uninstall/install.
  17. That is why I have 2 Pools of 2x2 TB HDDs, but it is nice to know that, if I need to, I can create larger Pools and still have an efficient backup. That goes for WS2012 / >2TB volume backups as well, of course.
  18. Also, you might be able to use file placement rules to create a distinct set of HDDs holding a single copy of each duplicated file? Then you could back up only that single set. But it's a bit out of my league.
  19. AFAIK, metadata is all stored in the PoolPart.xxxx folders on the Pool HDDs themselves. In fact, you can simply take all HDDs out of your machine, connect them to another PC with DrivePool and it will recognise the Pool instantly.
  20. AFAIK, memtest86+ is still the best and most extensive memory tester.
  21. It only came with Plextool (completely different from the legendary Plextools). It does not say much other than SSD Health 098%. It has said that since two years ago or thereabouts, and that can't be right. For which SSDs do Scanner/BitFlock have good interpretation rules? I'm not buying Plextor again for a long time after the kerfuffle with the two M6S drives I had, and I'm not too keen on the Samsung EVOs either given the read-speed issue that has recurred. I am lucky the store I got the SSD for my Server from only had the 850 Pro in stock. I do not understand it: why enable SMART on a device and keep statistics when you do not document what the statistics mean? (Yeah, this is aimed at you, Plextor!)
  22. BitFlock ID is...surprise...Umfriend! And what about the Wear Levelling Count? Or maybe I simply misunderstood this statistic entirely?
  23. Hi, so I started to play around with BitFlock on my lappy. This one has a Plextor M5-PRO 512GB SSD. Two statistics I am worried about:

    1. Wear Leveling Count: 81702
    2. Total LBAs Written: 210,728,448 B (201 MB)

    After running an I/O intensive SQL script, these changed to:

    1. Wear Leveling Count: 81898 (+196)
    2. Total LBAs Written: 211,411,968 B (202 MB)

    Could it be that the interpretation rules are not fully correct ("Using Plextor / Marvell SSD specific rules.")? Should I be worried? I am not sure the SSD should last for 80K P/E cycles, and the idea that only 1 MB was written during this operation, let alone that only 202 MB was written over 2 years, is absurd. In a grey-ish font it says with all statistics "Value: 100 (Was 100)"...
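    One possible explanation, sketched below: many drives report Total LBAs Written as a count of 512-byte sectors rather than bytes (whether the Plextor/Marvell firmware does so is an assumption here), which would make the raw value far less absurd:

      lbas_written = 210_728_448          # raw value reported by BitFlock

      bytes_written = lbas_written * 512  # assuming 1 LBA = one 512-byte sector
      print(f"{bytes_written / 1e9:.1f} GB")     # ~107.9 GB
      print(f"{bytes_written / 2**30:.1f} GiB")  # ~100.5 GiB over the drive's life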
  24. I should probably stress it with some of my SQL stuff but I don't feel like spending the time right now. On my lappy I have an SSD for the DBs but I had the tempdb on the spinner. When I moved it to the SSD as well, it boosted a script by 40%. Hate to think what would happen on the Seagate Archive HDDs; I imagine queue depths well in the hundreds. Anyway, love wbadmin.msc; is there something similar for client backups in WHS 2011?