
Umfriend

Members
  • Posts

    1001
  • Joined

  • Last visited

  • Days Won

    54

Everything posted by Umfriend

  1. Umfriend

    Ransomware?

    I was wondering whether the new Pool-of-Pools functionality could bring some relief. My thinking is:
      1. Define two unduplicated Pools.
      2. Create a duplicated MOAP (Mother Of All Pools).
      3. After duplication, detach one of the Pools.
    Normally this would, I think, put the MOAP in read-only mode, but that need not be a bad thing if it is for media that is not written to often. Periodically, you could attach the 2nd Pool, write to the MOAP whatever needs writing, let DP do its thing and detach Pool #1 again. Moreover, I wonder whether you could not still write to Pool #1, as that one would not be in read-only mode; if you used that as the source for media players as well, then Pool #2 is really a local 'kinda sorta' backup. The only downside is that you would have to write directly to the folders within the second or nested poolpart folder. But perhaps DP could allow the read-only mode of a duplicated MOAP to be suspended temporarily? Just thinking (I might consider going x3 duplication if something like this were easily manageable, exactly for this reason)...
  2. As a small update, I now had a rather good scenario and managed 97MB/s (3TB in 8.5 hrs). Use case is everything. And if prices are close, then I'd go for something else (e.g., I would consider the 8TB IronWolf at EUR 299 instead of the 8TB Archive at EUR 259), but them 10TB IronWolfs... are still somewhat expensive.
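    Just to show how that 97MB/s figure follows from the numbers, a minimal sketch (assuming TB means 10^12 bytes here):

        # Rough throughput check for "3TB in 8.5 hrs"
        bytes_written = 3 * 10**12           # 3 TB
        seconds = 8.5 * 3600                 # 8.5 hours
        mb_per_s = bytes_written / seconds / 10**6
        print(f"{mb_per_s:.0f} MB/s")        # ~98 MB/s, in line with the ~97 MB/s observed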
  3. If it is for cold storage, so mainly write-once read-many, then you could consider the Seagate Archive HDDs. I believe 8TB is the largest (I have two of them for backup purposes) and they are way cheaper per TB. Now, write performance *can* be very bad. I think I just had a worst-case situation due to a restructuring of my Server storage (which is backed up to the Archives) and managed only a 10MB/s write, but that is a very particular situation. A clean Archive HDD will get you anywhere between 60-180 MB/s write speed, and assuming most writes will be less than 20GB you should get to the higher end of that. They read like crazy though (for spinners). Where I buy my HDDs, a 10TB IronWolf will set you back EUR 439 whereas an 8TB Archive does 259. So 20TB of IronWolf comes in at EUR 878 while 24TB of Archives does EUR 777... (although there is currently a discount on the 8TB IronWolf at EUR 299). I would also suggest getting something like a hot-swap bay, something like this: http://www.icydock.com/goods.php?id=141. This allows you to back up critical data (like the OS HDD and non-DVD data) to removable HDDs, allowing you to rotate backups offsite. This way you have protection against accidental deletes, system failure, theft, fire etc.
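    A quick price-per-TB comparison with the prices quoted above, as a minimal sketch (prices are the ones mentioned here and will obviously drift):

        # EUR per TB for the drives mentioned above
        drives = {
            "IronWolf 10TB": (439, 10),
            "IronWolf 8TB (discounted)": (299, 8),
            "Archive 8TB": (259, 8),
        }
        for name, (eur, tb) in drives.items():
            print(f"{name}: {eur / tb:.1f} EUR/TB")
        # 2 x IronWolf 10TB = 20TB for EUR 878; 3 x Archive 8TB = 24TB for EUR 777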
  4. So I moved to 2.2.0.746, all good. I now have two unduplicated Pools, each a single 4TB HDD partitioned as 2x2TB, and a third Pool that consists of the two with duplication. Big upside for me: the issue where duplicates can be stored on the same physical HDD is now "solved" (well, worked around). Because this now allows me to continue to use 2TB volumes (given the limitation of WHS2011 Server Backup) with larger Pools, I can keep WHS2011 for quite some time. This is a real money saver for me. Thanks! I would have chosen a different implementation, though, as I now need three Pools to accomplish this. I would rather have had the option to define strings within a single Pool. However, the current implementation has the benefit that it uses, I would think, sort of the same code as opposed to something new, and the additional overhead (I assume that for each I/O, the service now needs to make three calls) is presumably very small anyway.
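    A minimal sketch of why this layout keeps duplicates on different physical HDDs (device names and capacities are just for illustration; the assumption is that the duplicated top Pool places the two copies of a file on different members, i.e. on different sub-Pools):

        # Each sub-Pool is one physical 4TB HDD split into two 2TB volumes
        pool1 = {"device": "HDD-A", "volumes_tb": [2, 2]}   # unduplicated
        pool2 = {"device": "HDD-B", "volumes_tb": [2, 2]}   # unduplicated

        cap1 = sum(pool1["volumes_tb"])
        cap2 = sum(pool2["volumes_tb"])

        # With x2 duplication on the top Pool, copy 1 lands somewhere in pool1
        # (HDD-A) and copy 2 somewhere in pool2 (HDD-B), so the two copies can
        # never end up on the same physical disk.
        usable_duplicated_tb = min(cap1, cap2)
        print(f"Usable x2-duplicated capacity: ~{usable_duplicated_tb} TB")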
  5. Best of luck Chris...Hope it'll be all history soon.
  6. All, thanks for your info, it has been most helpful. I have decided that Viagra is cheaper... I'll wait for a while and, at some stage (probably 2018-2020), go the Essentials route and not do the VM stuff. Just too complicated for me, I'm not motivated to educate myself on this, and it saves the expense. It will allow for a cheaper server. I'll test-drive on some old machine I have lying around and then use the existing i7-3770 machine, and that setup should last me another 10 years. In 4 years or so I'll simply buy a new, even more powerful lappy for work. The only real thing that does not work currently is a BMR of my lappy, given that WHS2011 is W7-based and does not support the newer USB standard, but I guess a small converter card would allow me to plug my NVMe SSD into a PCIe slot in an existing machine and BMR there.
  7. Ah, here it gets a bit tricky. Once you reconnect the remaining HDDs in step 7/8, DP should recognise the Pool, but it will also, I am pretty sure, notice that one HDD is missing. That one you have to remove via the UI (basically telling DP it will never get that HDD back). Assuming some files will be restored somehow, I would copy them from whatever device you get them on to the Pool (so to the drive letter that is your Pool). If you also have another HDD to use in the Pool, I would simply add it to the Pool via the UI. Basically, adding HDDs and adding the files are two different things (at least that's how I would do this). I use Samsung HDDs, not EVOs, but I guess many brands are just fine.
  8. No, the data that was stored on the other HDDs will still be there, uncorrupted. DP stores files in normal NTFS format. Single files are always stored on one HDD in their entirety. All you may have lost are the files that ended up on the OS+DP HDD. You can easily find them, assuming these HDDs have not failed of course, by attaching them to a working PC, setting Explorer to show hidden folders and diving into the hidden PoolPart folders. And perhaps some sort of data recovery can be applied to that HDD as well. Your hindsight is not 20/20, I am afraid. Even if you had not included that partition in the Pool, another HDD might have failed and you'd have had the same issue. Duplication and Scanner would have given some protection, but #9 really is key.
  9. No, your data is fine. I am assuming that the OS HDD that died was not part of the Pool in any way, BTW, otherwise the data there may be lost. But any and all data that resided on the other three is still there and very easily recoverable. First, you could connect the other three 2TB HDDs to another computer, set Windows Explorer to show hidden folders, and below the PoolPart.xxxx folders you will find your data (a small sketch for listing those files is below). DP stores files as regular NTFS so no worries. Further advice:
      1. Buy a new HDD
      2. Disconnect all other HDDs
      3. Insert the new HDD in the Server
      4. Install WHS2011
      5. Install DrivePool
      6. Turn off the Server
      7. Connect the disconnected HDDs
      8. Reboot
    DrivePool should recognise the pool, remeasure/rebalance and all should be well. Step 9: MAKE BACKUPS! WHS2011 Server Backup works great, and if you get, say, an Icy Dock hot-swap bay for one HDD, slam an 8TB Archive HDD in there and back the Server up once a day. Buy another 8TB HDD and swap them weekly, keeping one copy offsite.
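    The sketch mentioned above: a minimal example that lists what sits under the hidden PoolPart folder of a former pool member attached to a working Windows PC (the drive letter is just an example):

        # List pooled files on a single former pool member
        from pathlib import Path

        def list_pool_files(drive="E:/"):                     # "E:/" is an example drive letter
            for poolpart in Path(drive).glob("PoolPart.*"):   # hidden folder at the drive root
                for f in poolpart.rglob("*"):
                    if f.is_file():
                        # Path inside the pool = path relative to the PoolPart folder
                        print(f.relative_to(poolpart))

        list_pool_files("E:/")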
  10. FCOL, that is expensive (WS2016 Standard). Any chance I could run one WS2016 as the real server and another as the Work environment? So many considerations. I *think* I would want to have SQL Server run on the main server and connect to it from the Work VM. I would have to be able to RDP into both through the internet. I guess I could do *all* communication between the internet and my own network through a VPN by configuring the router? Anyway, I guess I can try a couple of times before I really migrate away from WHS2011 for client backups/file sharing/streaming. Did you install the Essentials Role? AD, another thing I have no clue about. Ideally, it would all be simple without me having to upgrade to a network/database/server/security administrator. WHS2011, in that sense, was almost perfect for me. Oh, and thanks for the input so far, it is very helpful to me. Edit: Do things get easier if I get a mainboard with 2 NICs? Edit2: And don't get me started on licensing, I don't get that at all (and I did try!). If I want to have a client PC backed up and be able to log on for file sharing, does that require a User CAL? How many have you got?
  11. This certainly helps, even if already a bit over my head (for now). Teamspeak? That has been ages for me, from when I did Red Alert II and Fighter Ace, LOL. Anyway, could a single program running on the "main" Server connect to a VPN (to have some private transfers, you know...)? Now, mostly I would want to RDP into the "work" VM, but sometimes I may need to RDP into the Server. How does that work? Port forwarding on the router and use a specific port for the Server? What would I need to do in the Server to make it recognise that port (or does the router translate the external port to an internal port at the internal Server IP address)? Will everything from the VM be stored in a .vhdx file, or can it just access the HW that I assign to it? Could the VM log on to the Server and/or a SQL Server instance running on the Server as opposed to the VM? How do I back up the VM? Is it a "client" on which I could install a Connector and have client backups, or is it part of the Server Backup as a whole? Oh, and you run Standard? Is that because Essentials does not support VMs? And I guess I need a W10 license for the work VM? How does that work? Install a spare W7 instance and then upgrade for free to W10? As you can see, no clue... But I am looking forward to perhaps running a Zen-based 16C/32T server. Building it will be so much easier than actually installing OS/SW, right?
  12. So DPCMD on the POP (Pool-of-Pools) will show two devices which are in fact Pool1 and Pool2? Not the actual underlying physical devices?
  13. So I know this is not the forum for it, but just briefly if you don't mind: You have a VM that runs (a) your work environment and (b) a server? Or separate machines? I have built PCs and know my way around them somewhat, but virtualization and networking (VPN, proxies and the like) are simply not my forte.
  14. Really OT, but I know some here used to visit there as well. Has Terry Walsh ditched the forums he had on wegotserved? If so, what is a good alternative? Sometime this year I will build a new server running WS2016 Essentials and, as I have got no real clue about VMs, I need some advice. Basically I want to build a big-ass Server for client backups and file sharing and, running on the same HW but isolated wrt access, a virtual desktop. The idea is that I can then someday buy a very light laptop that would basically just be a VDI client(?) and actually run my work things on that VM via RDP (but without any danger of damaging the actual Server part of the Server).
  15. And how does DPCMD behave when you want to get a dump of the MOAP (Mother Of All Pools)?
  16. Wow! That you guys are working on this really makes me happier. Been waiting for over 1.5 yrs now, I think. From the changes.txt, I gather that I could do something like this:
      1. Create Pool1 from HDD1 & HDD2 - no duplication
      2. Create Pool2 from HDD3 & HDD4 - no duplication
      3. Create Pool3 from Pool1 & Pool2 - x2 duplication?
    Could I also add HDDs to Pool3? It would seem weird, but it might be nice for SSD caches. Although I think I'd rather add an SSD cache to Pool1 and Pool2 separately. Had to laugh about "Circular pooling is not allowed (e.g. Pool A -> Pool B AND Pool B -> Pool A). Contrary to popular belief, this does not lead to infinite disk space."
  17. I am actually not clear on this myself. I use the remove option from DP itself and leave the boxes cleared. I do increase the I/O priority by pressing the (small) arrows to the right of the Pool organisation bar. IIUC, it is not the fastest way to do it but so far it has always worked for me.
  18. Actually, I want to know too! Q appears before P with me. This has been bugging me for a long time now but, you know, I just thought it was too small a thing to bother someone with. Thanks!
  19. Umfriend

    I'm at a loss

    Just a quick question: have you looked at the Event Viewer to see if there are any IDE/ATAPI/DISK-type errors? That might point to HW issues in the server aside from the HDDs, such as the controller?
  20. Ouch! Get rid of them soon. And don't make new ones.
  21. Well, this is not so much about measuring anymore but about having both duplicates of files stored on the same physical HDD, and I am just wondering whether there is an older version of DP that does not have this issue.
  22. What OS are you running? I do use Windows Server Backup and that _can_ work, provided you back up the underlying HDDs/partitions/volumes. And yes, barring some special circumstances or StableBit implementing Strings/Groups, that likely means you'll be backing up duplicates if they are in the Pool.
  23. I doubt it. The pool consists of: Device 5: G:\ and H:\ (and recognised as such by Scanner); Device 3: I:\ and J:\ (and recognised as such by Scanner). It is Device 3 that has both duplicates of some files (and this is reported by DPCMD as well, without it being flagged as an error). The Drive Usage Limiter lists both Device 3 partitions as one / provides one set of checkboxes (Duplicated and Unduplicated). It is Device 5 for which the Drive Usage Limiter provides checkboxes for both partitions. What is the order by which DPCMD is supposed to list files? It appears to be mostly alphabetical after the PoolPart folder, but at least some of the erroneously placed files are listed out of that alphabetical order. Should I revert to a 2.1 version? That one has measurement issues and, if I am not mistaken, does not support DPCMD list-pool-file-parts or some such...?
  24. No problem at all. Just restart, it'll be fine.
  25. Hi. So I had "solved" this by only using one 2TB partition of each 4TB HDD and backing up all HDDs in the Pool bar one. However, over time this became very cumbersome, and I recently split off Client Computer Backups onto a separate Pool consisting of 2 x 4TB HDDs formatted as 4 x 2TB partitions. I am running WHS2011 and DP 2.2.0.738 with x2 Pool File Duplication. Initially, files were distributed correctly, i.e., each duplicate was placed on the other actual HDD. But now some duplicates are stored on two partitions of the same HDD. All settings are default (no file placement rules; the balancers are StableBit Scanner, Volume Equalization, Drive Usage Limiter, Prevent Drive Overfill and Duplication Space Optimizer). What I have noticed is that DP seeks to equalise either the amount of data stored or the amount of data free on each partition (which in this case is the same). Two partitions (G:\ and H:\) on Device 5 have a SIV folder with quite a bit of data (as a result of Server Backup running on these, I guess). I am just wondering whether one of the Balancers, I suspect the Duplication Space Optimizer, is not following the "no duplicates on 1 physical HDD" rule? Perhaps that makes it easier to solve and needs no/less in-depth code audit? I would be willing to simply turn the Duplication Space Optimizer off, but I doubt whether it will help. DPCMD does list the duplicates as being stored on the same physical HDD but does not indicate that this is an error/problem, so I am not sure DP even realises it is one (just did a remeasure and rebalance, no effect). Another thing I notice that is weird is that in the Drive Usage Limiter, volumes G:\ and H:\ (together Device 5) are listed separately (they have their own Duplicated/Unduplicated checkboxes) whereas volumes I:\ and J:\ (together Device 3) are combined and have one checkbox. It is Device 3, containing I:\ and J:\, that has both duplicates stored on it (a small sketch for checking this is below).
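    To check this without waiting on DP/DPCMD to flag it, a minimal sketch; the volume-to-device mapping mirrors the Scanner report above, while the two example files and their locations are made up and would be filled in by hand (e.g. from a dpcmd list-pool-file-parts dump):

        # Flag files whose duplicates both sit on partitions of the same physical disk
        volume_to_device = {                 # as reported by Scanner
            "G": "Device 5", "H": "Device 5",
            "I": "Device 3", "J": "Device 3",
        }
        file_parts = {                       # file -> volumes holding a copy (transcribed by hand)
            r"ClientBackups\example1.dat": ["I", "J"],   # both copies on Device 3 -> the problem above
            r"ClientBackups\example2.dat": ["G", "I"],   # copies on different devices -> fine
        }
        for path, volumes in file_parts.items():
            devices = {volume_to_device[v] for v in volumes}
            if len(volumes) > 1 and len(devices) == 1:
                print(f"WARNING: all duplicates of {path} are on {devices.pop()}")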