Everything posted by Umfriend

  1. Then again, the OP apparently can build a rough, simple version in just a week, so perhaps a second developer can be hired soon?
  2. I think Acronis uses VSS and DP does not support VSS. I think you must resort to backing up the underlying disks. (Old link to Acronis, it may be different now; you might try googling or asking them directly. If they use VSS, it won't back up the Pool as such. https://kb.acronis.com/content/1681)
  3. If you are using a DP Pool, would you not access files through the DP Pool drive letter? That would not include the PoolPart.* part of the path. Also, whenever I move shared folders, I typically use the move-folder wizard in the Dashboard. It has never given me an issue with file name lengths, permissions, links etc. Chris will be able to say more/better, but AFAIK DP uses UNC paths, so the maximum path length is about 32K characters(!). Windows Explorer, however, I believe has a roughly 260 character limit. So DP handles long paths well, but when you access the files directly on a HDD that is part of the Pool, with Explorer for instance, that ~260 character limit may become an issue sooner because of the extra characters added by the PoolPart.* folder.
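    For anyone who wants to check in advance, here is a minimal Python sketch (the PoolPart folder name below is just a made-up example, point it at the real hidden PoolPart.* folder on your disk) that walks a pooled disk and flags full paths longer than the ~260 character limit Explorer tends to enforce:

    import os

    # Hypothetical example: the hidden PoolPart folder on one of the pooled drives.
    POOLPART = r"D:\PoolPart.00000000-0000-0000-0000-000000000000"
    LIMIT = 260  # the classic Windows MAX_PATH limit that Explorer roughly enforces

    # Walk the folder and report any full paths that exceed the limit.
    for root, dirs, files in os.walk(POOLPART):
        for name in files:
            full = os.path.join(root, name)
            if len(full) > LIMIT:
                print(len(full), full)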
  4. Hmmm, I am not sure how exactly, as I haven't used DP 1.x for ages (you *can* run DP 2.x on WHS2011, I do too), but it should be possible. Somehow you should be able to access the settings for Balancers and look up the Drive Usage Limiter. There you get a list of HDDs that are used by the Pool and you can check/uncheck Duplicated/Unduplicated files. If you uncheck Unduplicated for the HDDs where you don't want them, you are all set. I guess you need to force a run of the balancer though.
  5. Umfriend

    Drive size

    There is one tweak I know of that roughly works most of the time: divide the numbers by 2 in your head. The thing is, so much can happen with the HDDs in a Pool that it is virtually impossible to do what you want. For instance, you could store data on one of the two HDDs outside of the Pool; you would then lose twice as much duplicated Pool space as the raw numbers suggest. DP *could* perhaps report the free space of the HDD with the least space free, but that becomes irrelevant when there are more than two HDDs. In short, the circumstances where it would actually work are very limited. But yeah, I would like it too, if I just thought it was sensibly feasible.
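    For illustration, a tiny Python sketch of that back-of-the-envelope math (the free-space figures are made up): the naive estimate halves total free space, while with two disks the fuller disk is the real limit.

    # Illustrative only: free space per pooled disk in GB (made-up numbers).
    free_per_disk = {"Disk1": 800, "Disk2": 500}

    # Naive estimate for x2 duplication: halve the total free space.
    naive = sum(free_per_disk.values()) / 2

    # With 2 disks, both copies need room, so the fuller disk is the real limit.
    bound = min(free_per_disk.values())

    print(f"naive estimate: {naive:.0f} GB, actual limit with two disks: {bound} GB")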
  6. I think the main issue is that there is only one developer, but I could be wrong. So the first feature of choice would have to be cloning (of the developer, not the data).
  7. I think the issue is that Pool 1 can have data that is not in Pool 3. What you want to do, but test with a few files first, is *move* files from Pool 1 to Pool 3. I think you may have two PoolPart folders on HDD 1 now: an outer one for Pool 1 and, within that folder, another one for Pool 3. You might manage quicker by moving files from one PoolPart folder (being Pool 1) to the other (being Pool 3 on HDD 1). Not sure how that works if you have shares etc.; those may need re-establishing. The thing is, you *can* store files in Pool 3 *and* Pool 1 side by side, each with their own rules. I'd advise against it, but it can be done.
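    If you do script that move, something like this minimal Python sketch could work (the PoolPart folder names and the share name are hypothetical; as said above, test it on a few files first):

    import os
    import shutil

    # Hypothetical paths: the outer PoolPart belongs to Pool 1, the nested one to Pool 3.
    SRC = r"D:\PoolPart.AAAA\SomeShare"                 # files currently in Pool 1
    DST = r"D:\PoolPart.AAAA\PoolPart.BBBB\SomeShare"   # same disk, but inside Pool 3

    # Move files while preserving the relative folder structure.
    for root, dirs, files in os.walk(SRC):
        rel = os.path.relpath(root, SRC)
        target_dir = os.path.join(DST, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            shutil.move(os.path.join(root, name), os.path.join(target_dir, name))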
  8. Perhaps. Of course, perfect code never fails, but perfect code is very hard to come by. If the core code is stable and lean and this is an optional add-in or some such then, sure, I would have no issue with it. But as I understand it, it would require a lot of work at the driver level itself.
  9. But I am a customer too and I am not sure I'd want (the risks associated with) this.
  10. Sure. I should actually read up on the conditions under which a Pool goes (and stays) read-only, because one of the purposes of duplication, for me, is to have continuous service without immediately requiring intervention.
  11. Yes (and sorry, OP, for hijacking), Lee, but the thing is, if the hot-standby would allow re-duplication, then that would also have been possible had the hot-standby HDD already been part of the Pool... My thinking was more that you could have one hot-standby HDD to cover for the first HDD to fail in *any* number of Pools. But then again, aside from hierarchical Pools, there is no real reason to have many Pools I think (although, in my case, 2 HDDs go to standby often and for long periods as they are only used for client backups).
  12. Hi Lee, I can actually see why one would think it is not necessary as long as there is enough space left on the HDDs remaining in the Pool; in that case it wouldn't be very useful IMHO. But in the specific case of the OP (and in my case, where I force Pools to store one copy on each HDD in the Pool) it might be helpful. I am still on the fence on this one. In my case, for instance, it is driven by the desire to use Server Backup for the entire server.
  13. I have been wondering about similar scenarios. If you use Scanner and the Scanner balancer in DP (which is used by default), then I would think that if you have 4 HDDs and use only x3 duplication, it would never go into read-only mode for long after a 1-HDD failure (not sure though), just until the one defective HDD is evacuated. Similarly, using x2 duplication would make a 4-HDD Pool (with just 1 HDD's worth of net unduplicated content) rather resilient. But yes, recovery in another machine would require the friend to access all HDDs. I never requested it, but I have considered asking for DP to allow designating spare HDDs or hot-standbys to replace failed HDDs on the spot.
  14. Uhm, I actually want DP to be as simple and fast as possible and would prefer not to have fancy placement stuff in there. But then, I consider the File Placement Rules evil as well. My fear is that as DP does more, it becomes more vulnerable to bugs and user mishaps. I wonder (but I may well be off here) whether using separate HDDs and then defining libraries in Windows would not be a more logical solution for the function suggested here.

    One of my main concerns is that (and I have looked briefly at the UnRAID link provided above) either DP cannot guarantee placement as expected, causing issues when one expects to recover easily, or it must check and produce error reports. Say, for instance, that you want a certain folder on one HDD, the folder size is 3TB and the biggest HDD is 2TB. What should it do? It may spill over to other drives (this is what happens with File Placement Rules if I am not mistaken), but unless this is reported clearly, a user may still be as badly off as with random placement (where the user at least knows placement is "random"). If it is reported clearly, then the user must still micromanage. DP may also say there is not enough space, but you might have a 10TB Pool with 5TB free and still run into this. Another concern is that DP would have to become more context-aware when placing files or balancing and, when free space becomes limited, may need to reshuffle entire folders to make room for an addition to another. The UnRAID documentation alludes to such issues as well with their implementation of split levels (and as far as I can tell it does not do any reshuffling at all but errors out on free space even if, in the aggregate, a lot is available).

    I prefer DP to offer a single virtual drive and manage placement as it sees fit (use the least occupied HDD or fill all to an equal percentage), run as unattended as possible and, if you have duplication, recover by itself (for instance if you have a hierarchical Pool with one base Pool being Amazon Cloud Drive Unlimited on a 500 Mbit connection and local HDDs as the other base Pool). If it becomes bloated with user options and suffers in stability, that would, IMHO, be a bad thing. Anyway, just my $0.02 worth.
  15. Chris would know, but I don't think this is easily done, if it is possible at all. You *might* consider creating a 2nd Pool for duplicated data only, and it would have to be, I would think, a hierarchical Pool, e.g. Pool A: not duplicated, say the three 3TB HDDs; Pool B: not duplicated, the 8TB HDD; and Pool C: duplicated, consisting of Pool A and Pool B. You might be able to get what you want with file placement rules, but I never understood those well and believe them to be evil.
  16. This is a storage / file sharing server, right? Way overpowered. But it will last you for years, YEARS, and you might do other things with it as well. One thing: I think you selected internal memory consisting of one stick only. You can do this, but it is better to have two sticks of half the size, so 2 x 8GB. Very often you can find kits or pairs of 2 x 4GB or 2 x 8GB etc. exactly for this reason.

    If budget is not really an issue, go for it. Otherwise, you can do it far cheaper. For instance, there is no reason to have a Z170 MB. Why not look at the ASUS PRIME B250M-K and comparables? About $80 I'd think, and it has what you need I think. An i3-7100 would save tens of dollars. The case is very nice but, depending on where you would place it, cooling and sound may not be that much of an issue; it's not like this one is going to use a lot of power anyway. I would consider either a cheap case or a very expensive one. And I'm not sure why 8GB; 4GB might do very well and 8GB definitely will (2 x 4GB!). On the PSU I am unsure. It seems overdimensioned and I would think you can get by for some $60.

    I am confused about using DP with two Pools and (only) two HDDs? You definitely want to connect by cable unless you simply can't. I would not know of any reason to have WiFi on the Server.
  17. You could use an HBA card, something like this for instance: https://www.newegg.com/Product/Product.aspx?Item=9SIACSJ57G0575&cm_re=HBA-_-14G-000B-00067-_-Product I don't know the ins and outs, but it should give you ports for an additional 8 HDDs on top of the SATA ports your MB has.
  18. AFAIK, the Duplicated setting in the Drive Usage Limiter does not ensure that *one* of the duplicates goes to such a drive and the other copy to the drives marked for Unduplicated data. As both are duplicates, neither of the two copies is unduplicated, so neither would end up on the unduplicated drives. Not sure which version of DP you run, but the later betas allow Pool hierarchies. I use that currently. So I have 2 Pools, each consisting of actual HDDs, and these are unduplicated. Then I created a 3rd Pool (The Mother Of All Pools, as I call it) that consists of the 2 existing Pools, and this one is duplicated. This does exactly what you are looking for (in my case, ensuring one copy on one separate set of HDDs and the other copy on another separate set of HDDs which, in your case, might be the Google Drive). I *think* in your case you could do: 1. Create a Pool with the actual HDDs, unduplicated. 2. Create a Pool consisting of the Pool from (1) and the CloudDrive, duplicated. DP will then hold one copy in Pool (1) and one on the CloudDrive.
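    To put rough numbers on that layout, here is a small Python sketch (the capacities are made up; it simply assumes that, as described above, x2 duplication on the top-level Pool puts one copy in each branch):

    # Made-up capacities in TB for the two branches of the top-level Pool.
    local_pool = 3 * 3      # e.g. three 3TB HDDs pooled together, unduplicated
    cloud_drive = 8         # e.g. one 8TB CloudDrive

    # With x2 duplication on the top-level Pool, one copy lands in each branch,
    # so the usable duplicated capacity is limited by the smaller branch.
    usable = min(local_pool, cloud_drive)
    print(f"roughly {usable} TB of duplicated storage")  # -> roughly 8 TB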
  19. Oh! That is absolutely great, thanks for the link. "When temporarily not in use, multiple LackRacks can be stacked in a space-efficient way without disassembly, unlike competing 19" server racks." "Due to their light weight design, Lackracks will grow to any required size without compromise" "LackRack Enterprise Edition" ROFLMAO Edit: Whoever wrote that must be a funny guy/gal (while at the same time being very informative): "If you mount the first item, it is recommended to install it against the table top for good fit. This happens automatically if you have the LackRack upside down, except in zero gravity environments." I am very much tempted to do this due to that article alone.
  20. I had never considered defcon's suggestion, but boy, does it make sense! It's just that I am not interested in a rack itself, and these are horizontally oriented where I think I'd really like a vertical solution. But if you have some spare space in a dark corner somewhere (which also helps with any noise issue), then this may be a vastly superior solution. Sure, it is 2nd gen / Sandy Bridge, but so is the still popular i7-2600K. Here you get 2 x 6 cores, that is like 24 threads, at 2.2GHz!!! And at 80W TDP, cooling can be rather quiet. Kickass server platform. I'll definitely think about it.
  21. So I went looking again and depending on the use case, this might be interesting: http://www.silverstonetek.com/product.php?pid=709 8 x 3.5" hot swap bays and given the 2 x 5.25" external openings, you could get either another 3 x 3.5" or 2 x 3.5" + 2.5". This might be my next case when I go WSE 2016 in a couple of years.
  22. Actually, I do not have that much storage; compared to most here I am a midget. My Server storage layout is as follows:

    1. 250 GB SSD for the OS.
    2. One Pool, duplicated, 2 x 2TB HDDs - these are for all the file sharing stuff.
    3. One Pool, duplicated, 2 x 4TB HDDs - these are for the Client Backups.
    4. One 750GB HDD for media that is unimportant.
    5. One 2TB HDD as a spare in case one of the others fails (which will not help if one of the 4TB drives fails).
    6. Two 8TB HDDs for Server Backups; one is always offsite, typically rotated weekly.

    I am never *that* concerned about power supplies, actually. AFAICS, that is only important if you want to run *many* drives, especially as they may require quite a bit of power at the same time when starting up, I think. I have a decent 350W. Should you go for 10-15 HDDs, then I think a 450-500W PSU might be worthwhile.

    As for the case, this is an issue for me. I want the HDDs to be easily accessible. So I bought a http://www.icydock.com/goods.php?id=155 which allows for 5 x 3.5" hot-swap bays (but I would recommend the tray-less variant, fool that I was to buy this one) and something like this http://www.icydock.com/goods.php?id=141 for a 6th 3.5" slot (which is used for the Server Backup HDDs). For this, you need a case with 4(!) external 5.25" slots. There are not many nowadays that offer that and I am actually using a very, very old case (2003?). But I would like to go a little bit bigger, and something like this http://www.silverstonetek.com/raven/products/index.php?model=rv03&area=en might work for 2 x 5 HDDs for storage (in my case I would have Pools of 5 HDDs with enough spare room to have Scanner/DP take one or two offline in case of issues), plus 1 HDD for Server Backup and one 2.5" for unimportant media (not duplicated, not backed up; I use old laptop HDDs for this). However, it is rather expensive: a case plus three conversion kits. I'd rather find a case with hot-swap bays in place. Unfortunately, I have not found them aside from rack-mounts, but that is another ballpark entirely for me and not that cheap either... Also, what I am still missing is a UPS, which is actually a requirement IMHO.

    But yes, it starts with: how much storage do you need and what do you want the Server to do? Next, given how much storage you need, does it have to be accessible through hot-swap bays or is it OK to open the case when you need to replace/upgrade a drive? In the latter case, it becomes easier and cheaper, except for the one backup HDD should you intend to make off-site backups (which I *highly* recommend). After we know that, it becomes easier to make suggestions, given a budget.
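    As a rough illustration of why the drive count matters for the PSU, a small Python sketch (the per-drive spin-up figure and the system overhead are assumed ballpark numbers, not measurements):

    # Assumed ballpark: a 3.5" HDD can briefly draw on the order of 20-30W while spinning up.
    spinup_watts_per_hdd = 25    # assumption, not a measured value
    hdd_count = 15
    base_system_watts = 100      # assumed CPU/MB/idle overhead for a modest build

    peak = base_system_watts + hdd_count * spinup_watts_per_hdd
    print(f"rough worst-case peak: {peak} W")  # -> rough worst-case peak: 475 W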
  23. I second the last sentence: "Have fun and take your time"! Indeed, budget and use case are decisive (and sometimes mutually exclusive). Now, *if* budget is not that much of a deal, you might want to consider building a server with a server OS, such as Windows Server 2016 Essentials. It is Windows, it will run DP (and Scanner and, if you are so inclined, CloudDrive). I run Windows Home Server 2011 on an overpowered i7-3770. Here's what it does:

    - File sharing. My kids each have their own storage on the server and their PCs/lappys can be thin (not that they are, ugh, another story). My wife has a small business and one employee; they also have their own storage on the server.
    - DP & Scanner. I do use duplication so that, in case a HDD fails, there is no need for a restore of (part of) the Server. It is a continuity/uptime consideration.
    - Client backups. This is absolutely great IMHO. Should a client fail, it can be restored rather easily. It stores multiple backups, and either a full restore or single-file recovery can be done. It supports iMacs AFAIK, but not (yet) Sierra.
    - A downloading client that we'll not discuss.
    - Serviio DLNA media server. I can stream to my SACD/Blu-ray player, TV and mobile devices (using GinkgoDlna and VLC, for instance).
    - Server Backup. This last one is my main benefit. The Server backs itself up, including file shares and client backups. I rotate backup disks offsite. So assume the house burns down and all PCs/lappys and the Server itself are lost. In that case, I get the offsite server backup disk, build a new server and restore that backup. From that I can then restore the clients.

    Now, many here have huge Pools, tens of disks, but if you're looking for something smaller and are inclined to go this way, I could give you some pointers. Having said that, WSE 2016 sets you back, I think, about USD 400... I think you can try WSE 2016 for 180 days at no cost (other than quite some time and a PC to install it on). As for hardware, aside from disks and the case, this could all easily be run on a 2nd gen i5 or a small Ryzen. Heck, my first one ran on a Celeron G530 and that worked fine as well.
  24. I do not consider DP a backup. It is a redundancy tool, which enhances uptime. The cool thing, for me, is that should a drive fail, the Server will still be up, and recovery, as in re-duplication, is done easily, which saves a lot of time compared to a recovery from a backup. But if the files are important (and surely some are), then a real backup is essential IMHO. I don't know SyncToy or other backup tools; they may be suitable indeed. Wrt the faulty file: if two copies are written correctly and the file then gets corrupted somehow on one HDD, the corruption will not automatically replicate to the other copy, I would think. The question would really be which copy you retrieve once you try to load the file. Moreover, if you couple DP with Scanner, that partly mitigates such a circumstance, should it occur (which I think is extremely rare). But Chris should be able to provide more info.