Covecube Inc.

Umfriend

Members
  • Content Count: 935
  • Joined
  • Last visited
  • Days Won: 48
Everything posted by Umfriend

  1. You might want to raise a ticket for support. AFAIK, this should not happen. When I copy a large file to my Pool (say 30GB), I get close to the 1GbE limit. Granted, it is a smaller Pool, but it is also Hierarchical (so I would suffer any DP overhead roughly twice) and duplicated to two subpools, each with 3 HDDs, no SSD Cache.
  2. So, thanks to this forum I got me an IBM M1015 (actually, a Dell H310, same difference) that has been working out great for quite a few years now. But now I am wondering whether there is a 4/16-port SAS/SATA card with a wider and more recent PCIe bus that works well as an HBA and is cheap 2nd hand. Basically, is there an improved alternative to the IBM M1015 with a relatively similar value for money?
  3. LOL, I remember using only like 50% of an HDD (so just one 50% volume on the HDD, the rest not used/allocated) to get better seek times. Made a real difference in synthetics. IRL? Not so much IMHO. On workload, AFAIK, the stated limit on (or better, guaranteed) use of the IronWolves is 180TB/year, but that is in writes; Scanner only reads (see the back-of-the-envelope sketch after this list).
  4. Well, it is just speculation on my part, but I really doubt a full format is the same as a Scanner run. The difference being that a format will actually try to write (and report when it fails) while Scanner "just" tries to read.
  5. With DP, we don't talk about an original and copies. You can have a single instance of each file, that is x1. With x2, you have two duplicates (two equal instances, neither is "the" original). So if you want two copies of each file, x2 is the way to go.
  6. So the thing is, I do not know whether this may be relevant to OP. However, the theory is refuted by practice, as can be seen in the thread I linked. It wasn't just me. And not with empty folders either. Has something been done since January 2020 to address this? If so, then I may be wrong now (but wasn't then).
  7. If the software uses/relies on timestamps of folders as well, then this might be the problem: with x2 duplication, a folder may have a different Date Modified on the two disks, and any software querying the Pool will only get one of them (a small sketch for checking this is after this list).
  8. There must be some sort of log file showing a bit more info. Small chance you'll learn anything, but you might want to take a look at Event Viewer, both on the client and on the server.
  9. Another thing I have is that if I copy files from a client to the Server over the network, it matters whether I access the Server through Explorer -> Network or through a mapped network drive. The latter sometimes fails, but I am pretty sure it has to do with some sort of permission (SQL Server backup files I can not copy through the mapped network drive) and I get a different message anyway. So, completely OT. So basically, I have no clue. I hope someone else here has an idea on how to diagnose and/or fix this. I would have a look at Event Viewer on both the client and the server. Not optimistic.
  10. The only thing I can think of is Manage Pool -> Performance -> Network I/O Boost (even though it says it relates to read requests, not write requests).
  11. So there is a Pool that consists of only one 4TB drive? If so, then yes, you can shut down, power down, remove the 4TB drive and connect it again when you are done. In fact, you could install DP on another machine and connect that 4TB drive, and DP should recognize it as a Pool.
  12. Maybe another ticket needs to be opened because 28526 does not address the resulting balancing behavior (it explains it but does nothing to solve it), perhaps as a feature request? Or will this not be addressed at all?
  13. Whether it makes sense or not, a higher Pool does not inherit duplication settings from a lower Pool. That might make it harder. I even think a higher Pool is rather restricted in what it can read from a lower Pool, e.g., checking the duplication status. But we'll see what Covecube says.
  14. I fear this may be hard to address actually. Whenever the Top Pool (consisting of Pool A and Pool B) starts to rebalance, it would have to take into account: (a) the use-gap between Pool A and Pool B; (b) then, for any file that is a candidate for move (CFM), check what the Use Of Space (UOS) is, i.e., what the duplication status is (if, for instance, you use folder level duplication, that file may have 1 or a gazillion duplicates); (c) then, for any such CFM, determine what the UOS would be after the move (a rough sketch of this bookkeeping is after this list). Again, the relevant duplication may be rather arbitrary. The real issue here m
  15. Could you take a look at Disks 6 and 9? I suspect you will find a single PoolPart.* folder in the root and that within it, you'll find two PoolPart.* folders (on each disk). Is that correct? If it is, then I get it.
  16. @zeroibis Whatever works for you. To me, it seems like a lot of administrative hassle for a remote, really remote, probability that, assuming you have backups, can be recovered from one way or the other. As I said, my setup never requires a 20TB write operation to recover 10TB of data. It basically resembles your 3rd example, although I would have Drives 1 to 4 as Pool 1, Drives 5 to 8 as Pool 2 and then have Pool 3 consist of Pools 1 and 2. But I agree that if two drives fail, you need no restore unless the two drives are one whole 2-drive Pool. In my case, I would need to restore some
  17. First of all, DP does not require drive letters. Some here have over 40 drives in a Pool. For individual access, you can map the drives to a folder. On the backup/restore thing: yes. However, my setup does not require backing up duplicated data. I have two x1 Pools and then one x2 Pool consisting of the two x1 Pools (it is called Hierarchical Pooling). Each x1 Pool has three HDDs. I only need to back up one set of three drives. Now with 4 (or 6 in my case) drives, the probability of losing two drives at the same time (or at least losing the 2nd prior to recovery of the first failure; a rough calculation is sketched after this list) is rea
  18. I don't know the ins and outs, but some of this stands to reason. So you have Pool X (x2, consisting of X1 and X2), Pool Y (x2) and Pool Z (x1) consisting of Pools X and Y. I think the issue is that when DP finds it needs to move 100MB from X to Y, it will select 100MB, measured at x1, of files to move, but it must then move both duplicates (the arithmetic is sketched after this list). DP will not be able to move just one duplicate from X to Y because then the duplication requirement would not be met on either X or Y. Why you get a substantial "Other" I am not sure, but it may be because, from the perspective of Pool Z, the files are in
  19. Also, what HDDs are being written to? Just wondering if you have some SMR drives being written to (although, even then, 3 days seems like a very, very long time).
  20. No worries, it was new to me too (and I still don't know that much about it). So, a cheap SAS card, like the IBM M1015 (and there are many that are called differently but are the same; I have a Dell Perc H310 SAS controller which, I am pretty sure, is the exact same card). Anyway, the cheaper SAS cards do have two ports. However, those are SAS ports. There are cables that allow you to connect up to four SATA drives to just one SAS port. For example: https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout/dp/B012BPLYJC. If you look at the picture you see one weird connector, tha
  21. Hi Newbie! ;D DP needs very little; I had it running on an Intel Celeron G530 and could stream 1080p to at least one device. So a cheap build with, say, a Ryzen 3 3200G, 8GB of RAM, a decent 350W PSU and W10 would work like a charm as a file/stream server. The thing you'd probably look for is SATA connectors (cheap boards often have only 4), although you could get a cheap SAS card (IBM M1015 or somesuch, used) which would provide plenty of expandability. The other thing is the network connection. 1Gb Ethernet, I think, should be "enough for anybody". It is a bit of a bad time as C
  22. My storage needs don't really grow, so I am just sticking with what I have, anything between 1.7 and 5.5 years old (6 drives). I bought them spread out over time. Mostly Toshiba and HGST, as I had a few issues with Seagate and WD years ago. Only 2 HDDs are of the same type and purchase date, so those are in separate Pools. As long as they don't fail, I'll run them until they do.
  23. You can have two PoolPart.* folders on one drive at one folder level (e.g. the root), but that is a symptom of something having gone a bit wrong. Long story. Only one of them would be the real, currently active PoolPart.* folder. A PoolPart.* folder can in fact contain yet another PoolPart.* folder in the case where you use Hierarchical Pools (a small scanning sketch is after this list). But mostly, no, normally a drive has one PoolPart.* folder in the root. You're most welcome.
  24. It's a bit of a long shot, but perhaps on the E and F drives the PoolPart.* folder is not hidden, while on the D and G drives it is? You can set Windows Explorer to show hidden files and folders (or check the attribute programmatically, as sketched after this list).
  25. I am just guessing, but as this appears to relate to one file, perhaps use the "Automatically resolve [...]" option in the first screen? Alternatively, seek out the individual copies and delete them manually, then recheck and/or rebalance?
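
A back-of-the-envelope for post 3, as a minimal Python sketch. The drive size and scan frequency are assumptions, not IronWolf specifics; the point is just that periodic full-surface scans generate read volume, which post 3 argues is not what the 180TB/year workload rating is about:

    # Assumed numbers: an 8TB drive, surface-scanned in full once a month.
    drive_tb = 8
    scans_per_year = 12
    scan_reads_tb = drive_tb * scans_per_year  # 96 TB/year of reads
    workload_rating_tb = 180                   # stated IronWolf workload, TB/year
    print(f"Scanner reads ~{scan_reads_tb} TB/year vs a {workload_rating_tb} TB/year rating")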
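For post 7, a minimal sketch of how one could verify the timestamp theory. The PoolPart.* names and the folder path are hypothetical; the idea is to compare the Date Modified of the same pool folder as stored on each underlying disk:

    import os
    from datetime import datetime

    # Hypothetical PoolPart.* folders on two pooled disks; adjust to your setup.
    poolparts = ["D:\\PoolPart.aaaa", "E:\\PoolPart.bbbb"]
    relative_folder = "Media\\Photos"  # the folder as it appears inside the Pool

    for pp in poolparts:
        full = os.path.join(pp, relative_folder)
        if os.path.isdir(full):
            mtime = datetime.fromtimestamp(os.path.getmtime(full))
            print(f"{full}: Date Modified {mtime}")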
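For post 14, a speculative sketch (the post itself is speculation) of the bookkeeping a balancer would need: each candidate-for-move (CFM) costs its size times its duplication level in Use Of Space (UOS), and moving it shifts the use-gap by twice that UOS. All names and numbers are made up for illustration:

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        size_gb: float
        duplication: int  # per-folder duplication level of this file

    def uos_gb(c: Candidate) -> float:
        # Use Of Space: the file's real footprint, duplicates included.
        return c.size_gb * c.duplication

    def gap_after_move(gap_gb: float, c: Candidate) -> float:
        # Moving a CFM removes its UOS from the fuller pool and adds it to
        # the emptier one, so the use-gap shrinks by twice the UOS.
        return gap_gb - 2 * uos_gb(c)

    gap = 120.0  # hypothetical use-gap between Pool A and Pool B, in GB
    for c in (Candidate("movie.mkv", 30, 2), Candidate("doc.pdf", 0.001, 1)):
        print(f"{c.name}: UOS {uos_gb(c)} GB, gap after move {gap_after_move(gap, c)} GB")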
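For post 17, a rough probability calculation. The annual failure rate and the recovery window are pure assumptions; the sketch only illustrates why a second failure before recovery of the first is a remote event:

    # Assumed: 2% annual failure rate (AFR) per drive, 3 drives per x1 Pool,
    # and one week to replace a failed drive and restore from backup.
    afr = 0.02
    drives = 3
    window_years = 7 / 365

    p_first = 1 - (1 - afr) ** drives                        # any first failure in a year
    p_second = 1 - (1 - afr * window_years) ** (drives - 1)  # another failure in the window
    print(f"P(a drive fails this year):           {p_first:.4f}")
    print(f"P(second failure inside the window):  {p_second:.6f}")
    print(f"P(both, i.e. data at risk this year): {p_first * p_second:.8f}")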
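Post 18's arithmetic in a few lines, with made-up numbers: the balancer picks files measured at x1, but both duplicates travel together:

    selected_mb = 100  # what DP decided to move, measured at x1
    duplication = 2    # x2: both duplicates must move together
    print(f"Selected {selected_mb} MB, actually transferred {selected_mb * duplication} MB")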
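For post 23, a small sketch that walks a drive and prints every PoolPart.* folder, recursing into each one found so a nested (Hierarchical Pools) layout shows up indented. The drive letters are hypothetical:

    import os

    def find_poolparts(path: str, depth: int = 0) -> None:
        try:
            entries = os.listdir(path)
        except OSError:
            return  # drive not present or not readable
        for name in entries:
            full = os.path.join(path, name)
            if os.path.isdir(full) and name.lower().startswith("poolpart."):
                print("  " * depth + full)
                # Hierarchical Pools nest a PoolPart.* inside a PoolPart.*.
                find_poolparts(full, depth + 1)

    for root in ("D:\\", "E:\\"):  # hypothetical pooled drives
        find_poolparts(root)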
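And for post 24, a sketch that checks the hidden attribute directly instead of via Explorer. st_file_attributes is Windows-only, and the PoolPart.* names are again hypothetical:

    import os
    import stat

    def is_hidden(path: str) -> bool:
        # Windows-only: st_file_attributes carries FILE_ATTRIBUTE_HIDDEN.
        return bool(os.stat(path).st_file_attributes & stat.FILE_ATTRIBUTE_HIDDEN)

    for p in ("D:\\PoolPart.aaaa", "E:\\PoolPart.bbbb"):
        if os.path.exists(p):
            print(p, "hidden:", is_hidden(p))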