Everything posted by Umfriend

  1. Drive missing

     Let me guess, Chia? AFAIK, USB may drop connection occasionally, and DP will put a Pool in read-only mode while a disk is missing, until it gets reconnected or you remove it from the Pool. Search this forum; there are posts about disks connected through USB.
  2. OK. That you have multiple PoolPart.* folders on H and K is a clear issue. That you don't have them on P and M is weird. And then there are the PoolPart.* files, which shouldn't be there. Not sure what to do here. Transferring the files off, removing the drives from the Pool through the GUI, then reformatting and re-adding them is a possibility, but it takes a long time. Perhaps better to contact support (https://stablebit.com/Support) or wait for a better volunteer here. Another scenario, though I am not sure whether it would work well, is:
     1. Remove the suspect drives from the Pool through the GUI.
     2. Check each PoolPart.* folder on those drives for contents. If a folder is empty, delete it; if it isn't, rename it.
     3. Add the drives back to the Pool.
     4. You will now see a new PoolPart.* folder on each drive. For each of the four drives, move the contents from the renamed PoolPart.* folder(s) to the new PoolPart.* folder according to this: StableBit DrivePool Q4142489 (follow this closely; you will need to stop the DrivePool service and start it again when done). A rough sketch of this move step is below.
     5. Do a re-measure.
     I *think* this will work but....
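     Just to illustrate step 4 (not official tooling; the paths and the service name "DrivePool.Service" are placeholders/assumptions you should verify, for instance in services.msc), a minimal Python sketch of the move-with-the-service-stopped part could look like this:

         import shutil
         import subprocess
         from pathlib import Path

         OLD = Path(r"H:\PoolPart.renamed-old")   # the renamed old folder (placeholder)
         NEW = Path(r"H:\PoolPart.xxxx")          # the new hidden PoolPart folder (placeholder)
         SERVICE = "DrivePool.Service"            # assumed service name, check services.msc

         # run from an elevated prompt; net stop/start need admin rights
         subprocess.run(["net", "stop", SERVICE], check=True)
         try:
             for item in OLD.iterdir():
                 # assumes the new PoolPart folder is still empty, so nothing is overwritten
                 shutil.move(str(item), str(NEW / item.name))
         finally:
             subprocess.run(["net", "start", SERVICE], check=True)

     Doing the same thing by hand in Explorer works just as well; the point is only that the service is stopped while the files are moved.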
  3. If you haven't yet, a reboot never hurts. Then try to determine whether you have two hidden PoolPart.* folders on a single drive (P:\ for instance) and whether you can open a file directly from within such a PoolPart.* folder (a quick check is sketched below). Once we know that, we can go from there.
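     Something like this (illustrative only, nothing DP-specific; a plain directory listing sees hidden folders even when Explorer hides them) lists the PoolPart.* folders in the root of every drive, so you can spot a drive that has more than one:

         import string
         from pathlib import Path

         for letter in string.ascii_uppercase:
             root = Path(f"{letter}:/")
             if not root.exists():
                 continue
             poolparts = [p.name for p in root.glob("PoolPart.*") if p.is_dir()]
             if poolparts:
                 flag = "  <-- more than one!" if len(poolparts) > 1 else ""
                 print(f"{letter}: {poolparts}{flag}")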
  4. First, DON'T PANIC! 99% sure it'll be just fine. I don't know what the problem is. I wonder whether you may have run into some issue with security rights to the files. That is why I want you to open a file on a suspect disk directly and not through the Pool. DrivePool stores all files in plain NTFS format in a hidden folder, which you can tell Explorer to show anyway. It may also be that you have two PoolPart.* folders in the root of a disk. So don't panic, just gather facts & symptoms first. BTW, I am not support; if we (there are others who regularly help here as well) don't succeed, you can raise a ticket for real support. Oh, one other thing: in DP, can you do Manage Pool -> Re-measure?
  5. Don't start every sentence on a new line, it makes it harder to read & navigate :D. Can you access the data directly? I mean, go to P:\, explore the hidden PoolPart.* folder and below, and open a file from it that way?
  6. Yeah, although I primarily use DP for redundancy/uptime of my SOHO server. I just had some disks doing nothing that I put in, never bought anything for Chia.
  7. 1. I have added a Storage Spaces simple resiliency pool to a DP Pool (but no data written to it yet), so that I could consolidate left-overs from drives to store just one more Chia plot. 2. Dunno, never heard of it. 3. Yes. I use a Dell PERC H310 SAS2 card without issues, and there are those here who use bigger HBA cards and/or SAS expanders. 4. I am a long-time DP user. I made a Pool for Chia because DP makes it easy for me to manage drives. I see no reason not to take the 30-day trial if you still can.
  8. Yeah, I think I remember that yesterday I had the limit at 105GB, it wrote to the disk with 50GB left afterwards, and then started to rebalance one 100GB file off that HDD. I did have the "OFP - Only control new file placement" option checked. I *think* I fixed that by unchecking the "Allow balancing plug-ins to force immediate balancing" switch. But I fear that it will be a structural issue for this specific use-case: either you set the limit too low and you'll get out-of-space issues, or you set it high enough for one file to fit but the last file will then be offloaded if it ever starts to balance.
  9. OK, I thought that if I set the limit to, say, 120GB, and I had 130GB free, it would not write to the disk as it would end up with less than 120GB free. I think, now, that that is not how it works: it will write it. However, once it is done, the resulting 10GB free will, I think, cause it to then offload/rebalance, no? I like the "SSD" idea, will set that up sometime.
  10. I actually ran into the same issue (because of Chia, so a weird use-case). What I think would help is if we had two limiting values to set: 1. never fill above a certain level in any case or, if filled above, move some files off when rebalancing; and 2. don't write to this drive if free space is already below X GB. I write single files of about 100GB. I would not mind if the disks filled up almost completely, say down to 50GB free. However, if there is 120GB available, I want it to write to that disk even if, ex-post, there would be less than 50GB free. It's just when there is less than 103GB free that it should not write to it at all. What I do not understand is that the last operation is, as far as I can tell, a simple copy, so I *think* it should be possible to know how much will be written before determining which disk to write to. I wonder if that combined add-in written by that DP-fan caters for something like that.
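     To make that rule concrete, a tiny sketch of the selection logic I have in mind (the numbers are assumptions based on my ~100GB plot files; this is not an existing DP feature):

         PLOT_SIZE_GB = 101   # a plot file is roughly 100GB (assumption for illustration)
         HEADROOM_GB = 2      # small safety margin (assumption)

         def can_take_plot(free_gb: float) -> bool:
             """Eligible only when the whole plot fits right now."""
             return free_gb >= PLOT_SIZE_GB + HEADROOM_GB

         # 120GB free -> write (the disk ends up with ~19GB free, which is fine);
         # 90GB free  -> skip, this disk should not be written to at all.
         for free in (120, 90, 300):
             print(free, "GB free ->", "write" if can_take_plot(free) else "skip")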
  11. Could you share your Balancing settings and balancers? This can't be normal behaviour as this would always result in an error (at some stage, a file will have to go to the next disk).
  12. "Never underestimate the bandwidth of a truck loaded with tapes". Your plot here, yank&transfer to there method, I think, is conceptually the fastest but using DP for that (so that the plots can overflow to the next HDD when the first is full etc) maybe a bit risky. I think my method requires less management. Matter of preference I guess. And yeah, a 2.5Gb network might be a sensible idea. It makes me wonder whether it would be possible connect two PCs over USB3.0 or higher. There are USB bridge cables. 5Gb/s or higher. A lot cheaper perhaps but length limited.
  13. Yeah, I was suspecting that. I am plotting as well (but only on owned unused HDDs, using 2.5" laptop HDDs to plot and unused HDDs to farm). So when you transfer a HDD to the farmer, you need to add that drive to the farmer. What I would suggest is a slightly different setup, where you would have a landing disk on your plotter and then have a batch job transfer the plots to the farmer over the network, possibly to a shared drive (a rough sketch is below). On the farmer, I would actually run DP, simply add HDDs to a Pool as you need them, and point the farmer to that Pool. IMHO easier to manage, but then, I plot & farm on a single machine (as I don't buy HW for Chia). And 3TB/day may be a bit much for the network, depending on the actual infrastructure. Again, it is not what DP was designed for, but I am pretty confident it can be done, just not sure how to do it best.
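     As a rough illustration of that batch job (the paths are placeholders, and it assumes the plotter only gives a file the .plot extension once it is complete):

         import shutil
         import time
         from pathlib import Path

         LANDING = Path(r"D:\plots_landing")   # landing disk on the plotter (placeholder)
         FARMER = Path(r"\\farmer\plots")      # share backed by the DP Pool on the farmer (placeholder)

         while True:
             for plot in LANDING.glob("*.plot"):
                 print("moving", plot.name)
                 # move = copy to the share, then delete locally, freeing the landing disk
                 shutil.move(str(plot), str(FARMER / plot.name))
             time.sleep(60)  # check once a minute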
  14. Well, you _can_ just physically remove a disk in a Pool, insert it into another PC and read it there. You may have to unhide hidden files as the files will be in a hidden PoolPart.* folder. Caveats apply, such as either shut down before removing or, if hot-swappable, at least go through "Safely remove hardware and eject media" although I am not sure DP would not have the drive locked. Just don't know. However... The Pool itself will notice that a disk is missing and come to a read-only state. You would then have to remove that drive through the GUI. Having said this, it is not the typical use case DP was built for. And you are the 2nd in a short time to ask for something like this (https://community.covecube.com/index.php?/topic/6043-possible-to-remove-drive-including-its-files-from-pool/). I am interested in the reasoning behind this.
  15. Yes, by default DP distributes files over the drives (mind you, a single file is never split, always on a single drive). What you want is the Ordered File Placement Plugin. https://stablebit.com/DrivePool/Plugins
  16. I assume you mean whether it is OK to remove multiple drives at the same time from the Pool through the GUI, not actually pulling them physically from the PC (which you might do after the removal from the Pool is done, of course). In that case, yes, it is fine. However, I *think* it is sequential; it would not actually empty them simultaneously, and in fact, the first drive you empty will actually place files on the other disks, including disks you also want to remove (I am pretty sure I read that on these forums). It may be better to use the Drive Usage Limiter balancer. If you uncheck both duplicated and unduplicated on all drives you want to empty (and assuming you have sufficient space available on the ones you do not want to empty) and then re-measure/rebalance, DP will empty those drives out without placing files on the others being emptied. Once that is done you can remove them through the GUI, and that removal will be very fast.
  17. You might want to raise a ticket for support. AFAIK, this should not happen. When I copy a large file to my Pool (say 30GB), I get close to the 1GbE limit. Granted, it is a smaller Pool but also Hierarchical (so I would suffer any DP overhead roughly twice) and duplicated to two subpools, each 3 HDDs, no SSD Cache.
  18. So, thanks to this forum I got me an IBM M1015 (actually a Dell H310, same difference) that has been working out great for quite a few years now. But now I am wondering whether there is a 4/16-port SAS/SATA card with a wider and more recent PCIe bus that works well as an HBA and is cheap 2nd hand. Basically, is there an improved alternative to the IBM M1015 with relatively similar value for money?
  19. LOL, I remember using only like 50% of a HDD (so just one volume covering 50% of the HDD, the rest not used/allocated) to get better seek times. Made a real difference in synthetics. IRL? Not so much, IMHO. On workload: AFAIK, the stated limitation on (or rather, guaranteed) use of the IronWolves is 180TB/year, but that is writes; scanning only reads.
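     Back-of-envelope, if you want to put a number on how much data periodic full-surface scans read anyway (drive size and scan frequency here are assumptions, adjust to your own setup):

         DRIVE_TB = 8          # assumed drive size
         SCANS_PER_YEAR = 12   # e.g. roughly one full surface pass per month

         scan_reads_tb_per_year = DRIVE_TB * SCANS_PER_YEAR
         print(f"~{scan_reads_tb_per_year} TB/year of scan reads vs a 180 TB/year workload rating")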
  20. Well, it is just speculation on my part but I really doubt a full format is the same as a Scanner run. The difference being that a format will actually try to write (and report when it fails) while scanner "just" tries to read.
  21. With DP, we don't talk about originals and copies. You can have a single instance of each file, that is x1, or, with x2, two identical instances. So if you want two copies of each file, x2 is the way to go.
  22. So the thing is, I do not know whether this may be relevant to OP. However, the theory is refuted by practice, as can be seen in the thread I linked. It wasn't just me, and not with empty folders either. Has something been done since January 2020 to address this? If so, then I may be wrong now (but wasn't then).
  23. If the software uses/relies on timestamps of folders as well, then this might be the problem: basically, with x2 duplication, a folder may have a different "date modified" on the two disks, and any software querying the Pool will only get one of them (a quick way to check is sketched below).
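     For example, to see whether the two on-disk copies really carry different timestamps (illustrative only; the drive letters, PoolPart folder names and the folder path are placeholders):

         from datetime import datetime
         from pathlib import Path

         REL = r"Music\SomeAlbum"                    # a folder as it appears inside the Pool (placeholder)
         COPIES = [Path(r"D:\PoolPart.aaaa") / REL,  # adjust to your actual PoolPart folder names
                   Path(r"E:\PoolPart.bbbb") / REL]

         for folder in COPIES:
             if folder.exists():
                 mtime = datetime.fromtimestamp(folder.stat().st_mtime)
                 print(folder, "->", mtime)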
  24. There must be some sort of log file showing a bit more info. Small chance you'll learn anything, but you might want to take a look at Event Viewer, both on the client and on the server.
  25. Another thing I have is that if I copy files from a client to the Server over the network, it matters whether I access the Server through Explorer -> Network or through a mapped network drive. The latter sometimes fails, but I am pretty sure it has to do with some sort of permission (SQL Server backup files I cannot copy through the mapped network drive), and I get a different message anyway. So, completely OT. So basically, I have no clue. I hope someone else here has an idea on how to diagnose and/or fix this. I would have a look at Event Viewer on both the client and the server. Not optimistic, but I'd look.