Everything posted by Umfriend

  1. Yeah, I fear that with at least 900GB written to a 2TB HDD, much of the old information will have been overwritten. That is very hard to recover from. If I may, I would like to offer a bit of advice on DrivePool and on backups with WHS2011 (or WSE2016 for that matter), although budget constraints may limit its value.
     1. If you have DrivePool, you really want Scanner as well.
     2. Any data that is worthwhile to back up, have it duplicated. Duplication is no backup, but in many cases it will benefit uptime and limit recovery work.
     3. If you have a Pool, ensure that the free space on it is at least as large as the largest HDD. That way, DP (possibly triggered by Scanner) will be able to evacuate a failing HDD.
     4. If you make backups, have at least two backup HDDs as targets. Rotate them, say once a week, and possibly keep the other offsite.
     5. The backup HDDs need to be large. That way, purging of old backups does not occur often (and for the cases where it does occur and you need older versions, you have the offsite backup HDD; it may not have all the latest data, but most data is rather static anyway).
     Of course, WHS2011 Server Backup requires a bit of care in setting up given its limitations, but there are decent workarounds that I can advise on. Edit: There are companies that specialise in recovery. Perhaps they can retrieve a lot from the failed drive. I have no experience with this.
  2. Two things: 1. I think / speculate that the PoolPart.2ad* folder is a remnant of the Pool J:\ you had. If there is data in there that should simply be in Pool I:\, then I would consider moving it. 2. Is it correct that not all folders in the Pool are duplicated? To be sure, "x1" means there is no duplication and only one instance of a file exists.
  3. Also, AFAIK, external USB enclosures are not recommended precisely because USB connections may drop. I have never heard of Get Data Back Pro, but if it can show the files, can't it do whatever it takes to make them regularly readable again?
  4. I think you mean a 10TB drive, no? In any case, that Pool should have no data. Does it show any sign of progress?
  5. I agree with insleys and I tested this on WSE2016.
  6. What you should do is remove the drives from J, one by one is my advice. When you have removed the last one, J should cease to exist.
  7. So J says 27.7TB because it had 9.5TB in HDDs plus Pool I, which is 18.2TB. That actually makes sense. It is just the way hierarchical Pools (i.e. Pools becoming part of yet another Pool) work. What worries me, though, is that you have duplicated data in J. That says, to me, that something is actually writing to (virtual) drive J:\. You can still remove drive I:\ from Pool J; I would do that right now. But then you do need to check whether there are applications that are writing to J:\ while they should, probably, be writing to I:\. There is no real need to fear deleting anything, as long as you do not tell something to delete something. Removing a (virtual or real) drive from a Pool will never delete anything. It may move stuff around a bit, that is all. Edit: Forgot about your question "I don't understand why I is a subset of J" - I would speculate that you, I guess by accident and unaware, added I:\ to Pool J:.
  8. Yeah, WS2019 missing the Essentials role sucks. I'm running WSE2016 and I have no way forward so this will be what I am running until the end of days probably.... But wow, nice setup! With the HBA card, can you get the HDDs to spin down? I tried with my Dell H310 (some 9210 variant IIRC) but no luck.
  9. Yes they do. To be sure, new screenshots (including the Pool Organisation bar) would be helpful to check, and if you have any specific nightmare scenarios (as in: but if I do this, won't DP do that?), feel free to ask about them first.
  10. I assume I:\ is the actual Pool you want. Note that in Pool J:, there appears to be no data (Unusable and Other only). So I would:
      1. In Pool J:, remove Pool I. This will not delete anything from Pool I itself; it will only move whatever is located on Pool I through Pool J to D:\.
      2. Move anything left, which should be nothing, from Pool J:\ to Pool I:\. Then Pool J:\ should be really, really empty.
      3. Remove D:\ from Pool J:. Now Pool J no longer exists and you can add drives to Pool I: as you want.
      However, Pool I is missing a disk. I would resolve this first because it messes up measurements. Is it really gone/lost? Then I would remove Disk 1 (it should be instant), then first have DP re-measure and, if necessary, re-balance/re-duplicate. Only when Pool I: appears to be without issues would I do the above.
  11. I may well be wrong but, AFAIK, there is no way to have DP not write a number of duplicates consistent with the duplication settings right away. I speculate that the FileDuplication_AlwaysRunImmediate setting relates to cases where the duplication settings were changed or where something went "wrong", as in a missing drive being removed etc.
  12. So the 2 SSD requirement comes into play only if you have x2 duplication. Assuming you do, then I cannot be sure (I don't use SSD plugins), but I am pretty sure that through your use of hierarchical Pools, the main Pool will see the SSD sub-pool as one drive and thus not sufficient for x2 duplication... That DP or the plugin requires drive letters for it to work is, AFAIK, out of spec, and I would say you could post a ticket for that.
  13. Now for me it is of no concern, but I have the default setting so it should be synchronized, and I can tell you it is not, i.e., the Date modified for a folder as presented in Windows Explorer is _not_ by definition the most recent date/timestamp at which a file within such a folder on any individual HDD has been written. As I indicated, it seems to present the first date/timestamp that it finds. I think that is what OP is looking for, and I submit that DP currently (2.2.2.934) does not work that way.
  14. Just wondering, the 4TB HDDs, these are separate, right? I mean, they are not part of a Pool yet? And you want both 10TB HDDs to contain all the Photos and Videos, right? Would you mind if I came up with a scenario where you would achieve this even though the 10TB HDDs would be partitioned into two 5TB partitions?
  15. Sorry, no clue. From what I have read, this is not that easy to configure.
  16. Not sure but a lot of disks also spin up when queried for SMART. There are some options for that as well.
  17. So I took another look. I don't have Subsonic or file placement rules and I use hierarchical Pools, but here is my suspicion. Simply by looking with Explorer at the top-level Pool, the two sub-Pools and the underlying disks, I have found, I think, that the top level shows as Date modified the Date modified that is found for an item on the first disk of the level just below. In your case, I speculate, anything querying the Pool would get as Date modified for the appropriate folder the date noted on RAID Disk 1. But that one gets updated only when actual files are written to that folder on that disk. It might have been a bit better if the result were the most recent Date modified of all instances of such a folder on disks in the Pool. A quick way to compare the timestamps yourself is sketched below.
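      A minimal sketch of that comparison, in Python, assuming hypothetical paths (P:\ for the Pool, D:\ and E:\ for the underlying disks); the hidden PoolPart.* folder names differ per disk, so they are matched with a wildcard:

         import os
         import glob
         from datetime import datetime

         POOL_FOLDER = r"P:\Media\Photos"        # the folder as seen through the Pool (assumed path)
         UNDERLYING_DISKS = ["D:\\", "E:\\"]     # physical disks that are part of the Pool (assumed)
         RELATIVE = r"Media\Photos"              # the same folder, relative to each PoolPart root

         def mtime(path):
             # Return the 'Date modified' of a path, or None if that copy does not exist.
             return datetime.fromtimestamp(os.path.getmtime(path)) if os.path.exists(path) else None

         print("Pool reports:", mtime(POOL_FOLDER))
         for disk in UNDERLYING_DISKS:
             # Each disk in a Pool has a hidden PoolPart.<GUID> folder at its root.
             for poolpart in glob.glob(os.path.join(disk, "PoolPart.*")):
                 print(poolpart, "->", mtime(os.path.join(poolpart, RELATIVE)))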
  18. You can look at the code, right? What would happen if a file had a timestamp more recent than the Date modified of the folder containing the file? A small sketch to find such files follows below.
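      To make the question concrete, here is a small sketch (the root path is just a hypothetical example) that lists files whose own Date modified is newer than that of the folder containing them. On NTFS this is perfectly normal, because a folder's timestamp changes when entries are added or removed, not when an existing file is rewritten in place:

         import os

         ROOT = r"P:\Media"   # assumed Pool path, replace with a real folder

         for dirpath, dirnames, filenames in os.walk(ROOT):
             folder_mtime = os.path.getmtime(dirpath)
             for name in filenames:
                 path = os.path.join(dirpath, name)
                 if os.path.getmtime(path) > folder_mtime:
                     # This file was modified after the folder's own timestamp last changed.
                     print(path)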
  19. I don't know. What I don't understand is that you say "and b) would not recognise the files as new even when I forced a scan." but also "but that doesn't explain why files are detected on a forced scan", unless you mean that a forced scan does recognise the files, just not as new? Otherwise, I would speculate that Subsonic is somehow explicitly pointed to RAID Disk 1, perhaps through the folder mount?
  20. I agree with insleys but that only works if you have a SATA port available. As I understand it, OP does not have one. Now if you had a generic USB-Sata docking station (like https://www.ebay.com/itm/ORICO-External-Hard-Drive-Enclosure-USB-3-0-to-SATA-Docking-Station-For-2-5-3-5/163798204921?hash=item26232241f9:m:mLNj1jjGGMlm5uIFrqja3RQ), then it would work for sure.
  21. No. You could, for instance, take out all drives and connect them in random order to another PC and Drivepool would recognize the Pool on the other machine just like that. Only caveat, I think, is when you use non-default add-ins. The Pool would be recognized but not all settings would work right away again.
  22. Yes, they are. But Christopher seems very busy and some others appear to have left. Personally, I don't use the SSD plugin so I did not respond. Afaik, duplication isn't on a schedule, it's either there or not. Balancing is a different matter.
  23. So first, what is the exact composition of the Pool and how much data is stored on it (specify whether that is net or including duplication)? Have you tinkered with any DP settings or are they default? As a tip, I have been impatient at times, but I have found that, in my experience, trying to force things by doing something you do not actually want (like increasing to x3) hardly ever works. Rebooting and being patient (as in: have a look the next day) work much better. Also, you can open resmon.exe to see whether there really is no I/O for a really long time. Sometimes DP is just working out where to place what, and yes, when adding/removing drives or changing duplication and such, it *can* take a long time.
  24. Actually, that is not what you want, I think. What you want is x1 duplication (i.e. no duplication, just one instance of the files) except for that one folder. I would search this forum for a sticky on DPCMD or some such. It allows you to create a list of all duplicates of files and checks whether they are compliant with the settings. You should only have duplicates of files in that one folder, and this is the way to check. For a rough manual cross-check, see the sketch below.
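      A minimal sketch of such a manual cross-check, assuming hypothetical drive letters and folder name (adjust to your setup); it counts on how many Pool disks each file exists, and everything outside the one duplicated folder should show exactly one copy:

         import os
         import glob
         from collections import Counter

         POOL_DISKS = ["D:\\", "E:\\", "F:\\"]    # physical disks in the Pool (assumed)
         DUPLICATED_FOLDER = "Important"          # the one folder that should be x2 (assumed)

         copies = Counter()
         for disk in POOL_DISKS:
             # Walk every hidden PoolPart.<GUID> folder and tally each relative file path.
             for poolpart in glob.glob(os.path.join(disk, "PoolPart.*")):
                 for dirpath, dirnames, filenames in os.walk(poolpart):
                     for name in filenames:
                         rel = os.path.relpath(os.path.join(dirpath, name), poolpart)
                         copies[rel] += 1

         for rel, count in sorted(copies.items()):
             expected = 2 if rel.split(os.sep)[0] == DUPLICATED_FOLDER else 1
             if count != expected:
                 print(f"{rel}: {count} copies (expected {expected})")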
  25. So that looks a bit weird. I would re-measure first.