Everything posted by Umfriend

  1. I am wondering whether you should do a rebalancing, because with the default behaviour HDD1, 2 and 5 should each have the same amount of free space.
  2. Hi, I assume you mean that you have data in root folders on HDDs that are already added to the Pool, and you want to move the data in the root folder(s) into the Pool quickly. This is what you are looking for, I think: http://wiki.covecube.com/StableBit_DrivePool_Q4142489
  3. The answer is in two parts:

    1. Yes, you can add a HDD to a Pool if there is already data on the HDD. That existing data will remain, but it will not be part of the Pool, and the space it uses will be unavailable to the Pool.

    2. If you want that data to be in the Pool, then the quickest way to achieve this is to move the data on the new HDD into the hidden PoolPart.* folder that is created on the HDD once you add it to the Pool (a sketch of such a move follows below). I believe it is advised to stop the DrivePool service prior to the move and restart it after. The Pool will need to be remeasured (not sure if this is done automatically or must be forced through the UI) and it will rebalance (and duplicate, if duplication applies) thereafter.
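
    Purely as an illustration of that move, here is a minimal Python sketch, assuming the new disk is D: and that the hidden PoolPart.* folder already exists there (the drive letter and the skipped system folders are assumptions; check your own drive). Stop the DrivePool service first, as noted above. Because the move stays on the same volume, only directory entries change, so it is near-instant.

        import shutil
        from pathlib import Path

        drive = Path("D:/")  # hypothetical drive letter
        # The hidden pool folder DrivePool created on this drive
        poolpart = next(drive.glob("PoolPart.*"))

        for item in list(drive.iterdir()):
            # Skip the pool folder itself and Windows system folders
            if item.name.startswith("PoolPart.") or item.name in (
                "System Volume Information", "$RECYCLE.BIN"
            ):
                continue
            # Same-volume move: no data is copied, so this is fast
            shutil.move(str(item), str(poolpart / item.name))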
  4. Disk duplication

    Actually, it does not work that way at all (unless you do some tweaking, then it *might* work that way). DP stores duplicates on two different HDDs but there is no telling where they are. So if you have three files, F1, F2 and F3 and four disks then F1 may be at Disk 1 and Disk 2 while F2 may be at Disk 1 and Disk 4 (and F3 may be at Disk 2 and Disk 4). So, although *files* are duplicated, Disks are not.
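
    As a toy illustration of the point (this is not DrivePool's actual balancing logic, just an assumed "most free space wins" stand-in): with x2 duplication, each file independently lands on some pair of disks, so no disk ends up being a mirror of another.

        # Each file's two copies go to the two disks with the most room
        # (fewest files here); the chosen pair differs per file, so no
        # disk mirrors another.
        disks = {"Disk1": [], "Disk2": [], "Disk3": [], "Disk4": []}

        def place(filename):
            targets = sorted(disks, key=lambda d: len(disks[d]))[:2]
            for d in targets:
                disks[d].append(filename)
            return targets

        for f in ("F1", "F2", "F3"):
            print(f, "->", place(f))
        # F1 -> ['Disk1', 'Disk2']
        # F2 -> ['Disk3', 'Disk4']
        # F3 -> ['Disk1', 'Disk2']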
  5. File Placement

    Assuming you save episodes in folders by Series/Season or some such, you could consider using File Placement rules. There you could say that Pool:\Series\Marvel\Sesamestreet\*.* should be located on HDD 4, for instance, and Pool:\Series\Marvel\NCIS\*.* on HDD 5 (a toy sketch of how such patterns match follows below). BTW, AFAIK, when I access a Pool, all HDDs spin. Not sure, but I think they do.
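
    To make the pattern semantics concrete, here is a toy Python sketch (this is not DP's rule engine, which evaluates its rules internally; the rules and paths are the hypothetical examples from above):

        import fnmatch

        # Hypothetical rules, mirroring the examples above
        rules = {
            r"\Series\Marvel\Sesamestreet\*.*": "HDD 4",
            r"\Series\Marvel\NCIS\*.*": "HDD 5",
        }

        def target_for(path):
            for pattern, hdd in rules.items():
                if fnmatch.fnmatch(path, pattern):
                    return hdd
            return "any disk (no rule matched)"

        print(target_for(r"\Series\Marvel\NCIS\S01E01.mkv"))  # HDD 5
        print(target_for(r"\Movies\Alien.mkv"))  # any disk (no rule matched)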
  6. What I would do is:

    1. Use Resmon.exe to check which process is actually doing a lot of I/O (torrent client or DrivePool service).

    2. Just as a test, let the torrent client download the same file to another disk that is not part of a Pool and see if the behaviour is different.
  7. It is easiest if you have an extra port available (otherwise, it depends on whether you have sufficient space on the three remaining HDDs):

    1. Install the new HDD.
    2. Add the new HDD to the Pool.
    3. Select the HDD to be removed and press Remove.

    DP will then transfer all non-duplicated files from that HDD to the other HDDs and will do the same with the duplicates that are stored on the HDD-to-be-removed. I am pretty sure it will rebalance automatically, but that may depend on settings. It will be quicker if you allow it to duplicate later; in that case, DP will only move the unduplicated files off the HDD-to-be-removed. But frankly, I would always want DP to ensure duplication at all times, so I would not use that option.
  8. I think that if a HDD suffers a sudden death then there is no way Scanner could have warned you beforehand, and if it simply went missing then it should be DP giving notice, not Scanner (as Scanner reports on issues with HDDs it does see). It sounds like a sudden mechanical failure to me, but I wouldn't really know. You can create file lists, sort of, with the dpcmd thing, but it may be cumbersome for your purpose. An alternative might be ... And then there is the option of having actual backups.
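
    If a rough per-drive file list is all that is needed, a simple scheduled script could do it. A sketch, assuming made-up paths (the PoolPart.* name and output location are placeholders; point it at each pool drive's hidden folder):

        import os

        def dump_filelist(poolpart_path, out_file):
            # Walk one drive's hidden pool folder and record every file path
            with open(out_file, "w", encoding="utf-8") as out:
                for root, _dirs, files in os.walk(poolpart_path):
                    for name in files:
                        out.write(os.path.join(root, name) + "\n")

        # Run per pool drive; substitute the real hidden folder name
        dump_filelist(r"D:\PoolPart.xxxx", r"C:\filelists\drive_D.txt")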
  9. I think you can, by using Explorer to go to the PoolPart.* folder on the relevant HDD, but... why?
  10. First off, this is not related to any Stablebit product in any way, so I would understand if Chris closed/deleted this topic. However, fora on WSE are few and far between and I know there are people here who actually know things. So I am going to upgrade to WSE2016 (from WHS2011) and it will probably be the last Server I'll ever build. I want to have the option to create a VM with a huge amount of RAM at some stage. WSE2016 has a mem limit of 64GB. I am wondering whether any VM running under WSE2016 with the Hyper-V role installed is subject to the same 64GB limit, or whether that VM will be subject to the mem limit of the OS running in the VM? The main reason I am asking is that I have installed Hyper-V Server Core on bare metal with the intention of running WSE2016 and another OS in two VMs. However, I fail to connect to that Server using Hyper-V Manager. It is not straightforward to me at all how this should work and various how-tos have not helped. But I would not mind if WSE2016 itself ran on bare metal instead of virtualized, so that is why I am asking.
  11. No. DP does not store/save a complete Pool file list anywhere. It does not know what files were on the faulty HDD.
  12. I assume you have no duplication. I would, provided I have enough ports:

    1. Physically remove the faulty HDD (you have done this already).
    2. Remove it through the DP UI -> this should stop DP complaining about a missing disk and unlock the Pool.
    3. Add the two new HDDs to the Pool.
    4. Remove the two old HDDs from the Pool through the UI -> this will move all files to the new HDDs.
    5. Remove the two old HDDs physically from the Server.

    Then see what you can recover from the faulty HDD and copy that back to the Pool. I would consider keeping the two performing old HDDs in the Pool and using x2 duplication.
  13. I guess it is somewhat contrary to the concept of DP. I mean, what you are looking for, it seems to me, is the virtual equivalent of physically taking a platter out of a HDD and shoving it into another HDD on another machine. Also, the Pool that just lost the HDD should be unusable, as in, it will go into read-only mode because it is missing a disk? It will surely whine about it. So I am really wondering about the use-case here. Could you not, for instance, share a folder and access it over the network?
  14. Easily? I think each machine would create its own PoolPart.* folder, so the files on the HDD would not end up in Pool Y instantly. But if you then *move* the content from the PoolPart.* that is X to the PoolPart.* that is Y (very fast!), it would be there. Can't say that it is a use-case that I find attractive though, but you know best!
  15. Wrt step 4, Mick is right and clarified my point very well. Thanks. So if it is the case that your files are in Pool A, then they are located in E:\PoolPart.*, F:\PoolPart.* and G:\PoolPart.*. You could move them, HDD by HDD, using Explorer to E:\PoolPart.*\PoolPart.*, F:\PoolPart.*\PoolPart.* and G:\PoolPart.*\PoolPart.*. The * stands for an insanely unintelligible unique name. For instance, I have one that is:

    I:\PoolPart.658b6833-2c4d-4812-8ad3-5d3113caa4fb\PoolPart.ce41f9c3-261e-46d4-a01b-fb38220c59d3\ServerFolders
       ^^ Pool A                                      ^^ Pool B

    When I used "upper" and "lower" I meant this hierarchically. The drive looks like:

    ROOT
    -- Other Folders
    -- PoolPart.* folder (this is Pool A, and within Pool A you can have)
    ---- Other Folders (only in Pool A)
    ---- PoolPart.* folder (this is Pool B)
  16. Mick is right. The Drive Usage Limiter is not the way to go for this. What you *can* do is this:

    1. Create Pool A (E:, F: and G:) - no duplication.
    2. Create Pool B consisting of the Google Drive and Pool A (yes, you *can* add a Pool to a Pool; it is called hierarchical Pooling).
    3. In Pool B, uncheck Unduplicated for the Google Drive (this forces all unduplicated files to be saved to Pool A).
    4. Copy/move everything to Pool B.
    5. In Pool B, set duplication to x2 for the photos folder.

    Pool B will view Google Drive as one drive and Pool A as one single other HDD. Duplication will ensure one copy in Pool A (E, F or G) and another copy on Google Drive. Wrt step 4, depending on where the data is right now, this can be done *fast* by disabling the DrivePool service after step 3 and then moving the files on the E, F and G drives from the upper hidden PoolPart.* folder to the lower PoolPart.* folder, drive by drive (see the sketch below). Moving this way is fast. Otherwise you may have a lot of moving/copying between HDDs, which is far slower as it really writes files instead of just changing the folders. Oh, and if you can, do a backup first.
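
    A minimal sketch of that fast move for one drive, to be repeated per drive (the PoolPart.* names are read off the drive itself; the drive letter and the skipped metadata folders are assumptions). Stop the DrivePool service first, as noted above.

        import shutil
        from pathlib import Path

        outer = next(Path("E:/").glob("PoolPart.*"))  # Pool A's hidden folder
        inner = next(outer.glob("PoolPart.*"))        # Pool B's folder nested in it

        for item in list(outer.iterdir()):
            # Don't move Pool B's folder into itself, and leave hidden
            # DrivePool metadata folders (names starting with ".") alone
            if item == inner or item.name.startswith("."):
                continue
            # Same-volume move: only directory entries change, near-instant
            shutil.move(str(item), str(inner / item.name))

    Afterwards, re-measure the Pool in the UI so DP picks up the new locations and starts duplicating to Google Drive.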
  17. Ah. I think you would have been faster if you had enabled duplication from the start. Now it will first transfer files to the other HDDs and then, with duplication, write them back.
  18. That's a good point actually. You'd still need to move some data though and re-measure and rebalance. However, if you think you may ever need to expand the Pool(s), then the current setup might be better as you could simply add drives to the G and H Pools. Otherwise you'd have to add a drive to the Raid5 array and perhaps that might have some additional complications. But if that is the way you would go then Jaga is absolutely right I would think.
  19. So I = G + H. But G and H are also Pools by themselves. The data, I think, is in Pool H, not Pool I. What you can do is explore the hidden PoolPart.* folder on H. You will see there is another PoolPart.* folder in it. The first is Pool H (only). The second is Pool I, to the extent it is stored on H. G has the same but is mostly empty. Move the files in the first PoolPart folder to the second one below it on H. DP will then duplicate.
  20. I would say WHS 2011 is a 2008R2 server but whatever.
  21. Well, those were released in 2011 (and R2 in 2009).
  22. I was a bit too quick, as I don't know your use-case. But if you run a server that is used by more people, who could cause I/O on both partitions, then the HDD performance will suffer as the head actually needs to travel from one partition to the other. But such a scenario might not be relevant for you at all. And heck, backups over performance, I say. In any case, your 2-disk 2-partition plan will work (I had a similar setup for a while). If you have the budget and the machine is somewhat up-time critical, you might consider having a third 8TB HDD handy in case of a failure.
  23. Yes, that's the way to do it: 2 partitions, assign one to one Pool and the other to another Pool. I would not recommend it though. If it is about backup, then you could specify a backup excluding the movies folder. If it is about defrag, the movies won't get fragmented, so there is no issue there. It's not about the size of the disk/partition but about how much must be replaced. The last time I heard about defragging being a concern is, I don't know, 15 years ago?
  24. Well, I do do a lot of data analysis, for which I use SQL Server, and yes, the databases can be quite large. I was not aware W10Pro was free. Which is good, because I need to get that on my lab-laptop. I thought the Hyper-V role was only available for WS2016 Standard/Enterprise. Anyway, a lot to take in. Gonna explore some stuff soon. Many thanks!