Covecube Inc. forum — posts by Umfriend
  1. Umfriend

    Disk duplication

    Actually, it does not work that way at all (unless you do some tweaking, in which case it *might* work that way). DP stores the two copies of a duplicated file on two different HDDs, but there is no telling which ones. So if you have three files, F1, F2 and F3, and four disks, then F1 may be on Disk 1 and Disk 2, while F2 may be on Disk 1 and Disk 4 (and F3 on Disk 2 and Disk 4). So although *files* are duplicated, disks are not.
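    A toy sketch of the point above: if each file's two copies simply go to whichever two disks currently have the most free space (this is only an illustration, not DrivePool's actual balancing algorithm), different files land on different disk *pairs*, so no single disk mirrors another.

```python
# Toy illustration (NOT DrivePool's real algorithm): each file's two
# copies go to the two disks with the most free space at that moment,
# so different files end up on different disk pairs.
def place_duplicates(files, free_space):
    """files: {name: size}; free_space: {disk: free units} (mutated)."""
    placement = {}
    for name, size in files.items():
        # pick the two disks with the most free space right now
        d1, d2 = sorted(free_space, key=free_space.get, reverse=True)[:2]
        free_space[d1] -= size
        free_space[d2] -= size
        placement[name] = (d1, d2)
    return placement

free = {"Disk1": 100, "Disk2": 90, "Disk3": 80, "Disk4": 70}
print(place_duplicates({"F1": 30, "F2": 30, "F3": 30}, free))
```

    Running this, F1, F2 and F3 each get a different pair of disks, which is exactly why removing one disk never takes out both copies of everything.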
  2. Umfriend

    File Placement

    Assuming you save episodes in folders by Series/Season or some such, you could consider using File Placement rules. There you could say that Pool:\Series\Marvel\Sesamestreet\*.* should be located on HDD 4, for instance, and Pool:\Series\Marvel\NCIS\*.* on HDD 5. BTW, AFAIK, when I access a Pool, all HDDs spin up. Not sure, but I think they do.
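    The rule matching described above can be sketched with ordinary glob patterns. The rules and drive labels below are made-up examples mirroring the post, not DrivePool's real rule syntax or API:

```python
from fnmatch import fnmatch

# Hypothetical placement rules (examples only, not DrivePool syntax):
# first matching pattern decides which disk the file lands on.
RULES = [
    (r"Pool:\Series\Marvel\Sesamestreet\*.*", "HDD 4"),
    (r"Pool:\Series\Marvel\NCIS\*.*", "HDD 5"),
]

def target_drive(path):
    """Return the drive the first matching rule assigns, else None
    (meaning the balancer may place the file on any disk)."""
    for pattern, drive in RULES:
        if fnmatch(path, pattern):
            return drive
    return None

print(target_drive(r"Pool:\Series\Marvel\NCIS\S01E01.mkv"))
```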
  3. Umfriend

    Duplication "hogs" server

    What I would do is:
    1. Use Resmon.exe to check which process is actually doing a lot of I/O (the torrent client or the DrivePool service).
    2. Just as a test, let the torrent client download the same file to another disk that is not part of a Pool and see if the behaviour is different.
  4. Umfriend

    Replacing drives in pool - with partial duplication

    It is easiest if you have an extra port available (otherwise, it depends on whether you have sufficient space on the three remaining HDDs):
    1. Install the new HDD.
    2. Add the new HDD to the Pool.
    3. Select the HDD to be removed and press Remove.
    DP will then transfer all non-duplicated files from that HDD to the other HDDs, and will do the same with the duplicates that are stored on the HDD-to-be-removed. I am pretty sure it will rebalance automatically, but that may depend on settings. It will be quicker if you allow it to duplicate later; in that case, DP will only move the non-duplicated files from the HDD-to-be-removed. But frankly, I would always want DP to ensure duplication at all times, so I would not use that option.
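    The evacuation step can be pictured with a toy model (illustrative only; the real DrivePool service also honours duplication and balancer settings): each file on the removed disk is re-homed to whichever remaining disk has the most free space.

```python
# Toy model of pressing Remove: files on the removed disk move to the
# remaining disk with the most free space. Not DrivePool's real logic.
def evacuate(removed, pool, capacity):
    """pool: {disk: {file: size}}; capacity: {disk: total size}."""
    def free(disk):
        return capacity[disk] - sum(pool[disk].values())
    files = pool.pop(removed)          # disk leaves the pool
    for name, size in files.items():
        target = max(pool, key=free)   # emptiest remaining disk
        pool[target][name] = size
    return pool
```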
  5. Umfriend

    Have Scenario - Need Help

    I think that if an HDD suffers a sudden death, then there is no way Scanner could have warned you beforehand, and if it simply went missing, then it should be DP giving notice, not Scanner (as Scanner reports on issues with HDDs it does see). It sounds like a sudden mechanical failure to me, but I wouldn't really know. You can create file lists, sort of, with the dpcmd tool, but it may be cumbersome for your purpose. An alternative might be ... And then there is the option of having actual backups.
  6. Umfriend

    Empty folders in poolparts

    I think you can by using explorer to go to the Poolpart.* folder on the relevant HDD but... why?
  7. First off, this is not related to any Stablebit product in any way, so I would understand if Chris closed/deleted this topic. However, fora on WSE are few and far between, and I know there are people here who actually know things.

     So I am going to upgrade to WSE2016 (from WHS2011) and it will probably be the last Server I'll ever build. I want to have the option to create a VM with a huge amount of RAM at some stage. WSE2016 has a memory limit of 64GB. I am wondering whether any VM running under WSE2016 with the Hyper-V role installed is subject to the same 64GB limit, or whether that VM will be subject to the memory limit of the OS running inside the VM?

     The main reason I am asking is that I have installed Hyper-V Server Core on bare metal with the intention of running WSE2016 and another OS in two VMs. However, I fail to connect to that Server using Hyper-V Manager. It is not at all straightforward to me how this should work, and various how-tos have not helped. But I would not mind if WSE2016 itself ran on bare metal instead of virtualized, so that is why I am asking.
  8. Umfriend

    Have Scenario - Need Help

    No. DP does not store/save a complete Pool file list anywhere. It does not know what files were on the faulty HDD.
  9. Umfriend

    Have Scenario - Need Help

    I assume you have no duplication. I would, provided I have enough ports:
    1. Physically remove the faulty HDD (you have done this already).
    2. Remove it through the DP UI -> this should stop DP complaining about a missing disk and unlock the Pool.
    3. Add the two new HDDs to the Pool.
    4. Remove the two old HDDs from the Pool through the UI -> this will move all files to the new HDDs.
    5. Remove the two old HDDs physically from the Server.
    Then see what you can recover from the faulty HDD and copy that back to the Pool. I would consider keeping the two working old HDDs in the Pool and using x2 duplication.
  10. I guess it is somewhat contrary to the concept of DP. I mean, what you are looking for, it seems to me, is the virtual equivalent of physically taking a platter out of an HDD and shoving it into another HDD on another machine. Also, the Pool that just lost the HDD should become unusable, in the sense that it will go into read-only mode because it is missing a disk. It will surely whine about it. So I am really wondering about the use-case here. Could you not, for instance, share a folder and access it over the network?
  11. Easily? I think each machine would create its own PoolPart.* folder, so the files on the HDD would not end up in Pool Y instantly. But if you then *move* the content from the PoolPart.* folder that is X to the PoolPart.* folder that is Y (very fast!), it would be there. Can't say it is a use-case that I find attractive, though, but you know best!
  12. Umfriend

    File duplication from local to cloud possible?

    Wrt step 4, Mick is right and clarified my point very well. Thanks. So if your files are in Pool A, then they are located in E:\PoolPart.*, F:\PoolPart.* and G:\PoolPart.*. You could move them, HDD by HDD, using explorer, to E:\PoolPart.*\PoolPart.*, F:\PoolPart.*\PoolPart.* and G:\PoolPart.*\PoolPart.*.

    The * stands for an insanely unintelligible unique name. For instance, I have one that is I:\PoolPart.658b6833-2c4d-4812-8ad3-5d3113caa4fb\PoolPart.ce41f9c3-261e-46d4-a01b-fb38220c59d3\ServerFolders, where the first PoolPart.* is Pool A and the second is Pool B.

    When I used "upper" and "lower" I meant this hierarchically. The drive looks like:
    ROOT
    -- Other Folders
    -- PoolPart.* Folder (this is Pool A, and within Pool A you can have)
    ---- Other Folders (only in Pool A)
    ---- PoolPart.* Folder (this is Pool B)
  13. Umfriend

    File duplication from local to cloud possible?

    Mick is right. The Drive Usage Limiter is not the way to go for this. What you *can* do is this:
    1. Create Pool A (E:, F: and G:) - no duplication.
    2. Create Pool B consisting of the Google Drive and Pool A (yes, you *can* add a Pool to a Pool; it is called hierarchical Pooling).
    3. In Pool B, uncheck Unduplicated for the Google Drive (this forces all unduplicated files to be saved to Pool A).
    4. Copy/Move everything to Pool B.
    5. In Pool B, set duplication to x2 for the photos folder.
    Pool B will view Google Drive as one drive and Pool A as one single other HDD. Duplication will ensure one copy in Pool A (E, F or G) and another copy on Google Drive. Wrt step 4, depending on where the data is right now, this can be done *fast* by disabling the DrivePool service after step 3 and then moving the files on the E, F and G drives from the upper hidden PoolPart.* folder to the lower PoolPart.* folder, drive by drive. Moving this way is fast. Otherwise you may have a lot of moving/copying between HDDs, which is far slower as it really writes the files instead of just changing directory entries. Oh, and if you can, do a backup first.
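    The step-4 shortcut above can be sketched as follows. This is only a sketch under the assumptions in the post: the DrivePool service is stopped, and the hidden folders follow the PoolPart.* naming shown earlier; the function name is made up for illustration.

```python
import shutil
from pathlib import Path

def seed_inner_pool(drive_root):
    """Move a drive's data from the upper (Pool A) PoolPart.* folder into
    the nested lower (Pool B) PoolPart.* folder on the same volume.
    Same-volume moves only rewrite directory entries, which is why this
    is fast. Sketch only: run with the DrivePool service stopped."""
    root = Path(drive_root)
    outer = next(root.glob("PoolPart.*"))      # Pool A's hidden folder
    inner = next(outer.glob("PoolPart.*"))     # Pool B's folder inside it
    for item in list(outer.iterdir()):
        if item != inner:                      # don't move Pool B into itself
            shutil.move(str(item), str(inner / item.name))
```

    You would run this once per drive (E:, F: and G:), then re-enable the service and let DP re-measure.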
  14. Umfriend

    Can't enable duplication on a pool of pools

    Ah. I think you would have been faster if you had enabled duplication from the start. Now it will first transfer files to the other HDDs and then, with duplication, write them back.
  15. Umfriend

    Can't enable duplication on a pool of pools

    That's a good point, actually. You'd still need to move some data, though, and re-measure and rebalance. However, if you think you may ever need to expand the Pool(s), then the current setup might be better, as you could simply add drives to the G and H Pools. Otherwise you'd have to add a drive to the RAID5 array, and perhaps that might have some additional complications. But if that is the way you would go, then Jaga is absolutely right, I would think.