Posts posted by Umfriend

  1. So I am not entirely sure, but I think it works like this:

    1. Scanner gets a SMART warning and tells DP to evacuate the relevant HDD

    2. Files are evacuated to another HDD.

    3. Now some files may be unreadable on the relevant HDD. I *think* you first need to remove the HDD from the Pool and then have the Pool remeasured. Should DP find that some files which are to be duplicated x times are no longer duplicated x times, it will re-duplicate them.

    But it may be even a bit more automated than that.

  2. For "Backup":

    Manage Pool -> Balancing -> Balancers -> Drive Usage Limiter -> check/uncheck Duplicated/Unduplicated. If you uncheck Unduplicated for the slower HDDs, then unduplicated files will end up on the faster HDDs. However, both copies of a duplicated file may still end up on two faster HDDs; there is no way to direct one copy to a fast HDD and the other to a slow one (a small sketch follows at the end of this post). You cannot do this because DP does not actually make "backups" of duplicated files; it makes "duplicates". The difference is that all copies of a file have the same status: there is no "original" vs. "backup" distinction.

    You could work with hierarchical Pools, though; that would make it work at the expense of being a bit less flexible with space usage. You might be able to use the File Placement Rules, but I don't use them and have no clue how they would work for you. The File Placement Rules, as I understand them, would however be very helpful with respect to keeping folders/data on specific HDDs (your third question).
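
    A minimal sketch of the Drive Usage Limiter behaviour described above, as I understand it (the class and function names and the placement model are mine, not DP's actual code):

    from dataclasses import dataclass
    import random

    @dataclass
    class Drive:
        name: str
        allow_unduplicated: bool = True
        allow_duplicated: bool = True

    drives = [
        Drive("fast1"),
        Drive("fast2"),
        Drive("slow1", allow_unduplicated=False),  # "Unduplicated" unchecked
        Drive("slow2", allow_unduplicated=False),
    ]

    def place(duplicated: bool, copies: int = 1):
        # DP picks from every drive the limiter allows; nothing forces one
        # copy of a duplicated file onto a slow drive.
        if duplicated:
            candidates = [d for d in drives if d.allow_duplicated]
        else:
            candidates = [d for d in drives if d.allow_unduplicated]
        return [d.name for d in random.sample(candidates, copies)]

    print(place(duplicated=False))           # always a fast drive
    print(place(duplicated=True, copies=2))  # may well be ['fast1', 'fast2']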

     

  3. 1 minute ago, Jaga said:

    It's definitely -not- slow.  I'm a performance freak, and would throw a hissy fit if my performance degraded significantly.  :D  I even did some raw vs cached Pool testing (using Primocache) and the raw numbers were really close to what I'd see testing a single drive.

    Yes, that is what I tried to say: no discernible performance impact.

  4. 1 hour ago, PoolBoy said:

    BUT how does it know those files belong together?

    I can only think of two ways to know:
    a] A database. Fast but prone to corruption.
    b] At startup scan the disks and from the folder names figure out the original structure. Safe but very time consuming.

    Simple. Assume you have 13 pooled HDDs, each containing a hidden PoolPart.* folder. You direct Windows Explorer to P:\ (which I assume is the drive letter you assigned to the Pool). DP reads the PoolPart.* folders on the 13 HDDs and merges the results. Then you select the folder Movies: DP reads the PoolPart.*\Movies folders and merges the results. Etcetera, ad infinitum (a rough sketch of this follows below). There is no reason I can think of to keep the merged results on standby for the entire folder structure; that would be slow. And even with large Pools, when DP remeasures/rebalances it does need a complete list in order to check duplication consistency and construct a file movement strategy, so it reads the entire structure (not the actual files), and that too is done rather quickly (and transparently to the user; you won't even notice it is working on it).

    This may sound slow, but the 13 HDDs are read simultaneously. There are many users here and I have yet to come across one who complains about DP being slow, whether reading or writing (except perhaps for read striping not giving a benefit to some, such as me).
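
    A rough sketch of that merge-on-demand idea, assuming hypothetical mount points (this is my reconstruction of the behaviour, not StableBit's actual code):

    from pathlib import Path

    # Hypothetical mount points of the pooled disks (a real pool may use
    # mount folders rather than drive letters).
    disk_roots = [Path("D:/"), Path("E:/"), Path("F:/")]

    def list_pool_folder(relative: str) -> set[str]:
        # Read the matching subfolder inside each disk's hidden
        # PoolPart.* folder and merge the entry names.
        merged: set[str] = set()
        for root in disk_roots:
            for poolpart in root.glob("PoolPart.*"):
                folder = poolpart / relative
                if folder.is_dir():
                    merged.update(entry.name for entry in folder.iterdir())
        return merged

    # Listing P:\Movies then amounts to merging <disk>\PoolPart.*\Movies:
    print(sorted(list_pool_folder("Movies")))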

  5. I get the impression you think there is some sort of database involved. There is not; it's all just plain NTFS. If you had multiple files named "Test.fil" stored in different folders, then each file would be stored on one of the pool disks (or more than one if duplication is on) in a folder with that same folder name, just like what you currently have with 001.jpg...002.jpg in 2016/2017/2018 folders.
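
    For example (the drive letters and PoolPart names below are made up), a pooled path maps one-to-one onto a path inside some disk's hidden PoolPart.* folder:

    # Hypothetical illustration of the on-disk layout; nothing here is
    # a real StableBit path.
    pool_path = r"P:\2016\Test.fil"                    # what you see in Explorer
    physical  = r"E:\PoolPart.abc123\2016\Test.fil"    # where it actually lives
    duplicate = r"F:\PoolPart.def456\2016\Test.fil"    # second copy, if x2 duplication is on
    print(pool_path, "->", physical)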

  6. But duplication is irrelevant here either way, as a space limitation could be solved by adding more 1GB volumes. Still, a 3GB file can never be written to a Pool that only has volumes smaller than 3GB.

  7. 1 hour ago, zanosg said:

    Im sorry but that isn't how DP works according to my tests. In VirtualBox I'm running Win7 and I have installed StableBit. Then i created 6 1GB drives to test with. I created pool of 1GBx4 and that should equal to 4GB DP, but it is not. It says that it is when i check the properties of that Pool but in reality its not because I cant write 3GB file to it. It says that it needs more space. It looks like the space was slashed in half. I do not want that. I want all the space available. Thanks.

    Yes, that is because you have partitioned the drives and added 1GB volumes to the Pool. DP does not split individual files over HDDs/partitions/volumes. So if you have a Pool that consists of 1GB volumes, then the largest file you could ever write to that Pool is 1GB, as that one 1GB file needs to end up on _one_ volume. In theory, you could have a million 1GB volumes added to a Pool and your storage capacity would be huge, but only for files that do not exceed 1GB each (see the sketch below).

    @Spider99: I think you misread his post. A Pool that consists of 1GB volumes can never hold individual files larger than 1GB.
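
    A minimal sketch of that constraint (sizes in GB, matching the example above; the function name is mine):

    # Four 1GB volumes: pooled capacity adds up, but DP never splits a
    # file, so each file must fit on a single volume.
    volumes_free_gb = [1.0, 1.0, 1.0, 1.0]

    def can_store(file_gb: float) -> bool:
        return any(free >= file_gb for free in volumes_free_gb)

    print(sum(volumes_free_gb))   # 4.0 -> the capacity the Pool reports
    print(can_store(0.8))         # True: fits on one volume
    print(can_store(3.0))         # False: no single 1GB volume can hold it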

  8. With DrivePool, that is very easy. You have the 1x4TB + 1x8TB Pool. Then you add the other 8TB HDD. Then you select the 4TB HDD and press Remove. DP will move the files off the 4TB HDD, and presto. It will take a while, as it is actually copying TBs of data. There may be an issue with a few open files, but that will work itself out.

  9. Wow. The notion of DrivePool not ever balancing except by selecting the appropriate disk for new files is, given my circumstances, ffing brilliant!  How do you ensure DP will never rebalance (unless by user intervention)?

  10. AFAIK, the SSD Optimizer requires a number of SSDs equal to the level of duplication. I.e., one SSD will not work for a Pool that has x2 duplication. Having said that, you could create two x2 duplication Pools and put one Pool on top of them, unduplicated and with the SSD as cache. That would work, I think.

    You have 8x 2TB drives separated into 4 pools of 2 drives each, where each pool has duplication. You configure backups to run on each of these individual pools. You then join these 4 pools into 1 large storage pool. You know that the data on each of the 4 smaller pools are exact copies of each other, so now, when 2 drives fail in a scenario that causes data loss, you can simply download the 2TB recovery of that individual pool.

    I actually think this may not work fully unless the backups are instant. The reason is that you may have run a backup at T=0, DP may rebalance at T=1, and the drives of one pool may fail at T=2. At T=1, some files may have been written to the failing drives without you knowing it, and these would not be available in the backup made at T=0 (see the toy timeline below).

    This may seem similar to the (normal) loss of data between backups, but it is in fact a bit worse. I may be wrong, though.
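
    A toy timeline of that race, with made-up file names ("A" is one of the duplicated sub-pools):

    # T=0: the backup of sub-pool A runs.
    pool_a = {"movie.mkv"}          # current contents of sub-pool A
    backup_a = set(pool_a)

    # T=1: DP rebalances (or new data arrives); photo.jpg lands on A.
    pool_a.add("photo.jpg")

    # T=2: both drives of sub-pool A fail.
    lost = pool_a - backup_a
    print(lost)   # {'photo.jpg'}: it sat on the failed drives, but no
                  # backup of A has it, and you would not know to go look
                  # in another sub-pool's backup for it.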

    Now, two HDDs failing at the same time is rather rare, but you could consider using CloudDrive as well. You would have either x2 or x3 duplication, depending on how OCD you are, and then backups to e.g. Backblaze (which I assume allows for versioning and history). Should you consider CloudDrive, then I advise asking here what the best providers are, as some have all kinds of limits/issues (I do not know, I am not a CloudDrive user).

  11. In WHS at least, you can ignore the warning. Sometimes it remains but greyed out, although a reboot tends to help. Mostly, they disappear. I have never seen ignoring a notification suppress a _new_ warning (but then, how would I know, as it is hard to experience something that does not occur).

    For instance, if I insert a new HDD, I get a notification that I can either use it as storage or as a backup disk. When I ignore the warning it disappears, but a new one will pop up when I insert another HDD.

  12. Not familiar with testdisk. Best to follow Christopher's instructions.

    Is F:\ the drive this is about? Explore it through Windows Explorer and see if there is a hidden PoolPart.* folder. You could also attach it to another PC to see if you can read it from there. Did you have duplication active?

  13. First thing I would do is:

    1. Assign a drive letter to the HDD (if it is not there already)

    2. Explore the HDD with Windows Explorer. Make sure you have Explorer set to show hidden files. If it shows a PoolPart.* folder, open that one, and if it shows data, you have likely _NOT_ lost data (a small check script follows after this list).

    3. If that is the case, then I think you need to give more details about OS/DP version/HDDs/MB/controller card etc. and better describe what is happening exactly. It sounds to me like you experienced behaviour that should have been addressed much earlier and, as it wasn't, must be addressed now.

    I don't see you writing about deleting/creating/formatting partitions; that's good, at least.
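
    For step 2, a small check script (the drive letter X: is an assumption; use whatever letter you assigned):

    from pathlib import Path

    drive = Path("X:/")                      # the HDD in question
    for poolpart in drive.glob("PoolPart.*"):
        print("Found:", poolpart)
        for entry in list(poolpart.iterdir())[:10]:
            print("  ", entry.name)          # peek at the first few entries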

  14. It may have to do with other balancers. If you have the Volume Equalization and Duplication Space Optimizer balancers active, they may need to be de-activated _or_ you need to increase the priority of the Disk Space Equalizer plug-in such that it ranks higher than the other two (but if you have StableBit Scanner, that one should always be #1 IMHO).

    I have not actually used that plug-in myself though.

    Edit: Did you activate re-measure, though?
