
Problems with a pool of duplication pools


zeroibis

Question

I have run into what appears to be a glitch, or maybe I am just doing something wrong.

I have 4 10TB drives.

I have 2 10TB drives in a pool with file duplication on.

I have another 2 10TB drives in a pool with file duplication on.

I then have a pool of these two pools. In this pool my data shows 50% as unduplicated and 50% as "other".

This appears to drive the balancer crazy: it gets stuck in a loop of moving files around, always overshooting.

I know the files are duplicated on the lower pools but I feel that the higher level pool should be handling and reporting this status better.

Am I doing something wrong, or is my install bugged?

Link to comment
Share on other sites

25 answers to this question


To put this another way: the balancer will try to move, for example, 1TB of files instead of 500GB, because it ignores the fact that for every MB of "unduplicated" files it moves, it will always move the same MB of "other" files.

If the reason for this is that file duplication is not enabled on the top-level pool, maybe there needs to be another category of file type besides Unduplicated, Duplicated, Other and Unusable for duplication. If "Duplicated" only refers to the current pool and not to underlying pools, there should be another category for something like "Duplicated in another pool".


"Other" data in a pool usually indicates non-pool data taking up space in one of the volumes that make up the pool.

E.g. if you had a pool that had 1 TB of data in it, and you added that pool as a "disk" for another pool, then that 1 TB would show up as "Other" in the latter pool.

I don't know why that would be affecting the balancing, however.


I don't know the ins and outs, but some of this stands to reason. So you have Pool X (x2, consisting of drives X1 and X2), Pool Y (x2) and Pool Z (x1) consisting of Pools X and Y. I think the issue is that when DP finds it needs to move 100MB from X to Y, it will select 100MB, measured at x1, of files to move, but it must then move both duplicates. DP will not be able to move just one duplicate from X to Y, because then the duplication requirement would not be met on either X or Y.

Why you get a substantial "Other" I am not sure, but it may be because, from the perspective of Pool Z, the files are in fact unduplicated. On both X1 and X2 you'll have a PoolPart.* folder; files written there are part of Pool X. Within these folders, you should have another PoolPart.* folder; files written here are part of Pool Z. My guess is that when you write a file to Pool Z, it will feed through to, say, the inner PoolPart.* folder on X1. Then DP will try to duplicate this file in Pool X (so write to X2), and I think this duplicate may be considered "Other" from the perspective of Pool Z. I am not sure where that one ends up physically (inner or outer PoolPart.* folder) on X2, but it cannot be part of Pool Z (with respect to measurement), because if it were, it would be a duplicated file, and that would violate the duplication setting of Pool Z.

Generally, I think, it is better to have Pools X and Y both at x1 (so no duplication) and then tell Pool Z to have x2 duplication. One big advantage is that if you ever need to recover, you only need either the drives from Pool X or from Pool Y. I am pretty sure that would avoid the balancer overshooting and the measurements being unclear.

 

8 hours ago, Umfriend said:

I don't know the ins and outs but some of this stands to reason. […] Generally, I think, it is better to have Pool X and Y both at x1 (so no duplication) and then tell Pool Z to have x2 duplication. One big advantage is that if you ever need to recover, you only need either the drives from Pool X or Pool Y. […]

 

Actually, the reason for the config I have is to aid in recovery and to deal with limits on mountable drives.

You can mount drives at letters A-Z, so this gives you a maximum of 26 mountable drives. My system supports 24 drives, so this would appear to be OK until you add in external drives.

Then there is the issue of backup.

By taking a pair of drives and using duplication on them, I need both drives in that pair to be lost before I have to restore from backup. This is because the content of both drives is known to be identical. If instead I just had all the drives in one pool with duplication turned on and I lost 2 drives, I would not know exactly what content was lost, as only some of the files were present on both of the lost drives. Thus, in order to safely restore data I would need to restore twice the amount of data. This also dramatically complicates backups, because now you are backing up duplicate data as well.

 

  

9 hours ago, Shane said:

"Other" data in a pool usually indicates non-pool data taking up space in one of the volumes that make up the pool.

E.g. if you had a pool that had 1 TB of data in it, and you added that pool as a "disk" for another pool, then that 1 TB would show up as "Other" in the latter pool.

I don't know why that would affecting the balance however.


There is no data that is not within the pool. All data is within the pool or within a pool that is part of this pool. You can see this indicated in the images below.

 

See attached images:

 

As you can also see in the last image, the balancer creates a condition where it can never actually reach balance. Likely the easiest way to correct this would be to simply alter the way the balance operation functions, rather than worrying about making new ways to report data. Currently the balancer makes a decision, executes the plan, and nothing will change its mind. It needs to stop and recheck periodically rather than move everything in one go. If the system were configured so that after every X% of the balance it rechecked how its changes actually affected the balance, and if it never tried to move more than 50% of the balance change it estimates before rechecking, it would never have a problem.
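For illustration, here is a minimal sketch in Python of that capped recheck loop, using a toy model where every TB the balancer moves drags a duplicate TB of "other" data along with it (all names and numbers are illustrative, not DrivePool internals):

```python
# Toy model of "recheck as you go" balancing, assuming every TB moved
# drags a duplicate TB of "other" data with it. Not DrivePool code.

def rebalance(used, step_fraction, tolerance=0.1, max_passes=20):
    """used: apparent TB used on each of two sub-pools (duplicates invisible)."""
    for n in range(max_passes):
        gap = abs(used[0] - used[1])
        if gap <= tolerance:
            return used, n                    # balanced: stop
        src = 0 if used[0] > used[1] else 1
        planned = gap / 2                     # what the planner thinks it moves
        moved = planned * step_fraction * 2   # hidden duplicates double it
        used[src] -= moved
        used[1 - src] += moved
    return used, max_passes                   # never settled

print(rebalance([6.0, 4.0], step_fraction=1.0))  # oscillates between [4,6] and [6,4]
print(rebalance([6.0, 4.0], step_fraction=0.5))  # settles at [5.0, 5.0] in one pass
```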

Screenshot 2021-02-13 114026.png

Screenshot 2021-02-13 114057.png

Screenshot 2021-02-13 114250.png


First of all, DP does not require drive letters. Some here have over 40 drives in a pool. For individual access, you can mount each drive to a folder instead.

On the backup/restore thing: yes. However, my setup does not require backing up duplicated data. I have two x1 Pools and then one x2 Pool consisting of those two x1 Pools (this is called Hierarchical Pooling). Each x1 Pool has three HDDs, so I only need to back up one set of three drives. With 4 (or 6 in my case) drives, the probability of losing two drives at the same time (or at least losing the 2nd prior to recovery from the first failure) is really small. I also wonder what the loss of two drives in one of your Pools would do if a Pool of yours is ever larger than 2 drives. I don't know what kind of backup solution you use, but I could simply recover the whole shebang to my x2 Pool and tell the recovery not to overwrite existing files in that x2 Pool. DP would have recovered the files for which one duplicate still exists, and the other files would again be present on both sub-Pools.


Yeah, the goal is to avoid pulling a 20TB recovery when you could have just done a 10TB recovery; it would literally take twice as long, and you would have a higher probability of failure.
 

In my case:

A+B = Duplication Pool 1

C+D = Duplication Pool 2

E+F = Duplication Pool 3

Duplication Pool 1+2+3 = Pool of Pools

In the above example you can sustain more than 2 drive failures before you lose data, depending on which drives fail. In the event of a failure where data loss occurred, you would need to restore one drive's worth of data.

Next example:

A+B+C+D+E+F = One large pool with 2x duplication.

In the above case you will have data loss whenever 2 drives die; however, you will need to pull two drives' worth of data from your backups in order to restore. Even if you skip files that you did not lose, you still need to generate the remote pull in order to get the data to restore from your remote backup server/service.

Next example:

A+B = Pool 1

C+D = Pool 2

E+F = Pool 3

G+H = Pool 4

Pool 1+2+3+4 = Pool of Pools, with duplication applied here. Once again, like above, if any 2 drives die you will need to restore data, and the restore operation will be at least the size of two drives.

 

The other issue is how you are presenting data for backup. If you are just backing up your entire large pool, then in order to run a restore operation you need to pull the entire backup, because you do not know which files you lost. By sorting your data so that you know exactly which backup goes with which drive set, you can reduce the scale of the restore operation.

 

For those trying to use this at home, things like this may not really matter, but for business usage, the way you structure your backups and your time to restore are major factors in how you are going to deploy.

 

Anyway, none of this really matters to the actual issue at hand, which appears to be a bug in the software. From what I can tell, the software is unable to correctly balance a pool of pools when the underlying pools have file duplication enabled. Given that the ability to pool pools together is a key feature of DrivePool, along with the ability to balance data across these pools, this is a bug. As stated in my previous post, a simple fix would be to recode the balance operation to check, after no more than 50% completion, whether balance is still needed. Until then, the only workaround is to disable automatic balancing, run the operation manually, and stop it at exactly 50%; then you're good. From there you should not need to run it again if you have file placement balancing turned on.

 

 

7 hours ago, zeroibis said:

There is no data that is not within the pool. All data is within the pool or within a pool that is part of this pool. You can see this indicated in the images below.

Incorrect. There may be no data that is not within _a_ pool, but data that is within a pool is not necessarily also within _the_ pool made using that pool.

For example:

You have disks A, B, C and D. You have pools E (A+B) and F (C+D), and pool G (E+F).
A and B will each contain a root hidden folder PoolPart.guidstring (together containing the contents of Pool E).
C and D will each contain a root hidden folder PoolPart.guidstring (together containing the contents of Pool F).
E and F will each contain a root hidden folder PoolPart.guidstring (together containing the contents of Pool G).
(note that "guidstring" is a variable that may differ between poolparts)

Thus you will have a hidden folder A:\PoolPart.guidstring\PoolPart.guidstring. Data placed in A:\PoolPart.guidstring but outside of A:\PoolPart.guidstring\PoolPart.guidstring will show up as data in E and Other in G, while data placed in A:\ outside of A:\PoolPart.guidstring will show up as Other in both E and G.
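To make the layering concrete, here is a minimal sketch of that classification in Python, keyed purely off how many nested PoolPart.* folders enclose a path (a hypothetical helper for illustration, not DrivePool's actual measuring code):

```python
from pathlib import PureWindowsPath

def poolpart_depth(path):
    """Count how many nested PoolPart.* folders enclose this path."""
    return sum(1 for part in PureWindowsPath(path).parts
               if part.lower().startswith("poolpart."))

def classify(path):
    depth = poolpart_depth(path)
    if depth >= 2:
        return "data in E and in G"        # inside the inner PoolPart
    if depth == 1:
        return "data in E, Other in G"     # only inside the outer PoolPart
    return "Other in both E and G"         # outside any PoolPart

print(classify(r"A:\PoolPart.aaa\PoolPart.bbb\movie.mkv"))  # data in E and in G
print(classify(r"A:\PoolPart.aaa\stray.txt"))               # data in E, Other in G
print(classify(r"A:\notes.txt"))                            # Other in both E and G
```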

Can you provide more information on your balancer/placement settings? I would suggest it's possible that your chosen rules and your specific data sets have created a situation where DrivePool can only fulfill a triggered requirement in a way that sets up a different requirement to be triggered on the next pass, whose own fulfillment in turn re-triggers the previous requirement.

Alternatively, if you want DrivePool's developer to inspect your situation personally (well, remotely) to see if it is due to a bug and/or whether your suggested fix would be feasible, you could use https://stablebit.com/contact to open a support ticket.


@zeroibis Whatever works for you. To me, it seems like a lot of administrative hassle for a remote, really remote, probability that, assuming you have backups, can be recovered from one way or the other.

As I said, my setup never requires a 20TB write operation to recover 10TB of data. It basically resembles your 3rd example, although I would have drives 1 to 4 as Pool 1, drives 5 to 8 as Pool 2, and then have Pool 3 consist of Pools 1 and 2. But I agree that if two drives fail, you need no restore unless the two drives are one whole 2-drive Pool. In my case, I would need to restore some data if the two failing drives are split between Pool 1 and Pool 2. I wonder whether triple duplication with one duplicate through CloudDrive would be helpful, but I don't use CloudDrive and it may be expensive at those sizes.

I am wondering, however: what do you use for backup?

15 hours ago, Shane said:

Incorrect. There may be no data that is not within _a_ pool, but data that is within a pool is not necessarily also within _the_ pool made using that pool. […] Can you provide more information on your balancer/placement settings?

All settings are default. I no longer have an issue after manually balancing to 50%, stopping the balance run, and then disabling automatic re-balancing.

What is going on is that I have 2 pools, and they show all the data as duplicated, as you would expect, since they are duplication pools. You then add those two pools to a pool, and the resultant pool shows 50% of the data as "other".

Now I guess the root issue is that the higher-level pool should not show this data as "other", but I can also see how, if it were labeled duplicated like in the other pools, it could confuse users as to which pool the duplication is occurring on. So there would need to be some other label, like "duplicated in sub-pool" or something like that, so it is clear to users where the duplication is occurring.

See these images; if this is not how it is supposed to work, let me know and I will open a ticket to get the issue resolved.

 

Screenshot 2021-02-14 113008.png

Screenshot 2021-02-14 113140.png

Screenshot 2021-02-14 113307.png


Okay, I think I can see what you're trying to accomplish and why it's having issues with balancing. The default balancer settings expect that a pool is not competing for its parts' free space with external applications. Thus, if a user is placing data into each of the pools A, B and C, but C also happens to use pools A and B as parts, then unless the balancer settings used for C are adjusted to take this into account you can end up with the oscillation you're seeing - because from the point of view of pool C, data being added directly to pool A or B outside of C's own purview/control is the action of an external application, or "Other".

Your options are pretty much:

  1. Continue to micro-manage C's balancing yourself.
  2. Turn off automatic balancing for C, or have it balance only every so often.
  3. Adjust balancing for C until it can tolerate A and B being used independently.
  4. Place data into A or B via C instead of directly, by making use of the File Placement feature of balancing.

Personally I'd pick #4, but your use-case may vary.


  

1 hour ago, Shane said:

Okay, I think I can see what you're trying to accomplish and why it's having issues with balancing. […] Personally I'd pick #4, but your use-case may vary.

 

The only source of files in Pools E and H above is the combined pool Z.

You can see the exact contents of E and H in the attached images.

The only data outside the pool is a folder containing not even 1MB of data used by Backblaze.

 

Screenshot 2021-02-14 202722.png

Screenshot 2021-02-14 202742.png

 

The issue I am having occurs because pool Z sees half the data on pools E and H as "other" instead of as duplicate data, because the data is being duplicated on pools E and H rather than on pool Z directly. I understand it would be confusing to label this "other" data as duplicated, since users might assume it is being duplicated by the current pool and not an underlying pool, but the data should be labeled something other than "other", and the balance functions should take the existence of duplicate data within underlying pools into account.


... Something's not right. I'm just going to collect all the relevant screenshots here.

Screenshot 2021-02-13 114026.png

Screenshot 2021-02-13 114057.png

.. Okay, there's no "Other" data showing up on the four physical disks.

Screenshot 2021-02-13 114250.png

... But it says there's "Other" data on the two respective pool volumes.

Screenshot 2021-02-14 202722.png

Screenshot 2021-02-14 202742.png

... Even though the only other folders apparently contain <1MB, which is a negligible amount when we're looking for 5.59 TB of "Other" data?

:huh:

Wait, do those explorer views still have protected (system) files hidden (Explorer window, View menu, select Options, select View tab, look under Advanced settings, "Hide protected operating system files (Recommended)", defaults to ticked)? Because unless pool measuring has been borked (which would confuse the balancing), you've got 5.59 TB of "Other" data somewhere and one of the possibilities is shadow files in the System Volume Information folder, and I'm not seeing that folder in those screenshots.

On that note, just in case it is borked, can you please open the DrivePool GUI and for each of pools E, H and Z - in that order and waiting for each to finish before starting the next - select Manage Pool -> Re-measure...


Here is what is going on mathematically in the images you see above.

Disk 6 (L3): 2.96TB

Disk 9 (R3): 2.96TB

Their pool reports 5.92TB total storage used. Note that their pool reports this full 5.92TB as "duplicated", which is correct, as the data on disks 6 and 9 is identical.

Disk 1 (L7): 2.63TB

Disk 10 (R7):2.63TB

Their pool reports 5.26TB total storage used. Note that their pool reports this full 5.26TB as "duplicated", which is correct, as the data on disks 1 and 10 is identical.

Then you have pool Z, which contains disks 6, 9, 1 and 10, for a total space used of 11.18TB. However, this pool does not properly report the type of data, so what it instead reports is:

10TB Mirror LR3: 5.92TB, with exactly 50% of the data as "other"

10TB Mirror LR7: 5.26TB, with exactly 50% of the data as "other"

You can also see this in the pie charts, but the numbers are different because I had manually balanced by then.

So what is going on is that the Pool Z is not recognizing the duplicated files from the pools E and H and is listing them as other.

 

You can recreate this by the following steps:

Create a pool of two drives and enable 2x file duplication

Create a second pool of two drives and enable 2x file duplication

Create a third pool of the first two pools.

Place some files into the third pool.

Half of the space will be reported as "unduplicated" and half as "other".

 

If the above steps do not recreate the issue then something is just wrong on my end.

 

Note: the numbers in this image are from after I balanced, and will not match the math performed at the beginning of this post.

Screenshot 2021-02-14 235246.png

Screenshot 2021-02-14 235638.png

Screenshot 2021-02-14 235813.png

On 2/15/2021 at 3:00 PM, zeroibis said:

If the above steps do not recreate the issue then something is just wrong on my end.

Thanks for your patience! Following your steps recreated the issue. So... yeah, that's a bug. :(

I'd hazard a guess that because the ability to pool drivepool volumes was added after the ability to pool physical volumes, the pool measuring code hasn't been updated to take that new ability into account.

I've reported this issue to Stablebit in a ticket, however you may still wish to report the constantly oscillating balancing as that may be a separate issue (I wasn't able to reproduce that; I did notice some oscillation with my test data set but it gradually decreased and reached equilibrium after some iterations; your suggestion of having it check partway might be useful there too).

 

7 hours ago, Shane said:

Thanks for your patience! Following your steps recreated the issue. So... yeah, that's a bug. :( […]

 

Nice. Yeah, the balance issue is caused by this issue: because the balance program can only logically move data other than "other", it does not count "other" data when making the calculation. So if I had 2 pools, like in my case, and one pool had 6TB total used and the other had 4TB total used, and it was seeing half the data as other, the decision goes like this:

6TB = 3TB Movable Files, 3TB Unmovable Files

4TB = 2TB Movable Files, 2TB Unmovable Files

So to balance, it can only move the movable files, and thus it tries to move 1TB, because 6-1=4+1. However, in this case the "other" data also moves every time it moves the movable data. So instead of moving 1TB like it thinks it is doing, it actually moves 2TB, creating an infinite loop.
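A quick arithmetic check of that loop (illustrative numbers only, matching the example above):

```python
# Quick check of the arithmetic above (illustrative only).
used_a, used_b = 6.0, 4.0           # TB the balancer can see on each sub-pool
planned = (used_a - used_b) / 2     # 1.0 TB: what it thinks it needs to move
actual = planned * 2                # 2.0 TB: the "other" duplicates move too
used_a, used_b = used_a - actual, used_b + actual
print(used_a, used_b)               # 4.0 6.0 -> the imbalance flipped; repeat forever
```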

So my expectation is that once they fix the bug of the data showing up as other, this will correct the balancing bug as well.

 

In order to recreate the balance issue you need to use enough data that the pool is so far out of balance that it can never get within the 10% or 10GB stop condition.


I fear this may be hard to address actually. Whenever the Top Pool (consisting of Pool A and Pool B) starts to rebalance, it would have to take into account:
(a) The use-gap between Pool A and Pool B
(b) Then, for any file that is a candidate for move (CFM), check what the Use Of Space (UOS) is, i.e., what the duplication status is (If, for instance, you use folder level duplication, that file may have 1 or a gazillion duplicates),
(c) Then, for any such CFM, determine what the UOS would be after the move. Again, the relevant duplication may be rather arbitrary.

The real issue here may be that when checking the UOS, the Top Pool would actually have to read file parameters somehow. Either it would (1) have to read on a look-through basis, so an x2 duplicated file is returned to the balancer process twice, or (2) for each file, interpret the duplication settings as per Pool A/B, including folder-level duplication settings. However, I suspect that the balancing process is only able to read the relevant data for itself w.r.t. duplication settings, and through Pools A and B, meaning that querying a file would only return one result (just like you only get one result when you look at Pool A or B through Explorer). I suspect that this is how they end up at "other" currently: the Top Pool queries Pools A and B but receives only one record/data item/hit per duplicated file. It also receives total space. So the difference between total space and space used [by single instances of the file, because that is all the Top Pool receives when querying Pool A or B], by definition, is "other".

I am sure it can all be done but it does not seem simple to me and it may have an impact on performance as building the list of files to move will require some additional intelligence.

 

3 hours ago, Umfriend said:

I fear this may be hard to address actually. […] I am sure it can all be done but it does not seem simple to me and it may have an impact on performance, as building the list of files to move will require some additional intelligence.

 

It is actually pretty easy. Right now, due to a bug, it is seeing half of the data on the underlying pools as other. Once it no longer categorizes that data as other, it will balance the correct amount of data.

If it did matter as far as the calculations are concerned, it just needs to divide the balance operation by the duplication setting of the underlying pool. The actual file operations as they work now do not need to be changed. So the difference would be:

Currently:

6TB = 3TB Movable Files, 3TB Unmovable Files

4TB = 2TB Movable Files, 2TB Unmovable Files

So to balance, it can only move the movable files, and thus it tries to move 1TB, because 6-1=4+1. However, in this case the other data also moves every time it moves the movable data. So instead of moving 1TB like it thinks it is doing, it actually moves 2TB, creating an infinite loop.

------------------------------------------------------

After the bug is fixed

6TB = 6TB of 2x duplicated files = 6/2 = 3TB of unique files

4TB = 4TB of 2x duplicated files = 4/2 = 2TB of unique files

3-1 = 2+1, so move 1TB of files.

Another example:

9TB = 9TB of 3x duplicated files = 9/3 = 3TB of unique files

4TB = 4TB of 2x duplicated files = 4/2 = 2TB of unique files

Obviously I am assuming the pools in these examples have the same total storage size, but you can do the math and see that it works out pretty simply.

The key is that the only thing the program needs to do differently than it does now, as far as balancing pools with file duplication, is to divide the balance action by the duplication setting on the pool it is modifying, to take into account the multiplication effect of duplication.

 

Say you have a 40TB pool with 4 10TB drives and 4x duplication, with 12TB of data in the pool. This pool is 30% full.

A new 40TB pool with 2 20TB drives is created and 2x duplication is enabled. This pool is 0% full.

The two pools above are added to a new pool, and this pool now balances the data of the duplication pools. How much needs to be moved to balance?

Members:

Pool A: 40TB with 12TB used 30% full 4x duplication

Pool B: 40TB with 0TB used 0% full 2x duplication

 We need to look at the real space in order to figure it out.

How much free space is there for unique files on Pool A? 40/4 = 10TB

How much free space is there for unique files on Pool B? 40/2 = 20TB

So how much space is actually being used on Pool A? 12TB/4 = 3TB -> 3/10 = 30%. OK, so far so good.

However, if we reduce files on pool A by 50% we only increase the usage on pool B by 25%. Thus we need to move twice as much as we expect to.

So how do we know how much to move?

We still have 3TB of unique data but we now have 10TB+20TB of total storage space for unique files.

Our new total % of used storage space is thus 3/30 or 10%.

So if each pool is 10% full, their totals will add up to the overall 10%.

Pool A 1TB = 10% full

Pool B 2TB = 10% full

So if we currently have 3TB of unique files on Pool A, we need to remove 2TB worth and move them to Pool B.

This will result in:

Pool A: Remove 2TB of unique files = 2*4 = 8TB of data, so 12-8 = 4TB, or 10% full

Pool B: Add 2TB of unique files = 2*2 = 4TB of data, so 10% full.

 

 

Here is just the raw math:

Pool A: 40TB with 12TB used 30% full 4x duplication

Pool B: 40TB with 0TB used 0% full 2x duplication

Balance Calculation:

Space for unique files:

Pool A: 40TB/4 = 10TB

Pool B: 40TB/2 = 20TB

Total Space for Unique Files = 30TB

Space used by unique files:

Pool A: 12TB/4 = 3TB

Pool B: 0TB/2 = 0TB

Total Space used by unique files = 3TB

Current Space Used by Unique files as a %

Pool A: 3TB/10TB = 30%

Pool B: 0TB/20TB = 0%

Total Current Space Used by Unique files as a % = 3TB/30TB = 10%

Target % used for members to be balanced = 10%

Pool A: 30% - X = 10% -> X = 20%

Pool B: 0% + X = 10% -> X = 10%

Convert from % to TB to move:

Pool A: 10TB*20% = 2TB

Pool B: 20TB*10% = 2TB

2TB = 2TB

Conclusion: Move 2TB of unique files from Pool A to Pool B

Pool A: Remove 2TB of unique files = 2*4 = 8TB of data, so 12-8 = 4TB, or 10% full

Pool B: Add 2TB of unique files = 2*2 = 4TB of data, so 10% full.

 

I think you also get the idea from the above math as to how a system with more than 2 pools would operate, in so far as finding how much to remove from and add to each one.
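Here is a minimal sketch of that calculation in Python (illustrative only; each pool is given as total size, used space, and duplication level, assuming uniform pool-wide duplication):

```python
# Sketch of the duplication-aware balance target worked out above.
# Each pool is (total TB, used TB, duplication level); illustrative only.

def balance_moves(pools):
    unique_cap  = [total / dup for total, used, dup in pools]  # room for unique files
    unique_used = [used / dup for total, used, dup in pools]   # unique files present
    target = sum(unique_used) / sum(unique_cap)                # overall fill ratio
    # Positive = TB of unique files to move off that pool; negative = to move on.
    return [u - target * cap for u, cap in zip(unique_used, unique_cap)]

# Pool A: 40TB, 12TB used, 4x duplication. Pool B: 40TB, empty, 2x duplication.
print(balance_moves([(40, 12, 4), (40, 0, 2)]))  # [2.0, -2.0]: move 2TB of unique files A -> B
```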

 

 

 

 

 


That doesn't account for the possibility of any given pool using a mix of duplication levels. Much better I think to stay with your original suggestions of not treating child pool duplication as Other (which I'd think would be doable via fetching the child pool's own measurements) and checking at interval(s) during re-balancing to see if it's satisfied the goalpost conditions.

3 hours ago, Shane said:

That doesn't account for the possibility of any given pool using a mix of duplication levels. […]

 

Oh, you mean a mix of duplication levels within a pool.

What is interesting is that I am not even sure how they would achieve this, or why.

For example, for that to actually happen, the following would be needed:

Pool A: Contains the pool part folder for Pool Z

Pool B: Contains the pool part folder for Pool Z

Pool A: Configures custom duplication settings to an individual file or group of files in the pool part folder for Pool Z

Pool B: Configures custom duplication settings to an individual file or group of files in the pool part folder for Pool Z

 

Issue: files and folders within these pool parts can be moved at any time, and then your duplication rules will not match.

Solution: they should apply these rules at the Pool Z level; there is no logical reason not to do this unless they are not using balancing.

 

It does not make sense to me for that type of duplication method to be supported or encouraged. It is one thing to have duplication applied to an entire pool and then make that pool part of another pool. I could also see allowing duplication to be enabled only on the entire pool part folder.

However, wanting duplication at the individual file and folder level within a pool part folder, whose contents you then want to balance to another pool where you do not (and logically could not) have an identical configuration set, is insane.

 

You are right that a simple solution for the balancer is to have it adjust the goal posts as it goes. I would imagine the programming of that logic to be more copy-pasta than trying to write a bunch of new math into it, lol.

6 hours ago, Umfriend said:

Whether it makes sense or not, a higher pool does not inherit duplication settings from a lower Pool. That might make it harder. I even think a higher Pool is rather restricted in what it can read from a lower Pool, i.e. checking the duplication status. But we'll see what Covecube says.

I mean, the data is already in the program and it is the same program. Nothing is really changing here with respect to file placement, only how the program internally reports the data.

As I have stated before, I think it is logical that the files in the higher pool not be labeled "duplicated", because that may lead users to believe that duplication is turned on at that pool's level. Instead it should label them something else, like "duplicated by child pool".

Also, nowhere is anyone asking or wanting a higher pool to inherit any settings; it should not be doing that.

4 hours ago, Christopher (Drashna) said:

For a detailed description from Alex about this: 
https://stablebit.com/Admin/IssueAnalysis?Id=28526

But basically, this is expected and normal, because it's computing the files at that level for the pool, and doesn't compute duplication on sub-pools.

Yeah, that makes sense. Honestly, I do not really care how it reports it on the higher pool; the more important thing is for the balancing to actually work correctly.

The simplest solution would be to have the balance operation recheck and adjust the goal posts while underway, so that it cannot overshoot or get into an infinite loop.
