Covecube Inc.

zeroibis

Members
  • Content Count: 61
  • Days Won: 2
Everything posted by zeroibis

  1. Yea, that makes sense; honestly, I do not really care how it reports it on the higher pool. The more important thing is for the balancing to actually work correctly. The simplest solution would be to have the balance operation recheck and adjust the goal posts while underway so that it is not able to overshoot or get stuck in an infinite loop.
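A rough sketch of that "recheck the goal posts" idea, purely illustrative; the pool objects, method names, and threshold here are hypothetical and not part of DrivePool:

```python
# Illustrative sketch only: re-evaluate the balance target after every file
# move so the balancer cannot overshoot or loop. All names are hypothetical.

def rebalance(pools, tolerance_bytes=1 * 1024**3):
    """Move files from the fullest pool to the emptiest, rechecking the gap
    after each move instead of committing to a precomputed amount."""
    while True:
        pools.sort(key=lambda p: p.used_bytes, reverse=True)
        fullest, emptiest = pools[0], pools[-1]
        gap = fullest.used_bytes - emptiest.used_bytes

        # Goal posts are rechecked on every pass; stop once close enough.
        if gap <= tolerance_bytes:
            break

        candidate = fullest.pick_movable_file()
        if candidate is None:        # nothing left that can legally move
            break
        fullest.move_file_to(candidate, emptiest)
```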
  2. So currently you have: A pool with files A drive not in a pool with files You want: A pool with files also containing the drive that is currently not in the pool and its files also in the pool. How to: Add the existing drive that is not in a pool to the pool. This will create a hidden folder on that drive called pool part or something like that. The contents of that folder are what shows up in the pool. So in order to have your files on this drive show up in the pool, just move the files into this folder. They will now be in the pool. If the folder names
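A minimal sketch of that move step, assuming the hidden folder is named something like PoolPart.xxxx (check the drive for the actual name; the drive letter and the loop here are illustrative, not an official procedure):

```python
# Sketch only: move existing files on a newly added drive into its hidden
# PoolPart folder so they show up in the pool. Adjust the drive letter and
# skip any system folders as needed.

import shutil
from pathlib import Path

drive_root = Path("E:/")                           # drive just added to the pool
pool_part = next(drive_root.glob("PoolPart.*"))    # the hidden pool folder

for item in drive_root.iterdir():
    if item == pool_part:
        continue                                   # never move the pool folder into itself
    shutil.move(str(item), str(pool_part / item.name))
```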
  3. I mean the data is already in the program and it is the same program. Nothing is really changing here with respect to file placement, only how the program internally reports the data. As I have stated before, I think it is logical that the files in the higher pool not be labeled "duplicated" because that may lead users to believe that duplication is turned on at that pool's level. Instead it should label them something else like "duplicated by child pool". Also, nowhere is anyone asking or wanting a higher pool to inherit any settings; it should not be doing that.
  4. Oh, you mean a mix of duplication levels within a pool. What is interesting is that I am not even sure how they would achieve this or why. For example, for that to actually happen the following would be needed. Pool A: Contains the pool part folder for Pool Z Pool B: Contains the pool part folder for Pool Z Pool A: Configures custom duplication settings for an individual file or group of files in the pool part folder for Pool Z Pool B: Configures custom duplication settings for an individual file or group of files in the pool part folder for Pool Z Issue Files
  5. It is actually pretty easy. Right now, due to a bug, it is seeing half of the data on the underlying pools as other. Once it no longer categorizes them as other, it will then balance the correct amount of data. If it did matter as far as the calculations are concerned, it would just need to divide the balance operation by the duplication setting of the underlying pool. The actual file operations as they work now do not need to be changed. So the difference would be: Currently: 6TB = 3TB Movable Files, 3TB Unmovable Files 4TB = 2TB Movable Files, 2TB Unmovable Files So to bala
  6. Nice, yea, the balance issue is caused by this issue. Because the balance program can only logically move data other than "other", it does not count "other" data when making the calculation. So if I had 2 pools like in my case, and one pool had 6TB total used and another had 4TB total used, and it was seeing half the data as other, the decision goes like this: 6TB = 3TB Movable Files, 3TB Unmovable Files 4TB = 2TB Movable Files, 2TB Unmovable Files So to balance it can only move the movable files and thus it tries to move 1TB because 6-1=4+1. However in this case the other data will
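The arithmetic behind that overshoot, using the figures quoted above; the little simulation itself is only illustrative and is not how DrivePool is implemented:

```python
# Worked arithmetic for the overshoot described above, using the quoted figures.

dup = 2                       # 2x duplication on the child pools
used_a, used_b = 6.0, 4.0     # TB actually used on each side

# What the balancer sees: half of each side is labeled "other" and uncounted.
movable_a, movable_b = used_a / dup, used_b / dup    # 3.0 TB and 2.0 TB

# It plans to move enough data to equalize the totals: 6 - 1 = 4 + 1.
planned = (used_a - used_b) / 2                      # 1.0 TB planned

# But every moved byte drags its duplicate along, so twice that really moves.
actually_moved = planned * dup                       # 2.0 TB
print(used_a - actually_moved, used_b + actually_moved)   # 4.0 6.0 -> flipped, so it loops

# Dividing the plan by the duplication level lands on target instead.
corrected = planned / dup                            # 0.5 TB of labeled-movable data
print(used_a - corrected * dup, used_b + corrected * dup)  # 5.0 5.0
```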
  7. Here is what is going on mathematically in the images you see above. Disk 6 (L3): 2.96TB Disk 9 (R3): 2.96TB Their pool reports 5.92TB total storage used. Note that their pool reports this full 5.92TB as "duplicated", which is correct as the data of disks 6 and 9 are identical. Disk 1 (L7): 2.63TB Disk 10 (R7): 2.63TB Their pool reports 5.26TB total storage used. Note that their pool reports this full 5.26TB as "duplicated", which is correct as the data of disks 1 and 10 are identical. Then you have the pool Z which contains Disks 6, 9, 1, 10 for a total space used of 1
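Summing those figures (simple arithmetic from the numbers above; the halving is the behaviour described in the neighbouring posts, not a measurement):

```python
# Arithmetic from the figures above: each child pool reports its mirrored
# contents as fully duplicated, and the parent pool Z spans both of them.

pool_6_9  = 2.96 + 2.96   # disks 6 and 9, identical contents -> 5.92 TB duplicated
pool_1_10 = 2.63 + 2.63   # disks 1 and 10, identical contents -> 5.26 TB duplicated

pool_z_total = pool_6_9 + pool_1_10       # 11.18 TB of space used across pool Z
misreported_as_other = pool_z_total / 2   # the half that the bug labels as "other"

print(pool_z_total, misreported_as_other)
```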
  8. The only source of files in the Pools E and H above is the combined pool Z. You can see the attached images for the exact contents of E and H. The only data outside the pool is a folder containing not even 1MB of data used by Backblaze. The issue I am having is occurring because pool Z is seeing half the data on pools E and H as "other" instead of as duplicate data, because the data is being duplicated on pools E and H instead of on pool Z directly. I do understand that it would be confusing to users to label this "other" data as duplicated as they mig
  9. All settings are default. I no longer have an issue after manually balancing to 50%, stopping the balance run, and then disabling the auto rebalance. What is going on is I have 2 pools and they show all the data as being duplicated, as you would expect, as they are duplication pools. You then add those two pools to a pool and the resultant pool now shows 50% of the data as other. Now I guess the root issue is that the higher level pool should not show this data as other, but I can also see how, if it was labeled duplicated data like the other pools, it could confuse users as to wh
  10. Yea, the goal is to not need to pull a 20TB recovery when you could have just done a 10TB recovery; it would literally take twice as long. Also you have a higher probability of failure. In my case: A+B = Duplication Pool 1 C+D = Duplication Pool 2 E+F = Duplication Pool 3 Duplication Pool 1+2+3 = Pool of Pools In the above example you can sustain more than 2 drive failures before you lose data, depending on which drives fail. In the event of a failure where data loss occurred, you will need to restore 1 drive's worth of data. Next example: A+B+C+D+E+F = On
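A small illustrative count of how many two-drive failures actually lose data in that paired layout, versus a single flat pool with 2x duplication where copies can land on any two drives (the flat-pool figure is a worst-case assumption, not something quoted above):

```python
# Illustrative only: count the 2-drive failure combinations that cause data
# loss in three mirrored pairs vs a flat 6-drive pool with 2x duplication,
# assuming the flat pool may place the two copies of a file on any two drives.

from itertools import combinations

drives = ["A", "B", "C", "D", "E", "F"]
pairs = {("A", "B"), ("C", "D"), ("E", "F")}    # the three duplication pools

paired_losses = sum(1 for f in combinations(drives, 2) if f in pairs)
flat_worst_case = len(list(combinations(drives, 2)))   # every pair could share some file

print(paired_losses, flat_worst_case)   # 3 of 15 combinations vs up to 15 of 15
```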
  11. Actually the reason for the config I have is to aid in recovery and due to limits on mountable drives. You can mount drives to letters A-Z, so this gives you a maximum of 26 mountable drives. My system supports 24 drives, so this would appear to be ok until you add in external drives. Then there is the issue of backup. By taking a pair of drives and using duplication on them, I need both drives in that pair to be lost to trigger me needing to restore from backup. This is because the content of both drives is known to be identical. If instead I just had all the drives in a pool and duplication tur
  12. To put this another way, the balancer will try to move 1TB of files, for example, instead of 500GB, because it is ignoring the fact that for every MB of "unduplicated" files it moves it will always move the same MB of "other" files. If the reason for this is that file duplication is not enabled in the top level pool, maybe there needs to be another category of file types besides Unduplicated, Duplicated, Other and Unusable for duplication. If duplicated only refers to the current pool and not underlying pools, there should be another category for something like Duplicated in another pool.
  13. I have run into what appears to be a glitch, or am I just doing something wrong? I have 4 10TB drives. I have 2 10TB drives in a pool with file duplication on. I have another 2 10TB drives in a pool with file duplication on. I then have a pool of these two pools. In this pool my data shows 50% as unduplicated and 50% as "other". The result of this appears to drive the balancer crazy, so it is in a loop of moving files around and always overshooting. I know the files are duplicated on the lower pools, but I feel that the higher level pool should be handling and report
  14. I wonder what running the scanner on a cloud drive actually does... as it is not as though it would be able to access anything on the physical layer anyways...
  15. Wow, very interesting. I have had a problem like this long ago but not in DrivePool. Same thing, some ancient files from the 90s with a permissions bug. Will definitely take note of LockHunter if I ever run into that problem again. Thanks for sharing what fixed it!
  16. Did not know that you guys even considered looking at RAID controllers; I can only imagine the mess that would be. Yea, my main focus was just asking for support of HBA drives used in soft RAID configurations, as I know that you already have access to their data. The other part of my request was actually more of a suggestion, which deals with your first response about displaying the info. I was hoping to provide in those photos a suggestion of how you could add support for this information without turning the UI upside down and inside out to add support. As for dynamic disks, yea, I get how
  17. Not really a backup without some sort of versioning (at minimum turning on File History). It is basically a short term backup from the last X hours, but you lose files that you delete once the process runs. I would call what you're doing an 8TB main drive where you have 2x 5TB pooled redundant drives. Thus at that point you would likely benefit from just making it real time or at minimum putting the entire process under DrivePool. Create a simple pool with the two 5TB drives. Then create a second pool that contains the first pool and the 8TB drive. Now you can enable pool duplication.
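A tiny sketch of that layout with some illustrative capacity math; the min() assumption (each duplicate must land on a different pool member) is mine, not something stated above:

```python
# Sketch of the suggested nesting, with illustrative capacity math. With 2x
# pool duplication and exactly two members, each copy of a file would land on
# a different member, so protected space is bounded by the smaller member.

inner_pool = {"members": ["5TB drive #1", "5TB drive #2"], "capacity_tb": 10}
outer_pool = {"members": [inner_pool, "8TB drive"], "duplication": 2}

usable_duplicated_tb = min(inner_pool["capacity_tb"], 8)
print(usable_duplicated_tb)   # roughly 8 TB of duplicated space in this layout
```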
  18. Basically you have file corruption. If you have not had any system crashes that would have caused a write error, and your HDDs were fine, the cause is one of the following. Bit rot: the sector on one of the drives had not been accessed in a long time, resulting in file corruption (very unlikely). Memory error (if you're not running ECC RAM, this is very likely). I would recommend doing some tests on your memory subsystem to ensure it is not causing file corruption.
  19. zeroibis

    File Placement

    Well, you would only have them on different disks if you ran out of space while moving the file over, so while it could occur, it would not occur very often.
  20. I remembered from a previous conversation that the issue with implementing support to get SMART data from a soft RAID set is more of an implementation issue than a technical one. I wanted to put forward a suggestion on how this could be implemented in what would hopefully be a way that would not require a ton of work. First, why this is important: currently, if you have drives in a soft RAID set, let's say a Storage Spaces array for example, StableBit Scanner will pick up the RAID set and can perform a surface scan and file system check on it. This is fantastic and you want this sort of a
  21. Good to know about the backup and restore option. I am getting tired of needing to constantly re-rename all my drives every time it gets upset because I crashed Windows testing something.
  22. Cool, just saw this. It appears that writes that use sector-by-sector write verification like to trigger this. For example, I only see this error generated when I am using Acronis to copy to a DrivePool volume (even if done via the network). It would be neat if DrivePool was somehow able to automatically suppress this error message, but it is something that I suspect is tied heavily to the OS.
  23. Oh BAM, I found what was off. I knew based on the CPU usage and SSD usage that they were not being loaded heavily and figured it was something with the network. Changed the NIC to enable jumbo frames and bam, the transfer dropped to ~9 min. I do not remember turning them on before, but maybe they were enabled when the ATI card was connected. I do know that for some reason the ATI GPU drivers would screw around with the network connections in general (for example, with the ATI GPU it would take an extra 45 seconds to establish a network connection, and this issue never occurs when an NVIDIA ca
  24. No, that was just the transfer time from the 960 EVO (1TB) to a 970 EVO (250GB). The transfer directly to an NVMe SSD represents the fastest possible transfer time. Actually, recreating that test on my current system, it takes over 10 min now and I am not sure why that is. The only setting changed since then was enabling SATA hot plug in the BIOS and swapping a PCIe x16 ATI card (running PCIe 2.0 x4) for an x1 (3.0) NVIDIA card. Oh, it could be that the NVMe has now settled to its true performance because I wrote over 11TB to it earlier this week. (That shit is worn in now.)
  25. I realized that I never posted the SSD to SSD result here it is: