Covecube Inc.

zeroibis

Members
  • Content Count: 47
  • Joined
  • Last visited
  • Days Won: 1

zeroibis last won the day on October 12

zeroibis had the most liked content!

About zeroibis

  • Rank: Advanced Member
  1. zeroibis

    Measuring the entire pool at every boot?

    Wow, very interesting. I had a problem like this long ago, though not in DrivePool. Same thing: some ancient files from the 90s with a permissions bug. I will definitely take note of LockHunter if I ever run into that problem again. Thanks for sharing what fixed it!
  2. zeroibis

    [Feature Request] Soft RAID Physical Disk Support

    Did not know that you guys had even considered looking at RAID controllers; I can only imagine the mess that would be. Yeah, my main focus was just asking for support of HBA drives used in soft RAID configurations, since I know you already have access to their data. The other part of my request was actually more of a suggestion, which deals with your first response about displaying the info. I was hoping those photos would suggest how you could add support for this information without turning the UI upside down and inside out.

    As for dynamic disks, yeah, I get how that can be a mess, because a single physical disk could belong to multiple different virtual disks. Still, you could just list that same physical disk under multiple virtual disks, because what the user really cares about is looking at their virtual disk and seeing which physical disks make it up and whether they are healthy.

    Either way, I am looking forward to seeing this implemented at some point in the future. It is really the only major feature keeping the Scanner from being 100% perfect, lol (at least for my use case, rofl). Also, a big thanks for all the work that has gone into this; I can no longer imagine running a server without the Scanner!
  3. zeroibis

    Examples of Media Server/Setup using Drivepool

    That is not really a backup without some sort of versioning (at minimum, turning on File History). It is basically a short-term backup of the last X hours, but you lose any files you delete once the process runs. I would describe what you're doing as an 8TB main drive with 2x 5TB pooled redundant drives. At that point you would likely benefit from making it real time, or at minimum putting the entire process under DrivePool: create a simple pool with the two 5TB drives, then create a second pool that contains the first pool and the 8TB drive, and enable pool duplication. If you only want the data copied every so often, configure DrivePool to always place files on the 8TB drive first and then periodically perform a balancing operation to duplicate the files over to the pool of 5TB drives.

    For my case I use duplication on everything (48TB of drives for 24TB of usable space) and I back up via Backblaze; a rough capacity sketch is below. About 60% of my storage is in RAID 0 arrays (via Storage Spaces), which are then placed in duplication pools. I use the performance boost from RAID 0 for editing workloads; if I had 100% static content I would not be using RAID 0 at all. I also use PrimoCache as a write cache, but I plan to expand it to a read cache to improve responsiveness when browsing large image galleries or RAW images. (Using 2x 250GB 970 EVOs; two are required to overcome the 2x write performance penalty that results from real-time duplication.)
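    A quick arithmetic sketch of the capacity and write math above (plain Python, nothing DrivePool-specific; the numbers are the ones from this post):

    ```python
    # 2x real-time duplication: raw capacity is halved, and every logical write
    # is physically written twice, which is why a single write-cache SSD would
    # effectively run at half its rated speed and two 970 EVOs are used instead.

    def usable_capacity_tb(raw_tb: float, duplication: int = 2) -> float:
        """Usable space when every file is stored `duplication` times."""
        return raw_tb / duplication

    def physical_writes_gb(logical_write_gb: float, duplication: int = 2) -> float:
        """Data actually hitting the disks for a given amount written by the user."""
        return logical_write_gb * duplication

    print(usable_capacity_tb(48))     # 24.0 TB usable from 48 TB of drives
    print(physical_writes_gb(100))    # 100 GB written by the user -> 200 GB of disk writes
    ```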
  4. zeroibis

    Measuring the entire pool at every boot?

    Basically, you have file corruption. If you have not had any system crashes that would have caused a write error, and your HDDs are fine, the cause is one of the following:

    • Bit rot: a sector on one of the drives had not been accessed in a long time, resulting in file corruption (very unlikely).
    • Memory error: if you are not running ECC RAM, this is very likely.

    I would recommend running some tests on your memory subsystem to ensure it is not causing file corruption.
  5. zeroibis

    File Placement

    Well, you would only have them on different disks if you ran out of space while moving the file over, so while it could occur, it would not happen very often.
  6. I remembered from a previous conversation that the issue with implementing support to get SMART data from a soft RAID set is more of an implementation issue than a technical one. I wanted to put forward a suggestion on how this could be implemented in what would hopefully be a way that does not require a ton of work.

    First, why this is important. Currently, if you have drives in a soft RAID set, let's say in a Storage Spaces array for example, StableBit Scanner will pick up the RAID set and can perform a surface scan and file system check on it. This is fantastic, and you want this sort of action performed on the array itself and not on the individual underlying drives. However, no SMART data is available for the drives underlying the RAID. It is possible to pull this data with other software if you want to read it manually, but the larger issue is that you will not get an alert from StableBit Scanner that the SMART data on one of the underlying drives is indicating trouble, like you usually would. This leads to the first request/suggestion.

    1) Make it so that, at minimum, we can at least get an alert that a drive has a SMART warning. Even if we cannot view the SMART data within the program, and even if, let's say, we do not know which RAID set the drive belongs to, just knowing that the drive with SN: XYZ is having a problem, via the automated email function, would be a massive improvement. (A rough sketch of the idea follows at the end of this post.)

    The next logical question becomes: assuming we do know which drives are in the RAID set, how do we present this info to users in a meaningful way that does not require massive GUI changes? This leads to the second suggestion.

    2) Allow the SMART data of the underlying drives to be viewable, so that a user can select a soft RAID disk and see the status of the drives that make it up. See the images below for a suggestion on how this could be implemented.
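    Purely as an illustration of suggestion 1), and not something StableBit Scanner does today: a minimal sketch that polls the physical drives behind a soft RAID set with smartmontools' smartctl and prints an alert when a drive's overall health check is not clean. The member-drive list is a made-up example.

    ```python
    # Hedged sketch of suggestion 1): query the underlying physical drives directly
    # with smartctl (smartmontools) and flag anything that does not report healthy.
    # Assumes smartctl is installed and on PATH; on Windows, /dev/pd0, /dev/pd1, ...
    # refer to \\.\PhysicalDrive0, \\.\PhysicalDrive1, ... (hypothetical RAID members here).

    import subprocess

    MEMBER_DRIVES = ["/dev/pd0", "/dev/pd1"]  # hypothetical members of the soft RAID set

    def health_line(device: str) -> str:
        """Return smartctl's overall-health line for one drive, or a fallback note."""
        out = subprocess.run(["smartctl", "-H", device],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "overall-health" in line or "SMART Health Status" in line:
                return line.strip()
        return "health status not reported"

    for dev in MEMBER_DRIVES:
        status = health_line(dev)
        if "PASSED" in status or "OK" in status:
            print(f"{dev}: {status}")
        else:
            print(f"ALERT {dev}: {status}")  # this is where an automated email would hook in
    ```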
  7. zeroibis

    Scanner lost all data?

    Good to know about the backup and restore option. I am getting tired of needing to constantly rename all my drives every time it gets upset because I crashed Windows testing something.
  8. zeroibis

    Drivepool warning in event log after Veeam runs

    Cool, just saw this. It appears that writes that use sector-by-sector write verification like to trigger this; for example, I only see this error generated when I am using Acronis to copy to a DrivePool volume (even when done over the network). It would be neat if DrivePool were somehow able to automatically suppress this error message, but I suspect it is something tied heavily into the OS.
  9. zeroibis

    Correct way to apply PrimoCache write cache

    Oh, BAM, I found what was off. I knew based on the CPU and SSD usage that they were not being loaded heavily, and figured it was something with the network. Changed the NIC to enable jumbo frames and, bam, the transfer dropped down to ~9 min. I do not remember turning them on before, but maybe they were enabled when the ATI card was connected. I do know that for some reason the ATI GPU drivers would mess with the network connections in general (for example, with the ATI GPU it would take an extra 45 seconds to establish a network connection, and this issue never occurs when an NVIDIA card is connected instead). Very strange behavior, but at least now I am back to 9 min to transfer 94GB, which is what I am looking for, especially given that the previous best-case SSD-to-SSD test was 8.5 min. (Rough throughput math below.)
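    Rough math on the figures quoted here, assuming GB means 10^9 bytes (nothing tool-specific, just the averages):

    ```python
    # Average transfer rate for 94 GB moved in a given number of minutes.
    def avg_mb_per_s(gigabytes: float, minutes: float) -> float:
        return gigabytes * 1000 / (minutes * 60)

    print(round(avg_mb_per_s(94, 9.0), 1))   # ~174.1 MB/s over the network after jumbo frames
    print(round(avg_mb_per_s(94, 8.5), 1))   # ~184.3 MB/s for the SSD-to-SSD best case
    ```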
  10. zeroibis

    Correct way to apply PrimoCache write cache

    No, that was just the transfer time from the 960 EVO (1TB) to a 970 EVO (250GB). The transfer directly to an NVMe SSD represents the fastest possible transfer time. Actually, recreating that test on my current system, it takes over 10 min now, and I am not sure why. The only settings changed since then were enabling SATA hot plug in the BIOS and swapping a PCIe x16 ATI card (running at PCIe 2.0 x4) for an x1 (3.0) NVIDIA card. Oh, it could be that the NVMe drive has now settled to its true performance, because I wrote over 11TB to it earlier this week. (That shit is worn in now.)
  11. zeroibis

    Correct way to apply PrimoCache write cache

    I realized that I never posted the SSD-to-SSD result; here it is:
  12. zeroibis

    New Pool: Duplicate before adding data or after?

    I run quite a few VMs, and generally best practice is to have one virtual drive that stores your OS and additional ones to store your files. That way it is simple to spin up another VM of your OS and load your files over there. If you are not already doing so, I would highly recommend splitting your OS VMs from your data-storage VM files.

    The risk with a VM lies in data corruption. In a normal system, data corruption may cause a file to be lost; in a VM system, file corruption may cause the VM, which contains everything, to be lost. This is a major risk, and it is why VMs are generally kept on ZFS volumes and why things like ECC RAM are used. That is not to say you cannot place them on NTFS, as the latest versions actually do a much better job of managing data corruption than in the past, but this is where things like RAID, SnapRAID, DrivePool, etc. come in. They are not just used to mitigate risk from drive failure, but from data corruption as well.
  13. Ah, I understand now that the best candidates are files such as virtual disks that literally contain duplicate files; that makes sense. Thanks for the info!
  14. Interesting; also, thanks for updating with the solution you found. Is there any material you would recommend for reading up on the deduplication service? It appears this is a great solution if you have a lot of files that are only accessed but not often modified, such as large sets of videos and photos. I would still imagine that there is a heavy processing and memory cost that my present system likely cannot pay, but it is something to read up on and look into for future upgrades.
  15. Ah, I understand, and yes, I had never heard of deduplication before and assumed it was a block-level duplication service via Storage Spaces. I understand now that it is a form of compression. I would presume that the decompression carries a decent performance penalty, but it sounds pretty great if you do not need a lot of read performance on random old files. So you have two disks, each with deduplication enabled, and both of those disks added to the same pool with real-time duplication enabled.

    I am guessing that MS has changed some APIs with regard to the way you access data from the volume, which is affecting DrivePool's ability to maintain duplicates. A workaround could be to temporarily disable real-time duplication and instead have data flow to one drive first and then to the second as an archive.