Tarh

Members
  • Posts

    17
  • Joined

  • Last visited

  • Days Won

    1

Posts posted by Tarh

  1. Update: Every time I started a rebalance, it moved files onto the landing drive as I said above, even with clear extension rules against that. But after emptying the drive by manually moving everything to the archive drives, then re-measuring and re-checking (which somehow runs twice from 0-100%), then rebalancing (which took a long time and did nothing), it now works as intended. It seems much slower, but I can get behind that.

    Rebalancing also takes 15-20 minutes to reach around ~50%, and until that point disk I/O is literally 0%; only after that does it actually start moving files. Either way, it's now working and respecting the rules.

  2. Hey, so I replaced an HDD. I have a landing SSD that gets filled, and DrivePool did a great job of pushing files from that drive onto archive spinning rust with each balancing pass. But ever since replacing one of the HDDs, balancing takes a long time with zero file movement (per Scanner, drive activity is 0).

    I've seen several old posts about the same issue. There are literally zero warnings or errors in the logs (only the informational lines "[Rebalance] Cannot balance. Immediate re-balance throttled." and "[Rebalance] Not enabling background I/O priority"); it just does nothing for some reason. In fact it moved some files from the archives to the SSD, which goes against every rule, but then it stopped?! I did an update, a reinstall, and several re-measures and rebalances, and even after 1-2 hours no files were moved at all!

    I really don't want to reset the settings, as I have several dozen extension-related rules (tiny files like pics are stored on a different SSD for overall faster access, and big files on 10x archive drives, and we're talking about 2M files).

    Before balancing it says the "unduplicated target for rebalancing is -474GB" on the landing SSD, and the archive drives each show a positive ~40-48GB as they always did, yet balancing doesn't actually empty the landing SSD; it just does nothing for an hour or so. Nothing is running that keeps these files open.

    I'm really lost and annoyed here, as I have to manually move files so that they're protected by SnapRAID.

  3. Hey, so I have a 10-disk HDD pool with 2-parity SnapRAID, and I'm considering adding an SSD. I'm wondering if I can set DrivePool up somehow so that it puts all the tiny files the pool receives onto the SSD? Like everything under 100KB going to the SSD and the rest to the HDDs, and obviously without filling the SSD with big files.

     

    I have tons of small files that are a pain to work with on HDDs, but I don't really want to manually move small files to the SSD.

     

    If it can't be done as is, could a plugin do this? Basically, if a file is smaller than a set amount, it would go to the specific drive (if it has enough space), and the rest of the files would go to the least-filled disk, as in the default case.
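    The placement rule I'm imagining boils down to something like this (a rough sketch; `pick_target`, the 100KB limit, and the 5GB SSD reserve are made-up names and example values, not anything DrivePool actually exposes):

```python
SMALL_FILE_LIMIT = 100 * 1024    # 100 KB threshold (example value)
SSD_FREE_RESERVE = 5 * 2**30     # keep ~5 GB free on the SSD (example value)

def pick_target(file_size, ssd_free, hdd_free):
    """Pick a target for a new file.

    file_size -- size of the incoming file in bytes
    ssd_free  -- free bytes on the small-file SSD
    hdd_free  -- dict of HDD name -> free bytes
    """
    # Tiny files go to the SSD while it still has headroom.
    if file_size < SMALL_FILE_LIMIT and ssd_free - file_size > SSD_FREE_RESERVE:
        return "ssd"
    # Everything else goes to the emptiest HDD (the default behaviour).
    return max(hdd_free, key=hdd_free.get)
```

    So a 50KB picture would land on the SSD until the reserve is hit, and everything bigger would keep following the normal most-free-space rule.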

  4. Hi! So, playing with the SSD caching and a torrent client, I realized that for some reason file preallocation always runs at around 32MB/s on a SATA SSD; even an HDD preallocates much faster than that. There's no other software interference, RAID controller, cable quality, TRIM, or wear issue, and write caching is of course enabled. I know there's no hardware bottleneck because if I manually start moving the same files during this very slow prealloc, they copy at multiple hundreds of megabytes per second, as they should. And no, we're not talking about thousands of tiny files; they are all 1GB+ apiece.

    Now, I'm not 100% sure it's DrivePool's fault, but then again, even after many hours of various tasks and benchmarks I can't reproduce any slowdown on the SSD.

    And since qBittorrent preallocates files much faster on an HDD, I'd wager it's not at fault. I added the SSD cache to be faster, yet it's slower than my bandwidth, so something is very wrong here.

     

    https://i.imgur.com/57CPrI7.png

    So what is going on, why is this so unbearably slow?

     

    Edit: I only learned today about the (well-hidden) "increase priority" fast-forward icon shown while balancing, so is it possible prealloc is being limited by some option there?

  5. I know this has been discussed a few times, and I more or less understand the issues and limitations, but I was wondering if it would be possible to give the balancer a bit of leeway in the free-space-based balancing logic.

    What I mean is: by default the balancer puts files on the disk with the most free space. Great. If the next file in the queue would go to another drive (because the file actively being copied makes the first target disk less free than another disk), we now have two files being copied, and so on. Sounds fast and great, except this is very rare, so the SSD is almost always feeding, and limited by, a single HDD, which is exactly what we're trying to avoid, even without SMR drives. An NVMe SSD cache writing at ~100MB/s is painful; the drive fills faster than it offloads.

    So, what if we could give the balancer an optional threshold to be more flexible, or in other words less strict about only pushing files to the least-filled drive? Then it could push files to multiple drives at once in a default setup.

    For this theoretical threshold, let's say we used a value of 5%: the balancer would start copying to the least-filled drive as it does now, then treat the rest of the drives as if they had 5% more free space (while respecting fill limits), and could therefore use them to push files simultaneously. Obviously it would still copy only one file to one HDD at a time (better performance, fewer fragments, as it is now). A theoretical secondary option could limit how many threads it runs at once, or it could just be built in as one thread per HDD (which is how it already seems to work). This way, given enough SSD bandwidth, the SSD could feed every drive at once until it's empty most of the time, instead of only in the rare cases when the stars align.
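    The selection rule I'm describing boils down to something like this (a rough sketch with made-up names; `eligible_targets` and the 5% default are illustrations, not DrivePool internals):

```python
def eligible_targets(drives, leeway=0.05):
    """Return the drives the balancer could feed simultaneously.

    drives -- dict of name -> (free_bytes, capacity_bytes)
    leeway -- fraction of capacity credited as extra free space

    The emptiest drive always qualifies; any other drive qualifies if,
    after being credited `leeway` of its capacity as extra free space,
    it matches or beats the emptiest drive.
    """
    best = max(free for free, _cap in drives.values())
    return [name for name, (free, cap) in drives.items()
            if free + leeway * cap >= best]
```

    With 10TB drives at 1000/700/400GB free and a 5% leeway (500GB credit), the first two would qualify as simultaneous targets instead of just the emptiest one.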

     

    I don't believe this would be overly complex (unlike full multithreading), and I think it would make life better for many people who use SSD caches. Unless I'm missing some major issue.

  6. Hi, so I'm using DrivePool with SnapRAID and I added a few drives a few months ago and plan on adding more in the future.

     

    My small issue is that new drives get filled with all the new files, which makes the combined performance on those files worse than if they were spread across all the drives.

    Imagine having 8 drives filled at 20%, and I add 2 more. Now new files land only on those 2 drives until they also reach 20%. I'd much rather have new files spread roughly evenly across all 10, and I can do some manual balancing safely if needed.

    So how can I do this? What I'm after is basically round-robin file placement across all disks, regardless of available capacity.
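    What I'm picturing is roughly this (a hypothetical sketch; `RoundRobinPlacer` is a made-up name, not a real DrivePool plugin API):

```python
from itertools import cycle

class RoundRobinPlacer:
    """Hand out drives in turn, skipping any that can't fit the file."""

    def __init__(self, drives):
        self._cursor = cycle(drives)   # endless a, b, c, a, b, c, ...
        self._count = len(drives)

    def next_drive(self, file_size, free_bytes):
        """free_bytes: dict of drive -> free bytes. Returns a drive or None."""
        for _ in range(self._count):
            drive = next(self._cursor)
            if free_bytes.get(drive, 0) >= file_size:
                return drive
        return None  # no drive has room for this file
```

    Each new file advances the cursor one drive, so writes spread evenly across old and new disks regardless of how full each one is.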

  7. Hi, I know it's been asked before, but I'm still wondering if we can expect NVMe SMART support for these drives in the future.

     

    In Direct I/O everything is a red cross except the "SSD" entry, with every option, combination of options, and specific method.

    The devices in question are listed as "Disk # - NVMe Samsung SSD 960 SCSI Disk Device"

    Using Scanner version 2.5.4.3216 and the Samsung NVMe Controller 3.0.0.1802 driver. CrystalDiskInfo and some other software can get the data.

  8. That is exactly what I would have thought logically, but believe me, I wouldn't have wasted my time registering, and everyone else's time with a post, without testing it thoroughly first.

    You can easily check for yourself how the write caching option on the individual drives added to a pool is being ignored. Just make a pool of 2 HDDs, push a few gigs of files, preferably from an SSD or even faster source, and watch the initial write speeds. Or watch Resource Monitor and the actual write speed of the disks *after the copy has reportedly finished*. Or try HD Tune or another disk benchmark tool that doesn't circumvent the issue (so not CDM). ~900MB/s initial copy speeds reported by Windows or Total Commander, or 6 *gigabytes/second* of benchmark read speed in HD Tune Pro, are not at all real for 2x 7200rpm HDDs.

    I don't know exactly, but I'm guessing the low-level virtual disk driver by its nature overrides the OS caching options completely; in other words, it has its own caching on top of the OS write caching. My problem is that the cache is way too big (at least 4.7GB on a system with 64GB of RAM). This is the only thing preventing me from buying the software.

  9. I don't need to measure the real speeds; I want to write without caching. I just noticed this while testing, before deciding whether to go with the software.

    I don't like caching because of the potential for data loss. I just checked: after pushing some files to a two-HDD pool, 4.7 GB of actual data was still being written after the copy reportedly finished at ~900 MB/s, while the monitored real speed was ~100 MB/s to a single HDD (the other had much less free space, so it wasn't used). That's 4.7 GB of data "in flight" in RAM for 47 seconds that would be lost in a power outage. And then you'd have to check all the target files, because they were all preallocated and many of them were being written simultaneously from the cache. Meanwhile, without write caching, a power outage would cut off only the one file actually being written (which you could easily spot, or it might not even exist yet and stay marked as free space); the rest wouldn't exist on the target at all.

    For me there's no reason to have gigabytes of data in flight just to see an inaccurate speed spike at the start of writing tens or hundreds of gigabytes, with the writes finishing in the background minutes later while I hope a power outage/BSOD doesn't happen.

    If caching can't be disabled because of how the software works, I'm hoping for a user option to reduce the cache to a custom size.

  10. Hi, so I'm experimenting with DrivePool, and it seems to use write caching even when the individual drives in the pool have it disabled, basically overriding it.

    But the Covecube Virtual Disk driver has no Policies tab in Windows' Device Manager, so write caching can't be disabled there.

     

    The problem I have is that writing files to the pooled drive reports as done within seconds, at inaccurately high speeds (due to caching), but I can see the actual speeds of each physical drive, and they keep writing in the background even minutes after the write supposedly completed. The same happens even on SSDs. I want to see the real progress of writing the files, without extra caching, not just a promise of it finishing soon in the background.

    The caching also screws with tools like HD Tune Pro, which claims read speeds of around 6 gigabytes/second for a pool of 3 SATA drives. It's basically a small built-in RAM drive.

    I get the massive benefit of caching, but I don't want it for my use case, and I'm wondering if it can be disabled. Thanks.
