Covecube Inc.

Tarh

Members
  • Posts: 14
  • Joined
  • Last visited
  • Days Won: 1

Tarh last won the day on January 31, 2019

Tarh had the most liked content!

Tarh's Achievements

  1. Christian helped me by suggesting the use of file placement rules based on file extensions for small files. Works flawlessly for my use case. Thanks! (A sketch of one way to survey which extensions are mostly small files follows this list.)
  2. Hey, so I have a 10-disk HDD pool with 2-disk parity via SnapRAID, and I'm considering adding an SSD. I'm wondering if DrivePool can be set up so that it puts all the tiny files the pool receives on the SSD? For example, everything under 100 KB goes to the SSD and the rest to the HDDs, without the SSD filling up with big files. I have tons of small files that are a pain to work with on HDDs, but I don't really want to move small files to the SSD manually. If it can't be done as is, could a plugin do this? Basically, if a file is smaller than a set amount it would go to the specific drive (if it has enough space), and the rest of the files would go to the least-filled disk, as is the default case.
  3. Hi! Playing with the SSD caching and a torrent client, I realized that for some reason file preallocation always runs at around 32 MB/s on a SATA SSD, which is slower than the same operation even on an HDD. There's no other software interference, RAID controller, cable quality, TRIM, wear or similar issue, and write caching is of course enabled. I know there's no hardware bottleneck, because if I manually start moving the same files during this very slow preallocation, they copy at several hundred megabytes per second, as they should. And no, we're not talking about thousands of tiny files; they are all 1 GB+ files apiece. Now, I'm not 100% sure it's DrivePool's fault, but then again, even after many hours of various tasks and benchmarks I can't reproduce any slowdown on the SSD. And since qBittorrent preallocates files much faster on an HDD, I'd hazard a bet it's not at fault either. I added an SSD cache to be faster, yet it's slower than my bandwidth, so something is very wrong here. https://i.imgur.com/57CPrI7.png So what is going on, why is this so unbearably slow? Edit: I just discovered the (well-hidden) "increase priority" fast-forward icon shown while balancing, so is it possible preallocation is being limited by some option there?
  4. I know this has been discussed a few times and I more or less understand the issues and limitations, but I was wondering if it would be possible to give the balancer a bit of leeway with the free-space-based balancing logic. What I mean is: by default the balancer puts files on the disk with the most free space. Great. If the next file in the queue would go to another drive (because the actively copied file makes the first target disk less free than another disk), we now have two files being copied, and so on. Sounds fast and great, except this is very rare, so the SSD is almost always feeding a single HDD and is limited by it, which is exactly what we're trying to avoid, even without SMR drives. An NVMe SSD cache writing at around 100 MB/s is painful; the drive fills faster than it offloads. So, what if we could give the balancer an optional threshold to be more flexible, or in other words less strict, about only pushing files to the least-filled drive? Then it could push files to multiple drives at once even in a default setup. For this theoretical threshold, let's say we used a value of 5%: the balancer would start copying to the least-filled drive as it does now, then look at the rest of the drives as if they had 5% more free space (respecting fill limits), and could therefore use them to push files simultaneously, still copying only one file to one HDD at a time (better performance, fewer fragments, as it is now). With a theoretical secondary option we could limit how many transfers run at once, or just keep the built-in one transfer per HDD (which is how it already seems to work). This way, given enough SSD bandwidth, the SSD could feed every drive at once until it's empty most of the time, instead of only when the stars rarely align (a rough sketch of this selection logic follows this list). I don't believe this would be overly complex (unlike multithreading a single transfer), and I think it would make life better for many people who use SSD caches, unless I'm missing some major issue.
  5. Hi, I'm using DrivePool with SnapRAID; I added a few drives a few months ago and plan on adding more in the future. My small issue is that the new drives are getting filled with all the new files, which makes the combined performance of those files worse than if they were spread across all the drives. Imagine having 8 drives filled to 20%, then adding 2 more: new files now land only on those 2 drives until they also reach 20%. I'd much rather have new files spread roughly evenly across all 10 of them, and I can do some manual balancing safely if needed. So how can I do this, "this" being basically a round-robin sort of file placement across all disks regardless of available capacity? (A tiny sketch of what I mean by round-robin follows this list.)
  6. Yes, I've tried that. Both Direct I/O and Scanner fail to see the Samsung NVMe health data.
  7. Hi, I know it's been asked before, but I'm still wondering if we can expect NVMe "SMART" support for these drives in the future. In Direct I/O everything is a red cross except the "SSD" part, with every option, combination of options and specific method. The devices in question are listed as "Disk # - NVMe Samsung SSD 960 SCSI Disk Device". Using Scanner version 2.5.4.3216 and the Samsung NVMe Controller 3.0.0.1802 driver. CrystalDiskInfo and some other software can read the data.
  8. That is exactly what I would have thought logically, but believe me, I wouldn't have wasted my time registering only to waste everyone else's time with a post before testing it thoroughly. You can easily check for yourself how the write-caching option of the individual drives added to a pool is being ignored. Just make a pool of 2 HDDs, push a few gigs of files, preferably from an SSD or an even faster source, and watch the initial write speeds. Or watch Resource Monitor and the actual write speed of the disks *after the copy has reportedly finished*. Or try HD Tune or another disk benchmark tool that doesn't circumvent the issue (not CDM). ~900 MB/s initial copying speeds reported by Windows or Total Commander, or 6 *gigabytes/second* of benchmark read speed in HD Tune Pro, are not at all real for 2x 7200 rpm HDDs. I don't know exactly, but I'm guessing the low-level virtual disk driver by its nature overrides the OS caching options completely; in other words, it has its own caching on top of the OS write caching. My problem is that the cache is way too big (at least 4.7 GB on a system with 64 GB of RAM). This is the only thing preventing me from buying the software.
  9. I don't need to measure the real speeds; I want to write without caching. I just noticed it while testing before deciding whether to go with the software. I don't like caching because of the potential for data loss. I just checked: after pushing some files to a pool of two HDDs, 4.7 GB of actual data was still being written after the copy was reported done at ~900 MB/s, while the monitored real speed was ~100 MB/s to a single HDD (the other had much less free space, so it wasn't used). That's 4.7 GB of data in flight in RAM for 47 seconds that would be lost in a power outage. And then you'd have to check all the target files, because they were all preallocated and many of them were being written simultaneously from the cache. Meanwhile, without write caching, a power outage would only cut off the one file actually being written (which you could easily spot, or it might not even exist yet and stay marked as free space), and the rest wouldn't exist on the target at all. For me there's no reason to keep gigabytes of data in flight just to see an inaccurate speed spike at the start of writing tens or hundreds of gigabytes, and to have the writes finish in the background minutes later while hoping a power outage or BSOD doesn't happen. If caching can't be disabled because of how the software works, I'm hoping for a user option to reduce its size to a custom value.
  10. Hi, I'm experimenting with DrivePool and it seems to be using write caching even if the individual drives in the pool have it disabled, basically overriding it. But the Covecube Virtual Disk driver has no Policies tab in Windows' Device Manager, so write caching can't be disabled there. The problem is that writing files to the pooled drive is reported as done within seconds at inaccurately high speeds (due to caching), but I can see the actual speeds of each physical drive, and they keep writing in the background for minutes after the copy was supposedly completed. Same even on SSDs. I want to see the real progress of writing the files, without extra caching, not just a promise of it finishing soon in the background (a small sketch of copying with forced flushes, to see more realistic numbers, follows this list). The caching also skews tools like HD Tune Pro, which claims read speeds of around 6 gigabytes/second for a pool of 3 SATA drives. It's basically a small built-in RAM drive. I get the massive benefit of caching, but I don't want it for my use case, and I'm wondering if it can be disabled. Thanks.
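
For posts 1-2: a minimal sketch, in Python, of how one might survey which file extensions consist mostly of small files, to help decide which extensions to point at the SSD with DrivePool's File Placement rules. The pool path, the 100 KB cutoff and the 90% ratio are example values, not anything DrivePool itself prescribes.

# Survey which extensions in a folder tree are mostly "small" files.
# ROOT and SMALL are example values; adjust to your own pool and cutoff.
import os
from collections import defaultdict

ROOT = "P:\\"             # example pool drive letter
SMALL = 100 * 1024        # "small" cutoff in bytes (100 KB)

counts = defaultdict(lambda: [0, 0])  # ext -> [files under cutoff, total files]

for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        ext = os.path.splitext(name)[1].lower() or "<none>"
        try:
            size = os.path.getsize(os.path.join(dirpath, name))
        except OSError:
            continue  # skip files we can't stat
        counts[ext][1] += 1
        if size < SMALL:
            counts[ext][0] += 1

# Report extensions where at least 90% of the files are under the cutoff.
for ext, (small, total) in sorted(counts.items(), key=lambda kv: -kv[1][1]):
    if total >= 10 and small / total >= 0.9:
        print(f"{ext}: {small}/{total} files under 100 KB")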
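For post 4: a rough sketch of the proposed "leeway" target selection when emptying an SSD cache. The least-filled idle drive is picked first, and any other idle drive whose free space comes within the threshold (interpreted here as 5% of capacity) of that best target is also eligible, so several HDDs can each receive one file at the same time. This is only an illustration of the idea, not DrivePool's actual balancer code.

# Proposed threshold-based target selection (sketch only).
SLACK = 0.05  # the suggested 5% leeway, taken as a fraction of capacity

def pick_targets(drives, busy):
    """drives: list of (name, free_bytes, capacity_bytes).
    busy: set of drive names currently receiving a file.
    Returns the idle drives eligible to receive the next files."""
    idle = [d for d in drives if d[0] not in busy]
    if not idle:
        return []
    most_free = max(free for _name, free, _cap in idle)
    # Eligible if, granted SLACK extra headroom, the drive would be at least
    # as free as the current best target.
    return [name for name, free, cap in idle
            if free + SLACK * cap >= most_free]

drives = [("D1", 400e9, 4e12), ("D2", 390e9, 4e12), ("D3", 150e9, 4e12)]
print(pick_targets(drives, busy=set()))  # ['D1', 'D2']; D3 is too far behind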
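For post 5: a tiny sketch of the round-robin placement being asked about, where each new file goes to the next disk in a fixed rotation regardless of free space, skipping disks that are full. DrivePool doesn't expose this as a setting; the snippet only illustrates the requested behaviour.

# Round-robin disk selection (illustration only).
from itertools import cycle

disks = ["D1", "D2", "D3", "D4"]
rotation = cycle(disks)

def next_disk(has_room):
    """has_room: dict of disk -> bool. Returns the next disk with room, or None."""
    for _ in range(len(disks)):
        d = next(rotation)
        if has_room.get(d, False):
            return d
    return None

print(next_disk({"D1": True, "D2": True, "D3": False, "D4": True}))  # D1
print(next_disk({"D1": True, "D2": True, "D3": False, "D4": True}))  # D2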
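For posts 8-10: a small sketch of a copy loop that flushes and fsyncs each chunk, so the reported throughput reflects data the OS has pushed toward the disk rather than data sitting in a write cache. It doesn't change how DrivePool or Windows cache writes (the pool's virtual disk driver may still buffer), and the chunk size and paths are just examples; it only shows one way to observe a more realistic number when testing a pool.

# Copy with forced flushes to observe sustained, non-cached throughput (sketch).
import os, time

CHUNK = 16 * 1024 * 1024  # 16 MiB per write; example value

def copy_synced(src, dst):
    copied = 0
    start = time.monotonic()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            buf = fin.read(CHUNK)
            if not buf:
                break
            fout.write(buf)
            fout.flush()
            os.fsync(fout.fileno())   # push the chunk out of the OS write cache
            copied += len(buf)
    elapsed = max(time.monotonic() - start, 1e-9)
    print(f"{copied / 1e6:.0f} MB in {elapsed:.1f} s "
          f"= {copied / 1e6 / elapsed:.0f} MB/s")

# copy_synced("C:\\test\\big.iso", "P:\\test\\big.iso")  # example paths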