Covecube Inc.


  1. Christian helped me by suggesting file placement rules based on file extensions for small files. Works flawlessly for my use case. Thanks!
  2. Hey, so I have a 10-disk HDD pool with 2-parity SnapRAID, and I'm considering adding an SSD. Can DrivePool be set up so that every tiny file the pool receives goes to the SSD? Say, everything under 100 KB goes to the SSD and the rest to the HDDs, and obviously the SSD shouldn't fill up with big files. I have tons of small files that are a pain to work with on HDDs, but I don't really want to move small files to the SSD manually. If it can't be done as is, would a plugin be able to do this? Basically, if a file is smaller than a set amount it would
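Until there is a size-based placement rule, the small-file move could be approximated by a script run periodically against the pool's folders. A minimal Python sketch, with hypothetical source and destination paths and the 100 KB cutoff from the post (this is a manual workaround, not a DrivePool API):

```python
import os
import shutil

SIZE_LIMIT = 100 * 1024  # 100 KB cutoff from the post

def move_small_files(src_root, ssd_root):
    """Move every file under SIZE_LIMIT from src_root to ssd_root,
    preserving the relative directory layout. Returns the moved paths."""
    moved = []
    for dirpath, _dirnames, filenames in os.walk(src_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) < SIZE_LIMIT:
                rel = os.path.relpath(path, src_root)
                dest = os.path.join(ssd_root, rel)
                os.makedirs(os.path.dirname(dest) or ssd_root, exist_ok=True)
                shutil.move(path, dest)
                moved.append(rel)
    return moved
```

The paths would be placeholders for the pool's hidden per-drive folders; running this outside DrivePool's own balancing would still need the balancer configured not to move the files back.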
  3. Hi! So, playing with the SSD caching and a torrent client, I realized that for some reason file preallocation always runs at around 32 MB/s on a SATA SSD, a rate even a HDD easily beats. There is no other software interfering, no RAID controller, cable quality, TRIM, or wear issue, and write caching is of course enabled. I know, because if I manually start moving the same files during this very slow preallocation, they copy at several hundred megabytes per second, as they should, so there's no hardware bottleneck. And no, we're not talking about thousands of tiny files; they are all 1 GB+
  4. I know this has been discussed a few times, and I more or less understand the issues and limitations, but I was wondering if it would be possible to give the balancer a bit of leeway in its free-space-based balancing logic. What I mean is, by default the balancer puts files on the disk with the most free space. Great. If the next file in the queue would go to another drive (because the file actively being copied makes the first target disk less free than another disk), we now have two files being copied, and so on. Sounds fast and great, except this is very rare, so the SSD is almost alway
  5. Hi, so I'm using DrivePool with SnapRAID; I added a few drives a few months ago and plan on adding more in the future. My small issue is that the new drives are getting filled with all the new files, which makes the combined performance of those files worse than if they were spread across all the drives. Imagine having 8 drives each filled to 20%, and I add 2 more. Now new files go only to those 2 drives until they also reach 20%. I'd much rather have new files land roughly evenly on all 10, and I can do some manual balancing safely if needed. So how can I do this, this being
  6. Yes, I've tried that. Both Direct I/O and Scanner fail to see Samsung NVMe health.
  7. Hi, I know it's been asked before, but I'm still wondering if we can expect NVMe "SMART" support for these drives in the future. In Direct I/O everything is a red cross except the "SSD" part, with every option, combination of options, and specific method. The devices in question are listed as "Disk # - NVMe Samsung SSD 960 SCSI Disk Device". Using Scanner version and Samsung NVMe Controller driver. CrystalDiskInfo and some other software can get the data.
  8. That is exactly what I would have thought logically, but believe me, I wouldn't have wasted my time registering only to waste everyone else's time making a post before testing it thoroughly. You can easily check for yourself how the write-caching option for the individual drives added to a pool is being ignored. Just try making a pool of 2 HDDs, push a few gigs of files, preferably from an SSD or an even faster source, and watch the initial write speeds. Or watch Resource Monitor and the actual write speed of the disks *after the copy has reportedly finished*. Or try HD Tune or other disk benchm
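The test described above can also be scripted. A minimal sketch (file path and size are placeholders) that compares the "apparent" speed, where write() returns as soon as data reaches the OS cache, with the real speed after os.fsync() forces everything onto the disk:

```python
import os
import time

def timed_write(path, size_mb):
    """Write size_mb megabytes, then report two elapsed times:
    'apparent' = when the write() calls returned (data may be cached),
    'flushed'  = after os.fsync() forced the data onto the disk."""
    chunk = os.urandom(1024 * 1024)  # 1 MB of incompressible data
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        apparent = time.perf_counter() - start
        os.fsync(f.fileno())  # block until the hardware really has it
        flushed = time.perf_counter() - start
    return apparent, flushed

# Usage (placeholder path): a large gap between the two numbers means
# the copy dialog's speed reflects the cache, not the physical disk.
# apparent, flushed = timed_write(r"E:\pool\cache_test.bin", 2048)
```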
  9. I don't need to measure the real speeds; I want to write without caching. I just noticed it while testing, before deciding whether to go with the software or not. I don't like caching because of the potential for data loss. I just checked: after pushing some files to a pool of two HDDs, 4.7 GB of actual data was still being written after the copy was reported done at ~900 MB/s, while the monitored real speed was ~100 MB/s to a single HDD (the other had much less free space, so it wasn't used). That's 4.7 GB of data "in flight" in RAM for 47 seconds that would get lost in case of a power outage. And then you
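The 47-second figure follows directly from the numbers in the post; a quick back-of-the-envelope check:

```python
# How long reportedly-written data sits in RAM while the disk drains it,
# using the figures from the post (4.7 GB cached, ~100 MB/s real HDD speed).
cached_mb = 4.7 * 1000   # 4.7 GB expressed in MB
hdd_mb_s = 100           # monitored real write speed of the single HDD
drain_seconds = cached_mb / hdd_mb_s
print(drain_seconds)     # → 47.0
```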
  10. Hi, so I'm experimenting with DrivePool, and it seems to use write caching even when the individual drives in the pool have it disabled, basically overriding the setting. But the Covecube Virtual Disk driver has no Policies tab in Windows' Device Manager, so write caching can't be disabled there. The problem I have is that writing files to the pooled drive reports completion within seconds at inaccurately high speeds (due to caching), but I can see the actual speeds of each physical drive, and they keep writing in the background even minutes after the write was supposedly completed.