Everything posted by Overtaxed
I've got a pretty simple CloudDrive setup: a 250GB SSD as the local cache drive and a big ol' CIFS share on an Unraid server. Reading is great, whether local or from the CIFS share, but writing shows "strange" performance.

If I do a direct drive-to-drive copy in Windows from my NVMe drive to the SSD, I get about 500MB/s. But when I copy to the CloudDrive, with the SSD set as the cache, I typically get about 80-120MB/s. I tried moving the cache to the NVMe drive to see if the SSD was the bottleneck, and it doesn't seem to make much difference. It might be a little faster, but the NVMe drive is capable of 1000MB/s writes (and will do it in an NVMe-to-NVMe copy), so it's nowhere near the speed I'd expect.

Is there something I'm doing wrong? Some setting to make writes to the cache drive go faster? I don't care how fast it empties to the CIFS share (in fact, that part is near wire speed; it seems it can destage the data over a 1Gb/s link at almost the same speed I can write to the SSD). Any ideas? Settings that I can/should try?
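To take Explorer's copy dialog out of the picture when comparing the raw SSD against the CloudDrive cache volume, a small sequential-write benchmark can put a number on each path. This is just an illustrative sketch, not a CloudDrive setting; the `write_throughput` function name and the 256MB test size are my own choices, and the paths at the bottom would need to point at your actual drives.

```python
import os
import tempfile
import time

def write_throughput(path, total_mb=256, block_kb=1024):
    """Write total_mb of data sequentially to a file under `path`
    and return the measured rate in MB/s.

    os.fsync forces the data to the device so the OS page cache
    doesn't inflate the number.
    """
    block = os.urandom(block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    target = os.path.join(path, "throughput_test.bin")
    start = time.perf_counter()
    with open(target, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(target)
    return total_mb / elapsed

if __name__ == "__main__":
    # Replace these with e.g. the raw SSD, the NVMe drive, and the
    # mounted CloudDrive letter to compare all three side by side.
    for label, path in [("temp dir", tempfile.gettempdir())]:
        print(f"{label}: {write_throughput(path):.0f} MB/s")
```

Running the same function against the bare SSD and against the mounted CloudDrive would show whether the 80-120MB/s ceiling follows the cache drive or the CloudDrive layer itself.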
Kind of an odd question, but since CloudDrive has the "NAS" feature, is it possible to use it to store files in their native format? I'd like to add my local NAS to my existing drive pool so that everything gets duplicated to the NAS, but I'd also like to be able to browse those files via CIFS/NFS from other computers.

Right now, I use something like FreeFileSync to copy all the data from my DrivePool to the NAS every night, but I'd really like this to be totally automatic and part of the functionality of my "mirror" pool. I did try the NAS functionality, and while it works great, it stores chunk files, which of course makes the files unreadable to other clients.

Any possibility of implementing a "store in native format" switch? Or is there one that I just didn't see?
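For anyone stuck on the nightly-copy workaround in the meantime, the one-way mirror that FreeFileSync performs can be sketched in a few lines and dropped into Task Scheduler. This is a simplified illustration, not FreeFileSync's actual logic: the `mirror` function is hypothetical, it compares only mtime and size, and it deletes anything on the destination that no longer exists on the source, so test it on throwaway folders first.

```python
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> None:
    """One-way mirror: make dst match src.

    Copies new or changed files from src to dst, then deletes
    anything in dst that no longer exists in src.
    """
    dst.mkdir(parents=True, exist_ok=True)
    src_names = {p.relative_to(src) for p in src.rglob("*")}

    # Prune dst first: deepest paths first so files go before their dirs.
    for p in sorted(dst.rglob("*"), reverse=True):
        if p.relative_to(dst) not in src_names:
            if p.is_dir():
                shutil.rmtree(p, ignore_errors=True)
            else:
                p.unlink(missing_ok=True)

    # Copy anything new or changed (cheap mtime/size comparison only).
    for p in src.rglob("*"):
        target = dst / p.relative_to(src)
        if p.is_dir():
            target.mkdir(exist_ok=True)
        elif (not target.exists()
              or p.stat().st_mtime > target.stat().st_mtime
              or p.stat().st_size != target.stat().st_size):
            shutil.copy2(p, target)

if __name__ == "__main__":
    # Example: mirror the DrivePool letter onto the NAS share.
    # Paths are placeholders for this sketch.
    mirror(Path(r"D:\\"), Path(r"\\\\nas\\backup"))
```

Of course this still isn't the real ask; having the duplication happen inside the pool, in native format, would remove the scheduled job entirely.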