Javen
Members · Posts: 6
  1. Thanks a lot @Christopher (Drashna) for the suggestion. I would like to give it a try. Here is what I have done so far (I just want to confirm I did it correctly):

     1. I created two pools: pool D: consists of the 3 HDDs, and pool F: consists of pool D: plus one SSD.
     2. I set file duplication on pool D: and enabled Disk Space Equalizer or Ordered File Placement there. (Aside from being different ways to place files, do the two plugins differ in any other dimension, e.g. performance or data reliability?)
     3. I set SSD Optimizer on pool F:. As I understand it, no other balancing plugin makes sense there, since the "parent" pool consists of only 2 devices and one of them is already used as the SSD cache, so there aren't enough devices left for this pool to "balance" any data across. Disabling all of those plugins should be fine as well. Is my understanding right?
     4. Finally, in PrimoCache I set an L1/L2 cache (using the other SSD) for reading only, covering all the HDDs. (I'm not sure whether I should add the pool's cache SSD here as well.)

     What's the result? I removed all unnecessary drive letters and kept only the "parent" pool (the previous pool F:) assigned. I ran a disk test on it, but the PrimoCache L1 read cache doesn't seem to work; it behaves more like a purely SSD-accelerated pool. The interesting part is that once I pause the PrimoCache task and run the test again, the results are almost identical for both reading and writing. Isn't the DrivePool SSD cache supposed to be write-only? If I add the DrivePool cache SSD to the PrimoCache task as well, here is what I get (the L1 cache seems to work now).

     So is this the correct way to set it up?

     1. Set all balancing plugins/rules and file duplication on the HDD pool (the "sub" pool). (Do Disk Space Equalizer and Ordered File Placement differ in anything other than how they place files?)
     2. Set SSD Optimizer on the "parent" pool. (Can the other plugins besides SSD Optimizer be disabled, since they make no difference?)
     3. Add all the HDDs and the pool's cache SSD to PrimoCache and assign an L1/L2 cache for reading only.

     This way I get decent acceleration and no risk of data loss.
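On the Disk Space Equalizer vs. Ordered File Placement question above, the core difference really is the placement strategy: one keeps free space even across disks, the other fills disks in a fixed order. A minimal Python sketch of the two strategies (my own toy model, not DrivePool's actual code):

```python
# Toy model (not DrivePool's implementation) of the two balancers' placement logic.

def equalize(disks, sizes):
    """Disk Space Equalizer style: place each file on the disk with the
    most free space, keeping the disks roughly equally full."""
    for s in sizes:
        target = max(disks, key=lambda d: disks[d])
        disks[target] -= s
    return disks

def ordered(disks, order, sizes):
    """Ordered File Placement style: fill disks in a fixed order, spilling
    to the next disk only when the current one cannot hold the file."""
    for s in sizes:
        for d in order:
            if disks[d] >= s:
                disks[d] -= s
                break
    return disks

# Free space in GB for three hypothetical HDDs; ten 10 GB files.
print(equalize({"HDD1": 100, "HDD2": 100, "HDD3": 50}, [10] * 10))
# {'HDD1': 50, 'HDD2': 50, 'HDD3': 50} -- free space evened out
print(ordered({"HDD1": 100, "HDD2": 100, "HDD3": 50},
              ["HDD1", "HDD2", "HDD3"], [10] * 10))
# {'HDD1': 0, 'HDD2': 100, 'HDD3': 50} -- first disk filled first
```

Either way the same files end up on the same pool, which is why the practical differences are mostly about wear/fill patterns rather than data reliability.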
  2. I just migrated from Unraid to a Windows-based NAS and am evaluating storage solutions. I'm trying to use DrivePool and PrimoCache together. I have 2 SSDs (2 TB each) and 3 HDDs (16 TB x2 + 14 TB).

     Why not DrivePool alone? Because:
     1. The DrivePool SSD cache is for writing only.
     2. I have VMs/containers stored on the pool and would like to accelerate their performance as well.
     3. Though I could store the VMs/containers on an SSD directly, I still like the idea of gathering all the disks together.

     So what have I tried so far?
     1. Allocated all HDDs to DrivePool to create a single pool. Partitioned both SSDs into 2 volumes each, one for reading and one for writing. Allocated the write-cache volumes to DrivePool and selected them as SSDs in the SSD Optimizer plugin. (I have some folders duplicated, so I need 2 SSDs to support real-time duplication.)
     2. Allocated the read-cache volumes on both SSDs to PrimoCache as L2 cache, set to accelerate all the underlying HDDs. I loaded the "accelerate reading" preset, since I only want PrimoCache for reading.

     Why don't I use PrimoCache alone for caching? Because I want to accelerate the whole pool, and it holds various types of data. Some of it is very important, and I also have duplication set in DrivePool. I've heard there is a chance of corrupting that data if PrimoCache fails to flush the cache to the HDDs. Though I do have a UPS, I still don't want to risk my data in case of a system crash.

     Does the current solution meet my expectations? Not sure. If I ignore benchmark numbers, it's OK. However, when I ran CrystalDiskMark on both the host and a VM, I found:
     1. On the host, the acceleration works, but it looks like pure SSD acceleration (~3000 MB/s sequential read and write), even though I have 4 GB x2 of L1 cache set in PrimoCache.
     2. On the VM, read acceleration works and looks RAM-accelerated (20000+ MB/s sequential read). Writing is another story, though: ~300 MB/s sequential write.

     So I would like to understand:
     1. First, is there a better solution, or am I doing something wrong?
     2. If not for 1#, is there a way to boost write performance inside the VMs as well? When I use PrimoCache for both reading and writing, inside the VM I get ~3000 MB/s sequential read and ~1000 MB/s sequential write.
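One note on the benchmark numbers above: a read-only cache can only change the read path, so writes necessarily run at backing-disk speed. A back-of-envelope model (the throughput figures are assumptions, not measurements):

```python
# Toy throughput model (assumed speeds, not benchmarks) of a read-only cache.

SSD_MBPS = 3000.0   # assumed sequential SSD throughput
HDD_MBPS = 250.0    # assumed sequential HDD throughput

def effective_mbps(hit_ratio, fast=SSD_MBPS, slow=HDD_MBPS):
    """Effective throughput when hit_ratio of the bytes come from the fast
    tier and the rest from the slow tier (time-weighted harmonic mean)."""
    return 1.0 / (hit_ratio / fast + (1.0 - hit_ratio) / slow)

print(round(effective_mbps(1.0)))  # 3000 -- all reads served from the cache
print(round(effective_mbps(0.0)))  # 250  -- writes bypass a read-only cache
print(round(effective_mbps(0.5)))  # 462  -- a 50% hit rate is dominated by the HDD
```

The harmonic mean also shows why a modest hit rate helps much less than expected: the slow-tier fraction dominates the total time.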
  3. Thanks for the clarification. So with the SSD cache, new files are first written to the SSD but are moved to the HDDs later by the SSD Optimizer balancer (which is why read speed doesn't improve)?

     The thing is, I have lots of Windows modern apps (mostly Xbox-app games downloaded from XGPU), WSL2-based Linux distros, and Docker Desktop. I learned here that this kind of usage is problematic. (However, I don't quite understand the WSL2 limitation, since I've put my WSL2 vhdx files on the pool and have run into no issues so far.)

     It seems the only feasible way is to store that data on a raw disk directly. But that immediately means I lose the duplication feature for some of my critical data (e.g. Docker containers / WSL2 distros), and DrivePool's caching won't benefit that data either. For the lost duplication, I have no idea yet how to work around it. For caching, I could use PrimoCache instead.

     One last thing to check: since I have 2 SSDs, if I want to boost both reads and writes, does combining PrimoCache with the DrivePool SSD cache make sense? Or is PrimoCache alone enough, with no cache set in DrivePool?
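The write-then-migrate flow summarized in this post can be sketched as a toy model (an illustration of the described behavior, not DrivePool's code):

```python
# Toy model of a write landing zone: new files land on the SSD tier, a later
# balancing pass migrates them to the archive HDDs, and subsequent reads then
# come from the HDDs (so reads are not accelerated after migration).

ssd, hdd = [], []

def write_file(name):
    ssd.append(name)      # new writes land on the SSD tier

def balance():
    hdd.extend(ssd)       # balancing pass moves everything to the archive disks
    ssd.clear()

def read_tier(name):
    return "SSD" if name in ssd else "HDD"

write_file("game.pak")
print(read_tier("game.pak"))  # SSD -- fast only until the balancer runs
balance()
print(read_tier("game.pak"))  # HDD -- later reads hit the slow tier
```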
  4. Hello, I'm new to StableBit and may be asking something very basic. My question is about the best way to use an SSD as a cache for HDDs. Here is my rig:

     Windows 11
     Intel P4510 SSD 2 TB x1
     HC550 HDD 16 TB x1
     Element HDD 14 TB x1

     What I want to achieve is to use the SSD as a cache to accelerate the HDDs. To that end, I assigned all the disks to one pool and, in the SSD Optimizer plugin, set the P4510 as SSD and the others as archive. (I also removed the drive letters assigned to these disks, as I want to use them only through the pool.) I store all data besides the system on the pool, including movies, games, virtual machines, etc. I also set duplication for some of it (e.g. the virtual machines).

     However, I found the virtual machines are slow: starting apps in a Windows VM can take 10+ seconds, just like the old days before SSDs. Though I have no evidence that the VM is running directly from the HDDs instead of the SSD cache, I heavily suspect it. So my questions are:

     1. Is there any way to force certain directories to use the cache? Or, if I use a VM frequently enough, will DrivePool eventually copy it to the cache? (I guess the caching algorithm is based on access frequency, right?)
     2. If not, what can I do to achieve that goal? (Sure, I understand that creating a pool purely of SSDs would achieve it, but let's not go there first. I still want the cache enabled for the HDDs, and the SSD slots on my mATX board are really limited: only 2.)
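If, as the replies elsewhere in this thread suggest, the SSD tier is a landing zone for new writes rather than a frequency-based read cache, then access frequency alone would never promote the VM image back to the SSD. A toy sketch of that assumption (not DrivePool internals):

```python
# Toy model (an assumption about SSD Optimizer behavior, not its real code):
# reads never promote a file back to the SSD tier, so a frequently used VM
# image stays on the HDD no matter how often it is opened.

location = {"vm.vhdx": "HDD"}  # hypothetical file, already migrated by the balancer
reads = 0

def read(name):
    global reads
    reads += 1
    return location[name]      # no read-triggered promotion to the SSD

for _ in range(1000):
    tier = read("vm.vhdx")
print(tier)  # HDD -- a thousand reads later, the tier is unchanged
```

Under that assumption, pinning the VM directory would need an explicit placement rule rather than heavier use.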