Jaga

Members
  • Posts

    413
  • Joined

  • Last visited

  • Days Won

    27

Jaga last won the day on January 12 2021

Jaga had the most liked content!

About Jaga

  • Birthday July 17

Profile Information

  • Gender
    Male

Recent Profile Visitors

2358 profile views

Jaga's Achievements

  1. My server hung the other day too - I'd probably had it running for far too long. After a reboot I opened Scanner to have it run file system checks, and noticed -all- of the data on the drives was gone: surface scan data, file system scan data, enclosure location... all of it reset to nothing. Seems like there's a general problem with the integrity of customer data in this regard. Sure, I can do a manual backup of that stuff if I remember to every time I change something, but I probably won't. Some type of redundancy and/or data protection scheme would be good to have.
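    In the meantime, the manual backup really can be scripted so it doesn't depend on memory. A minimal Python sketch of the idea - the Scanner data store path and the backup target shown here are assumptions (check where your install actually keeps its settings), and copying from ProgramData may need an elevated prompt:

        import shutil
        from datetime import datetime
        from pathlib import Path

        # Assumed location of the StableBit Scanner data store - verify on your install.
        SCANNER_STORE = Path(r"C:\ProgramData\StableBit Scanner\Service\Store")
        BACKUP_ROOT = Path(r"D:\Backups\ScannerStore")  # hypothetical backup target

        def backup_scanner_store():
            """Copy the Scanner data store to a timestamped backup folder."""
            stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
            dest = BACKUP_ROOT / stamp
            shutil.copytree(SCANNER_STORE, dest)
            print(f"Scanner store backed up to {dest}")

        if __name__ == "__main__":
            backup_scanner_store()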
  2. Jaga

    DrivePool + Primocache

    There should be no reason you couldn't use a RAM disk as a pool member, provided it was a basic drive (emulation) with an NTFS format. You might want to force a manual save-to-disk-image before every shutdown, just for diligence. However, due to the limited space on them, you'd also want to limit what files the pool was able to balance onto the RAM disk. OR, simply use the RAM disk as an SSD Cache plugin drive to speed up delivery of files to the pool. I stopped using Softperfect years ago and switched over to Primo RAMdisk instead; I find it to be a more robust and more flexible solution.
  3. That large chunk of "other" data is your existing data on the pool drives - the files you wanted to show up in the pool but which don't. They don't appear in the pool because they sit outside the hidden "PoolPart-xxxxx" folder in the root of each drive. Since you have multiple drives with large amounts of data on them, the best way to handle it is: first resolve your problem with seeing hidden files/folders in Windows. Once you can see the hidden PoolPart-xxxxx folder(s) in the root of all pool drives, move the existing files into the PoolPart-xxxxx folder on the drive where they already are, for each drive. When it's all moved, tell DrivePool to re-measure. Because Windows is moving each file within the same drive, the move is nearly instantaneous. Sometimes Windows can be buggy with the show/hide hidden files and folders setting - I've had to edit the registry on numerous occasions on my W7 Pro server, then reboot, then mess with it some more and reboot again. Keep working at it; it really is the best solution for your situation. If you simply moved the existing data from each drive directly into the pool drive (D:), it would take a while and get rather messy, since DrivePool would be trying to auto-balance files between drives that have data both being added and removed. You might even end up having to re-balance multiple times afterwards.
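    If anyone wants to script that move step, here's a rough Python sketch of the idea. It assumes the hidden pool folder in each drive's root matches the name pattern "PoolPart*", that you run it elevated, and that nothing has the files open - the drive letter is just a placeholder. Because everything stays on the same volume, each move is really a rename and finishes almost instantly:

        import shutil
        from pathlib import Path

        DRIVE_ROOT = Path("E:/")   # hypothetical pool member drive
        SKIP = {"System Volume Information", "$RECYCLE.BIN"}

        # Locate the hidden pool folder in the drive root (name pattern is an assumption).
        poolpart = next(p for p in DRIVE_ROOT.iterdir()
                        if p.is_dir() and p.name.lower().startswith("poolpart"))

        for item in list(DRIVE_ROOT.iterdir()):
            if item == poolpart or item.name in SKIP:
                continue
            # Same-volume move is effectively a rename, so it completes almost instantly.
            shutil.move(str(item), str(poolpart / item.name))
            print(f"Moved {item.name} into {poolpart.name}")

    Then tell DrivePool to re-measure as described above.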
  4. Something that might be a nice feature addition for DP would be a slower, more incremental type of balancing that does, say, 3% of the pool per day. That way you wouldn't overwhelm the on-demand parity system with massive changes and risk losing the parity altogether. The other way to do it is to source a temporary drive (external?) that's the same size or larger than your parity drive, copy the parity info to it, then spread files around and re-compute new parity. Although this really only saves the parity data - with massive changes across all drives, at some point it still becomes useless. Most drives can handle the read requests of several users simultaneously (for when you have an empty drive accepting all new content until it 'balances' with the pool). The only exceptions I can think of are highly fragmented volumes (which a new drive filling up would not be) or several UHD media streams at the same time. In most other cases it should be a non-issue, and honestly, banging hard on a new drive is always a *good* test of its durability during the warranty period - they usually die either when new or when very old. I think SnapRAID is a bit "rough around the edges" when it comes to dealing with a large number of changes, as the recommended way of handling it shows. But for established sets of drives it excels.
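    To illustrate what I mean by incremental balancing (DP doesn't expose a setting like this today, as far as I know), here's a hedged Python sketch that only does the selection half: it picks files up to a daily budget of 3% of a drive's data and prints them, without moving anything. The path and the percentage are placeholders:

        from pathlib import Path

        SOURCE = Path("E:/PoolPart-source")   # hypothetical over-full pool member
        DAILY_FRACTION = 0.03                 # roughly 3% of the data per day

        files = [p for p in SOURCE.rglob("*") if p.is_file()]
        total = sum(p.stat().st_size for p in files)
        budget = total * DAILY_FRACTION

        selected = 0
        for p in sorted(files, key=lambda f: f.stat().st_size):
            if selected + p.stat().st_size > budget:
                break
            selected += p.stat().st_size
            print(f"Would move today: {p} ({p.stat().st_size / 2**20:.1f} MiB)")

        print(f"Daily budget: {budget / 2**30:.2f} GiB, selected: {selected / 2**30:.2f} GiB")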
  5. It might be easier as well if you just blew away Pool H and made Pool I contain Pool G and the 14.5 array. Pool H is an unnecessary level of complication for DP to deal with.
  6. When using SnapRAID it isn't advised to have a large number of changes to files on the drives. It tends to bomb out the sync process when you do something like you're proposing (moving a lot of files between drives SnapRAID does parity for). Even when they have instructions like that, it usually only works for small subsets of manual copies (not thousands of files). I'm saying this from experience dealing with a large number of file changes on drives in a SnapRAID array (9 drives in total, with 4 two-part parity drives). I've had SnapRAID totally crap out after making large changes to the files (both deletions and a re-balance in DP), and it's simply easier to just re-create your parity from scratch. If you're really set on adding a drive and spreading out your pool files equally afterwards, I'd recommend increasing the DrivePool balancing priority to get it done more quickly, and then completely re-doing SnapRAID parity from scratch afterwards (i.e. deleting your parity and re-building it with a new sync). That is the fastest operation and has the least chance of getting messed up. Yes, you are left without parity data for a short period of time (from when you delete it to when it's re-created), but I'd rather have that than force the drives to do all the work you outlined and then discover that the modified/updated parity may not be any good (or watch SnapRAID crash mid-sync). Plus, new parity creation only does read operations on the data drives, which is the least stressful to them overall.
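    If you do go the delete-and-rebuild route, the sequence can be scripted. A rough Python sketch, assuming a typical setup: it pulls the parity file paths out of snapraid.conf (the config path here is a placeholder), deletes them, and then runs a fresh sync with snapraid on the PATH. The deletion step is obviously destructive, so double-check the paths it prints before letting it loose:

        import subprocess
        from pathlib import Path

        SNAPRAID_CONF = Path(r"C:\snapraid\snapraid.conf")  # hypothetical config location

        # Collect the parity file paths ("parity", "2-parity", ... lines) from the config.
        parity_files = []
        for line in SNAPRAID_CONF.read_text().splitlines():
            parts = line.split(None, 1)
            if len(parts) == 2 and (parts[0] == "parity" or parts[0].endswith("-parity")):
                parity_files.append(Path(parts[1].strip()))

        for pf in parity_files:
            print(f"Deleting old parity file: {pf}")
            pf.unlink(missing_ok=True)   # destructive - only do this once you're sure

        # Rebuild parity from scratch; the sync only reads from the data drives.
        subprocess.run(["snapraid", "-c", str(SNAPRAID_CONF), "sync"], check=True)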
  7. There's plugin development information available for DrivePool on the Development Wiki - creating your own plugin to do what you want seems the more direct route. Additionally, the DrivePool command line utility is packaged with the standard distribution; you can call it as dpcmd.exe and run a brief list of commands with it. I haven't seen any mention of an API, but that doesn't mean one doesn't exist - Alex and Christopher are working on new things all the time. A custom plugin that does what you want might also benefit others, and it would integrate seamlessly with DrivePool.
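    For example, a quick Python wrapper just to see what dpcmd offers. The install path is an assumption (check your own), and I'm assuming that running it with no arguments prints its command list, which is how it behaved last time I looked:

        import subprocess

        # Assumed default install path for the DrivePool command line utility.
        DPCMD = r"C:\Program Files\StableBit\DrivePool\dpcmd.exe"

        # Running dpcmd with no arguments should print its list of commands (assumption).
        result = subprocess.run([DPCMD], capture_output=True, text=True)
        print(result.stdout or result.stderr)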
  8. Yep - I ran into the hassle when I first went to do the free upgrade to W10 (~2 years ago?). I was on Skylake already and had to do a rebuild. I only remember it being a huge hassle and a bit of a nightmare when I went to do the upgrade (+rebuild) not long after the free upgrade period ended (W7 to W10). Perhaps they've loosened it up since then, simply for those reasons. I just feel obligated to let people know so they can do some reading prior to the upgrade and protect themselves. MS purposefully obfuscates licensing, so digging is almost mandatory. If it still works, great.
  9. MS silently backs this because the license is a one-time, non-transferable license good for use on that hardware build only. When the CPU/Motherboard/etc (any major component) needs to be replaced, the license becomes invalid and you are forced to purchase a new one at full price, because "the machine changed". It's a stepping-stone to being locked in to W10, which ultimately doesn't save you anything except a little time. You're going to have to spend money at some point - only the mobile/tablet sector still gets OSs for free. If you aren't getting anything with W10 that you seriously need, consider heavily before replacing your W7 OS.
  10. What was your architecture for that last test zeroibis? You've changed hardware and testing schemes a bunch. I'm assuming that was with 2 SSDs working independently as DP SSD Cache drives for your 2x duplication pool.
  11. Just some feedback on the tests: I don't think you ever mentioned whether you had over-provisioned your drives with Samsung Magician. If not, the large number of writes was probably negatively affecting the results with V-NAND rewrites, seeing as the drives had 94GB copied to them over, and over, and over again. There was probably very little time for any garbage collection to occur on the drives to keep performance high. It looks like larger block sizes are definitely an improvement for you though.

      That's actually the recommended way to use the SSD Cache plugin - a physical number of drives equal to the level of pool duplication. And while you can use pools within pools within pools... I'm dubious about any impact on overall speed in getting the writes to where they finally need to go, as in the case of a single SSD caching the top-level pool, which holds the pool volume itself, which consists of duplicated pools, each of which can consist of multiple member volumes. I guess it depends on how efficient the DP balancing algorithms are where child pools are concerned, which I've never benchmarked.

      In a single SSD Cache plugin volume scenario you lose the safety of duplication (obviously) and are limited by the max space constraint of the plugin (as you found), and you can only try to address that by enabling real-time balancing on the top-level cached pool. Real-time balancing has historically had performance considerations to weigh carefully: if the cache drive is moving items off at the same time new items are being added, performance gains are shot to hell; or it stops moving items off while new ones are added and you run into the space constraint again. For additional tests I'd highly recommend a smaller data set (perhaps 20GB) and ample drive over-provisioning, so that you don't run into GC issues on the caching drive(s).

      I know that Primocache pre-fetches (read caching) volumes in the same cache task sequentially (not simultaneously), so it may force reads to multiple target disks the same way. It would be worth benchmarking to see: set up a RAID 0 volume with multiple SSDs, make that the L2 and cache all pool drives with it, then send some data at it and see how it performs. Do the writes flush at the speed of a single target disk's write speed, or jump up to multiples? Then recreate that L2 with just one SSD and compare with new testing. I don't have spare SSDs to test with here, but you could try it and see. If it does flush to multiple targets at the same time, you'd probably be able to saturate many HDDs in the pool with full write performance.
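    If you want a crude, repeatable check, timing a large sequential write to the cached pool works as a first pass: once the data written exceeds the L2's capacity, sustained throughput is limited by how fast the cache can flush to the target disks. A minimal Python sketch - the path and size are placeholders (size it well above the L2), and a dedicated benchmark tool will give cleaner numbers:

        import os
        import time

        TARGET = r"P:\benchmark.tmp"   # hypothetical file on the cached pool drive
        SIZE_MB = 4096                 # write 4 GiB in 1 MiB chunks
        chunk = os.urandom(1024 * 1024)

        start = time.perf_counter()
        with open(TARGET, "wb", buffering=0) as f:
            for _ in range(SIZE_MB):
                f.write(chunk)
            os.fsync(f.fileno())       # make sure the data actually leaves the OS buffers
        elapsed = time.perf_counter() - start

        print(f"Wrote {SIZE_MB} MiB in {elapsed:.1f}s -> {SIZE_MB / elapsed:.0f} MiB/s")
        os.remove(TARGET)

    Compare the sustained number between the two L2 configurations described above; if the flush goes to multiple target disks at once, it should sit well above a single HDD's write speed.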
  12. Jaga

    Few requests

    New service to replace BitFlock?
  13. Great, then you know your target file sizes and can adjust the block size higher - probably no less than 16k, or 32k. Whichever you pick, it can't be smaller than the cluster size of the target drives. Will be interested to hear how it works out for you.
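    A quick way to confirm the cluster size before picking the block size: this sketch shells out to fsutil (needs an elevated prompt, and assumes an English Windows install for the text it parses) and pulls the "Bytes Per Cluster" value for an NTFS volume:

        import re
        import subprocess

        def ntfs_cluster_size(drive: str = "C:") -> int:
            """Return the NTFS cluster size in bytes for the given drive (requires admin)."""
            out = subprocess.run(["fsutil", "fsinfo", "ntfsinfo", drive],
                                 capture_output=True, text=True, check=True).stdout
            match = re.search(r"Bytes Per Cluster\s*:\s*([\d,]+)", out)
            return int(match.group(1).replace(",", ""))

        print(ntfs_cluster_size("D:"))   # e.g. 4096 on a default-formatted NTFS volume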
  14. Yep, sounds like you might have exceeded the system's abilities at that time. I've never had a performance issue using an L2 with a decent SSD, but then I've never benchmarked during heavy activity either. Usually with an SSD and a write cache, setting the Primocache block size equal to the formatted cluster size is best for performance. It comes with some overhead, but for high IOPS and small file operations it's helpful - I'd recommend matching the two for your use. Since you're just using the cache as an L2 write cache on data volumes only (no boot drive, right?), I'd recommend leaving the L1 cache off.

      If you're using the entire SSD as a write cache and may be filling it up quickly... do you over-provision your SSD? Samsung recommends 10% over-provisioning, but I find even 5% can be enough. In scenarios where the SSD gets full quickly, that over-provisioning can really help.

      It looks like you tried the Native and Average defer-write modes - normally I stick with Intelligent or Buffer (Romex claims Buffer is best for high activity/throughput scenarios). Those are really only good with longer timeouts however, like 30s+. I like allowing my drive time to build up writable content using a high defer-write time (600s), and it even helps with trimming unnecessary writes (i.e. a file gets copied to the volume, hits the L2 cache, is used for whatever reason, and moments later is deleted - the write to the destination drive never happens and the blocks get trimmed while still in the cache).

      When using a write-only L2 cache, just keep the slider all the way to the right to tell it 100% goes to the write cache. You might also want to check "Volatile cache contents" to ensure Primocache completely flushes the L2 between reboots/shutdowns. You aren't using it as a read cache, so there's no reason for it to hold any content at shutdown.