Jaga

Members
  • Posts: 413
  • Days Won: 27

Everything posted by Jaga

  1. My server hung the other day too; I probably had it running for far too long. After a reboot I opened Scanner to have it run file system checks, and noticed -all- of the data on the drives was gone. Surface scan data, file system scan data, enclosure location... all of it reset to nothing. Seems like there's a general problem with the integrity of that data. Sure, I can do a manual backup of that stuff if I remember to every time something changes, but I probably won't. Some type of redundancy and/or data protection scheme for Scanner's own data would be good to have.
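     Until then, a scheduled copy of Scanner's data store is a workable stopgap. A rough sketch using robocopy - the ProgramData path is what I see on my install and the backup target is just a placeholder, so verify both before relying on it:

        rem Back up StableBit Scanner's service data (path may differ by version - verify on your system)
        rem /MIR mirrors the folder; /R:2 /W:5 keeps it from hanging on any locked files
        robocopy "C:\ProgramData\StableBit Scanner" "X:\Backups\ScannerStore" /MIR /R:2 /W:5

     Drop that in a .bat file, point Task Scheduler at it daily, and you'll at least have yesterday's scan data to fall back on.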
  2. Jaga

    DrivePool + Primocache

     There should be no reason you couldn't use a RAM disk as a pool member, provided it was a basic drive (emulation) with an NTFS format. You might want to force a manual save-to-disk-image before every shutdown, just for diligence. However, due to the limited space on it, you'd also want to limit what files the pool was able to balance onto the RAM disk. Or simply use the RAM disk as an SSD Cache plugin drive, to speed up delivery of files to the pool. I stopped using Softperfect years ago and switched over to Primo RAMdisk instead; I find it to be a more robust and flexible solution.
  3. That large chunk of "other" data is the existing data on your pool drives that you wanted to show up in the pool but which doesn't. It doesn't appear in the pool because it sits outside the hidden "PoolPart-xxxxx" folder in the root of each drive. Since you have multiple drives with large amounts of data on them, the best way to handle it is: resolve your problem with seeing hidden files/folders in Windows. When you can finally see the hidden PoolPart-xxxxx folder(s) in the root of all pool drives, move the existing files into the PoolPart-xxxxx folder on the drive where those files already are, for each drive. When it's all moved, tell DrivePool to re-measure. When Windows moves a file on the same drive, it's nearly instantaneous.

     Sometimes Windows can be buggy with the show/hide hidden files and folders feature. I've had to edit the registry on numerous occasions with my W7 Pro server, then reboot, then mess with it some more and reboot again. Keep working at it - it really is the best solution for your situation (see the sketch below).

     If you simply moved the existing data from each drive directly into the pool drive (D:), it would take a while and get rather messy, since DrivePool would be trying to auto-balance files between drives that have data both being added and removed. You might even end up having to re-balance multiple times afterwards.
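     If Explorer keeps fighting you on hidden files, you can make the registry change and do the moves from an elevated command prompt instead. A sketch with hypothetical drive/folder names - your PoolPart folder will have its own long suffix:

        rem Show hidden files/folders in Explorer (restart Explorer or sign out/in for it to take effect)
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v Hidden /t REG_DWORD /d 1 /f

        rem Find the hidden PoolPart folder in the root of the drive
        dir /a:h X:\

        rem Move existing data into it - a move on the same volume is just a rename, so it's nearly instant
        move "X:\Media" "X:\PoolPart-xxxxx\Media"

     Repeat the move for each top-level folder on each drive, then have DrivePool re-measure.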
  4. Something that might be a nice feature addition for DP would be a slower, more incremental type of balancing that does, say, 3% of the pool per day. That way you wouldn't overwhelm the on-demand parity system with massive changes and risk losing the parity altogether. The other way to do it is to source a temporary drive (external?) that's the same size or larger than your parity drive, copy the parity info to it, and then spread files around and re-compute new parity. Although that really only saves the parity data - with massive changes across all drives, at some point it still becomes useless.

     Most drives can handle the read requests of several users simultaneously (for when you have an empty drive accepting all new content until it 'balances' with the pool). The only exceptions I can think of are highly fragmented volumes (which a new drive filling up would not be), or several UHD media streams at the same time. In most other cases it should be a non-issue, and honestly - banging hard on a new drive is always a *good* test of its durability during the warranty period. They usually die either when new or when very old.

     I think SnapRAID is a bit "rough around the edges" when it comes to dealing with a large number of changes, as the recommended way of handling it shows. But for established sets of drives it excels.
  5. It might be easier as well if you just blew away Pool H and made Pool I contain Pool G and the 14.5 array. Pool H is an unnecessary level of complication for DP to deal with.
  6. When using SnapRAID it isn't advised to make a large number of changes to files on the drives. It tends to bomb out the sync process when you do something like you're proposing (moving a lot of files between drives SnapRAID does parity for). Even when they have instructions like that, it usually only works for small subsets of manual copies (not thousands of files). I'm saying this from experience dealing with a large number of file changes on drives in a SnapRAID array (9 data drives with 4 drives holding two-part parity). I've had SnapRAID totally crap out after making large changes to the files (both deletions and a re-balance in DP), and it's simply easier to re-create your parity from scratch.

     If you're really set on adding a drive and spreading out your pool files equally afterwards, I'd recommend increasing the DrivePool balancing priority to get it done more quickly, and then completely re-doing SnapRAID parity from scratch afterwards (i.e. deleting your parity and re-building it with a new sync). It is the fastest operation and has the least chance of getting messed up. Yes, you are left without parity for a short period (from when you delete it to when it's re-created), but I'd rather have that than force the drives to do all the work you outlined and then discover the modified parity may not be any good (or watch SnapRAID crash mid-sync). Plus - new parity creation only does read operations on the data drives, which is the least stressful to them overall.
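     For reference, the "nuke and re-sync" approach is only a couple of steps at the command line. The paths here are hypothetical - delete whatever parity and content files your own snapraid.conf actually points at:

        rem Delete the old parity and content files listed in snapraid.conf
        del "P:\snapraid.parity"
        del "Q:\snapraid.2-parity"
        del "C:\SnapRAID\snapraid.content"

        rem With no content/parity files present, sync rebuilds parity from scratch (reads only on the data drives)
        snapraid sync

     A scrub afterwards isn't a bad idea either, once the new parity is in place.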
  7. There's plugin development information available for DrivePool on the Development Wiki - creating your own plugin to do what you want seems the more direct route. Additionally, the DrivePool command line utility is packaged with the standard distribution; you can call it with dpcmd.exe, and it supports a brief list of commands. I haven't seen any mention of an API, but that doesn't mean one doesn't exist - Alex and Christopher are working on new things all the time. A custom plugin that does what you want might also benefit others, and it would integrate seamlessly with DrivePool.
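     For anyone curious, running it with no arguments from an admin command prompt prints the available commands and their syntax (at least it does on my install):

        rem Show dpcmd's built-in command list and usage
        dpcmd

     Not a full API by any means, but handy for scripting the basics alongside a custom plugin.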
  8. Yep - I ran into the hassle when I first went to do the free upgrade to W10 (~2 years ago?). I was on a Skylake already and had to do a rebuild. I only remember it being a huge hassle and a bit of a nightmare when I did the upgrade (+rebuild) not long after the free upgrade period ended (W7 to W10). Perhaps they've loosened it up since then, simply for those reasons. I just feel obligated to let people know, so they can do some reading prior to the upgrade and protect themselves. MS purposefully obfuscates licensing, so digging is almost mandatory. If it still works, great.
  9. MS silently backs this because the license is a one-time, non-transferable license good for use on that hardware build only. When the CPU/motherboard/etc. (any major component) needs to be replaced, the license becomes invalid and you're forced to purchase a new one at full price, because "the machine changed". It's a stepping stone to being locked in to W10, which ultimately doesn't save you anything except a little time. You're going to have to spend money at some point - only the mobile/tablet sector still gets OSes for free. If you aren't getting anything from W10 that you seriously need, think hard before replacing your W7 OS.
  10. What was your architecture for that last test, zeroibis? You've changed hardware and testing schemes a bunch. I'm assuming it was 2 SSDs working independently as DP SSD Cache drives for your 2x duplication pool.
  11. Just some feedback on the tests: I don't think you ever mentioned whether you had over-provisioned your drives with Samsung Magician. If not, the large number of writes was probably hurting the results with V-NAND rewrites, seeing as the drives had 94GB copied to them over, and over, and over again. There was probably very little time for any garbage collection to occur on the drives to keep performance high. Larger block sizes definitely look like an improvement for you, though.

     That's actually the recommended way to use the SSD Cache plugin - a number of physical drives equal to the level of pool duplication. And while you can use pools within pools within pools... I'm dubious about the impact on overall speed in getting the writes to where they finally need to go, as in the case of a single SSD caching the top-level pool, which holds the pool volume itself, which consists of duplicated pools, each of which can consist of multiple member volumes. I guess it depends on how efficient the DP balancing algorithms are where child pools are concerned, which I've never benchmarked.

     In a single SSD Cache plugin volume scenario, you lose the safety of duplication (obviously) and are limited by the plugin's maximum-space constraint (as you found), and the only workaround is enabling real-time balancing on that top-level cached pool. Real-time balancing has historically had performance considerations to weigh carefully - if the cache drive is moving items off at the same time items are being added, performance gains are shot to hell. Or it stops moving items off while new ones are added, and you run into the space constraint again.

     For additional tests I'd highly recommend a smaller data set (perhaps 20GB) and ample drive over-provisioning, so that you don't run into GC issues on the caching drive(s). I know that Primocache pre-fetches (read caching) volumes in the same cache task sequentially (not simultaneously), so it may force reads to multiple target disks the same way. It would be worth benchmarking to see - set up a RAID 0 volume with multiple SSDs, make that the L2 and cache all pool drives with it, then send some data at it and see how it performs. Do the writes flush at the speed of a single target disk's write speed, or jump up to multiples? Then recreate that L2 with just one SSD and compare with new testing. I don't have spare SSDs to test with here, but you could try it and see. If it does flush to multiple targets at the same time, you'd probably be able to saturate many HDDs in the pool with full write performance.
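     If you want repeatable numbers for that comparison, something scriptable like Microsoft's diskspd (just a suggestion on my part, not something anyone has used in this thread) makes it easy to rerun the exact same write test against the single-SSD L2 and the RAID 0 L2. A rough example - the file size, path and parameters are placeholders:

        rem 20GB test file, 64K blocks, 100% sequential writes, 4 threads, queue depth 8, 60 seconds
        diskspd -c20G -b64K -d60 -t4 -o8 -w100 E:\disktest.dat
        del E:\disktest.dat

     Same command and file size against both configurations, and the only variable left is the cache target.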
  12. Jaga

    Few requests

    New service to replace BitFlock?
  13. Great, then you know your target file sizes and can adjust the block size higher. Probably no less than 16k then, or 32k. Whichever you pick, it can't be smaller than the cluster size of the target drives. I'll be interested to hear how it works out for you.
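     Quick way to check a target drive's cluster size if you aren't sure, from an admin command prompt (the drive letter is just an example):

        rem "Bytes Per Cluster" in the output is the number to match or exceed
        fsutil fsinfo ntfsinfo X:

     A default-formatted NTFS volume reports 4K clusters; drives formatted with a larger allocation unit will show it here.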
  14. Yep, sounds like you might have exceeded the system's abilities at that time. I've never had a performance issue using an L2 with a decent SSD, but then I've never benchmarked during heavy activity either.

     Usually with an SSD write cache, setting the Primocache block size equal to the formatted cluster size is best for performance. It comes with some memory overhead, but for high IOPS and small file operations it's helpful - I'd recommend matching the two for your use. Since you're just using the cache as an L2 write cache on data volumes only (no boot drive, right?), I'd also recommend leaving the L1 cache off.

     If you're using the entire SSD as a write cache and may be filling it up quickly... do you over-provision your SSD? Samsung recommends 10% over-provisioning, but I find even 5% can be enough. In scenarios where the SSD gets full quickly, that over-provisioning can really help.

     It looks like you tried the Native and Average defer-write modes - normally I stick with Intelligent or Buffer (Romex claims Buffer is best for high activity/throughput scenarios). Those are really only good with longer timeouts however, like 30s+. I like allowing my drive time to build up writable content using a high defer-write time (600s), and it even helps with trimming unnecessary writes (i.e. a file gets copied to the volume, hits the L2 cache, is used for whatever reason, and moments later is deleted - the write to the destination drive never happens and the blocks get trimmed while still in the cache).

     When using a write-only L2 cache, just keep the slider all the way to the right to tell it 100% goes to the write cache. You might also want to check "Volatile cache contents" to ensure Primocache completely flushes the L2 between reboots/shutdowns. You aren't using it as a read cache, so there's no reason for it to hold any content at shutdown.
  15. Yep - you ran into the 16TB limit of a 4K-cluster NTFS volume (NTFS addresses roughly 2^32 clusters, so 4KB x 2^32 = 16TB). DrivePool has no such limitation - the maximum pool size is, for all practical purposes, unlimited.

     As far as breaking up your Storage Spaces volume, it'll depend on what kind of grouping the drives have. If it was a "no resiliency" pool (and it sounds like yours was created that way), each drive you remove from the Storage Spaces pool will have its data moved onto other drives in the pool before the drive is completely removed and made available for other uses. To do this, open the "Physical drives" area in Storage Spaces, and next to the drive you want to remove click "Prepare for removal". Your Storage Spaces pool should have partially empty drives in it, since you've hit the 16TB cap using the others already. You can look at each physical drive in the Storage Spaces GUI to determine which have data on them and which don't. Remove those with the smallest used space first, as they stand the best chance of having their data migrated to free space in the rest of the pool.

     Once you start getting drives out of the Storage Spaces pool, you can use them to create a DrivePool pool and just copy data from one pool to the other. Once all the drives are moved over, install the Disk Space Equalizer plugin for DrivePool, go to the Balancing area of the UI and toggle it on (which forces a full re-balance of the entire pool). When it's done, toggle it back off, then have a Coke and a smile.
  16. Yeah - DrivePool's SSD Cache plugin isn't a true block-level drive cache like Primocache. Instead it's a temporary, fast front-end mini-pool. The two are apples and oranges trying to achieve similar results (at least from a write-cache perspective).

     You can emulate two SSDs for the cache plugin by partitioning your single SSD into two equal volumes, adding each to a separate child pool, then assigning both child pools to the SSD Cache plugin as targets (see the diskpart sketch below). You won't have any redundancy (which goes against the whole duplicate-to-protect-files strategy on the SSD cache pool), but it will work.

     Primocache *should* be fast enough - certainly faster than your pool drives if it's a 970 EVO, and at least as fast as DP's SSD Cache plugin. If you want to post your config for it here (or PM me), we can see if there are changes that would help its performance for you. Or even post over on the Primocache support forum. I've worked with it for around 5-6 years now and am very familiar with it; I have it running on all my machines using L1s, L2s and combinations of both.
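     For the two-volume split, diskpart can do it non-interactively if you prefer scripts to the Disk Management GUI. Illustrative only - the disk number, sizes, labels and letters are placeholders, and "clean" wipes the disk, so double-check with "list disk" first:

        rem split-ssd.txt - run as: diskpart /s split-ssd.txt
        select disk 2
        clean
        rem first partition takes roughly half the drive (size is in MB - adjust for your SSD)
        create partition primary size=250000
        format fs=ntfs label=Cache1 quick
        assign letter=Y
        rem second partition takes the remainder
        create partition primary
        format fs=ntfs label=Cache2 quick
        assign letter=Z

     Then add Y: and Z: to their own child pools and point the SSD Cache plugin at those.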
  17. I usually refer to the link in Christopher's signature when I want to refresh myself on what the "other" data type is. It varies with the number of files on the disk, the number of directories, the slack space (cluster efficiency), non-pooled files, etc.
  18. While Christopher already answered for you, I'll add this: you can specify a different "block size" for Primocache to use than the format's cluster size. The trade-off is faster random access (smaller block sizes, all the way down to the volume's cluster size) at the cost of more memory, versus lower memory overhead with larger block sizes (and potentially better throughput for large files). When setting up an L2 against a set of pool drives, which normally tend to be larger than a boot drive, it's beneficial to keep Primocache's block size larger to reduce overhead. And if you find you aren't getting a high enough hit rate because your L2 SSD isn't quite large enough (10%+ data coverage for a read/write cache is ideal), you can actually stripe multiple SSDs and use the resulting stripe as the L2 target, to save costs.
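     One way to build that stripe is a Disk Management-style striped volume, which diskpart can also script (Storage Spaces with no resiliency is the other route discussed in this thread). A rough sketch, assuming two empty SSDs - the disk numbers, label and letter are placeholders, so check "list disk" first:

        select disk 3
        convert dynamic
        select disk 4
        convert dynamic
        rem create a striped (RAID 0-style) volume across both, then format and letter it
        create volume stripe disk=3,4
        format fs=ntfs label=L2Stripe quick
        assign letter=T

     Assign T: as the L2 in the cache task and you get the combined capacity, and roughly the combined write speed, of the two SSDs.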
  19. That is also a scenario where de-registration/uninstall/re-install/re-registration can help. If you want to try it, here is how: to deactivate the license, go into "Scanner Settings...", navigate to the License tab, click "View license details", then deactivate. Then do a manual uninstall of StableBit Scanner. Re-download the latest version, to be sure your first copy wasn't bad in some way. I find cleaning out Windows temp files helpful too, especially with botched or damaged installations - tools like CCleaner, Wise Disk Cleaner/Wise Registry Cleaner, etc. Re-install, and then re-activate your copy with your activation key.
  20. Ah right - only the Disk Management stripe didn't show up for you. Let me change the config on my current Storage Space and see how that affects the performance. Not sure how the USB connection will handle it, however. Edit: Changed the columns setting to 2 and re-created the volume. Performance went up by 83% for sequential reads and 69% for sequential writes. The bad news is that anything non-sequential lost a significant amount of performance. I'm not sure the ~75% gain in sequential speeds (offset by losses for random access) is worth double the chance of volume failure, but that's entirely up to you. Seems like you found the right answer for interleaving with Storage Spaces and DrivePool after all. And I may have to take another look at why I don't use Storage Spaces, since it's possible to present a volume from it to DrivePool now. This is why I love metrics and architecture testing - you find out something new all the time.
  21. You might want to run a manual CHKDSK /R on it from a command prompt to know for sure, before sending the drive in.
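     For the record, that's just (from an elevated command prompt, with X: being the suspect drive):

        rem /R locates bad sectors and recovers readable data; it implies /F
        rem If the volume is in use, chkdsk offers to dismount it or schedule the check for the next boot
        chkdsk X: /R

     Expect it to take a long while on a large drive - /R reads the whole surface.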
  22. So I did some testing with Storage Spaces in Windows 10 (Build 17134.228), with two separate physical drives connected (4TB WD Reds in external WD USB 3.0 enclosures). The Storage Space I set up striped the drives (no resiliency) to best imitate a RAID 0 architecture. The results are interesting:

     StableBit DrivePool can see and add the Storage Spaces drive (S:) to a pool without any problems. StableBit Scanner also sees the Storage Spaces drive, but as a single large virtual disk (no underlying drives are displayed, nor is any SMART data). Note: I didn't specify NumberOfColumns or Interleave when setting up the no-resiliency Storage Space.

     I then ran a performance test with CrystalDiskMark against the Storage Space volume (not the pool volume), and ended up with speeds I'd consider normal for a single WD 4TB Red drive. After that, I ran the same test against the pool drive (E:) to see what impact, if any, DrivePool has when layered over Storage Spaces: performance appears to be unaffected, and is normal for these drives per the manufacturer specs (150MB/s sequential read/write).
  23. Further testing on my end with a single Basic disk converted to Dynamic showed that DP recognizes the Simple disk but doesn't recognize the Dynamic one - so you are right. I must have been thinking of spanned-volume testing with another utility. That's one of the inherent dangers of doing so many architecture/metrics tests, though I do love doing them. I'll tinker a bit with Storage Spaces if I can find some spare physical drives around here to use.
  24. Hmm, none of those letters are an ideal solution for a pool volume. How about mounting your DVD drives to folder paths instead, so they don't take up precious letters? It's rare that I see a system that has used all available drive letters, and that's a situation where folder mount points get very useful.

     I currently have only two drive letters in my system - C: (boot) and D: (the pool). Everything else, whether it's an actual volume or a virtual drive, is mounted in a folder on the C: drive. I have 9 pool drives and 4 parity drives all mounted inside C:\Pool_Drives, and a temp drive for large FTP downloads mounted to the C:\FTP-Temp\ folder.

     It might be an ideal solution for you to start mounting all your physical drives (except C:, of course) to paths on C:, so that you can free up some drive letters. You can de-assign drive letters and assign folder paths in the Windows Disk Management tool. The only prerequisite is that the folder you map a volume to has to exist prior to the mapping.
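     If you'd rather script it than click through Disk Management, mountvol does the same job. The folder, drive letter and volume GUID below are just examples - list your volumes first, then mount and drop the letter:

        rem List volume GUIDs and their current mount points
        mountvol

        rem Create the target folder (it must exist first), then mount the volume there
        mkdir C:\Pool_Drives\Drive10
        mountvol C:\Pool_Drives\Drive10 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\

        rem Remove the old drive letter once the folder mount works
        mountvol F: /D

     Disk Management does essentially the same thing under the hood, so use whichever you're more comfortable with.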