Covecube Inc.

Jaga

Members
  • Content Count: 413
  • Joined
  • Last visited
  • Days Won: 27

Everything posted by Jaga

  1. My server hung the other day too, probably because it had been running for far too long. After a reboot I opened Scanner to have it run file system checks, and noticed -all- of the data on the drives was gone. Surface scan data, file system scan data, enclosure location... all of it reset to nothing. Seems like there's a general problem with the integrity of customer data in this regard. Sure, I can do a manual backup of that stuff, if I remember to every time I change something. But I probably won't. Some type of redundancy and/or data protection scheme would be good to have.
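     In the meantime, a scheduled copy of Scanner's data store is better than nothing. A minimal sketch, assuming the store lives under C:\ProgramData\StableBit Scanner (verify that path on your own install, and stop the Scanner service before copying so files aren't locked):

        import shutil
        from datetime import datetime
        from pathlib import Path

        # Assumed location of the Scanner service's data store - check your install.
        SCANNER_STORE = Path(r"C:\ProgramData\StableBit Scanner")
        BACKUP_ROOT = Path(r"D:\Backups\ScannerStore")   # hypothetical destination

        def backup_scanner_store():
            """Copy the Scanner data folder to a time-stamped backup directory."""
            dest = BACKUP_ROOT / datetime.now().strftime("%Y-%m-%d_%H%M%S")
            shutil.copytree(SCANNER_STORE, dest)
            print(f"Scanner store backed up to {dest}")

        if __name__ == "__main__":
            backup_scanner_store()

     Run it from Task Scheduler once a day and at least the scan history survives the next reset.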
  2. Jaga

    DrivePool + Primocache

     There should be no reason you couldn't use a RAM disk as a pool member, provided it was a basic drive (emulation) with an NTFS format. You might want to force a manual save-to-disk-image before every shutdown, just for diligence. However, due to the limited space on them, you'd also want to limit what files the pool was able to balance onto the RAM disk. OR, simply use the RAM disk as a SSD Cache plugin drive, to speed up delivery of files to the pool. I stopped using Softperfect years ago and switched over to Primo RAMdisk instead. I find it to be a more robust solution that's also more flexible.
  3. That large chunk of "other" data is your existing data on the pool drives: the data you wanted to show up in the pool but which doesn't. It doesn't appear in the pool since it sits outside the hidden "Poolpart-xxxxx" folder in the root of each drive. Since you have multiple drives with large amounts of data on them, the best way to handle it is: resolve your problem with seeing hidden files/folders in Windows. When you can finally see the hidden Poolpart-xxxxx folder(s) in the root of all pool drives, move the existing files into the Poolpart-xxxxx folder on the drive where the files already are.
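     If you'd rather script the move than drag-and-drop, something like this works (a rough sketch - the drive letter and folder names are placeholders for whatever is actually on your drives; re-measure the pool in DrivePool afterwards):

        import shutil
        from pathlib import Path

        # Hypothetical example: drive E: has existing data in E:\Media that
        # should live inside the hidden PoolPart folder on that same drive.
        drive_root = Path("E:/")
        poolpart = next(drive_root.glob("PoolPart*"))   # the hidden PoolPart folder
        source = drive_root / "Media"

        # Moving within the same volume is a fast rename, not a copy.
        shutil.move(str(source), str(poolpart / source.name))
        print(f"Moved {source} into {poolpart}")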
  4. Something that might be a nice feature addition for DP would be a slower, more incremental type of balancing that does, say, 3% of the pool per day. That way you wouldn't overwhelm the on-demand parity system with massive changes, and risk losing the parity altogether. The other way to do it is to source a temporary drive (external?) that's the same size or larger than your parity drive, copy the parity info to it, and then spread files around and re-compute new parity. Although this really only saves the parity data - with massive changes across all drives, at some point it still be
  5. It might be easier as well if you just blew away Pool H, and made Pool I contain Pool G and the 14.5 array. Pool H is an unnecessary level of complication for DP to deal with.
  6. When using SnapRAID it isn't advised to make a large number of changes to files on the drives. It tends to bomb out the sync process when you do something like you're proposing (moving a lot of files between drives SnapRAID does parity for). Even when they have instructions like that, it usually only works for small subsets of manual copies (not thousands of files). I am saying this from experience in dealing with a large number of file changes on drives in a SnapRAID array (9 drives in total, with 4 two-part parity drives). I've had SnapRAID totally crap out after making large changes to the files
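     If you do want to automate it, the safer pattern is to check how big the change set is before letting a sync run. A rough sketch (the threshold and snapraid.exe path are placeholders):

        import subprocess

        # Refuse to run "snapraid sync" if too many files changed since the last sync.
        SNAPRAID = r"C:\SnapRAID\snapraid.exe"   # hypothetical install path
        MAX_CHANGES = 2000                       # arbitrary threshold, tune to taste

        diff = subprocess.run([SNAPRAID, "diff"], capture_output=True, text=True)
        changed = sum(
            1 for line in diff.stdout.splitlines()
            if line.startswith(("add ", "remove ", "update ", "move "))
        )

        if changed > MAX_CHANGES:
            print(f"{changed} changes since last sync - too many, sync in smaller batches.")
        else:
            subprocess.run([SNAPRAID, "sync"], check=True)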
  7. There's plugin development information available for Drivepool on the Development Wiki - creating your own plugin to do what you want seems the more direct route. Additionally, the Drivepool command line utility is packaged with the standard distribution; you can call it as dpcmd.exe and it will show a brief list of commands. I haven't seen any mention of an API, but that doesn't mean one doesn't exist - Alex and Christopher are working on new things all the time. Creating a custom plugin to do what you want might also benefit others, and integrate seamlessly with Drivepool.
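     For example, just shelling out to dpcmd.exe with no arguments prints its built-in command list (a trivial sketch - the install path is an assumption, adjust for your system):

        import subprocess

        # dpcmd.exe ships with DrivePool; running it with no arguments lists its commands.
        DPCMD = r"C:\Program Files\StableBit\DrivePool\dpcmd.exe"   # assumed path

        result = subprocess.run([DPCMD], capture_output=True, text=True)
        print(result.stdout)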
  8. Yep - I ran into that hassle when I first went to do the free upgrade to W10 (~2 years ago?). I was on a Skylake already and had to do a rebuild. I only remember it being a huge hassle and a bit of a nightmare when I went to do the upgrade (+rebuild) not long after the free upgrade period ended (W7 to W10). Perhaps they've loosened it up since then, simply for those reasons. I just feel obligated to let people know so they can do some reading prior to the upgrade and protect themselves. MS purposefully obfuscates licensing, so digging is almost mandatory. If it still works, great.
  9. MS silently backs this because the license is a one-time, non-transferable license good for use on that hardware build only. When the CPU/Motherboard/etc (any major component) needs to be replaced, the license becomes invalid and you are forced to purchase a new one at full price, because "the machine changed". It's a stepping-stone to being locked in to W10, which ultimately doesn't save you anything except a little time. You're going to have to spend money at some point - only the mobile/tablet sector still gets OSs for free. If you aren't getting anything with W10 that you serio
  10. What was your architecture for that last test zeroibis? You've changed hardware and testing schemes a bunch. I'm assuming that was with 2 SSDs working independently as DP SSD Cache drives for your 2x duplication pool.
  11. Just some feedback on the tests: I don't think you ever mentioned if you had over-provisioned your drives with Samsung Magician. If not, the large number of writes was probably negatively affecting the results with V-NAND rewrites, seeing as the drives had 94GB copied to them over, and over, and over again. There was probably very little time for any garbage collection to occur on the drives to keep performance high. It looks like larger block sizes are definitely an improvement for you though. That's actually the recommended way to use the SSD Cache plugin - a physical number o
  12. Jaga

    Few requests

    New service to replace BitFlock?
  13. Great, then you know your target file sizes, and can adjust the block size higher. Probably no less than 16k then, or 32k. Whichever you pick, it can't be smaller than the cluster size of the target drives. I'll be interested to hear how it works out for you.
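     If you're not sure what a target volume's cluster size actually is, here's a quick way to check it (a small sketch using the Win32 GetDiskFreeSpaceW call via ctypes; the drive letter is just an example):

        import ctypes

        def cluster_size(root: str) -> int:
            """Return the allocation unit (cluster) size in bytes for a volume root like 'D:\\'."""
            sectors_per_cluster = ctypes.c_ulong()
            bytes_per_sector = ctypes.c_ulong()
            free_clusters = ctypes.c_ulong()
            total_clusters = ctypes.c_ulong()
            ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
                ctypes.c_wchar_p(root),
                ctypes.byref(sectors_per_cluster),
                ctypes.byref(bytes_per_sector),
                ctypes.byref(free_clusters),
                ctypes.byref(total_clusters),
            )
            if not ok:
                raise ctypes.WinError()
            return sectors_per_cluster.value * bytes_per_sector.value

        # Pick a cache block size that is at least the volume's cluster size.
        cs = cluster_size("D:\\")
        print(f"Cluster size: {cs} bytes - cache block size should be >= this.")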
  14. Yep, sounds like you might have exceeded the system's abilities at that time. I've never had a performance issue using an L2 with a decent SSD, but then I've never benchmarked during heavy activity either. Usually with an SSD and write cache, setting the Primocache block size equal to the formatted cluster size is best for performance. It comes with some overhead, but for high IOPS and small file operations it's helpful - I'd recommend matching the two for your use. Since you're just using the cache as an L2 write cache on data volumes only (no boot drive, right?), then I'd recomme
  15. Yep - you ran into the 4k cluster volume limitation at 16TB. Drivepool has no such limitation - the max pool size is (for all practical purposes) unlimited. As far as breaking your Storage Spaces volume up, it'll depend on what kind of grouping the drives have. If it was a "no resiliency" pool (and it sounds like yours was created that way), each drive that you remove from the Storage Spaces pool will have its data moved onto other drives in the pool before the drive is completely removed and available for other uses. To do this, open the "Physical Drives" area in Storage Spaces, a
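     For reference, the math behind that ceiling: NTFS can address roughly 2^32 clusters per volume, so the maximum volume size is about cluster size x 2^32. A quick sketch of the resulting limits:

        # NTFS addresses roughly 2**32 clusters per volume, so the maximum volume
        # size is about cluster_size * 2**32. Default 4 KB clusters hit the 16 TB wall.
        for kb in (4, 8, 16, 32, 64):
            max_tib = kb * 1024 * (2 ** 32) / 2 ** 40
            print(f"{kb:>2} KB clusters -> max volume ~{max_tib:.0f} TiB")

     Drivepool sidesteps the limit because the pool isn't one giant NTFS volume - each member disk stays its own normal NTFS volume underneath.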
  16. Yeah - Drivepool's SSD cache plugin isn't a true block-level drive cache like Primocache. Instead it's a temporary front-end fast mini-pool. The two are apples and oranges trying to achieve similar results (at least from a write cache perspective). You can emulate two SSDs for the cache plugin by partitioning your single SSD into two equal volumes, adding each to separate child pools, then assigning both child pools to the SSD Cache plugin as targets. You won't have any redundancy (which goes against the whole duplication-to-protect-files strategy on the SSD cache pool), but it will work.
  17. I usually refer to the link in Christopher's signature when I want to refresh myself on what the "other" data type is. It varies with the number of files on the disk, the number of directories, the slack space (cluster efficiency), non-pooled files, etc.
  18. While Christopher answered for you already, I'll add this: you can specify in Primocache a different "block size" for it to use than the format's cluster size. The trade-off is faster random access speed with smaller block sizes (all the way down to the cluster size of the volume) at the cost of more memory, vs. lower memory overhead with larger block sizes (and potentially better throughput for large files). When setting up an L2 against a set of pool drives, which normally tend to be larger than a boot drive, it's beneficial to keep Primocache's block size larger to reduce overhead.
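     To put rough numbers on that trade-off (purely illustrative - the per-block bookkeeping figure below is an assumption, not Primocache's actual overhead; the point is how the index cost scales with block size):

        L2_CACHE_BYTES = 256 * 2**30          # a hypothetical 256 GB L2 SSD cache
        OVERHEAD_PER_BLOCK = 32               # assumed bytes of bookkeeping per cache block

        for block_kb in (4, 16, 64, 512):
            blocks = L2_CACHE_BYTES // (block_kb * 1024)
            overhead_mb = blocks * OVERHEAD_PER_BLOCK / 2**20
            print(f"{block_kb:>3} KB blocks -> {blocks:,} blocks, ~{overhead_mb:,.0f} MB of RAM for the index")

     Same cache, same assumed per-block cost - just going from 4 KB to 512 KB blocks cuts the index memory by a factor of 128.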
  19. That is also a scenario where de-registration/uninstall/re-install/re-registration can help. If you want to try it, here is how: To deactivate the license: go into "Scanner Settings..", navigate to the License tab, click View license details, then deactivate. Then do a manual uninstall of Stablebit Scanner. Re-download the latest version, to be sure your first copy wasn't bad in some way. I find cleaning out Windows temp files to be helpful, especially with botched or damaged installations. Tools like CCleaner, Wise Disk Cleaner/Wise Registry Cleaner, etc. Re-
  20. Ah right - only the Disk Management stripe didn't show up for you. Let me change the config on my current storage space and see how that affects the performance. Not sure how the USB connection will do with performance, however. Edit: Changed the columns setting to 2, and re-created the volume. Performance went up for sequential reads by 83%, and sequential writes by 69%. The bad news is that anything non-sequential lost performance significantly. I'm not sure personally if the ~75% gain in sequential speeds (offset by losses for random access) is worth double the chan
  21. You might want to run a manual CHKDSK /R on it from a command prompt to know for sure, before sending the drive in.
  22. So I did some testing with Storage Spaces in Windows 10 (Build 17134.228), with two separate physical drives connected (4TB WD Reds in external WD USB 3.0 enclosures). The Storage Space I set up striped the drives (no resiliency) to best imitate a RAID 0 architecture. The results are interesting: Stablebit Drivepool can see and add the Storage Spaces drive (S:) to a pool without any problems. Stablebit Scanner also sees the Storage Spaces drive as a single large virtual disk (no underlying drives are displayed, nor is any SMART data).
  23. Further testing on my end with a single Basic disk being converted to Dynamic showed that DP recognizes the Simple disk, but doesn't recognize the Dynamic one - so you are right. I must have been thinking of spanned testing with another utility. That's one of the inherent dangers of doing so many architecture/metrics tests, though I do love to do them. I'll tinker a bit with Storage spaces if I can find some spare physical drives around here to use.
  24. Hmm, none of those letters are an ideal solution for a Pool volume. How about mounting your DVD drives to folder paths instead, rather than allowing them to take up precious letters? It's rare that I see systems that have used all available drive letters, and that's a situation where folder mount points get very useful. I currently only have two drive letters in my system - C: (boot) and D: (for the pool). Everything else, whether it is an actual volume or a virtual drive, is mounted in a folder on the C: drive. I have 9 pool drives and 4 parity drives all mounted inside C:\Poo
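     You can set these mount points up in Disk Management, but if you'd rather script it, here's a minimal sketch using the Win32 SetVolumeMountPointW call (run elevated; the folder path and volume GUID below are placeholders - running mountvol with no arguments lists the real \\?\Volume{...}\ names on your system):

        import ctypes
        from pathlib import Path

        # Placeholder mount folder and volume GUID - substitute your own values.
        mount_folder = Path(r"C:\Mounts\Disk1")
        volume_guid = "\\\\?\\Volume{00000000-0000-0000-0000-000000000000}\\"

        mount_folder.mkdir(parents=True, exist_ok=True)

        # Both the mount point path and the volume name must end with a backslash.
        ok = ctypes.windll.kernel32.SetVolumeMountPointW(str(mount_folder) + "\\", volume_guid)
        if not ok:
            raise ctypes.WinError()
        print(f"Volume mounted at {mount_folder}")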