Covecube Inc.

Jaga

Members
  • Content Count

    413
  • Joined

  • Last visited

  • Days Won

    27

Jaga last won the day on January 12

Jaga had the most liked content!

About Jaga

  • Rank
    Advanced Member
  • Birthday July 17

Profile Information

  • Gender
    Male

Recent Profile Visitors

1202 profile views
  1. My server hung the other day too, probably because I'd had it running for far too long. After a reboot I checked Scanner to have it do file system checks, and noticed -all- of the data on the drives was gone. Surface scan data, file system scan data, enclosure location... all of it reset to nothing. Seems like there's a general problem with the integrity of customer data in this regard. Sure, I can do a manual backup of that stuff if I remember to every time I change something, but I probably won't. Some type of redundancy and/or data protection scheme would be good to have.
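    If it helps, here's a minimal sketch of scripting that manual backup so it isn't something you have to remember. It just copies the Scanner settings store to a timestamped folder; both paths are assumptions (check where your install actually keeps its data), and you'd schedule it with Task Scheduler or similar.

        # Hedged sketch: copy StableBit Scanner's settings store to a timestamped
        # backup folder. Both paths below are assumptions -- verify them first.
        import shutil
        from datetime import datetime
        from pathlib import Path

        SCANNER_STORE = Path(r"C:\ProgramData\StableBit Scanner")  # assumed store location
        BACKUP_ROOT = Path(r"D:\Backups\ScannerSettings")          # hypothetical target

        def backup_scanner_store() -> Path:
            """Copy the Scanner data folder to a new timestamped directory."""
            target = BACKUP_ROOT / datetime.now().strftime("%Y%m%d-%H%M%S")
            shutil.copytree(SCANNER_STORE, target)
            return target

        if __name__ == "__main__":
            print(f"Backed up Scanner store to {backup_scanner_store()}")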
  2. Jaga

    DrivePool + Primocache

    There should be no reason you couldn't use a RAM disk as a pool member, provided it was a basic drive (emulation) with an NTFS format. You might want to force a manual save-to-disk-image before every shutdown, just for diligence. However, due to the limited space on them, you'd also want to limit which files the pool was able to balance onto the RAM disk. Or simply use the RAM disk as an SSD Cache plugin drive, to speed up delivery of files to the pool. I stopped using SoftPerfect years ago and switched over to Primo RAMdisk instead. I find it to be a more robust solution that's also more flexible.
  3. That large chunk of "Other" data is your existing data on the pool drives that you wanted to show up in the pool but doesn't. It doesn't appear in the pool because it sits outside the hidden "PoolPart-xxxxx" folder in the root of each drive. Since you have multiple drives with large amounts of data on them, the best way to handle it is: resolve your problem with seeing hidden files/folders in Windows. Once you can see the hidden PoolPart-xxxxx folder(s) in the root of all pool drives, move the existing files into the PoolPart-xxxxx folder on the drive where the files already reside…
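    For what it's worth, the move itself can be scripted once the hidden folders are visible. The Python sketch below finds the PoolPart folder in a drive's root and moves everything else on that drive into it; since it's a same-volume move, only the directory entries change, so it's fast. The drive letter is a placeholder - pause balancing first and test on a small folder before trusting it.

        # Rough sketch: seed a drive's existing files into its hidden PoolPart folder.
        # Run once per pooled drive, with DrivePool balancing paused; test first.
        import shutil
        from pathlib import Path

        def seed_into_poolpart(drive_root: str) -> None:
            root = Path(drive_root)
            # Locate the hidden PoolPart-xxxxx folder DrivePool created in the root.
            poolparts = [p for p in root.iterdir()
                         if p.is_dir() and p.name.lower().startswith("poolpart")]
            if len(poolparts) != 1:
                raise RuntimeError(f"Expected exactly one PoolPart folder on {drive_root}")
            poolpart = poolparts[0]

            for item in list(root.iterdir()):
                # Skip the PoolPart folder itself and Windows system folders.
                if item == poolpart or item.name in ("System Volume Information",
                                                     "$RECYCLE.BIN"):
                    continue
                # Same-volume move: near-instant, no data is rewritten.
                shutil.move(str(item), str(poolpart / item.name))

        if __name__ == "__main__":
            seed_into_poolpart("E:/")   # placeholder drive letter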
  4. Something that might be a nice feature addition for DP would be a slower, more incremental type of balancing that does, say, 3% of the pool per day. That way you wouldn't overwhelm the on-demand parity system with massive changes and risk losing the parity altogether. The other way to do it is to source a temporary drive (external?) that's the same size as or larger than your parity drive, copy the parity info to it, and then spread files around and re-compute new parity. Although this really only saves the parity data - with massive changes across all drives, at some point it still be…
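    To put rough numbers on the first idea: a cap like that turns one huge parity-busting move into a predictable trickle. The figures in the sketch below are made up purely for illustration.

        # Back-of-the-envelope: how long a capped "N% of the pool per day" rebalance takes.
        def days_to_rebalance(pool_size_tb: float, backlog_tb: float,
                              daily_cap_fraction: float = 0.03) -> float:
            """Days needed to move backlog_tb when at most daily_cap_fraction
            of the pool is allowed to move per day."""
            return backlog_tb / (pool_size_tb * daily_cap_fraction)

        # Illustrative numbers only: a 40TB pool with 6TB out of place, capped at
        # 3%/day (1.2TB/day), spreads the work over 5 days instead of one big sync.
        print(days_to_rebalance(40, 6))   # 5.0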
  5. It might be easier as well if you just blew away Pool H and made Pool I contain Pool G and the 14.5 array. Pool H is an unnecessary level of complication for DP to deal with.
  6. When using SnapRAID, it isn't advisable to make a large number of changes to the files on the drives. It tends to bomb out the sync process when you do something like you're proposing (moving a lot of files between drives that SnapRAID computes parity for). Even when they have instructions for that, it usually only works for small sets of manual copies (not thousands of files). I'm saying this from experience dealing with a large number of file changes on drives in a SnapRAID array (9 in total, with four 2-part parity drives). I've had SnapRAID totally crap out after making large changes to the files…
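    One practical guard, if you do script your syncs: run "snapraid diff" first and only let "snapraid sync" proceed when the change count is below a threshold you're comfortable with. The sketch below assumes the diff summary prints lines like "123 added" / "45 removed"; check that against your own SnapRAID version's output before relying on it.

        # Hedged sketch: skip "snapraid sync" when "snapraid diff" reports too many
        # changes. Parsing of the diff summary lines is an assumption -- verify it.
        import re
        import subprocess
        import sys

        SNAPRAID = "snapraid"        # adjust if the binary isn't on PATH
        MAX_CHANGED_FILES = 2000     # pick a threshold you trust

        def count_changes() -> int:
            diff = subprocess.run([SNAPRAID, "diff"], capture_output=True, text=True)
            changed = 0
            for line in diff.stdout.splitlines():
                m = re.match(r"\s*(\d+)\s+(added|removed|updated|moved|copied)\b", line)
                if m:
                    changed += int(m.group(1))
            return changed

        if __name__ == "__main__":
            pending = count_changes()
            if pending > MAX_CHANGED_FILES:
                sys.exit(f"{pending} changes pending - too many, sync in smaller batches.")
            subprocess.run([SNAPRAID, "sync"], check=True)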
  7. There's plugin development information available for DrivePool on the Development Wiki - creating your own plugin to do what you want seems the more direct route. Additionally, the DrivePool command-line utility is packaged with the standard distribution; you can call it as dpcmd.exe and it supports a brief list of commands. I haven't seen any mention of an API, but that doesn't mean one doesn't exist - Alex and Christopher are working on new things all the time. A custom plugin to do what you want might also benefit others, and it would integrate seamlessly with DrivePool.
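    If scripted access is all you need rather than a full plugin, a thin wrapper around dpcmd.exe may be enough. The sketch below just shells out and returns whatever the tool prints; I'm deliberately not naming any subcommands from memory - run it with no arguments and use the command list dpcmd itself reports. The install path is an assumption.

        # Thin wrapper around DrivePool's command-line utility. No subcommand names
        # are assumed here; dpcmd prints its own command list when run bare.
        import subprocess

        DPCMD = r"C:\Program Files\StableBit\DrivePool\dpcmd.exe"   # assumed install path

        def dpcmd(*args: str) -> str:
            """Run dpcmd.exe with the given arguments and return its text output."""
            result = subprocess.run([DPCMD, *args], capture_output=True, text=True)
            return result.stdout + result.stderr

        if __name__ == "__main__":
            # Running it bare lists the available commands - start there.
            print(dpcmd())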
  8. Yep - I ran into the hassle when I first went to do the free upgrade to W10 (~2 years ago?). I was on a Skylake already and had to do a rebuild. I only remember it being a huge hassle and a bit of a nightmare when I went to do the upgrade (+rebuild) not long after the free upgrade period ended (W7 to W10). Perhaps they've loosened it up since then, simply for those reasons. I just feel obligated to let people know so they can do some reading prior to the upgrade and protect themselves. MS purposefully obfuscates licensing, so digging is almost mandatory. If it still works, great.
  9. MS silently backs this because the license is a one-time, non-transferable license good for use on that hardware build only. When the CPU/motherboard/etc. (any major component) needs to be replaced, the license becomes invalid and you're forced to purchase a new one at full price, because "the machine changed". It's a stepping-stone to being locked into W10, which ultimately doesn't save you anything except a little time. You're going to have to spend money at some point - only the mobile/tablet sector still gets OSs for free. If you aren't getting anything with W10 that you seriously…
  10. What was your architecture for that last test, zeroibis? You've changed hardware and testing schemes a bunch. I'm assuming that was with 2 SSDs working independently as DP SSD Cache drives for your 2x duplication pool.
  11. Just some feedback on the tests: I don't think you ever mentioned whether you had over-provisioned your drives with Samsung Magician. If not, the large number of writes was probably negatively affecting the results with V-NAND rewrites, seeing as they had 94GB copied to them over, and over, and over again. There was probably very little time for any garbage collection to occur on the drives to keep performance high. Looks like larger block sizes are definitely an improvement for you, though. That's actually the recommended way to use the SSD Cache plugin - a physical number of…
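    For a rough sense of the headroom involved, the arithmetic below estimates how much spare area the controller has for garbage collection under a given over-provisioning setting. The drive size is made up; the 94GB figure is just the test set mentioned above.

        # Rough arithmetic: free area left for SSD garbage collection.
        def spare_area_gb(drive_gb: float, overprov_pct: float, cache_alloc_gb: float) -> float:
            """Space left after over-provisioning and the cache allocation are
            carved out (illustrative only; ignores filesystem overhead)."""
            return drive_gb - (drive_gb * overprov_pct / 100) - cache_alloc_gb

        # Example with a made-up 250GB SSD, 10% over-provisioned in Samsung Magician,
        # and a 94GB cache region being rewritten over and over.
        print(spare_area_gb(250, 10, 94))   # 131.0 GB of GC headroom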
  12. Jaga

    Few requests

    New service to replace BitFlock?
  13. Great, then you know your target file sizes and can adjust the block size higher. Probably no less than 16k then, or 32k. Whichever you pick, it can't be smaller than the cluster size of the target drives. I'll be interested to hear how it works out for you.
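    If you want to double-check that constraint before committing, Windows reports the formatted cluster size via "fsutil fsinfo ntfsinfo". Here's a small sketch that reads it and compares it against the block size you're planning (run from an elevated prompt; the parsing of the "Bytes Per Cluster" line is the one assumption about the output format).

        # Read an NTFS volume's cluster size with fsutil and sanity-check a planned
        # cache block size against it -- the block size shouldn't be smaller.
        import re
        import subprocess

        def ntfs_cluster_size(volume: str) -> int:
            out = subprocess.run(["fsutil", "fsinfo", "ntfsinfo", volume],
                                 capture_output=True, text=True, check=True).stdout
            m = re.search(r"Bytes Per Cluster\s*:\s*([\d,]+)", out)
            if not m:
                raise RuntimeError("Couldn't find 'Bytes Per Cluster' in fsutil output")
            return int(m.group(1).replace(",", ""))

        if __name__ == "__main__":
            planned_block = 32 * 1024          # e.g. the 32k suggested above
            cluster = ntfs_cluster_size("D:")  # hypothetical target volume
            print(f"Cluster size: {cluster} bytes")
            if planned_block < cluster:
                print("Planned block size is smaller than the cluster size - bump it up.")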
  14. Yep, sounds like you might have exceeded the system's abilities at that time. I've never had a performance issue using an L2 with a decent SSD, but then I've never benchmarked during heavy activity either. Usually with an SSD and write cache, setting the Primocache block size equal to the formatted cluster size is best for performance. It comes with some overhead, but for high IOPS and small file operations it's helpful - I'd recommend matching the two for your use. Since you're just using the cache as an L2 write cache on data volumes only (no boot drive, right?), I'd recommend…