Everything posted by Jaga

  1. Works just fine for me - tested the Drivepool 64-bit download. If you have a VPN, try temporarily disabling it. And you might try disabling/enabling your network adapter for good measure. If neither of those works, flush the DNS cache on your computer, either with something like CCleaner or from the command line.
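
     Either of these flushes the Windows DNS resolver cache (the second is the PowerShell equivalent on Windows 8 / Server 2012 and newer):

         ipconfig /flushdns        # classic command-prompt version
         Clear-DnsClientCache      # PowerShell cmdlet
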
  2. You could do it with a combination of a VPN, Drivepool pool(s), and Clouddrive using file share(s). Here's how I think it could work:
     - The VPN puts all of the computers on the same local net.
     - Each computer has a Pool to hold data, and the Pool drive is shared so the local net can access it (see the sketch below).
     - Clouddrive has multiple file shares set up, one to each computer connected via VPN and sharing a Pool.
     - Each local Pool can have duplication enabled, ensuring each local Clouddrive folder is duplicated locally X times.
     - The file shares in Clouddrive are added to a new Drivepool Pool, essentially combining all of the remote computer storage you provisioned into one large volume.
     Note: this is just me brainstorming, though if I were attempting it I'd start with this type of scheme. You only need two machines with Drivepool installed and a single copy of Clouddrive to pull it off. Essentially wide-area storage.
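
     The sharing step on each machine could be as simple as the line below - "P:" for the local Pool drive and "MEDIA\clouddrive-user" for the account Clouddrive connects with are both just placeholders:

         # Share the local Pool drive so Clouddrive on the remote machines
         # can add it as a File Share provider over the VPN.
         New-SmbShare -Name "Pool" -Path "P:\" -FullAccess "MEDIA\clouddrive-user"
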
  3. Usually when you connect pooled drives to a computer and then install Drivepool, it sees those hidden PoolPart folders and automatically adds the Pool. But you may need a separate copy of Drivepool for each OS, even on the same system. Fortunately it's not expensive to get additional copies if you already own one. The only thing you'd need to do after booting either OS is have Drivepool re-measure so it has correct stats (you could even use the command line utility on boot to do it - see the sketch below). You'd also want to ensure both copies of DP had identical settings, to avoid unnecessary work on the drives. If you have Drivepool installed on both platforms and aren't seeing the pool under both, try uninstalling the W10 copy and then reinstalling it. It -should- automatically pick up the pool and mount it.
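
     For the on-boot re-measure, something like this should do it, assuming the dpcmd utility bundled with Drivepool offers a remeasure command in your version (run dpcmd with no arguments to list what it actually supports; "P:" is an example pool drive letter):

         # Register a startup task that asks Drivepool to re-measure the pool.
         # "remeasure-pool" is the dpcmd command I'd expect here - verify it
         # against the command list your installed dpcmd prints.
         $action  = New-ScheduledTaskAction -Execute "dpcmd.exe" -Argument "remeasure-pool P:"
         $trigger = New-ScheduledTaskTrigger -AtStartup
         Register-ScheduledTask -TaskName "Remeasure DrivePool" -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest
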
  4. I hadn't realized it was a different discount for products you do own, vs products you don't. That's great to know. More purchases incoming...!
  5. My answer is by no means official, but the way I understand it is that you have an activation license when you buy any single product (or package of products): Drivepool, Scanner, or Clouddrive. When you buy additional copies of any of the three and enter your activation ID after clicking "Already own one of our products", you can purchase those other copies at a discount of $10 each, up to a total of 10 of each product. i.e. if you owned one copy of Drivepool, you could purchase 9 more copies of DP, 10 copies of Scanner, and 10 copies of Clouddrive... all at a discount of $10/copy. The first one is what "gets you in the door" for further discounts. I would test, but I already own a copy of all three products. You could do it on the purchase page - scroll to the bottom, enter your Activation ID, click "Already own one of our products", and see what pricing it gives you.
  6. Other than letting Christopher take over, I have one last thing to suggest... Copy the offending files off the pool to a different local disk (i.e. not any disk in the pool). Verify they are good in the new location, then delete the originals. Then set permissions on the copied ones and move them back to the pool. Then try a re-measure.
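
     A rough PowerShell version of that sequence, with P:\Media\Problem (the files on the pool) and D:\Recovery (a non-pooled disk) purely as placeholder paths:

         # 1. Copy the problem files off the pool to a non-pooled disk.
         robocopy "P:\Media\Problem" "D:\Recovery\Problem" /E /COPY:DAT
         # 2. After verifying the copies open correctly, remove the originals.
         Remove-Item "P:\Media\Problem" -Recurse
         # 3. Reset NTFS permissions on the copies so they inherit from the parent folder.
         icacls "D:\Recovery\Problem" /reset /T /C
         # 4. Move the files back into the pool, then re-measure from the Drivepool UI.
         Move-Item "D:\Recovery\Problem" "P:\Media\Problem"
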
  7. A few questions to narrow it down:
     - Are the errors truly random?
     - Have you run chkdsk /f against all the physical drives in your pool to see if the underlying file system and contents are okay? Drivepool passes file system commands to the drives, so if it's encountering errors on an NTFS volume, chkdsk should see those as well (see the sketch after this list).
     - When you created the Drivepool volume, did the disks you put it on have any other files on them outside the pool? That would throw the re-measure off and make the 'Other' category in DP seem abnormally large.
     - When you migrated from the other file system, did you reset Windows/NTFS permissions on all of the content? Can you move the files by hand yourself?
     - Are you using the Disk Space Equalizer plugin to force the balancing? (DP won't immediately re-balance unless this is enabled.)
     - Do you have any unusual file placement rules in DP that might mess with the balancing routine?
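
     For the chkdsk pass, a quick loop over the pool's member disks works - the drive letters below are just examples, and chkdsk will offer to dismount the volume or schedule the check at reboot if it's in use:

         # Check the underlying NTFS volumes that make up the pool.
         foreach ($drive in "E:", "F:", "G:") {
             chkdsk $drive /f
         }
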
  8. Scanner activity log?

     Looked through the forums here and didn't see it posted elsewhere. Do we as users have access to any kind of readable activity log for Scanner? Rather than just trusting it's doing what it's supposed to be doing, the SysAdmin in me wants to actually go and confirm that: file system tests, sector tests, SMART updates, etc.
  9. DP just passes drive operations to the underlying drives. The files would still be there, and still show in their folders when any app told DP to "go look and see what's there". Until then, it wouldn't know they were part of the pool, and the pool measurement (for balancing and space used) would appear incorrect. But you'd still be able to access/stream the files just fine. I open Scanner and Drivepool daily to check on status and re-measure the pool, and read SnapRAID logs. Takes all of 3-5 minutes, and gives me peace of mind. If you had PowerShell scripts moving files to physical drives nightly, just open Drivepool every now and then and tell it to re-measure. DP does have a command line utility, but I'm unsure if it's compatible with current version(s), or if it allows re-measuring.
  10. DP wouldn't know until you told it to measure pool space. But it wouldn't care either, especially if you (mostly) tried to keep space balanced with the PS scripts. Just re-measure the pool manually once in a while and it'll be happy. I suspect the FIFO feature you're requesting wouldn't be hard to implement. Can always keep fingers crossed.
  11. Drivepool doesn't really care if you manually drop a file/folder inside its hidden PoolPart folder. In fact, that's one way to get games to install to the pool correctly as a workaround. You have to re-calculate the pool space afterwards so Drivepool's stats are current, but that's not a big deal. So yes, from that perspective you could easily do it with PowerShell and scripts. But you're left with manually doing the balancing yourself when those files come off the SSDs. Perhaps it's a good feature request for DP's SSD cache too - a FIFO-like system where not all the files come off at once. Just one checkbox and a slider to determine how "full" you want the SSD to stay.
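
     As a sketch of what such a script could do - the PoolPart folder name below is made up (each pool member gets its own hidden PoolPart.{GUID} directory), and E: / S: stand in for a pool member disk and the SSD the files land on:

         # Drop a finished file straight onto one of the pool's member disks by
         # writing into its hidden PoolPart folder, then re-measure in Drivepool
         # afterwards so the stats catch up.
         $poolPart = "E:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"   # placeholder - use your real folder name
         $dest     = Join-Path $poolPart "Media"
         New-Item -ItemType Directory -Path $dest -Force | Out-Null       # make sure the target folder exists
         Move-Item "S:\Cache\BigFile.mkv" $dest
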
  12. Don't calculate parity on the SSD drive. Or... be prepared to have SnapRAID re-calculate a large amount of its parity every time your SSD empties out. It's a volatile volume that you're trying to do parity on, which is naturally going to lead to a lot of activity and re-calcs. If you really want to do it right, make a RAID 1 mirror with multiple SSDs, then use that as the Drivepool SSD cache, and don't do parity on it with SnapRAID. You'll still have temporary redundancy in the mirror.

     Other options: speed up your pool by getting faster drives; install a fast SSD or NVMe along with Primocache as an L2 caching option against the Pool; or manually manage what you keep on the SSD and what gets moved to the pool. Lots of ways to handle it. Drivepool wasn't set up to do what you're trying to do with SnapRAID, as SnapRAID likes mostly static archives (or fast disks to work with when there are large numbers of changes).

     I'm still unsure why even a 5400 RPM spindle drive can't deliver information fast enough to stream a 4K UHD movie, for example. My WD Reds @ 5400 RPM can easily read/write fast enough to saturate my Gigabit network. Is there a reason you simply have to have the latest content on a faster SSD?
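
     For reference, keeping SnapRAID off the SSD is just a matter of not listing it as a data disk in snapraid.conf - a stripped-down example with placeholder paths:

         # snapraid.conf (excerpt) - only the slow archive disks are protected;
         # the volatile Drivepool SSD cache drive simply has no "data" line.
         parity  Q:\snapraid.parity
         content C:\snapraid\snapraid.content
         content D:\snapraid.content
         data d1 D:\
         data d2 E:\
         data d3 F:\
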
  13. The threaded copies are really what interested me most, but I'm already saturating Gigabit over normal file shares, so probably wouldn't gain much if anything. Still - had to ask. Thanks Christopher.
  14. Ah, nice. Then you can put a decent NVMe in your second socket. Given your circumstances, I'd go for a nice Samsung NVMe, whatever you can afford. It should fix the bottleneck problem if it's large enough and on the faster socket.
  15. Pro has much higher endurance, and would be the recommended one for higher usage. Samsung SSD/NVMe Pro editions are top shelf. But don't you already have a 950 Pro that's getting bottlenecked? Or is the 970 you are considering purchasing an NVMe? This site can be helpful when comparing NVMes and SSDs for speed and pricing.
  16. Since you're fixed up now (grats!), perhaps I could hijack the topic and ask about the advantages of running NFS on a Windows platform that doesn't include it natively, i.e. Win 7, Win 8, Win 10? I'm a performance nut, and am always looking out for new software to improve my systems. I'll freely admit I have no prior experience with NFS, but it does sound intriguing.
  17. I'll give an e-cookie to know the answer!
  18. DrivePool + Primocache

     Hey @fattipants2016 - it's actually rather easy to do. I have my L2 SSD cache (in Primocache) set to flush writes every 30 seconds, but you can configure that interval to whatever you want. 24 hours would be 86400 seconds; you'd put that into the Latency box in Primocache's interface.

     The L2 cache is persistent, even across reboots (you can enable/disable that at will). It will hold writes as long as you tell it to, or not at all - so from 0 (disabled write caching) up to whatever interval you want (I haven't tried super-long delays yet). It'll flush those writes to the disk it's caching automatically. I just tested it with the value of 86400 (1 day) and it took the setting without complaint. I suspect the timer 'resets' and begins counting down after a manual flush (you can do that with a button press in the UI), so you can easily determine what time of day it flushes.
  19. I think you are in a unique situation and no one else has the problem. Usually a network connection isn't going to outpace an SSD, and if it does there are other options to fall back on. I use a 20GB L1 RAM cache (out of 64GB total RAM) in my system to avoid bottlenecking my drives (even a Samsung 850 Pro) and speed up all disk operations. In your case, if you have (or can install) 64GB or 128GB of memory and dedicate ~80% of that to an L1 RAM cache, you might avoid most of the issues you are seeing. A RAM cache will go a long way toward eliminating unnecessary writes (live trimming), redundant reads, reads against written data, etc. You might also consider limiting up/down transfer speeds with Clouddrive's bandwidth limiter in the meantime, so you don't choke the drive(s) with 100% activity.
  20. Well, a 64GB or 128GB RAM cache would probably solve the bottleneck issue. But that's a bunch of money into RAM, and the motherboard to support it. I think he means the ability to set the cache folder separately for reads and writes. i.e. all reads would be cached on drive X, all writes would be cached on drive Y. He's trying to increase throughput at the drive level since it's his current bottleneck for transfers. Apparently the SSD just can't keep up with up/down at the same time.
  21. DrivePool + Primocache

     I briefly tried IOZone, but it's command-line only, doesn't create graphs in the XLS file, and the data is more than I'd care to use in a casual comparison. Trying Sandra with "bypass Windows cache" unchecked showed throughput around 4x that of normal Windows caching. I'm not sure I trust that result however, since it does things like testing disk position/etc. It doesn't just give a raw throughput (read/write) figure for different queue depths and test file sizes.
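
     If all you want is raw read/write throughput at a given queue depth and file size, Microsoft's free diskspd tool is closer to that. Something like the line below does a 60-second run with 64K blocks, random I/O, 30% writes, queue depth 8 per thread and 4 threads against a 20GB test file (the target path is just an example; add -Sh if you want to bypass the Windows cache and measure the raw disk instead):

         diskspd.exe -c20G -d60 -b64K -o8 -t4 -r -w30 -L P:\diskspd-test.dat
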
  22. How large are the files you are talking about? Small enough to fit (mostly) in memory temporarily? If so a RAM cache might help your downloading/uploading. I use a 20GB L1 cache (using 3rd party software) with a variable write time delay, to help avoid bottlenecks like that. The notion of specifying up/down caches for Clouddrive is interesting (I do like the idea), just not sure they could expose & split that feature between two separate physical drives. Or... perhaps it's time for a NVMe? You could even use the NVMe drive as a L2 cache in front of your Samsung SSD, to speed up all read/write ops and even reduce writes with built-in trimming. P.S. I'm officially jealous of your transfer speeds! Still stuck on conventional 400/20 cable broadband here. Fiber has been "coming" for a decade now, still not in sight!
  23. DrivePool + Primocache

     The physical disks. Drivepool's pool doesn't have any real sectors to cache, which is what Primo works with. The three drives in my screenshot above (Z, Y, and X) are my pool drives, so I told Primo to cache them physically. If you're just going to set up read-and-write caching on an L2 SSD, then you don't need to have the server on a UPS for power protection.
  24. DrivePool + Primocache

     Will it cache that information? Yep. Anything that exists on the drive can be cached. Will it leave the drive asleep and still deliver the information? Probably not, since it allows the Windows cache to operate alongside it, and I'd suspect it would wake the drive up on any kind of ping for information. Plus, it's currently impossible to force Primo to cache what you want. It caches what it sees as the most accessed information. IF that information happens to be your file/folder information for the drives and Drivepool, then some or even many of them might not wake up.

     However - from what I understand, the single greatest wear-and-tear operation on a spindle drive is spinning up from sleep repeatedly. A long time ago I used to allow mine to sleep, but having learned this since, I stopped allowing them to spin down. The savings on power between a spinning idle drive and a sleeping idle drive aren't large, and I value lifespan over minimal cost savings. You might want to review all of your drives and see if the extra energy use is worth the additional wear on them.

     Drivepool's SSD optimizer is definitely not a replacement for Primocache's L2 caching. The closest it would come is acting as a pseudo write buffer for the pool, which Primocache's L2 SSD caching could do just as well, since it works for both read and write caching at the block level. Drivepool's SSD feature is a temporary and fast adding-files-to-the-pool-only area, whereas Primocache's L2 SSD cache is a true read/write buffer for any kind of drive installed on a computer. And it can persist in-between reboots of the machine. They are very different animals, both trying to increase speed, and both having their uses in different ways to different effect. DP's SSD feature targets the pool for writes only; Primocache's targets any drive(s) in the system for reads and writes. I wouldn't recommend using both at the same time on the Pool. While Drivepool's SSD feature is definitely a nice addition to the suite, Primocache's L2 effect on the system/pool would be much more effective, but it also costs more money, as you need to buy a license for it. Separately, Primocache also offers L1 RAM caching.

     As an example: when I rebuild my storage pool this summer, I'll be dropping in a rather large SSD to use as a read/write buffer with Primocache on that server, since the L1 RAM requirements to cache against a large storage Pool would just be way too big - 2TB of RAM just isn't going to happen. But a fast 2TB SSD as an L2 cache in Primo could easily handle caching on a 50TB Drivepool (4% of total data covered for reads, all writes covered). I *could* simply use DrivePool's SSD optimizer, but that wouldn't help with read caching, and I've already bought the license for Primocache on that machine. Definitely room for both products, depending on what you want/need and how much money you're willing to put towards the solution. Ultimately, as far as caching and pool speed goes without considering cost, Primocache (using the same SSD) would be a more encompassing solution than DP's SSD optimizer.