Covecube Inc.

YagottaBKiddin

Members · Content count: 9 · Days won: 1 · Rank: Newbie

YagottaBKiddin last won the day on September 2, 2015 and had the most liked content!
  1. Drashna, quick question: During numerous rounds of observing read-striping performance, I have noticed that the pool seems to read from only one disk, especially when the read speed (which directly correlates with the write speed at the destination) is too slow to benefit from the additional bandwidth that reading from a second drive simultaneously would provide. Basically, DrivePool seems to figure out that reading from more than one drive would not improve performance, as the outbound pipe is already saturated, so why bother reading from more than one drive anyway. This is the single large file copy scenario, not the multiple-smaller-files-to-multiple-destinations (e.g. streaming to multiple devices) scenario.

     This question was prompted by randomly checking the tooltip of the "Read Striping" bar in the DrivePool UI. In the large single file copy (~500 GiB file) scenario, I almost always see reads from only one drive in the 2x pool. The tooltip reports "65-70 per second Disk Holds", which makes me think that reading from another drive would not help anyway, so why bother. (It is a network file copy to a slow archive NAS that saturates at ~40 MB/s: a slow, parity-protected array destination on an unRAID box.) Is this assumption correct?

     I'll try to set up a RAM drive on the same box hosting the drive pool and run some large file copies to it; the RAM drive should have *HUGE* write bandwidth, so we'll see if that makes a difference in what Read Striping does.

     PS: StableBit should consider adding a RAM drive service to the suite of tools you already provide <nudge nudge> ;^)
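The reasoning in the post above (don't bother striping reads when the destination pipe is already saturated) can be sketched as a toy decision rule. To be clear, this is purely illustrative: DrivePool's actual read-striping heuristic is not public, and the function name, parameters, and threshold here are all hypothetical.

```python
def should_stripe(dest_write_mbps: float, single_disk_read_mbps: float) -> bool:
    """Toy heuristic: add a second source disk only if the destination
    can absorb more bandwidth than one disk already supplies.
    (Illustrative sketch only; not DrivePool's real logic.)"""
    headroom = dest_write_mbps - single_disk_read_mbps
    # If the destination is already the bottleneck, a second reader
    # adds seek overhead without improving overall throughput.
    return headroom > 0

# Example: a NAS that saturates at ~40 MB/s while one disk can read ~150 MB/s
print(should_stripe(dest_write_mbps=40, single_disk_read_mbps=150))   # False
# A RAM-drive target with huge write bandwidth would flip the decision:
print(should_stripe(dest_write_mbps=3000, single_disk_read_mbps=150))  # True
```

This also matches the RAM-drive experiment proposed above: raise the destination's write bandwidth far enough and striping should become worthwhile again.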
  2. Yup, using latest stable Scanner. It's not a show stopper, lol. Thanks for the 411 on the beta!
  3. When reconnecting to a disconnected RDP session where the Scanner UI was left open on the desktop, the drive info in the main window blanks out upon reconnection and refresh of the desktop UI across the RDP connection. The drives are still in the list: if you click on the blanked-out UI, the lower panel updates appropriately, but the main panel remains empty.

     Client: Windows 10
     Server: Windows Storage Server 2012 R2
     Reproducible: Consistently
     Workaround: Close the Scanner UI and restart it.

     I'm thinking it is a user-switching thing, even though the RDP reconnection is for the same user. (Screenshots attached: Normal, Blanked out.)
  4. I did the voodoo, using a ~37 GiB file to test with. DrivePool *does* activate read striping at different points during the file copy operation: I saw it reading from two drives simultaneously at various points, though it did not always read from both drives during the copy. The odd bit is that when it was reading from one drive, performance was about 15-20% better than when two drives were streaming the file simultaneously, which is the opposite of what I would expect. I'm thinking that might be a controller/bus saturation issue, eh? I have the logs and screenshots of the different read performance rates if you are interested. Just shout and I'll upload them.
  5. You are of course correct. The 2012 line is more closely aligned with Win 8.x than Win 10. Yes, all of the hard drives are connected to the same controller (on-board Intel®, Z97-based, 6x SATA-III in AHCI mode). All 5 drives in the pool are 4 TiB HGST DeathStars.
  6. I'm seeing the same behavior on Windows Storage Server 2012 R2 (no Storage Spaces configured anymore), which is an, uh, close cousin of Windows 10, apparently. All drives are on the same on-board SATA-III controller (Intel Z97). DrivePool build 2.1.1.561. Let me know if there is anything in particular you need to assist with this on your end.
  7. Indeed I do use Scanner + DrivePool (I mentioned this at the very bottom of the first post). It's not a shameless plug when it is good advice and a great deal! I'll be converting my other 2012-R2 Storage Spaces server over to DrivePool soon (I already have the DrivePool key) and will be adding Scanner to it as well. The integration of the two has come a long way in the past few years! (I've been a StableBit customer for a few years now.)
  8. I had read about some of the performance issues regarding Storage Spaces, and indeed most were in reference to parity. Most of what I ran across was for the initial versions from a few years ago. Apparently there's still work to be done! Thanks again for the great tool! Now to implement either SnapRAID, or maybe just go with corz and keep hashes of all the files... corz can also keep a list of all hashed files on any given disk, so if one exhales the magic blue smoke, at least there's a list of what was on the drive...
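The corz-style idea above (hash every file on a disk and keep a per-disk manifest, so a dead drive at least leaves a record of what it held) can be sketched roughly like this. This is a minimal illustration only: the manifest format and function name are made up here and are not corz's actual format.

```python
import hashlib
import os

def build_manifest(root: str, manifest_path: str) -> None:
    """Hash every file under `root` and write one "<sha256>  <relative path>"
    line per file, so a failed disk still leaves a list of its contents.
    (Illustrative sketch; not corz checksum's real file format.)"""
    with open(manifest_path, "w", encoding="utf-8") as out:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in sorted(filenames):
                full = os.path.join(dirpath, name)
                digest = hashlib.sha256()
                # Stream in 1 MiB chunks so large media files don't
                # have to fit in memory.
                with open(full, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        digest.update(chunk)
                rel = os.path.relpath(full, root)
                out.write(f"{digest.hexdigest()}  {rel}\n")
```

Running this once per pooled drive (rather than against the pool root) keeps the manifests disk-specific, which is what makes them useful after a single-drive failure.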
  9. Good evening everyone. I found this very interesting, fascinating even. I had just set up a Windows Storage Server 2012 R2 box with ~20 TiB of drives, to function primarily as a media server running Emby and as a file-server endpoint for various backups, machine images, etc.

     CPU: Intel Core i7-4790K
     RAM: 32 GiB DDR3-2666
     OS: hosted on a 128 GiB M.2 PCIe 3.0 x4 SSD (*screaming fast*)
     Drives: 5 x 4 TiB HGST DeathStar NAS (0S003664), 7200 RPM, 64 MB cache, SATA-III

     Initially, the machine was set up with one large mirrored storage pool (no hot spares) composed of the 5 primordial 4 TiB HGST SATA-III disks, all connected to SATA-III ports on the motherboard (Z97-based). The graph below was taken while copying ~199 files in multiple directories, of various sizes (15 KiB to 780 MiB), for a total of ~225 GiB.

     Fig 1: Copy to Storage Spaces 2012 R2 mirrored pool

     What isn't apparent from the graph above are the *numerous* stalls, where the transfer rate would dwindle to 0 for 2-5 seconds at a time. The peaks are ~50 MB/s. It took *forever* to copy that batch over.

     Compare the above graph with the one below, which is another screenshot of the *exact* same batch of files, from the exact same client machine, at almost exactly the same point in the process. The client machine was similarly tasked under both conditions. The target was the exact same server as described above, only I had destroyed the Storage Spaces pool and the associated virtual disk and created a new pool from the exact same drives using StableBit DrivePool (2.1.1.561), transferring across the exact same NIC, switches, and cabling as above. More succinctly: everything (hardware, OS, network infrastructure, originating client running Windows 10 x64) is exactly the same. The only difference is that the pooled drives are managed by DrivePool instead of Storage Spaces:

     Fig 2: Massive file transfer to a DrivePool (2.1.1.561) managed pool

     What a huge difference in performance!
I didn't believe it the first time. So, of course, I repeated it, with the same results. Has anyone else noticed this enormous performance delta between Storage Spaces and DrivePool? I think the problem is with writes to the mirrored storage pool: I had no stalls or inconsistent transfer rates when reading large batches of files from the Storage Spaces managed pool. The write performance is utterly horrid and unacceptable!

Bottom line: the combination of drastically improved performance with DrivePool over Storage Spaces, plus the *greatly* enhanced optics into the health of the drives provided by Scanner: DrivePool + Scanner >>>> Storage Spaces. Hands down, no contest for my purposes. The fact that you can mount a DrivePool-managed drive on *any* host that can read NTFS is another major bonus. You cannot do the same with Storage Spaces managed drives unless you move all of them en bloc. Kudos to you guys! <tips hat>
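The stalls described above (transfer rate dropping to ~0 for 2-5 seconds at a time) are easy to quantify if you capture a per-second throughput trace, e.g. from Performance Monitor. The helper below is entirely hypothetical, just a way to turn such a trace into a stall count; the threshold and minimum run length are arbitrary choices.

```python
def count_stalls(mb_per_sec, threshold=1.0, min_len=2):
    """Count runs of at least `min_len` consecutive samples below
    `threshold` MB/s in a per-second throughput trace.
    (Hypothetical helper for quantifying copy stalls.)"""
    stalls, run = 0, 0
    for rate in mb_per_sec:
        if rate < threshold:
            run += 1
        else:
            if run >= min_len:
                stalls += 1
            run = 0
    if run >= min_len:  # trace may end mid-stall
        stalls += 1
    return stalls

# A trace peaking around ~50 MB/s with a 3-second and a 2-second stall:
trace = [48, 50, 0, 0, 0, 47, 52, 0.5, 0.2, 49]
print(count_stalls(trace))  # 2
```

Running the same capture against both pool configurations would put hard numbers on the Storage Spaces vs. DrivePool comparison instead of eyeballing the graphs.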