
~Slyfox


Reputation Activity

  1. Like
    ~Slyfox reacted to YagottaBKiddin in DrivePool FTW vs. Storage Spaces 2012 R2   
    Good evening everyone.  I found this very interesting, fascinating even:
     
    I had just set up a Windows 2012-R2 Storage Server, with ~20 TiB of drives to function primarily as a media server running Emby and file-server endpoint for various backups of machine images etc.  
     
    CPU: Intel Core i7-4790K
    RAM: 32 GiB DDR3 2666MHz.  
    OS is hosted on an M.2 PCIe 3.0 x4 SSD (128 GiB) *screaming fast*. 
    5 x 4 TiB HGST DeathStar NAS (0S003664) 7200 RPM/64 MB cache SATA-III
     
    Initially, the machine was set up with one large mirrored storage pool (no hot spares) composed of the 5 primordial 4 TiB HGST SATA-III disks, all connected to SATA-III ports on the motherboard (Z97 based).
     
    The graph below was taken during the process of copying ~199 files in multiple directories of various sizes (15K - 780 GiB) for a total of ~225 GiB.
     
     

    Fig 1: Copy To Storage Spaces 2012-R2 mirrored pool
     
    What isn't apparent from the graph above are the *numerous* stalls where the transfer rate would dwindle down to 0 for 2-5 seconds at a time. The peaks are ~50 MB/s. Took *forever* to copy that batch over.
     
    Compare the above graph with the one below, which is another screenshot of the *exact* same batch of files, from the exact same client machine, at almost exactly the same point in the process. The client machine was similarly tasked under both conditions. The target was the exact same server described above, except that I had destroyed the Storage Spaces pool and its associated virtual disk, and created a new pool from the exact same drives using StableBit DrivePool (2.1.1.561), transferring across the exact same NIC, switches, and cabling. More succinctly: everything (hardware, OS, network infrastructure, and the originating client, a Windows 10 x64 machine) is exactly the same.
     
    The only difference is that the pooled drives are managed by DrivePool instead of Storage Spaces:
     

    Fig 2: Massive file transfer to DrivePool (2.1.1.561) managed pool.
     
    What a huge difference in performance!  I didn't believe it the first time.  So, of course, I repeated it, with the same results.
     
    Has anyone else noticed this enormous performance delta between Storage Spaces and DrivePool?
     
    I think the problem lies in writes to the mirrored storage pool; I had no problems with stalls or inconsistent transfer rates when reading large batches of files from the pool managed by Storage Spaces. The write performance, however, is utterly horrid and unacceptable!
     
    Bottom Line: The combination of drastically improved performance with DrivePool over Storage Spaces, plus the *greatly* enhanced visibility into the health of the drives provided by Scanner: DrivePool + Scanner >>>> Storage Spaces. Hands down, no contest for my purposes. The fact that you can mount a DrivePool-managed drive on *any* host that can read NTFS is another major bonus. You cannot do the same with Storage Spaces-managed drives unless you move all of them en bloc.
     
    Kudos to you guys!  <tips hat>
     
  2. Like
    ~Slyfox reacted to Carlo in Cloud Drive Using up my Local Storage   
    I hear what you're saying, Chris, but I don't fully agree.
    I do understand that you need some "working space" to store a few blocks to have ready for the thread(s) doing the uploading.
     
    However, from a high-level view it appears a background process is creating all these blocks, getting them ready for the upload, and just does its thing without communicating with the uploading thread(s). It appears this process has no concept of how far ahead of the upload it is, and just continues until it has created all the blocks needed for the duplicate section or it runs out of space (whichever comes first).
     
    In reality it only needs to stay X blocks ahead of the upload threads. Whether you have 5 blocks ready or 5,000 makes no difference, since you can only upload so fast; anything more than that is "wasted". This shouldn't be hard to calculate, since you know at any time the maximum number of upload threads available.
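    The "stay X blocks ahead" behavior described above amounts to a bounded producer/consumer queue. A minimal sketch in Python (a hypothetical illustration of the idea, not CloudDrive's actual implementation): the block-preparation thread simply blocks once MAX_AHEAD blocks are pending, instead of pre-building every block to local disk.

    ```python
    import queue
    import threading

    # Hypothetical parameters; a real implementation would size MAX_AHEAD
    # as a small multiple of the configured upload thread count.
    MAX_AHEAD = 5
    UPLOAD_THREADS = 2

    ready_blocks = queue.Queue(maxsize=MAX_AHEAD)  # bounded "working space"
    uploaded = []
    uploaded_lock = threading.Lock()

    def prepare_blocks(total_blocks):
        """Producer: waits automatically once MAX_AHEAD blocks are pending."""
        for i in range(total_blocks):
            ready_blocks.put(f"block-{i}")  # blocks while the queue is full
        for _ in range(UPLOAD_THREADS):
            ready_blocks.put(None)          # one shutdown sentinel per worker

    def upload_worker():
        """Consumer: drains prepared blocks, standing in for an upload thread."""
        while True:
            block = ready_blocks.get()
            if block is None:
                break
            with uploaded_lock:
                uploaded.append(block)

    workers = [threading.Thread(target=upload_worker) for _ in range(UPLOAD_THREADS)]
    for w in workers:
        w.start()
    prepare_blocks(20)
    for w in workers:
        w.join()

    print(len(uploaded))  # all 20 blocks uploaded; never more than 5 were pending
    ```

    The bounded `maxsize` is what provides the backpressure: preparation can never run more than MAX_AHEAD blocks ahead of the uploads, regardless of how much data remains to transfer.
    
    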
     
    But with all that said, I just want to clarify that I'm not against a reasonable working storage area. It just doesn't need to be 4 TB, which will take a few days at best to upload. In no way, shape, or form should it need to be that far ahead.
     
    Carlo
     
    PS: I know this is a BETA and fully expect issues, so no complaints in that regard. Just hoping to give feedback along the way to help you guys deliver the best product possible, as it has so much promise, especially combined with DrivePool.
  3. Like
    ~Slyfox reacted to Christopher (Drashna) in ReFS   
    You are very welcome.
     
    And we do apologize for not being able to get to this. However, we are a very small company, so getting to everything is impossible. We'd love to add the feature (though ReFS really shines with Storage Spaces, unfortunately), but there aren't enough hours in the day. 
     
    And just FYI, I did bump this, as I've talked to Alex about it recently, and adding the ability to use ReFS drives may work (though it will need extensive testing, which Alex doesn't have time for at the moment).