
LindsayCole

Members
  • Posts: 35
  • Joined
  • Last visited

Posts posted by LindsayCole

  1. Hey All,

     

     I'm using my server for Plex. I recently moved my Plex appdata to the pool drive.

     

    Since then, I am unable to populate metadata inside Plex.

     

    From what I understand, the problem is with hard linking.

     

    Does StableBit DrivePool not support hard linking? Is there a plan in place to support it?
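For what it's worth, a quick way to test whether hard links actually work on a given volume is to try creating one and checking the link count. This is just a sketch; the pool drive letter in the usage comment is an assumption, not something DrivePool-specific.

```python
import os

def supports_hardlinks(directory):
    """Try to create a hard link inside `directory`; return True on success."""
    src = os.path.join(directory, "hardlink_src.tmp")
    dst = os.path.join(directory, "hardlink_dst.tmp")
    try:
        with open(src, "w") as f:
            f.write("test")
        os.link(src, dst)  # raises OSError if the volume/driver refuses
        ok = os.stat(src).st_nlink >= 2
    except OSError:
        ok = False
    finally:
        # Clean up both names, whether or not the link succeeded.
        for p in (src, dst):
            try:
                os.remove(p)
            except OSError:
                pass
    return ok

# e.g. supports_hardlinks("P:\\")  # where P: is the pool drive (hypothetical)
```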

     

     

  2. Burst Test: I tested all drives (for a 10-minute period, admittedly); they all tested 210+.

     

    A bit of insight into my setup:

     

    Server 2012 R2

    Dell C1100

    24GB RAM

    2x Xeon L5630s

    IBM 1015 flashed to IR mode.

    SE3016 SAS Enclosure

    Plex Media Server

    Sonarr

    uTorrent

    DrivePool / Scanner

     

    I could have sworn I had access to the SMART data a month ago; checking now, I have nothing. I recently learned that the 1015 uses SAT (SCSI to ATA Translation) - does DrivePool support SAT?

     

     

    Thanks!

  3. Heyas

     

    I have a 27.1 TB array, consolidated through DrivePool.

     

    I suspect something is dragging the array down. I am getting intermittent drops in performance (this is primarily a Plex server) and am having trouble pinning down the reason for the slowdown; I'm trying to determine whether I have a problem with the storage subsystem.

     

    Is there an easy way to test each drive for speeds?
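One rough way to check each drive individually (a sketch only, not a substitute for a proper benchmark tool) is to time a sequential write and read of a test file on each underlying disk. The drive letters in the usage comment are hypothetical.

```python
import os
import time

CHUNK = 1024 * 1024      # 1 MiB per I/O
TEST_SIZE = 64 * CHUNK   # small for illustration; use several GiB on real disks

def sequential_speed(path):
    """Write then read a test file at `path`; return (write_MBps, read_MBps).

    Note: OS caching will inflate the read figure unless the test file is
    much larger than RAM.
    """
    data = os.urandom(CHUNK)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(TEST_SIZE // CHUNK):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hit the disk
    write_s = time.perf_counter() - start

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(CHUNK):
            pass
    read_s = time.perf_counter() - start
    os.remove(path)

    mb = TEST_SIZE / (1024 * 1024)
    return mb / write_s, mb / read_s

# Example (hypothetical mount points for each pooled drive):
# for drive in ("D:\\", "E:\\", "F:\\"):
#     print(drive, sequential_speed(os.path.join(drive, "speedtest.tmp")))
```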

  4. > Ah, yeah, that's a lot of storage, and I can sympathize (with my ~42TB of pooled storage) (and those IBM ServeRAID M1015 cards are nice, aren't they :) ).

     I love them!

     > If RichCopy is doing the writes in parallel, it really depends on the disks in the pool, and how much free space they have (again, measured in absolute size, not percent). If they're all about equal, then they should, in theory, write to each disk separately and speed up the process. But still, that depends.

     I did some testing: with only two threads, I saw a speed increase to about 175-325 MB/s. Seems like it may have been an issue with RichCopy.

     > As for the SSD Optimizer balancer, nope, doesn't matter. You could use a slow 5400 RPM laptop drive, if you really wanted to... But most people use SSDs for speed and not 15k RPM drives. So that's why it has the name. So you absolutely could use the 15k RPM drives as the "SSD" drives here. In fact, I was going to recommend that. :)

     Perfect, I'm leaning towards that.

     > Additionally, in case it helps (sorry for not mentioning this sooner, as well), if these drives are all on the same system, you could just "seed" the pool:
     > http://wiki.covecube.com/StableBit_DrivePool_Q4142489
     >
     > StableBit DrivePool can absolutely use RAID drives, as long as you're not using Dynamic disks for the RAID (but I suspect you're using hardware RAID here). Just add the existing RAIDed drives to the pool, "seed" the pool, and then you're set.
     > And if you want to migrate the data off of the RAID drives, you could use the "Disk Usage Limiter" to do this. However, this may be slower than a straight file copy, as balancing runs at background IO priority. This can be changed: http://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings (set FileBalance_BackgroundIO to "False")

     I saw that; I needed to move the data out of a Storage Space mounted as a .vhdx, so seeding wouldn't have worked well for me.

     > And if you're moving a lot of small files, then yes, that can definitely happen. That's the thing that sucks about small files... they really kill the transfer speed. But if that's 50MB/s for small files, that's not bad.

     That definitely seems to have been the issue.

     Have you had any experience yourself, using SnapRAID with DrivePool to achieve parity?

     I'm at a critical spot with the build, where I need to make the decision, and I don't want a half-assed solution ;)

     It seems like FlexRAID will do everything I want, but I really like DrivePool :) - though FlexRAID seems to really complicate things. I want to set and forget ;-)

  5. Hey Drash, thanks for the response.

     

    Let me give you a clear view of my topology.

     

    I have a server, with 20TB of internal storage. I have a SAS card, flashed to IT mode, connected to a SAS enclosure with another 20TB of storage. So networking is out of the picture.

    I am using RichCopy (which is doing three write threads in parallel). My hope was that, because DrivePool would be writing to multiple disks (from a faster source), I would get increased speeds, since each individual drive can be written to on its own.

     

     I was most definitely making assumptions when I was doing this setup, but that's how I learn best ;)

     

    If I used a balancing plug-in, say Disk Space Equalizer, do you expect that speeds would increase? I just want to set my expectations properly :)

    With the SSD Optimizer, does it only work with SSDs? Or could I configure it to work with some 15k RPM drives as well?

     

    Also, it's hard to say for sure, but when I see the drops to 50MB/s, it could also be smaller files, which could easily bottleneck the transfer.

  6. Hello,

     

    I currently have a massive transfer underway, 12TB via three write threads at a time (using RichCopy).

     

    I am seeing only 4 drives (out of 16) getting used, read / write speeds are average at best (50-125 MB/s).

     

    Is this the expected performance? I expected faster, given that the source is all 15k RPM drives and the destination is multiple drives.

     

    Is it normal for it to only use a few of the drives at a time?

     

    Is it normal to only be getting a single drive's throughput at a time, even though I see it putting files on two different drives at once?

     

    Thanks!

    (screenshot attached: post-1788-0-99456200-1417032399_thumb.jpg)
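The parallel-copy behavior described above can be measured directly. A minimal sketch, assuming the target paths land on different physical disks (the paths here are placeholders): time simultaneous writes across several targets and compute the aggregate throughput.

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1024 * 1024       # 1 MiB per write
CHUNKS_PER_FILE = 64      # 64 MiB per target; use far more on real disks

def write_file(path):
    """Sequentially write CHUNKS_PER_FILE chunks to `path` and sync to disk."""
    data = os.urandom(CHUNK)
    with open(path, "wb") as f:
        for _ in range(CHUNKS_PER_FILE):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())

def aggregate_write_speed(paths):
    """Write to all `paths` in parallel; return combined throughput in MB/s."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=len(paths)) as pool:
        list(pool.map(write_file, paths))  # one thread per target path
    elapsed = time.perf_counter() - start
    total_mb = len(paths) * CHUNKS_PER_FILE  # 1 MiB chunks
    return total_mb / elapsed

# Hypothetical targets on two different pooled disks:
# aggregate_write_speed(["D:\\test1.bin", "E:\\test2.bin"])
```

If the targets really are independent spindles, the aggregate figure should exceed a single drive's sequential rate; if it doesn't, the bottleneck is elsewhere (source, controller, or copy tool).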

  7. I am recovering a Storage Spaces array that is 12.6TB in size, using ReclaiMe's Storage Spaces Recovery to do it.

     

    It's a long story, but essentially I was running in a configuration unsupported by Microsoft, presenting each drive to Windows as its own RAID 0, and then had a RAID controller fail. When I hooked the 29 drives back up, because they had been presented to Windows through a RAID card, Windows did not know how to enumerate them, so it couldn't recreate the Space.

     

    To further complicate things, no one has really figured out how to recover when Microsoft's deduplication is in use. So I had one of the developers of ReclaiMe remote in; she gave me the ability to export the entire array as a disk image (.vhdx), which should work okay, since dedupe is handled automatically within it.

     

    So I am restoring a single 12.6TB file. After the recovery it shouldn't be a big deal if I want to use DrivePool, since I will be removing all of the data from the .vhdx. Granted, this is an extremely unique (and hopefully one-time) problem.

     

    Still deciding if it is a good fit for me or not. I really like the ability to use parity, as I have a fair bit of data that I want to be able to survive a drive failure; but using DrivePool for this would require almost double the storage for duplication, and I don't have the ability to grow to that size at this time.

     

    A normal RAID setup isn't a good fit for me, as I need to be able to add drives of mismatched sizes on the fly.

     

    Storage Spaces will work okay for me, but I dislike its calculations of how much space it makes available; it just doesn't follow a logical convention. For example, I create a simple space (no redundancy) and add all drives, for a total capacity of 15.5TB, yet it only allows me to create a volume of 13.6TB, with 1.82TB left unused that I can only use to create another volume. Which makes no real sense.

     

    I want the flexibility and parity of Storage Spaces, but with a logical, intuitive interface like DrivePool's. I am managing about 27 hard drives, plus two SSDs for caching, plus two more in RAID 1 for the OS. All drives are now behind an HBA, so I won't run into this problem in the future.

     

    Wow, this really devolved into a rant.

     

    Thanks for the help.

  8. Hello All,

     

     I ran into a problem using Server 2012 R2 with DrivePool. I am in the testing phase before rolling it out for my setup here.

     

     It looked easy enough: I set all my drives to pool and started a restore from my old server to my new server. DrivePool listed 18TB available, and viewing it under Computer, the disk agreed. However, Disk Management showed the DrivePool disk tapped out at 2TB.

     

     When I ran my restore to the DrivePool volume, it stopped at 3.6TB, reported insufficient space, and tapped out. It filled one 4TB hard drive and called it a day, when there was still another 8TB to restore. (The file was a .vhdx.) Does DrivePool not support file sizes greater than the capacity of a single disk? In this case, the file was 12.6TB.

     

     I have reverted to Storage Spaces in the interim (this is essentially my swing server, so I can reconfigure the main server with fresh drives), so I will still need to figure out the plan going forward, which hopefully includes DrivePool.

     

    Thanks,
