High Performance SSD ZFS Pool

Discussion in 'Hard Drives and Solid State Drives' started by T_Minus, Mar 19, 2015.

  1. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,883
    Likes Received:
    1,509
    ESXi guest OSes in each of my host servers will be on an Intel S3700 RAID 10 or RAID 1 array, depending on the host, with storage/media on RAID-Z3... I want to build a high performance pool. One to start, and likely break it up into separate iSCSI targets later.

    Which drives would you use:

    4x Samsung 850 Pro (128GB)
    4x Crucial M550 (256GB)
    4x Intel S3500 (160-200GB)
    4x Intel 730 (480GB)

    The goal is for this to be the "high performance" storage: lowest latency, best overall writes and reads, and good mixed performance rather than being read- or write-heavy. I plan to set up "read"-heavy and "write"-heavy SSD arrays next.

    I know some have power loss protection and some don't, but I also use redundant battery units with automated shutdown, so that is a lower priority at the moment for these systems. I have other systems that could potentially benefit from it if I don't use those drives here.

    Obviously the Intel 730 provides MORE storage (which I don't need now or in the near future), so I could go with just two in a RAID 1 configuration for the fewest drives and the most storage, and not worry about the minor differences for now since it isn't heavily utilized. (This may be the smartest route.)
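
    (For reference, a minimal sketch of how a mirrored SSD pool and an iSCSI-exportable zvol could be set up on an illumos/OmniOS-style ZFS box using COMSTAR; the pool name, device names, and sizes below are hypothetical placeholders, not the actual build:)

        # Two mirrored pairs (RAID 10 equivalent) out of four SSDs
        zpool create fastpool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

        # A zvol to export as an iSCSI LUN for ESXi
        zfs create -o compression=lz4 -V 200G fastpool/vmstore

        # COMSTAR: register the zvol as a logical unit, expose it, and create a target
        stmfadm create-lu /dev/zvol/rdsk/fastpool/vmstore
        stmfadm add-view <LU-GUID-printed-by-create-lu>
        itadm create-target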
     
    #1
  2. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    11,612
    Likes Received:
    4,565
    Buying new, the 730s are nice drives.
     
    #2
    Patriot likes this.
  3. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,883
    Likes Received:
    1,509
    Should have stated that I already own all these drives, so purchasing/price isn't a factor.

    And all but the S3700s are NIB, or have only had minor tests run on them; they were purchased new, not used/abused.


    For my non-ESXi machines I was able to snag some Intel 320s for $20 each :) Not bad to run Linux on in RAID 1.
     
    #3
  4. jtreble

    jtreble Member

    Joined:
    Apr 16, 2013
    Messages:
    88
    Likes Received:
    10
    Could you change things up a bit: use two S3700s as the ZIL and mirror the remaining SSDs (one pool)? Just a thought.
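
    (For reference, attaching a mirrored pair of S3700s as a log vdev is a one-liner; the pool and device names below are placeholders:)

        # Add two S3700s as a mirrored SLOG (separate log device) to an existing pool
        zpool add fastpool log mirror c2t0d0 c2t1d0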
     
    #4
  5. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,883
    Likes Received:
    1,509
    You mean a SLOG with S3700s? I'm using an S3700 as the SLOG for my RAID-Z2 array, and S3700s for local VM guest OS usage. I imagine they should perform this duty great.

    I want to try using a RAID controller (2-drive pool) as the SLOG for my SSD performance array. $120 each for 1GB of cache is a bit pricey, but 2GB for $240 is a lot cheaper than the ZeusDrive -- and this is only a test, so I may just remove them.

    At this point I'm deciding which SSD to go with :) out of the bunch I have. I think I'll probably just go with a mirror of the 730s since I have so many of those right now, see how it works, and if needed go up to four of them in a RAID 10 or RAID 0 that gets backed up nightly to the RAID-Z2.
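
    (A nightly sync to the RAID-Z2 pool could be as simple as an incremental ZFS snapshot send; the dataset and pool names here are placeholders, and last night's snapshot is assumed to still exist on both sides:)

        # Snapshot the fast pool and replicate incrementally to the RAID-Z2 pool
        zfs snapshot fastpool/vmstore@2015-03-20
        zfs send -i fastpool/vmstore@2015-03-19 fastpool/vmstore@2015-03-20 | \
            zfs receive tank/backup/vmstore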

    If time allows I'll most likely test out the M550s and the 850 Pros too... I also want to compare performance with ZFS vs. pass-through, and some other configuration variants.
    (All in about two weeks.)
     
    #5
  6. Entz

    Entz Active Member

    Joined:
    Apr 25, 2013
    Messages:
    269
    Likes Received:
    62
    I would be curious to see how the 850 Pros do in a sync-write ZFS scenario, assuming you are going to be doing forced sync writes due to iSCSI. The M550s are likely going to be well below average and require a SLOG. If you're confident in your setup's stability you could just leave sync=standard (the default) and they would be fine (data wouldn't be guaranteed to be on disk, but most likely would be).
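
    (For reference, sync behaviour is a per-dataset ZFS property; the dataset name below is a placeholder:)

        zfs get sync fastpool/vmstore            # show the current setting
        zfs set sync=standard fastpool/vmstore   # default: honour application/iSCSI sync requests
        zfs set sync=always fastpool/vmstore     # force every write through the ZIL/SLOG
        zfs set sync=disabled fastpool/vmstore   # fastest, but in-flight data is lost on a crash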

    +1 for the 730s: a great combination of everything. Good sync-write performance (no need for a SLOG in my experience), larger capacity, power-loss caps (maybe++), etc.
     
    #6
  7. mrkrad

    mrkrad Well-Known Member

    Joined:
    Oct 13, 2012
    Messages:
    1,241
    Likes Received:
    51
    I'd suggest that if you are using consumer Samsung 850 Pros you put in a serious amount of over-provisioning (OP). I had to do this to stabilize them without TRIM under ESXi and get consistent millisecond latency; otherwise you'll get the dreaded DISCONNECT/latency warnings. The M550s sound good too, but I've not had much personal experience with them; they do have some power protection.
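
    (One way to apply that OP on a bare drive is to shrink its visible capacity with an HPA before first use; the device name and sector count below are placeholders and need to be worked out per drive:)

        # Show current and native max sectors
        hdparm -N /dev/sdX
        # Permanently limit the visible capacity to ~67% of the native max (~33% OP)
        hdparm -Np<sector-count> --yes-i-know-what-i-am-doing /dev/sdX
        # (Alternatively: secure-erase the drive and only partition ~2/3 of it.)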

    Intel - you can't go wrong with Intel! As long as they aren't the SandForce variety!
     
    #7
    T_Minus likes this.
  8. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,883
    Likes Received:
    1,509
    I'm leaning toward the 730s just for the extra space, and it's Intel :) If I can get a deal on some larger S3700s I would go that route too, but I only have 200GB models, and only enough for my guest OS pool. I MIGHT use the HGST 200GB SAS drives if I can find one more (I only have three)... and then go all S3700 for the guest VM pool in my other host(s).

    Since I plan to go NVMe by the end of summer in my other ESXi host, which is more dedicated to high-IOPS VMs, this will really just be 'fast' VM storage and/or misc non-critical DB stuff :) This is going in my 24/7 "ON" ESXi host, where the others may run as needed for projects rather than day after day.

    I can see it now... using the SSD pool instead of general storage because it's faster, and just selling off my spinners and slowly adding 1TB SSDs, hahaha.
     
    #8
  9. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,883
    Likes Received:
    1,509
    I never went with the Sammy 840 Pro in servers because of what I read about how they handle their cache: once it's full, the drive slows to a crawl.

    Heard anything on that?

    The M550 attracted me because of the power-loss protection it has, but I don't think it's 100% the same as enterprise drives. I'm not sure yet; I haven't dug into the specs.
     
    #9
  10. mrkrad

    mrkrad Well-Known Member

    Joined:
    Oct 13, 2012
    Messages:
    1,241
    Likes Received:
    51
    No, I've got over 30 840 Pros on LSI MegaRAID in RAID 1 that have been solid for two years under heavy SQL Server usage (CRM, AX, custom B2B catalog). I had to run 33% OP to get them to stabilize without TRIM, but that was a reasonable cost given the lack of cheap options back when the 830 went EOL and the 840 Pro was the only affordable option!

    Good places to get opinions on consumer SSDs in RAID/server scenarios have, for me, been here and the WebHostingTalk.com datacenter/colo forum! These folks have picked apart all of the options!

    Nowadays we have so many affordable drive options that it almost doesn't make sense to rock Samsung 850 Pros, but the performance and longevity are definitely very positive in my book!

    I've still got some 830s rocking along, nearing three years at one drive write per day, well past their MWI=0 wear-out point! Knock on wood!

    I don't shut off my servers, nor do they ever crash or lose power, so that may be contributing to my luck++ with the Samsung 830/840 Pro, and hopefully the 850 Pro sooner rather than later!
     
    #10
  11. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,883
    Likes Received:
    1,509
    Great to know!

    I've been a member on WHT since around 2001 :) That's where I learned of the 840 Pro issues, but it sounds like by over-provisioning them you've not run into any problems. Great to know, not that I have any anymore :) I do have some 830s I plan to use in misc Linux builds; the 830s are still working great!

    Do you have more info on your 30x 840 Pro RAID? Chassis, backplanes/expander, controller(s), OS, IOPS, tests/screenshots, network connectivity, iSCSI or otherwise, etc.? :)
     
    #11
  12. mrkrad

    mrkrad Well-Known Member

    Joined:
    Oct 13, 2012
    Messages:
    1,241
    Likes Received:
    51
    I just run four drives per LSI MegaRAID controller, all in HP ProLiant G6/G7 servers/backplanes and Dell R610/R410 backplanes! 9260s and 9271s, all running ESXi 5.1, so they are all DAS setups.

    RAID 1 with spanning under ESXi as RAID 10 presented latency issues and did not allow the controllers to deliver the random IOPS that I needed.

    All of the HP/Dell servers have two Emulex OCE11102 dual-port SFP+ 10GbE NICs going to two Netgear XSM7224S 24-port 10GbE switches!

    It all just works! Honestly, I found the SAN (HP LeftHand, 4 units) to be more burden than gain! It was a real PITA to write a proper power-down script for the APC units that would shut off all of the servers before shutting down the 4 SAN nodes, so they wouldn't freak out and vote a member out of quorum. The convenience was not worth the extra latency of iSCSI, IMO.

    DAS for me is far simpler and more performant than any SAN that I can afford to manage!
     
    #12
  13. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,883
    Likes Received:
    1,509
    Why do you think the M550 would be bad? I believe they have data-loss protection, but also some type of write acceleration; is that what messes them up?
     
    #13
  14. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,883
    Likes Received:
    1,509
    I think I'm going to end up using the HGST SAS SSDs once I find a fourth. Not the fastest drives for IOPS, but they have rather nice endurance.
     
    #14
  15. Entz

    Entz Active Member

    Joined:
    Apr 25, 2013
    Messages:
    269
    Likes Received:
    62
    The data-loss protection is only for data in flight, not true end-to-end like the Intels'.

    That comment was in regard to sync writes. I have not tested one directly, but I have tested M500s and MX100s, which are very similar. The problem is they suck at sync writes, so you either need a SLOG to help (which will lower your overall performance) or ignore sync writes. By suck I mean a 480GB 730 was 12x faster at sync=always as a SLOG than a 512GB MX100 in one of my tests. So I would expect that an array of 730s would likely be significantly faster than one of M550s if you didn't use a SLOG and relied on the on-disk ZIL.
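
    (A quick way to reproduce that kind of comparison is a small synchronous-write fio run against a file on a test dataset in the pool; the path below is a placeholder:)

        # 4K random sync writes (O_SYNC) so every write goes through the ZIL/SLOG path
        fio --name=syncwrite --filename=/fastpool/test/fio.dat --size=4G \
            --rw=randwrite --bs=4k --ioengine=psync --sync=1 \
            --runtime=60 --time_based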

    YMMV of course :)
     
    #15
  16. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,883
    Likes Received:
    1,509
    Good to know! Thanks, now I'm going to have to do some testing!
     
    #16