High Performance SSD ZFS Pool

T_Minus

Build. Break. Fix. Repeat
Guest OSes in each of my ESXi hosts will be on an Intel S3700 RAID 10 array, or RAID 1 depending on the host, with storage/media on RAID-Z3... I want to build a high-performance pool: one to start, which I'll likely break up into separate iSCSI targets later.

Which drives would you use:

4x Samsung 850 Pro (128GB)
4x Crucial M550 (256GB)
4x Intel S3500 (160–200GB)
4x Intel 730 (480GB)

The goal is for this to be the "high performance" storage: lowest latency, the best overall writes and reads, and solid mixed performance rather than leaning read- or write-heavy. I plan to set up dedicated read-heavy and write-heavy SSD arrays next.

I know some have power-loss protection and some don't, but I also use redundant battery backup units with automated shutdown, so that's a lower priority for these systems at the moment. I have other machines that could potentially benefit from those drives if I don't use them here.

Obviously the Intel 730 provides the MOST storage (which I don't need now or in the near future), so I could go with just two in a mirrored (RAID 1) configuration for the fewest drives and the most capacity, and not worry about the minor performance differences for now since the pool won't be heavily utilized. (This may be the smartest route.)
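
Roughly what that two-drive mirror could look like, as a sketch; the pool name (perf), the device names, and the zvol size below are all made up, and the actual iSCSI export step depends on the platform (COMSTAR on illumos, ctld on FreeBSD, targetcli on Linux):

Code:
    # create a two-way mirror from a pair of Intel 730s (device names hypothetical)
    zpool create perf mirror c1t0d0 c1t1d0

    # carve out a block volume to later export over iSCSI to an ESXi host
    zfs create -V 200G -o volblocksize=8k perf/esxi-vm01

    # sanity check the layout and health
    zpool status perf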
 

T_Minus

Build. Break. Fix. Repeat
I should have stated that I already own all of these drives, so purchasing/price isn't a factor.

And all but the S3700s are NIB (new in box), or I've only run minor tests on them; they were purchased new, not used/abused.


For my non-ESXi machines I was able to snag some Intel 320s for $20 each :) Not bad for running Linux in RAID 1.
 

T_Minus

Build. Break. Fix. Repeat
You mean a SLOG with S3700s? I'm using an S3700 as the SLOG for my RAID-Z2 array, and S3700s for local VM guest OS usage. They should perform this duty great, I imagine.

I want to try using a RAID controller as the SLOG for my SSD performance array (a two-drive pool). At $120 each for 1GB it's a bit pricey, but 2GB for $240 is a lot cheaper than the ZeusDrive -- and this is only a test, so I may just remove them later.
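
For reference, attaching (and later detaching) a mirrored log device is a one-liner each way; a sketch with hypothetical pool and device names:

Code:
    # add a mirrored SLOG to the pool
    zpool add perf log mirror c2t0d0 c2t1d0

    # log vdevs can be removed again if the experiment doesn't pan out
    zpool remove perf mirror-1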

At this point I'm deciding which SSDs to go with :) out of the bunch I have. I think I'll probably just go with a mirror of the 730s since I have so many of those right now, see how it works, and if needed step up to four of them in RAID 10 (or RAID 0 backed up nightly to the RAID-Z2).
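
A sketch of both of those steps, reusing the hypothetical names from above (tank standing in for the RAID-Z2 pool):

Code:
    # grow the two-way mirror into striped mirrors (RAID 10) with a second pair
    zpool add perf mirror c1t2d0 c1t3d0

    # nightly backup: snapshot, then send incrementally to the RAID-Z2 pool
    # (the first night needs a full send, without -i)
    zfs snapshot perf/esxi-vm01@nightly-tue
    zfs send -i perf/esxi-vm01@nightly-mon perf/esxi-vm01@nightly-tue | \
        zfs recv tank/backup/esxi-vm01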

If time allows I'll most likely test out the M550s and the 850 PROs too... I also want to compare performance with ZFS vs. pass-through, plus some other configuration variants.
(All in about two weeks.)
 

Entz

Active Member
Canada Eh?
I would be curious to see how the 850 Pros do in a sync-write ZFS scenario, since I assume you'll be doing forced sync writes due to iSCSI. The M550s are likely to be well below average and to need a SLOG. If you're confident in your setup's stability you could just leave sync=standard (the default) and they would be fine (data wouldn't be guaranteed to be on disk, but it most likely would be).
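
For reference, that trade-off maps to the sync property on the dataset or zvol backing the iSCSI LUN (dataset name hypothetical):

Code:
    zfs set sync=standard perf/esxi-vm01   # default: honor whatever sync the initiator requests
    zfs set sync=always perf/esxi-vm01     # force everything through the ZIL/SLOG; safest, slowest
    zfs set sync=disabled perf/esxi-vm01   # fastest, but acknowledged writes can vanish on power loss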

+1 for the 730s: a great combination of everything. Good sync-write performance (no need for a SLOG in my experience), larger capacity, power-loss caps (maybe++), etc.
 

mrkrad

Well-Known Member
I'd suggest that if you're using consumer Samsung 850 Pros you put in a serious amount of over-provisioning (OP%). I had to do this to stabilize them without TRIM under ESXi and get consistent millisecond-level latency; otherwise you'll get the dreaded DISCONNECT/latency warnings. The M550s sound good too, but I've not had much personal experience with them; they do have some power protection.
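
One simple way to apply that OP on a fresh or secure-erased drive is to leave part of it unallocated; a Linux sketch with a hypothetical device name and a roughly 25% spare area:

Code:
    # tell the controller the whole drive is free before partitioning
    blkdiscard /dev/sdX

    # allocate only ~75% of a 512GB drive; the untouched tail works as extra OP
    sgdisk -n 1:0:+384G /dev/sdX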

Intel: you can't go wrong with Intel! As long as they aren't the SandForce variety!
 

T_Minus

Build. Break. Fix. Repeat
mrkrad said:
I'd suggest that if you're using consumer Samsung 850 Pros you put in a serious amount of over-provisioning (OP%). I had to do this to stabilize them without TRIM under ESXi and get consistent millisecond-level latency; otherwise you'll get the dreaded DISCONNECT/latency warnings. The M550s sound good too, but I've not had much personal experience with them; they do have some power protection.

Intel: you can't go wrong with Intel! As long as they aren't the SandForce variety!
I'm leaning toward the 730s, just for the extra space, and they're Intel :) If I can get a deal on some larger S3700s I'd go that route too, but I only have 200GB ones, and only enough for my guest OS pool. I MIGHT use the HGST 200GB SAS drives if I can find one more (I only have three)... and then go all-S3700 for the guest VM pool in my other host(s).

Since I plan to go NVMe by the end of summer in my other ESXi host, the one more dedicated to high-IOPS VMs, this will really just be "fast" VM storage and/or misc non-critical DB stuff :) It's going in my 24/7 "on" ESXi host, whereas the others may run as needed for projects rather than day after day.

I can see it now... using the SSD pool instead of general storage because it's faster, then selling off my spinners and slowly adding 1TB SSDs, hahaha.
 

T_Minus

Build. Break. Fix. Repeat
mrkrad said:
I'd suggest that if you're using consumer Samsung 850 Pros you put in a serious amount of over-provisioning (OP%). I had to do this to stabilize them without TRIM under ESXi and get consistent millisecond-level latency; otherwise you'll get the dreaded DISCONNECT/latency warnings. The M550s sound good too, but I've not had much personal experience with them; they do have some power protection.

Intel: you can't go wrong with Intel! As long as they aren't the SandForce variety!
I never went with the Samsung 840 Pro in servers because of what I read about how they handle cache: once it's full, the drive crawls to nothing.

Have you heard anything about that?

The M550 attracted me because of the power-loss protection, but I don't think it's 100% like the enterprise drives. I'm not sure yet; I haven't dug into the specs.
 

mrkrad

Well-Known Member
No, I've got over 30 840 Pros on LSI MegaRAID in RAID 1 that have been solid for two years under heavy SQL Server usage (CRM, AX, and a custom B2B catalog). I had to run 33% OP to get them to stabilize without TRIM, but that was a reasonable cost given the lack of cheap options back when the 830 went EOL and the 840 Pro was the only affordable option!

A good place to get opinions on consumer SSDs in RAID/server scenarios has, for me, been here and the WebHostingTalk.com datacenter/colo forum! Those folks have picked apart all of the options!

Nowadays we have so many affordable drive options that it almost doesn't make sense to rock Samsung 850 Pros, but their performance and longevity definitely rate as very positive in my books!

I've still got some 830s rocking along, nearing three years of one drive-write-per-day, well past their MWI=0 wear-out point! Knock on wood!
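
For anyone tracking their own drives against that wear-out point, the SMART attributes show it; a sketch (device name hypothetical, and attribute names vary by vendor):

Code:
    # Samsung drives report Wear_Leveling_Count; Intel drives report
    # Media_Wearout_Indicator (the MWI referenced above)
    smartctl -A /dev/sdX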

I don't shut off my servers, nor do they ever crash or lose power, so that may be contributing to my luck++ with the Samsung 830s/840 Pros, and hopefully with 850 Pros sooner than later!
 

T_Minus

Build. Break. Fix. Repeat
Great to know!

I've been a member on WHT since around 2001 :) That's where I learned of the 840 Pro issues, but it sounds like by over-provisioning them you've avoided any problems. Great to know, not that I have any anymore :) I do have some 830s I plan to use in misc Linux builds; the 830s are still working great!

Do you have more info on your 30x 840 Pro RAID setup? Chassis, backplanes/expander, controller(s), OS, IOPS, tests/screenshots, network connectivity, iSCSI or otherwise, etc.? :)
 

mrkrad

Well-Known Member
I just run four drives per LSI MegaRAID controller, all in HP ProLiant G6/G7 servers/backplanes and Dell R610/R410 backplanes! 9260s and 9271s, all running ESXi 5.1, so they're all DAS setups.

RAID 1 with spanning under ESXi as RAID 10 introduced latency issues and didn't let the controllers deliver the random IOPS I needed.
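
For the curious, creating one of those per-controller RAID 1 arrays with MegaCli looks roughly like this; the enclosure:slot IDs and adapter number are placeholders, and the cache flags depend on the controller:

Code:
    # two-drive RAID 1 on adapter 0: write-back cache, read-ahead, direct I/O
    MegaCli -CfgLdAdd -r1 [252:0,252:1] WB RA Direct -a0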

All of the HP/Dell servers have two Emulex OCE11102 dual-port SFP+ 10GbE NICs going to two Netgear XSM7224S 24-port 10GbE switches!

It all just works! Honestly, I found the use of a SAN (HP LeftHand, four units) to be more burden than gain! It was a real PITA to script a proper power-down sequence for the APC units that would shut off all of the servers before shutting down the four SAN nodes, so they wouldn't freak out and vote a member out of quorum. The convenience was not worth the extra latency of iSCSI, IMO.
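
That ordering problem reads roughly like this as an apcupsd doshutdown hook; every hostname below is hypothetical, and the SAN-side command is a placeholder for whatever the LeftHand CLI actually offers:

Code:
    #!/bin/sh
    # /etc/apcupsd/doshutdown (sketch): power hosts down before the SAN nodes
    for host in esxi1 esxi2; do
        ssh root@"$host" poweroff &
    done
    wait
    sleep 120                        # give the ESXi hosts time to finish
    for node in san1 san2 san3 san4; do
        ssh admin@"$node" shutdown   # placeholder for the LeftHand shutdown command
    done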

DAS, for me, is far simpler and more performant than any SAN I can afford to manage!
 

T_Minus

Build. Break. Fix. Repeat
Entz said:
I would be curious to see how the 850 Pros do in a sync-write ZFS scenario, since I assume you'll be doing forced sync writes due to iSCSI. The M550s are likely to be well below average and to need a SLOG. If you're confident in your setup's stability you could just leave sync=standard (the default) and they would be fine (data wouldn't be guaranteed to be on disk, but it most likely would be).

+1 for the 730s: a great combination of everything. Good sync-write performance (no need for a SLOG in my experience), larger capacity, power-loss caps (maybe++), etc.
Why do you think the M550 would be bad? I believe they have data-loss protection, but also some type of write acceleration; is that what messes them up?
 

T_Minus

Build. Break. Fix. Repeat
I think I'm going to end up using the HGST SAS SSDs once I find a fourth. Not the fastest drives IOPS-wise, but they have rather nice endurance.
 

Entz

Active Member
Canada Eh?
Their data-loss protection only covers data in flight, not true end-to-end protection like the Intels have.

That comment was in regard to sync writes. I haven't tested one directly, but I have done M500s and MX100s, which are very similar. The problem is they suck at sync writes, so you either need a SLOG to help (which will lower your overall performance) or you ignore sync writes. By suck I mean a 480GB 730 was 12x faster at sync=always as a SLOG than a 512GB MX100 in one of my tests. So I would expect an array of 730s to be significantly faster than one of M550s if you didn't use a SLOG and used the on-disk ZIL.

YMMV of course :)
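
If anyone wants to reproduce that comparison, a small fio job that fsyncs after every write gets at the same behavior; a sketch with the target path as a placeholder:

Code:
    # 4k sequential writes, one fsync per write, mimicking sync-heavy ZIL traffic
    fio --name=syncwrite --filename=/perf/testfile --rw=write --bs=4k \
        --size=1G --fsync=1 --ioengine=psync --iodepth=1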
 

T_Minus

Build. Break. Fix. Repeat
Good to know! Thanks; now I'm going to have to do some testing!