All SSD home NAS / SAN for VMware


Fairlight

New Member
Hi guys

I was looking for some recommendations on what I could use software wise to provide storage to my VMware Lab, this is the kit I have:

2 x Dell R710s running VMware ESXi 6 (one with 288GB RAM, the other a measly 48GB, soon to be upgraded); both have 6 x 450GB 15K SAS drives and 2 x QLogic/Brocade BR-1020 10GbE SFP+ PCIe cards

1 x Dell R510 with 32GB RAM and dual quad-core CPUs; this unit has 8 x 450GB Samsung SSDs sitting in it, a Dell H310 RAID controller, and 1 x QLogic/Brocade BR-1020 10GbE SFP+ PCIe card

On the switch side:

1 x Cisco 4948-10G-E with 2 x 10GbE X2 modules
3 x Cisco 3560E (10G model) also with 2 x 10GbE X2 modules

I bought the R510 with the intention of using it as a storage node full of SSDs aimed at maximum performance, but having tried FreeNAS I see that it doesn't appear to support the BR-1020 10GbE cards, which is annoying!

So I'm looking for some recommendations from you guys on what free software you would choose to run on the R510 to maximise the performance of those SSDs. I was hoping to present LUNs to VMware via 10GbE iSCSI using the BR-1020s.

Any suggestions?

I was looking at:

FreeNAS, but it doesn't seem to work with the BR-1020
ScaleIO, which I just saw and downloaded, however I only have one storage node full of SSDs...

Much appreciate any info.

Thanks
 

gea

Well-Known Member
You can run a virtualized SAN on both ESXi machines, as I have done and suggested for many years now: use the QLogic on ESXi and connect the virtual SANs via vmxnet3 vnics. Storage must be connected directly to the storage VM via pass-through.

This gives you near-barebone performance, and you have 2 ESXi servers and 2 SAN servers for any use/HA/failover cases. See my concept (with a ready-to-use Solarish/ZFS storage VM template):

http://napp-it.org/doc/downloads/napp-in-one.pdf
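
In the storage VM's .vmx file the relevant pieces end up roughly like the lines below; the device ID is only an example (the vSphere client fills in the real values when you add the PCI device), and pass-through requires the guest memory to be fully reserved:

  # vnic for storage traffic between ESXi and the storage VM
  ethernet1.virtualDev = "vmxnet3"
  # HBA passed through to the storage VM (ID is an example only)
  pciPassthru0.present = "TRUE"
  pciPassthru0.id = "03:00.0"
  # full memory reservation, required for PCI pass-through (value in MB)
  sched.mem.min = "16384"
  sched.mem.pin = "TRUE"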
 

Fairlight

New Member
Hi gea

Thanks for the link and suggestion. I'm certainly open to the idea of running VMware on the R510, as it obviously gives me more VMware compute, but is the H310 going to be up to the job for this? If not, can you suggest an HBA with more grunt? Otherwise I'm going to try it and will let you know the results.

Thanks again
 

gea

Well-Known Member
The Dell H310 (with IT-mode firmware from the LSI 9211) is a very good HBA for ZFS.
A little faster for SSD-only storage are controllers based on the 3008 chipset, like the LSI 9300-8i.
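
The usual H310 cross-flash sequence looks roughly like this (run from a DOS/UEFI boot stick; the exact file names depend on the firmware package you download, so treat it as a sketch):

  # wipe the Dell SBR and the old flash (megarec, from DOS)
  megarec -writesbr 0 sbrempty.bin
  megarec -cleanflash 0
  # reboot, then flash the LSI 9211-8i IT firmware
  sas2flsh -o -f 2118it.bin
  # restore the SAS address printed on the card's sticker
  sas2flsh -o -sasadd 500605bxxxxxxxxx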
 

T_Minus

Build. Break. Fix. Repeat
The only upgrade would be, like @gea said, to get a 3008 chipset, which is capable of 12Gb/s SAS3. For SSDs it would be an upgrade, and if you want the 'best' performance then you'll need to go with a 3008 chipset without an expander.

Keep in mind they sell some (much rarer) "2 in 1" HBAs too, so you can get 2x HBAs on 1 card, but if you're using all SSDs then, depending on the PCIe slot, it may not matter, as you may not use all the bandwidth of even 1x HBA anyway.
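
Rough numbers, assuming ~500MB/s per SATA SSD: 8 x 500MB/s = ~4GB/s from the drives. A SAS2 HBA in a PCIe 2.0 x8 slot tops out around 4GB/s, while a 3008-based card in a PCIe 3.0 x8 slot has roughly 7.9GB/s available, so a single SAS3 HBA leaves headroom for all 8 SSDs.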
 

whitey

Moderator
First thought... don't let a $20-30 component (the BR-1020 10G card) derail you from using FreeNAS. While it is annoying, an Intel X520-DA1/DA2 or a Chelsio adapter (I hear they work great w/ BSDs; I think the models are T320/T420/etc.) may be the trick. Get rid of that 1020; I had to once due to switch/NIC optic lock-in. No biggie.

The H310 is great for VT-d passthrough to a storage appliance. Just boot ESXi via USB / SATA DOM / a small SSD on an onboard SATA port, and all your base are belong to ZFS via the H310 :-D

OmniOS and napp-it are also good options indeed!

EDIT: One suggestion though: if you're gonna dabble in both storage platforms, save yourself some trouble/heartache and go the Intel route. I have used Intels in both OmniOS (and other Illumos derivatives) as well as in FreeNAS (and other BSDs) with no issues. The only reason I say that is I'm unsure how compatible Chelsios are w/ Illumos, although they are THE 'go-to' in BSD variants. I'm running an AIO here as well, so I like gea's idea of getting a lil' dual-purpose action outta that R510.
 

Fairlight

New Member
Hey guys

First off, many apologies for taking an age to reply; after my post, life and work swamped me for most of this week.

Some great recommendations here and for that thank you all!

I am currently playing around with the setup before I settle on a given choice. At the moment I have gone for the simple approach (Patrick, thanks for your recommendations also):

1. Dell R510 - bare-metal CentOS 7 x64 install running ZFS across the 8 x SSDs (currently RAID0, yes, no redundancy, but it's for testing and I back up daily via Veeam); the rough pool/LUN commands are after this list
2. Still with the BR-1020 (for now) in both the R510 and the VMware host (R710)
3. Cisco 4948-10G-E with 2 x uplink ports for SAN (R510) and 9000 MTU
4. Cisco 3560E with 2 x uplink ports for VMware 10GbE "LAN"
5. All fibre is LC-SC OM3 (fibre is a pain)
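
For reference, the pool and LUN plumbing is just this (device names and IQNs are placeholders for my actual ones):

  # striped pool across the 8 SSDs, no redundancy (testing only);
  # ashift=12 is the only non-default bit
  zpool create -o ashift=12 tank \
    /dev/disk/by-id/ata-SSD0 /dev/disk/by-id/ata-SSD1   # ...and the other six

  # sparse zvol presented to VMware as an iSCSI LUN via LIO/targetcli
  zfs create -s -V 1T -o volblocksize=64k tank/vmlun0
  targetcli /backstores/block create name=vmlun0 dev=/dev/zvol/tank/vmlun0
  targetcli /iscsi create iqn.2016-01.lab.san:tank
  targetcli /iscsi/iqn.2016-01.lab.san:tank/tpg1/luns create /backstores/block/vmlun0
  targetcli /iscsi/iqn.2016-01.lab.san:tank/tpg1/acls create iqn.1998-01.com.vmware:esxi01
  targetcli saveconfig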

Now the ZFS configuration is the default; I am not doing anything off the beaten track with it yet (I am still learning, having not used ZFS that extensively), and the performance is "ok". Locally on the box it's very fast; via VMware it's quick but nothing out of this world, and I need to find out why and run some benchmarks (note I have not yet flashed the H310; it is on the default Dell firmware in JBOD mode, but as I say, locally it's very quick). Can you recommend a good benchmark test that won't severely deplete the life of the SSDs? (Total size is 3.2TB.)
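
I was thinking of something read-heavy along these lines with fio, so it shouldn't burn write cycles (laying the 64G test file down once is negligible wear on 3.2TB, and keeping it bigger than the 32GB of RAM stops the ARC from hiding the disks), unless there's something better:

  # lay the test file down once
  fio --name=lay --filename=/tank/fio.test --size=64G --rw=write --bs=1M
  # then hammer it with random reads
  fio --name=randread --filename=/tank/fio.test --size=64G --rw=randread \
      --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based \
      --ioengine=libaio --group_reporting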

I still like the idea of running VMware on the R510 and getting some compute use out of it as well as storage; that would really help, as I need 3 nodes and currently I'm down to 2 (I was hunting around for an HP C3000, but to no avail). But I need to sort out a 2nd datastore that can house the napp-it VM so I can pass the H310 through.

I guess there are some additional upgrades I need to consider:

1. Swapping the BR-1020 card in the R510 for the Intel. I actually already have an Intel X520-DA2 (Dell F3VKG) lying around that I could swap in, but I will need to upgrade the X520's firmware, as the R510 BIOS is not recognizing it (unknown device).

2. A switch purchase. I'm going to need more ports soon, so I could do with swapping out the 4948 for something else, but is there anything cheap enough that will match its port buffer performance? I was looking at the EdgeSwitch 16 XG, but I have no idea what its performance is like.

Cheers guys!
 

aero

Active Member
It will be hard to find something cheap that matches the performance of a 4948-10GE; that's a great switch. 16MB of buffers is nothing to sneeze at (plus dual PSUs, a great L3 feature set, and low latency for its day given it's store-and-forward)!

How many 10G ports do you need? How many 1G? Do you need layer 3 features?
 

Fairlight

New Member
This is the problem: to retain the buffer performance I'm wandering into Nexus territory. Wasn't the 4948-10GE originally designed to run almost alongside Nexus as a ToR switch back then?

So I've had a bit of time to play around with my array, and running CrystalDiskMark in a VM (!) I'm getting the same performance as from my local RAID 5 array of 6 x 450GB 15K SAS drives on the server's H700, which is not good. I'll try to dig into what the problem may be when I get a chance.
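
When I do, I'll rule the network path in or out before blaming ZFS; something along these lines (the IPs are just placeholders for my lab addressing):

  # from the ESXi shell: confirm jumbo frames survive end-to-end
  # (8972 = 9000 minus IP/ICMP headers, -d sets don't-fragment)
  vmkping -d -s 8972 10.0.10.20
  # then raw TCP throughput between a VM and the SAN
  iperf3 -s                          # on the R510
  iperf3 -c 10.0.10.20 -P 4 -t 30    # from a VM on the R710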

I guess first things first is to replace the 10GbE SFP+ BR-1020s, so I will be dropping the Intel X520-DA2 card into the "SAN" this week; then I just need to grab a new card for the VMware host. Any suggestions? I saw a couple of Solarflare cards on eBay that also support SR-IOV, but I'm not sure what's best for performance. Latency is way down btw, which is impressive.

I've ordered some more caddies so I can fill up the remaining slots with 2 x 450GB 15K SAS and 2 x 4TB SATA for backups.