Cheap/Easy way to connect ESX hosts to Linux SAN at speeds over 1GIG


Atomicslave

Member
Dec 3, 2014
48
11
8

So as the title says, I am looking for a cheap, easy way to connect my ESX hosts:

ESX02 - Dell R710
ESX01 - Dell T5500
SAN - Asus motherboard with spare PCIe slots in a Norco 4220 case, with 2x IBM M1015s running mdadm. 12x 2TB Black and enterprise drives in a RAID 6.

I have played around with NFS and iSCSI over teamed ports (LACP and the like) and it just doesn't work that well, so I am looking for something simpler. We use 10Gb Ethernet everywhere at work, which seems very nice but is way too expensive for a home setup (switch, cables and cards). I am just wondering what other people do, what options they have found, and the good and the bad of each.

Thanks for any advice
 

markarr

Active Member
Oct 31, 2013
421
122
43
I have found that MPIO works better than LACP with iSCSI. Use spare NICs on the hosts and create VMkernel ports bound to those NICs, set up a VLAN on the switch for iSCSI, and connect the NAS to the switch with additional NICs.
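Roughly, the ESXi side of that looks like the following. This is only a sketch; the names (vmk1, iSCSI-A, vmhba33, the addresses) are placeholders for whatever your host actually uses:

# create a VMkernel port for iSCSI on its own port group, one per physical NIC
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static

# bind the VMkernel port to the software iSCSI adapter so it shows up as a path for MPIO
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1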
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
Yes - MPIO should give much better performance than LACP for iSCSI connections. Used FC gear is also cheap (a quick eBay search shows a lot of 4Gb QLogic QLE2462 HBAs around $20 each), though there may be a bit of a learning curve involved there. I know QLogic cards can be put into target mode for use in your SAN box; not sure about other brands.

But looking at your workload and configuration, you might not see much of a performance gain. Multiple ESX hosts (presumably with multiple VMs each) will present a mostly random write workload, which md RAID-6 is not going to handle quickly.
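Before spending on faster links it may be worth measuring what the md RAID-6 itself can sustain; a quick random-write run with fio (the path, size and job counts below are just example values) shows whether the array or the 1GbE network is the real bottleneck:

# 4KiB random writes with the page cache bypassed - a rough worst case for parity RAID
fio --name=randwrite-test --filename=/mnt/raid6/fio.test --size=4G \
    --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 --numjobs=4 \
    --direct=1 --runtime=60 --time_based --group_reporting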
 

sboesch

Active Member
Aug 3, 2012
467
95
28
Columbus, OH
MPIO is the way to go. I have 4x 1GbE NICs dedicated to MPIO on my ESXi host and 4x 1GbE NICs dedicated to iSCSI on my SAN.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
It might also be worth noting that MPIO works at the SCSI layer, and should work over any transport. So regardless of support from managed switches or anything else, you can use MPIO to load-balance over multiple Ethernet/iSCSI, FC, SAS, IB, etc. connections.
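On a Linux target that transport-independence mostly comes down to exposing more than one portal so the initiator sees multiple paths. A minimal sketch with the LIO targetcli shell, assuming the array is /dev/md0; the IQN and addresses are made up for illustration, ACLs for each ESXi initiator still need to be added under tpg1/acls, and older target stacks (IET, SCST) have their own equivalents:

# back the LUN with the md array and create an iSCSI target
targetcli /backstores/block create name=vmstore dev=/dev/md0
targetcli /iscsi create iqn.2014-12.lab.san:vmstore
targetcli /iscsi/iqn.2014-12.lab.san:vmstore/tpg1/luns create /backstores/block/vmstore

# if a default 0.0.0.0 portal was auto-created, remove it, then add one portal per
# dedicated iSCSI NIC so ESXi discovers two paths and MPIO can balance across them
targetcli /iscsi/iqn.2014-12.lab.san:vmstore/tpg1/portals delete 0.0.0.0 3260
targetcli /iscsi/iqn.2014-12.lab.san:vmstore/tpg1/portals create 192.168.10.1 3260
targetcli /iscsi/iqn.2014-12.lab.san:vmstore/tpg1/portals create 192.168.11.1 3260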
 

Mike

Member
May 29, 2012
482
16
18
EU
You can create a 'proxy VM' with a bunch of 1Gbit NICs and a bonding device, back to back with the SAN. That could then be proxied to the host with a VMXNET adapter at speeds over 1Gbit. Far from ideal, but VMware is a locked-down POS (the software, that is).
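Inside such a proxy VM the bonding part is plain Linux. A minimal sketch, with arbitrary interface names, mode and address; note that balance-rr only exceeds 1Gbit for a single stream under fairly ideal conditions:

# enslave the VM's extra 1GbE NICs into a round-robin bond and give it an address
ip link add bond0 type bond mode balance-rr
ip link set eth1 down && ip link set eth1 master bond0
ip link set eth2 down && ip link set eth2 master bond0
ip link set bond0 up
ip addr add 192.168.30.1/24 dev bond0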
 

bds1904

Active Member
Aug 30, 2013
271
76
28
What speed PCIe slot do you have available? 4x or 8x?

What OS do you use for your SAN? Solaris-based OSes work really well with QLogic target mode; that's actually what I use at home. If you have a PCIe x8 slot you could use a QLE2464 for the HBA in target mode, then a QLE2462 in each ESXi box. That would give you 8Gb between the SAN and ESXi with MPIO.
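For reference, the COMSTAR side of that is roughly as follows once the QLE HBA has been rebound to the qlt target driver (instead of the default qlc initiator driver); the pool/volume name is just an example and the GUID comes from the create-lu output:

# enable the COMSTAR framework and publish a ZFS volume as a logical unit
svcadm enable stmf
stmfadm create-lu /dev/zvol/rdsk/tank/esx-lun0
stmfadm add-view 600144f0...        # use the GUID printed by create-lu
stmfadm list-lu -v                  # confirm the LU is online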

If you don't use a Solaris-based system, MPIO over 1GbE or a Brocade 1020 10Gb NIC is the best option.

If you did use the 1020 you would need 3 cards and 2 SFP+ DACs (direct attach cables).
 

TallGraham

Member
Apr 28, 2013
143
23
18
Hastings, England
You don't mention how many ports are on your switch. If you have lots of free 1GbE switch ports, then the cheapest way would be to pick up dual- or quad-port cards and stick a load in each box.

For your standard network connections, team the NIC ports together.

For your iSCSI connections, don't team them; use MPIO as others suggest. It also helps keep things tidy to have a separate iSCSI VLAN (see the sketch after this post for the vSwitch side of that).

If you don't have enough ports, then get 3x Brocade 1020 dual-port 10GbE NICs as also suggested here. Just make sure you get the proper cables. I have just done this myself :)
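For the iSCSI port groups that roughly translates to pinning each VMkernel port to a single active uplink and tagging the dedicated VLAN; the port group names, vmnic numbers and VLAN ID below are only examples:

# one active NIC per iSCSI port group so software iSCSI port binding is valid
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-A --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-B --active-uplinks=vmnic3

# put both iSCSI port groups on the dedicated storage VLAN
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-A --vlan-id=100
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-B --vlan-id=100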
 

Atomicslave

Member
Dec 3, 2014
48
11
8
Thanks for all the info guys. So the 10GbE route seems to be the way to go; are there any relatively cheap switches that would give me 3 ports? If not, the cards can be interconnected and IP'd like normal Ethernet, I would assume? So I could put a dual-port card in my server and connect that out to another card in each of my ESX hosts?

Some people seem to be having issues with those Brocades judging by the forums; are the Mellanox 10GbE cards better supported?

I am either going to keep running my Linux SAN under VMware or I am going to switch to Solaris 11; I haven't 100% decided yet.

I am choosing not to use iSCSI and MPIO. I have used it in prod with a NetApp and an EqualLogic and just wasn't impressed with the throughput. I just started at another company that is running 100% 10GbE, so it would also be a good learning experience to play with it a bit, although it sounds quite plug and play.
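On the back-to-back question above: yes, with no switch each link is just its own little point-to-point subnet. Something like the following, with made-up interface names and /30 subnets:

# on the Linux SAN, one subnet per directly attached host
ip addr add 192.168.20.1/30 dev eth4    # link to ESX01
ip addr add 192.168.21.1/30 dev eth5    # link to ESX02

# on each ESXi host, a VMkernel port on the 10GbE uplink in the matching subnet
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.20.2 --netmask=255.255.255.252 --type=static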
 

wildchild

Active Member
Feb 4, 2014
389
57
28
You are VERY mistaken if you think MPIO will ever give you more than 1Gb for a single stream.
If you are running multiple servers performing multiple requests, it shouldn't be any problem.

Which makes me wonder, what kinds of transfers are you running?

I am running 3x ESXi hosts, each with 2x 1Gb to an OmniOS box which has 4x 1Gb divided over 2 LACP trunks doing MPIO, and I'm NEVER seeing anything coming close to filling up that 1Gb fully.

Point being, EqualLogic has a VERY BAD MPIO implementation; NetApp I don't know...
 

Atomicslave

Member
Dec 3, 2014
48
11
8
I am not going to debate MPIO with you; you could be right. As I said, I have used it and had issues that Dell and VMware couldn't solve. We had some very slow transfers to multiple hosts and could never get a single stream over 1Gb to any one VM at a time; maybe it was Dell's implementation or VMware round robin. I even added the Dell MEM drivers and that didn't help much. So I am just going to try something different this time. Do I need this kind of speed at home? No, not really, but do I want it? Yes I do :).
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
wildchild said:
You are VERY mistaken if you think MPIO will ever give you more than 1Gb for a single stream.
If you are running multiple servers performing multiple requests, it shouldn't be any problem.

Which makes me wonder, what kinds of transfers are you running?

I am running 3x ESXi hosts, each with 2x 1Gb to an OmniOS box which has 4x 1Gb divided over 2 LACP trunks doing MPIO, and I'm NEVER seeing anything coming close to filling up that 1Gb fully.

Point being, EqualLogic has a VERY BAD MPIO implementation; NetApp I don't know...
MPIO is not like LACP - as I said above, it operates at the SCSI layer. The whole point of it is to join together multiple paths (streams), either for redundancy or for increased bandwidth (usually both). An iSCSI-with-MPIO connection between two servers, each with 2x 1Gb NICs, should have no problem filling both pipes, assuming the storage backend can handle it - even if it is a single-threaded app just doing IO to a single file.

And yes, it is also quite possible to have MPIO configured wrong and have it reduce performance - especially if you get into more complicated setups than just a server with some disks running a software target. The EqualLogic arrays, for instance, have two controllers, both active and with access to all of the LUNs; with all Ethernet ports connected, MPIO should tell you that you have 4 paths to storage. But you also have to account for the fact that for a given LUN those controllers are active/passive (only one controller owns a LUN at any given time). If MPIO sends commands down paths to the non-owning controller for that LUN, there is a performance penalty while the array forwards the command to the correct controller, plus an extra hop sending the response back to the client. Most arrays of this architecture now support ALUA to tell MPIO clients which paths are active/optimized vs. active/non-optimized and everything 'just works', but before ALUA support in clients/targets was common this was a pain-in-the-ass problem to optimize on most arrays.
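Two read-only commands on ESXi show all of this per device: the SATP in use (e.g. VMW_SATP_ALUA), the path selection policy, and whether each path is active or active-unoptimized:

# per-device view: claimed SATP, PSP and their configuration
esxcli storage nmp device list

# per-path view: the state of every path to every device
esxcli storage core path list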
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
With MPIO, the ESXi round-robin default of IOPS=1000 doesn't do squat except for long backups (maybe); with IOPS=1 you give up linear bandwidth for random IOPS performance. I found a number between 10 and 100 to perform best (iSCSI LeftHand).
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
mrkrad said:
With MPIO, the ESXi round-robin default of IOPS=1000 doesn't do squat except for long backups (maybe); with IOPS=1 you give up linear bandwidth for random IOPS performance. I found a number between 10 and 100 to perform best (iSCSI LeftHand).
Agreed - the ESXi default settings for round-robin suck. I've done a fair bit of testing in that area against quite a few of HP's P2000, P4000, and P6000 arrays (I wish they would stop renaming things; that is the set of names I like best) on both iSCSI and FC transports, and found the best balance to be setting the IOPS= value to half of the HBA's configured maximum queue depth, or 32, whichever is less. When the IOPS= number is greater than the HBA queue depth, you end up spending time waiting on that queue when you could be using an alternate path with an empty queue.
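For anyone wanting to try that, the per-device change looks roughly like this on ESXi 5.x (the naa identifier is a placeholder; substitute your LUN's device ID):

# list device details to confirm the queue depth and current policy
esxcli storage core device list --device=naa.xxxxxxxxxxxxxxxx

# make sure the device uses round robin, then lower the IOPS switching threshold
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set \
    --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=32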
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Also, having your flow control set up on every side, and adjusting DELAYED ACK (an ESXi advanced option), makes a huge difference. With the iSCSI LeftHand I had two interfaces per node (3 nodes) with ALB bonding and flow control enabled. Since the network was dedicated to iSCSI traffic (no VLAN sharing, etc.), this made a ton of difference: I went from 1000s of packets dropped/delayed per day to maybe 1-2 packets delayed!
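For reference, the flow-control side on the Linux SAN is plain ethtool, and delayed ACK is a per-adapter setting on ESXi. The interface and adapter names are placeholders, and on some ESXi builds the DelayedAck parameter is only exposed in the vSphere Client advanced settings rather than esxcli:

# on the Linux SAN: enable Ethernet pause frames on the iSCSI-facing NICs
ethtool -A eth2 rx on tx on
ethtool -A eth3 rx on tx on

# on ESXi: disable delayed ACK on the software iSCSI adapter (takes effect after a rescan/reboot)
esxcli iscsi adapter param set --adapter=vmhba33 --key=DelayedAck --value=false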

TuxDude is definitely right - IOPS=32 was about what I used with the 3-node LeftHand Network RAID cluster, and it would tear up all 4 NICs per VMware host (4 iSCSI connections per LUN per host!).