Starting out with 10gb help


RyC

Active Member
(Split from the great deals topic)
Finally ready to jump into 10gb networking, but I want to make sure I get the right parts and have this thought through.

2x Mellanox Connect-X2 cards from https://forums.servethehome.com/index.php?threads/10gb-sfp-single-port-cheaper-than-dirt.6893/
2x 10GBASE-SR SFP+ MMF 850nm Transceiver - $16 (do I have to specify Mellanox or leave it at Generic?)
1x LC-LC Duplex 10G OM4 50/125 Multimode Fiber Patch Cable

Here's my scenario:
Drop these into two ESXi 6.0U1 hosts. I want to use these to share out napp-it AIO storage over NFS from Host A to Host B (and nothing else probably, currently NFS is going over the internal ethernet LAN with everything else :eek:).

So if I want to isolate the 10gb NFS network from everything else, does it go like this? On napp-it, set the interface with a static IP on a private subnet. On the hosts, assign a vSwitch to the adapters, set up a (new?) vmkernel with a static IP on that vSwitch, and mount the NFS datastore. Can I make sure napp-it only shares NFS out a single adapter so I can use the other one purely for management? And can one vmkernel be assigned for NFS and the other for management/etc.? Is there anything else I'm missing?
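Roughly, this is what I'm picturing from the ESXi shell (the vmnic name, IP addresses, and share path below are just placeholders for my setup):

    # create an isolated vSwitch for the 10G link and hang a new vmkernel off it
    esxcli network vswitch standard add --vswitch-name=vSwitch10G
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch10G --uplink-name=vmnic4
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch10G --portgroup-name=NFS
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NFS
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=172.16.6.2 --netmask=255.255.255.224 --type=static
    # then mount the napp-it export over that subnet
    esxcli storage nfs add --host=172.16.6.10 --share=/tank/nfs --volume-name=aio-nfs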

Thanks so much for your help, this is my first time wading into this.
 

brendantay

Member
Following, since I'm planning to go 10gb myself. These parts are exactly what I planned on purchasing, so I hope it all checks out.


Plus, any recommendations for a 48-port 1gb switch with 2-4 10gb ports that can use the above transceivers?
 

whitey

Moderator
You're spot on and headed down the proper path @RyC

If you really want to scale NFS, please ensure you dedicate several vmkernel ports per NFS datastore mount (all on separate VLANs w/ different IP addressing schemes, as well as a VLAN interface off the filer/NAS corresponding to each NFS stub subnet).

This will let NFS perform very well on a 10G network, granted you have the disk I/O to drive it.
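Something along these lines per additional mount, for example (the portgroup names, VLAN IDs, and addresses below are just placeholders):

    # one portgroup + vmkernel per NFS mount, each tagged onto its own VLAN
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=NFS-B
    esxcli network vswitch standard portgroup set --portgroup-name=NFS-B --vlan-id=61
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=NFS-B
    esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=172.16.61.11 --netmask=255.255.255.0 --type=static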

@brendantay As far as a switch goes, a MikroTik, LB4M, Cisco SG500X, or Juniper EX3300 will do the trick for $400-600 typically...even lower down the MikroTik and LB4M avenue I am pretty sure; I just prefer a 'deep networking pedigree' switch vendor. YMMV

Got mine from this guy, has more now if you can swallow the junos pill/take the plunge to the 'dark side'.

Juniper EX3300-24T
 

RyC

Active Member
Haha I know right!

One more thing I thought of: if the 10gb NFS network is completely isolated from the rest of the network and everything gets a static IP, what should the router be set as? The napp-it IP?

I'll leave the Fiberstore compatibility at "Generic" since it seems the Mellanox cards will accept pretty much any transceiver. Thanks so much for the tips.
 

Blinky 42

Active Member
@brendantay If you only need a few SFP+ ports, the Juniper EX3300's are nice for the switch side; we got several new for work, and I got a bunch of -48s and -48P's from eBay for labbing / experimentation @ home.
I also just ran across the T1048 from Quanta earlier today while preparing some other quotes. It might be a worthy option too since it is fanless (QuantaMesh T1048-P02 / QuantaMesh T1048-P02S), but I haven't seen any reviews of it yet.
 

brendantay

Member
Thanks @whitey & @Blinky 42 - I was looking at buying an LB4M, I just thought I'd take a few additional recommendations.

Based on the cost I doubt I'll beat the LB4M - I just require a 10gb link between my VM host and my desktop, and I figure I'll replace my existing 24-port Dell gigabit switch in the process.
 

pyro_

Active Member
Might also want to take a look at the MikroTik CRS226 switches. Fairly cheap, will give you two 10gb ports, and it's fanless.
 

brendantay

Member
I did have a look at the MikroTik, having used a lot of them before for routers and such, however it's almost double the price of the LB4M :p
 

RyC

Active Member
Ok parts are ordered!

In the meantime, I tried out the steps I listed above for isolating the NFS network with some existing 1gbe NICs. I found out ESXi hosts can only have 1 default gateway set across all vmkernel ports. It looks like I could have added a static route for the NFS network, but since the 2 hosts are directly connected to each other, it doesn't need one (correct me if I'm wrong, it seems to be working for now).
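For reference, if a route had been needed, I believe it would just have been something like this (the subnets and gateway address are examples only, for the case where the filer sits behind a router instead of being directly attached):

    # route a remote NFS subnet via a next hop on the NFS vmkernel's segment
    esxcli network ip route ipv4 add --network 172.16.7.0/27 --gateway 172.16.6.1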

Anyway, I reassigned one napp-it NIC (the other napp-it NIC stays on the existing LAN for management) and added a new vmkernel on both hosts (the existing vmkernels also stay on the existing LAN for management), all with static IPs in 172.16.6.0/27. The vSwitches holding these new "NFS" vmkernels (plus the one napp-it NIC) are uplinked to physical NICs that are directly connected between the two hosts over Ethernet. I removed and re-added the datastore using the 172 address I statically assigned to napp-it, and the datastore came right up on both hosts! Hopefully I've isolated the "NFS" network correctly. The routing tables on both hosts indicate that 172.16.6.0/27 is going over the "NFS" vmkernel port and not the existing "Management" vmkernel port, so I assume it's working correctly.
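For anyone wanting to check the same thing, this is roughly what I looked at (vmk1 and the napp-it address are just the names/numbers from my setup):

    # confirm which vmkernel the NFS subnet is routed over
    esxcli network ip route ipv4 list
    # and ping the napp-it VM specifically out of the NFS vmkernel
    vmkping -I vmk1 172.16.6.10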

I was trying to replicate the network and connection scheme the Mellanox cards will have, so when they arrive I think I can just drop them in, swap them in for the current NICs on the vSwitches, and it should keep chugging along just like that?
 

Keljian

Active Member
If you have the ability to update the firmware, it is recommended. If you'd need to go out of your way to do so, don't bother; they will likely still function as intended.
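If you do flash them, the usual Mellanox MFT flow is roughly this (the device path and firmware file name are placeholders - check the PSID from the query against the firmware image you download):

    # with the Mellanox Firmware Tools (MFT) installed
    mst start
    mst status                                    # lists the device, e.g. /dev/mst/mt26448_pci_cr0
    flint -d /dev/mst/mt26448_pci_cr0 query       # shows current firmware version and PSID
    flint -d /dev/mst/mt26448_pci_cr0 -i fw-ConnectX2.bin burn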
 

RyC

Active Member
Thanks so much for your help everyone, I got all the parts and put everything together. It went pretty much exactly as I explained above. The cards I got (MNPA19-XTR 10GB MELLANOX CONNECTX-2 PCIe X8 10Gbe SFP+ NETWORK CARD) ended up having the real Mellanox PSID instead of an OEM one, so upgrading the firmware to the latest version was straightforward. I popped them in the servers, and they were immediately recognized in ESXi 6.0U1 and auto-detected the correct link speed of 10000 Full.

Here's a few (not very interesting) photos of the installation!






 

RyC

Active Member
I'm planning to add another server thanks to the E5-2670 deal, so I need to attach it to the NFS share (over 10g preferably). There don't seem to be any relatively cheap switches with just 4 or so SFP+ ports (unless I'm mistaken), so if someone knows whether this works or not, that would be great:

Pick up a dual port Mellanox ConnectX-2 off eBay and install it in the ESXi host "A" which hosts the napp-it AIO VM. Attach both interfaces (in Active mode) on the dual port ConnectX-2 to the vSwitch which the napp-it and NFS vmkernel are on. Connect host "B" and "C" to the ConnectX-2 in host "A" and statically assign IPs/vmkernels/etc like I did before (since the 10g network is still isolated from the main network).

It seems that vSwitches don't forward packets between the physical interfaces attached to them, so host "B" will NOT be able to reach host "C" over the 10g network, but as long as "B" and "C" can just reach "A" for the NFS datastore, then should this work? Or do I really need to spring for a proper SFP+ switch?
 

IamSpartacus

Well-Known Member
@RyC Is there any reason you went the Transceiver/Fiber cable route over Twinax? Just wondering since I'm getting ready to buy my own cabling and was wondering if there was any reason other than preference.
 

RyC

Active Member
@RyC Is there any reason you went the Transceiver/Fiber cable route over Twinax? Just wondering since I'm getting ready to buy my own cabling and was wondering if there was any reason other than preference.
No reason other than I might move the servers further physically apart sometime in the future, so I just went the fiber route now. Plus fiber sounds cooler ;)
 

rubylaser

Active Member
@RyC Is there any reason you went the Transceiver/Fiber cable route over Twinax? Just wondering since I'm getting ready to buy my own cabling and was wondering if there was any reason other than preference.
Length of the run would be one reason. I tried to use a 10m passive DAC cable and saw terrible speeds (less than MB/s). Switching to fiber and transceivers saturates the 10GbE connection with iperf now.
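For anyone wanting to repeat the test, a plain iperf run between the two ends is enough to show it (the address below is just an example):

    # on the receiving host
    iperf -s
    # on the sending host, a few parallel streams
    iperf -c 172.16.6.10 -P 4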
 

IamSpartacus

Well-Known Member
No reason other than I might move the servers further physically apart sometime in the future, so I just went the fiber route now. Plus fiber sounds cooler ;)
Makes sense. Did you wind up having to specify the brand when you purchased the transceivers from FS.com?


Length of the run would be one reason. I tried to use a 10m passive DAC cable and saw terrible speeds (less than MB/s). Switching to fiber and transceivers saturates the 10GbE connection with iperf now.
Right. For me distance isn't an issue because at the moment I won't need any more than 1m cables. However being able to extend that in the future may push me to the transceiver route.
 

RyC

Active Member
Makes sense. Did you wind up having to specify the brand when you purchased the transceivers from FS.com?
For the Mellanox NICs I posted above, I did not have to specify a brand for the transceivers (left it at Generic). Some other NICs/switches may require the transceiver to be programmed with the right brand though.
 

RyC

Active Member
To answer my latest question, I got it working, but it was slightly more roundabout than I wrote out above. Adding 2 physical NICs to a vSwitch and expecting things to behave like I said above does not work. I was able to ping the vmkernel on Host A from both B and C, but only the host connected to the first NIC in the vSwitch's physical adapter list was able to ping the napp-it VM.

What I ended up doing is adding another vmxnet3 NIC to the napp-it VM and statically assigning it 172.17.X.X (the existing NIC is 172.16.X.X). I added that new NIC to a new vSwitch to which the new physical 10G NIC is attached, so now both hosts can ping napp-it over 10G. On each ESXi host, I modified the /etc/hosts file to give the napp-it VM a common domain name even though each host reaches it via a different IP. Then I remounted the NFS share using the domain name instead of the IP address. This is so vCenter recognizes the NFS share as the same share across all hosts, even though they may be mounting it at different IPs. All hosts seem to be working great now, and I still haven't needed to buy a real 10G switch.
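Roughly what the per-host piece looks like, in case it helps anyone else (the hostname, share path, and volume name are just examples):

    # /etc/hosts on each host maps the same name to that host's own path to napp-it
    #   host B:  172.16.6.10   napp-it-10g
    #   host C:  172.17.6.10   napp-it-10g
    # then remount by name on each host
    esxcli storage nfs remove --volume-name=aio-nfs
    esxcli storage nfs add --host=napp-it-10g --share=/tank/nfs --volume-name=aio-nfs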

I ended up buying an Intel Ethernet Server Adapter X520-DA2 (w/ Intel Hologram) from Natex, and it was shipped quickly and packaged well.
I picked up 2 Intel-compatible SFP+ modules from Fiberstore again (Intel E10GSFPSR 1000BASE-SX and 10GBASE-SR SFP+ Transceiver | FS.COM - no issues using the existing Mellanox/generic transceivers on the other end of these) and another patch cable (LC-LC 10G 50/125 OM4 Duplex Multimode PVC/LSZH/OFNP Fiber Patch Cable | FS.COM).

I selected Air Mail shipping because I expected to buy the rest of the parts I needed much later than I actually did, and Fiberstore bumped me to DHL shipping for free! Their customer service so far has been really great to me.