Affordable 10GbE+ point-to-point between two VMware hosts ($100)

Discussion in 'Networking' started by Alfa147x, Mar 22, 2020.

  1. Alfa147x

    Alfa147x Member

    Joined:
    Feb 7, 2014
    Messages:
    126
    Likes Received:
    18
    I'm looking for options to connect two VMware ESXi 6.7.0 machines with the most out-of-the-box compatibility with the host systems.

    I am considering a Chelsio T520 for 10GbE, but I'd like to explore any higher-throughput options for TCP/IP and vMotion at the same price ($50-100 per card) while avoiding the purchase of a 10GbE switch.
     
    #1
  2. itronin

    itronin Active Member

    Joined:
    Nov 24, 2018
    Messages:
    290
    Likes Received:
    177
    Chelsio is still pretty pricey, isn't it? Between $80 and $100 or so for SFP+, plus the cable cost? Whoops, sorry, I see your budget is $200 total... hmmm.

    Would something a lot cheaper work?

    If you are okay with doing some firmware updates, then your cheapest option above 10GbE is probably a pair of MCX354A-QCBT cards (search for 649281-B21) - you'll have to flash each card to turn it into an MCX354A-FCBT, but at $35.00 or so per card for dual-port 40GbE, with both tall and short brackets included, it seems like a no-brainer.
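
    For reference, the crossflash is done with the Mellanox firmware tools (flint, or the standalone mstflint). A rough sketch of the procedure only - the device path and firmware file name below are examples, so check the flashing guides and the Mellanox downloads for your exact card and the current firmware release:

        # enumerate the Mellanox devices (Mellanox Firmware Tools / MFT)
        mst start
        mst status                               # note the device, e.g. /dev/mst/mt4099_pci_cr0
        # check the current firmware and PSID on the HP-branded card
        flint -d /dev/mst/mt4099_pci_cr0 query
        # burn the stock MCX354A-FCBT image; -allow_psid_change rebrands the card
        flint -d /dev/mst/mt4099_pci_cr0 -i fw-ConnectX3-MCX354A-FCBT.bin -allow_psid_change burn

    (With the standalone mstflint package you pass the PCI address instead of the /dev/mst device.)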

    NetApp QSFP cables work just fine, so figure out your length and order the right one. Prices seem to start around $8.00 for a 2 m cable and go up from there depending on length.

    Assuming a short run, you can do two cards and a 2 m cable for about $78.00 total, or thereabouts, depending on whether shipping is free.

    I used these cards in a pair of ESXi 6.7 systems connected to an ICX6610 (so not back to back) and they worked great, as did the NetApp cables.
    The flash was easy. I've since moved them to my FreeNAS boxes and put in Mellanox ConnectX-3 MCX312A cards - a little cheaper than your Chelsios.
     
    #2
    Last edited: Mar 22, 2020
    Alfa147x and fohdeesha like this.
  3. fohdeesha

    fohdeesha Kaini Industries

    Joined:
    Nov 20, 2016
    Messages:
    1,692
    Likes Received:
    1,450
    Yeah, 40GbE between two ConnectX-3 cards is the way to go, and it will be well under your $100.
     
    #3
    Alfa147x likes this.
  4. Alfa147x

    Alfa147x Member

    Joined:
    Feb 7, 2014
    Messages:
    126
    Likes Received:
    18
    Perfect. Thanks for the concise summary!

    Any concerns with running two MCX354A-FCBT cards without a switch in the middle?
     
    #4
  5. dandanio

    dandanio Active Member

    Joined:
    Oct 10, 2017
    Messages:
    102
    Likes Received:
    41
    #5
  6. fohdeesha

    fohdeesha Kaini Industries

    Joined:
    Nov 20, 2016
    Messages:
    1,692
    Likes Received:
    1,450
    nope
     
    #6
    Alfa147x and itronin like this.
  7. Alfa147x

    Alfa147x Member

    Joined:
    Feb 7, 2014
    Messages:
    126
    Likes Received:
    18

    I think you have the wrong thread?
     
    #7
  8. dandanio

    dandanio Active Member

    Joined:
    Oct 10, 2017
    Messages:
    102
    Likes Received:
    41
    #8
    Alfa147x likes this.
  9. Alfa147x

    Alfa147x Member

    Joined:
    Feb 7, 2014
    Messages:
    126
    Likes Received:
    18
    Whoa, cool. Looks like getting a 10GbE switch just went up in priority.
     
    #9
  10. Alfa147x

    Alfa147x Member

    Joined:
    Feb 7, 2014
    Messages:
    126
    Likes Received:
    18
    Hey, I got the NICs and they're great - plug and play, no issues.

    VMware question: I want to use this for both TCP/IP and vMotion. I created a new vSwitch on both hosts and added port 1 from each NIC. What provides DHCP?
     
    #10
  11. itronin

    itronin Active Member

    Joined:
    Nov 24, 2018
    Messages:
    290
    Likes Received:
    177
    For vMotion my suggestion is to use static IPs for each vmk interface, obviously in the same subnet.
    You can set up a guest that sits on the vSwitch if you want something to hand out IPs. I'd recommend Linux or FreeBSD with isc-dhcpd, or Windows Server with the DHCP role installed if you want a GUI.
    How many guests on each ESXi box are you talking about? If it is only a handful, you might just want to assign static addresses to everything.
    You can also overlay subnets on the same physical interconnect to keep your network addressing separate.
    You can try VLANs on back-to-back connections - I don't see why it wouldn't work, but I honestly don't know, not having tried it.
    You may have to enable promiscuous mode in the vSwitches though...
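
    If you end up configuring the vmk side from the ESXi shell instead of the host client UI, it looks roughly like this - a sketch only, with placeholder vSwitch, port group and address values:

        # hang a vMotion port group off the vSwitch that owns the 40GbE uplink
        esxcli network vswitch standard portgroup add -p vMotion-PG -v vSwitch1
        # create a vmkernel interface on that port group with a static IP
        esxcli network ip interface add -i vmk1 -p vMotion-PG
        esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.40.0.11 -N 255.255.255.0
        # tag the vmk for vMotion traffic (use e.g. 10.40.0.12 on the other host)
        esxcli network ip interface tag add -i vmk1 -t VMotion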
     
    #11
    mathiastro and Alfa147x like this.
  12. Alfa147x

    Alfa147x Member

    Joined:
    Feb 7, 2014
    Messages:
    126
    Likes Received:
    18
    Thanks @itronin - another question: if I go down that route, can I use the same vSwitches for a VM network, so my VMs can use the 40GbE link for communication between hosts without hitting my slower 1GbE physical switch?

    Basically, the topology is one vmhost acting as my storage server and a second vmhost where all my compute VMs live. vMotion isn't really that important; high throughput between a VM on host1 and a VM on host2 is what I'm looking for.

    Thanks again
     
    #12
  13. itronin

    itronin Active Member

    Joined:
    Nov 24, 2018
    Messages:
    290
    Likes Received:
    177
    Port groups and vSwitches...

    Per vmhost:
    vm guests <-> port groups <-> vSwitches <-> physical NICs
    vmks <-> vSwitches <-> physical NICs

    The meta layers in ESXi keep things separate for you. You can run vMotion (or not) alongside your guests on the same vSwitch, but your guests access the vSwitch via a port group.

    If I understand you correctly, your storage server will be a guest on, let's say, vmhost1 and your clients will be guests on vmhost2 using your high-speed connection(s). I'm going to guess that you also have 1GbE connections on each vmhost, probably going to a switch/router configuration that has Internet access.

    If your storage client guest VMs don't need Internet access, then this is relatively straightforward: just assign the virtual NIC in each guest to the port group that's connected to your high-speed vSwitch.
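
    As a rough sketch of that layering from the ESXi shell (everything here can equally be done in the host client UI, and the vSwitch, port group and vmnic names are only placeholders):

        # vSwitch backed by the 40GbE uplink
        esxcli network vswitch standard add -v vSwitch-40G
        esxcli network vswitch standard uplink add -v vSwitch-40G -u vmnic4
        # port group that the guests' virtual NICs attach to
        esxcli network vswitch standard portgroup add -v vSwitch-40G -p HighSpeed-VM

    Then in each guest's settings you point its virtual NIC at the HighSpeed-VM port group.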

    Now, if you do want the guest VMs to access things on the 1GbE network (Internet included), you'll either have to set up a VM to act as a router between your high-speed port group and the 1GbE network - that guest will need two NICs, one in the 1GbE port group and one in the high-speed port group - *or* you can put two NICs in each guest and dual-home them.

    If they are dual-homed, you reference the storage server from your guests using the IP subnet connected to the high-speed port group. That keeps the storage traffic off the 1GbE network.

    If you are using a VM as a router to interconnect your high-speed and 1GbE networks, then your other guests will always use the high-speed port group and you'll just set their default gateway to the high-speed interface of the VM acting as a router.

    This is kind of obvious, but I'm going to say it anyway to be clear:
    you'll have at least two different subnets - one on your 1GbE network and a different IP subnet on your high-speed network. Best practice generally puts the vMotion network in yet another subnet (and VLAN) too... but there's nothing wrong with one step at a time.

    Just out of curiosity, which NICs did you end up purchasing? The 40GbE ones? What did you get for a DAC? Did you get NICs with dual interfaces and two DACs? You can team the two NICs together on the same vSwitch if you so choose. A single stream will still be limited to the throughput of a single NIC - just making sure you realize that.
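
    If you do add both ports to the same vSwitch, the teaming side is just a second uplink plus a failover policy - again only a sketch with placeholder names:

        # second 40GbE port as an additional uplink on the same vSwitch
        esxcli network vswitch standard uplink add -v vSwitch-40G -u vmnic5
        # make both uplinks active; the default load balancing is by originating
        # virtual port ID, so any single VM/stream still rides one physical link
        esxcli network vswitch standard policy failover set -v vSwitch-40G -a vmnic4,vmnic5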
     
    #13
  14. Alfa147x

    Alfa147x Member

    Joined:
    Feb 7, 2014
    Messages:
    126
    Likes Received:
    18
    Yes, all of your assumptions are correct. The text in green is exactly what I was missing.

    I hope this helps. I quickly put together a diagram of my home lab.
    [diagram of my home lab]

    I was attempting to use a spare 1GbE NIC on the general compute host to connect back to the physical switch so the DNS/DHCP server could also issue addresses there.

    My new plan is to create a VLAN (on the vmhost, not on the Sophos firewall) and thus a separate subnet with static addressing for any app/VM that wants high-speed communication with the Xpenology VM. Each of those VMs will get a separate NIC connected to the high-speed vSwitch. I might also consider a VM dedicated to DNS/DHCP just for the high-speed switch.
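
    Tagging the high-speed port group with a VLAN ID on the vmhost side is a one-liner; the VLAN number and the HighSpeed-VM port group name are just carried over from the sketch above as examples, and both hosts on the back-to-back link need the same VLAN ID:

        # tag guest traffic on the high-speed port group with VLAN 40
        esxcli network vswitch standard portgroup set -p HighSpeed-VM --vlan-id 40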
     
    #14
    itronin likes this.
  15. Alfa147x

    Alfa147x Member

    Joined:
    Feb 7, 2014
    Messages:
    126
    Likes Received:
    18
    Hardware:
    • 2x HP 544QSFP (MCX354A-FCBT), flashed to stock Mellanox firmware
    • NetApp 112-00256 1-meter QSFP SAS cable

    I also bought a pair of QSFP AOCs (active optical cables). The NetApp cable is very difficult to cable-manage, but I have not tested the fiber cables yet.
     
    #15
    itronin likes this.
  16. itronin

    itronin Active Member

    Joined:
    Nov 24, 2018
    Messages:
    290
    Likes Received:
    177
    Spiffy drawing!

    Yep, the QSFP AOCs can be challenging to find on a budget. That's actually what I use (x2) for my storage server. I do have a pair of NetApp cables and concur they are unwieldy and hard to manage due to their thickness and shielding. But they do work and are CHEAP!

    Based on your drawing, I'd static-IP anything on the high-speed network. You should be able to leave the Pi where it is and use DHCP forwarding (relay) in your switch to handle DHCP for any VLANs you set up (assuming you are running isc-dhcpd on the Pi).
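
    If you go the relay route, the Pi side is just extra subnet declarations in dhcpd.conf - something along these lines, with made-up addressing; the switch's DHCP relay (IP helper) forwards requests from the VLAN to the Pi, and dhcpd picks the scope based on the relay address:

        # /etc/dhcp/dhcpd.conf on the Pi (isc-dhcpd) - example addressing only
        subnet 192.168.1.0 netmask 255.255.255.0 {     # existing 1GbE LAN
            range 192.168.1.100 192.168.1.199;
            option routers 192.168.1.1;
        }
        subnet 10.40.0.0 netmask 255.255.255.0 {       # relayed high-speed VLAN
            range 10.40.0.100 10.40.0.199;
        }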
     
    #16
    Alfa147x likes this.
  17. Alfa147x

    Alfa147x Member

    Joined:
    Feb 7, 2014
    Messages:
    126
    Likes Received:
    18

    Sweet - the AOCs work perfectly. Here's a link - I spent $26 for two.
     
    #17
    itronin likes this.
  18. itronin

    itronin Active Member

    Joined:
    Nov 24, 2018
    Messages:
    290
    Likes Received:
    177
    Same seller I purchased mine from, for the same price. They go in and out of stock; I bought 4 of the last 5 they had last year when I was looking.

    IMO it's a very good deal at $12.00 a cable for an AOC - slightly higher power usage than a DAC and a little more expensive, but, as you've pointed out, a much easier cable to work with, route, etc.!
     
    #18
  19. Alfa147x

    Alfa147x Member

    Joined:
    Feb 7, 2014
    Messages:
    126
    Likes Received:
    18
    Really odd problem: none of the VMs (on Giulietta) are able to connect to the Internet with the two AOCs connected. I'm going to remove the disconnected NICs from the vSwitch and then reconfigure them as a team.

    What type of VMkernel interface do I need for vSphere NFS traffic? I'm trying to clone a VM to an NFS share for backup and it's routing the NFS traffic over the 1GbE connection.
     
    #19
  20. itronin

    itronin Active Member

    Joined:
    Nov 24, 2018
    Messages:
    290
    Likes Received:
    177
    I'd need more information on the first question.

    On the second: are you trying to mount the NFS datastore from a guest storage server back to its own ESXi host without going over a physical network adapter? I.e., you get LIGHTSPEED... okay, not light speed, but in essence ESXi bus speed.

    If so, it can be done, at least on a standalone ESXi host. I have not tried this on a host in a vCenter-managed cluster.

    It basically goes like this:
    Create a vSwitch, let's call it Private, and set its security policy to accept:
    Promiscuous mode
    MAC address changes
    Forged transmits
    Create a port group for the guest - let's call it Private guest - pick a VLAN ID you won't use on your real network (say 68 in my case), and point it at vSwitch Private.
    Create another port group for the host, call it Private host, and use the same VLAN ID as in the previous step.
    Add a NIC to your guest VM connected to the Private guest port group.
    Create a vmk and connect it to the Private host port group.
    Oh, and of course these private connections need IPs in the same subnet as each other - obviously not a subnet you are using anywhere else in your network - and they would only be routable if you created a virtual router on the host connecting your private network to a wired network...
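
    In esxcli terms, the host side of that looks roughly like this - a sketch only, reusing the names and VLAN ID above with made-up addresses and an example share path:

        # internal-only vSwitch (no physical uplinks) with a permissive security policy
        esxcli network vswitch standard add -v Private
        esxcli network vswitch standard policy security set -v Private --allow-promiscuous=true --allow-mac-change=true --allow-forged-transmits=true
        # port groups for the guest side and the host side, same unused VLAN ID
        esxcli network vswitch standard portgroup add -v Private -p "Private guest"
        esxcli network vswitch standard portgroup set -p "Private guest" --vlan-id 68
        esxcli network vswitch standard portgroup add -v Private -p "Private host"
        esxcli network vswitch standard portgroup set -p "Private host" --vlan-id 68
        # vmk for the host end of the NFS mount, static IP in the private subnet
        esxcli network ip interface add -i vmk2 -p "Private host"
        esxcli network ip interface ipv4 set -i vmk2 -t static -I 172.16.68.1 -N 255.255.255.0
        # then mount the NFS export served by the guest at its private address
        esxcli storage nfs add -H 172.16.68.2 -s /mnt/tank/vmstore -v private-nfs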

    In my example the storage guest is actually serving storage back to the ESXi host, which in turn stores other guests on it. It's a bootstrap process. Things get a bit funky during ESXi host startup; it all comes down to the storage guest's startup timing versus the host trying to start the guests that live on the NFS storage served by that guest.

    Warning - danger, Will Robinson:
    this is not something I would do in production, at least not for anyone I loved, because it is a very meta thing to do.

    Here are some pictures:

    [screenshots attached]
     
    #20
    Alfa147x likes this.