Mellanox SX6012 - InfiniBand networking advice/checkup


ItsValium
May 28, 2015

Hello all,

New user, long-time reader. This is a bit of a mixed topic that covers more than just networking, but it mostly fits here; if I'm wrong, please move it to the appropriate section.

I got my hands on two Mellanox SX6012 switches and a couple of ConnectX-2 VPI cards, and I have some questions about integrating these devices into my existing infrastructure (nothing in production, all in a testing/educational environment).

First off, my current setup:
  • Hypervisor nodes (8 of them and 2 spares) :
    HP DL160 G6 with dual 6-core Xeons and 72 GB of RAM, no local storage, running ESXi 5.5 from USB, with 6 Gbit NICs (2 built-in Broadcom-based and 4 on an HP NC364T add-in card).
  • Storage nodes (2 of them) :
    Supermicro X9SCM-F based, with an E3-1230 Xeon and 16 GB RAM each, 6 Gbit NICs (2 built-in Intel-based and 4 on an HP NC364T add-in card), running Nexenta CE. Reflashed M1015 HBAs in IT mode with a 5×2 mirror pool of 1 TB WD RE drives, a 24 GB SLOG SSD and a 200 GB L2ARC SSD, with the OS on two mirrored 250 GB HDDs.
  • Switching :
    Two Cisco SG500X switches (48× 1 Gbit + 4× 10 Gbit SFP+ ports each) stacked together using the stack ports (10 Gbit ring between the switches).
  • Extra :
    Several other machines attached to the switches running a variety of OSes (both desktop and server class).
What I would like to do now is put the ConnectX-2 VPI cards in the hypervisors and the storage nodes and connect one port of each card to one of the two SX6012 switches, expand my storage with an extra node running an all-SSD pool, and also uplink the two switches to both Ciscos using QSFP+ to SFP+ breakout cables. Some of the other machines would attach via Intel X520-DA2 cards, so those hosts can reach the storage over links faster than 1 Gbit.
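
To get a feel for the numbers, here is a rough back-of-the-envelope sketch in Python. The figures are my own assumptions (12 QSFP+ ports per SX6012, ConnectX-2 VPI at roughly 40 Gb/s QDR in InfiniBand mode or 10 GbE in Ethernet mode, one QSFP+ uplink broken out into 4× 10 Gb/s SFP+ toward the Cisco stack), so correct me if any of them are off:

```python
# Rough port/bandwidth budget for the planned SX6012 setup.
# Assumptions (mine, not verified against the actual hardware yet):
#  - each SX6012 has 12 QSFP+ ports
#  - ConnectX-2 VPI: ~40 Gb/s (QDR) in InfiniBand mode, 10 GbE in Ethernet mode
#  - one QSFP+ port breaks out into 4x 10 Gb/s SFP+ toward the Cisco stack

HYPERVISORS = 8
STORAGE_NODES = 2
PORTS_PER_SWITCH = 12

def port_budget(uplink_qsfp_per_switch: int) -> dict:
    """Ports used on ONE SX6012 if every node attaches one port to it."""
    host_ports = HYPERVISORS + STORAGE_NODES        # one link per node on this switch
    used = host_ports + uplink_qsfp_per_switch
    return {
        "host_ports": host_ports,
        "uplink_ports": uplink_qsfp_per_switch,
        "free_ports": PORTS_PER_SWITCH - used,
    }

def oversubscription(uplink_qsfp_per_switch: int, host_speed_gbps: float) -> float:
    """Host-facing bandwidth vs. 10 GbE uplink bandwidth toward the Cisco stack."""
    host_bw = (HYPERVISORS + STORAGE_NODES) * host_speed_gbps
    uplink_bw = uplink_qsfp_per_switch * 4 * 10     # QSFP+ -> 4x SFP+ breakout
    return host_bw / uplink_bw

if __name__ == "__main__":
    print(port_budget(uplink_qsfp_per_switch=1))    # -> 10 host ports, 1 uplink, 1 free
    print(f"{oversubscription(1, 10):.1f}:1 at 10 GbE per host (Ethernet mode)")
    print(f"{oversubscription(1, 40):.1f}:1 at ~40 Gb/s per host (IB/QDR mode)")
```

That only matters for traffic that has to cross over to the 1 Gbit side, of course; storage traffic between the hypervisors and the storage nodes stays on the SX6012s.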

I have read a lot and looked at spec sheets and whatever else I could find on the subject, but since the switches are still in transit and the seller wasn't very technical, I have no idea yet whether they have the license installed for the Ethernet-InfiniBand gateway functionality they support.

Here are my questions for those of you that have read my lengthy post so far:
  • If the switches have the licenses installed, will I be able to use InfiniBand links from my hypervisors and storage and Ethernet links from the others without issues?
  • If they don't have the licenses, can I configure the switches in Ethernet-only mode but still use InfiniBand cabling from the hypervisors and storage nodes, and still get my whole setup talking to each other?
  • Are the SX6012 switches able to stack together and do LACP spread over both of them?
  • Assuming I get everything set up in an ideal configuration, what kind of pool layout would you recommend for my storage expansion? Hardware-wise I'm thinking of a dual 4- or 6-core CPU setup with 128-256 GB RAM, up to 18 Samsung 850 PRO SSDs as data disks, and up to 6 Intel S3710 SSDs as SLOG devices (a quick capacity comparison follows below).
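
To make that last question a bit more concrete, here is a quick Python sketch I used to compare raw usable capacity for a few candidate layouts of the 18 data SSDs. It's purely arithmetic on assumed 1 TB drives and ignores ZFS overhead, spares and free-space headroom:

```python
# Compare raw usable capacity for candidate ZFS layouts of 18 data SSDs.
# Assumes 1 TB per SSD for the arithmetic; swap in the real 850 PRO size.
# Ignores ZFS metadata/padding overhead and the usual fill-level headroom.

SSD_TB = 1.0
TOTAL_SSDS = 18

def mirrors(n_vdevs: int, width: int = 2) -> float:
    """n_vdevs mirror vdevs of `width` disks each -> one disk of capacity per vdev."""
    assert n_vdevs * width <= TOTAL_SSDS
    return n_vdevs * SSD_TB

def raidz(n_vdevs: int, width: int, parity: int) -> float:
    """n_vdevs RAIDZ vdevs, each `width` disks wide with `parity` parity disks."""
    assert n_vdevs * width <= TOTAL_SSDS
    return n_vdevs * (width - parity) * SSD_TB

layouts = {
    "9x 2-way mirror":  mirrors(9, 2),     # most vdevs, 50% of raw capacity
    "3x 6-wide RAIDZ2": raidz(3, 6, 2),    # ~67% of raw, 3 vdevs of IOPS
    "2x 9-wide RAIDZ2": raidz(2, 9, 2),    # ~78% of raw, only 2 vdevs
}

for name, usable in layouts.items():
    print(f"{name:18s} ~{usable:.0f} TB usable "
          f"({usable / (TOTAL_SSDS * SSD_TB):.0%} of raw)")
```

As far as I understand it, the trade-off is that more vdevs means more random IOPS for the VMs, at the cost of usable capacity.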
Can anyone shed some light on these questions? Feel free to jump in on any other aspect too if you think I'm on the wrong track with this.

Regards from Belgium!