This is my first small-cluster project (without a great Unix background), hoping to break records with IB OFED support. Is it OK to ask about a range of accumulating issues, now that the purchased Mellanox switches / ConnectX NICs are sitting in the US, waiting for a bright Clojure / Go / Node.js admin? Quoting an earlier post here that got me started:

"I'm using 2 Brocade fibre-channel cables with QSFP connectors between two ConnectX-3s and it works flawlessly. Running ibdiagnet (a tool from Mellanox), it shows support for up to QDR InfiniBand: fantastic. I bought 10."
Length: 1 m
Type: Copper cable, unequalized
I think the ConnectX-3 (and newer) don't work very well with third-party active cables and transceivers for 40GbE / InfiniBand. So far I have tested Finisar FDR cables (all the same model) and various Cisco 40GbE transceivers plus fibers.
No link comes up between the hosts (host <-> host). Does ibdiagnet require OpenSM to be running?
With a QSFP+ to SFP+ adapter (using a 40GbE QSFP+ NIC with a 10GbE SFP+ switch), I could get all the 10GbE transceivers to work, no matter whether they were Mellanox, Finisar, or Cisco branded.
I expect IB switching to be dominant host-to-host internally, i.e. dual-server HTTP socket management feeding IPoIB DB access (a small 6-node Node.js cluster with a global in-RAM DB, aiming for 100% diskless requests). Sure, RoCE provides state-wide distributed datacenter file access, but what I want is fast UDP/RTP-style datagram access within an internal IB SDN, and that appears to be restricted to ConnectX-3 Pro cards in RoCE v2 mode. Can anyone point to UDP-style IB RDMA functionality using standard ConnectX-3 NICs? I don't need the Ethernet encapsulation, as it's an internal (private) network. Or even via ConnectX-2?
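In case it helps anyone answer: my reading of the verbs API (which I would love to have confirmed) is that InfiniBand already has a native datagram transport, the unreliable datagram (UD) queue pair, and that plain ConnectX-3 and even ConnectX-2 cards support it with no RoCE or Ethernet encapsulation at all. A minimal, untested libibverbs sketch, assuming the first RDMA device on the host (the file name and queue sizes are mine):

    /* ud_sketch.c -- create a UD queue pair, IB's UDP-like transport.
     * Build: cc -o ud_sketch ud_sketch.c -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) { fprintf(stderr, "no RDMA devices found\n"); return 1; }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        if (!ctx) { fprintf(stderr, "ibv_open_device failed\n"); return 1; }

        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        struct ibv_cq *cq = ibv_create_cq(ctx, 256, NULL, NULL, 0);
        if (!pd || !cq) { fprintf(stderr, "pd/cq allocation failed\n"); return 1; }

        struct ibv_qp_init_attr attr = {
            .send_cq = cq,
            .recv_cq = cq,
            .qp_type = IBV_QPT_UD,   /* unreliable datagram: UDP-like semantics */
            .cap = { .max_send_wr = 128, .max_recv_wr = 128,
                     .max_send_sge = 1, .max_recv_sge = 1 },
        };
        struct ibv_qp *qp = ibv_create_qp(pd, &attr);
        if (!qp) { perror("ibv_create_qp"); return 1; }

        printf("UD QP 0x%x created on %s\n", qp->qp_num,
               ibv_get_device_name(devs[0]));

        /* Real use: move the QP through INIT/RTR/RTS with ibv_modify_qp(),
         * exchange QP numbers and LIDs out of band, then post sends with an
         * ibv_ah (address handle) naming each destination. */
        ibv_destroy_qp(qp);
        ibv_destroy_cq(cq);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }

The catch is that a UD message is capped at the path MTU (4 KB max on IB), which happens to suit RTP-sized datagrams; peers still have to swap QP numbers and LIDs out of band, e.g. over the existing HTTP side channel.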
With the mention of FDR and QSFP+: I read here (some time back) that there was a way to reduce switch count by splitting a 40Gb port into 4 x 10Gb lanes, and the breakout cables are cheap enough. Even though a 5030 is overkill for managing a 2 + 6 node setup (2 back-end session servers + 6 IB-isolated DB servers), it seems a 40Gb to 4 x 10Gb split from each 5030 active port could work as a crossover into four 10Gb ConnectX-2 / ConnectX-3 NICs (with 8 nodes, 8 dual-port NICs gives 16 10Gb/s IB ports). Is there something to gain by using a 40Gb split 4 x 10Gb cable? A link is appreciated on how 4-lane 10Gb split links from (or into) a 40Gb switch port do or do not reduce switch count: the right way and the wrong way.

Also, the excellent 5030 setup shown here was just the trick to get me started (I bought 8!). Thanks, that was a booming post here on STH, as are many rigs. How are we going to make this happen? An HTTP request loader (500,000 sessions, with 4-packet inspection and return per client, per second)... let's see where the memory leaks are.
Mellanox's excellent ASIC breakthroughs enhance the design of SDN access and load-balancing options, but things are moving so fast, in a blur of updates, that it is difficult for less-seasoned home-brewers to set a well-defined course now that OFED opens the doors. For example, the more basic drivers now sometimes indicate a dependency, such as OpenSM, where the previously managed hardware worked without one.
"opensm is an InfiniBand compliant Subnet Manager and Administration, and runs on top of OpenIB.
And what about ibdiagnet? How does it fit into OFED?
"opensm provides an implementation of an InfiniBand Subnet Manager and Administration. Such a software entity is required to run for in order to initialize the InfiniBand hardware (at least one per each InfiniBand subnet). " .. is that as separate daemon for each subnet / or as a single process ?? and what about for managed switches (they are redundant for RoCE ) but what about IPoIB / IB RDMA..
I read that OpenSM must run alongside OFED. Has anyone found a comprehensive dependency list, along with the expected overhead of binary library support? Does OFED on CentOS 7.4 have more relaxed requirements than FreeBSD 11? And what of the cached binaries and their dependencies? In short: ldconfig builds a library cache at boot time, and many older a.out binaries are part of FreeBSD; the Netscape / SOCKS modules have a.out binaries. Where is a comprehensive list? What overhead does XFree86 compatibility bestow? Why do I need XFree86 to run a.out binaries alongside ELF, and what load does the runtime linker rtld impose on the 229 binaries? Are these added to by OFED, or bypassed completely?
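Short of a canonical list, the quickest empirical check I know of is to ask the runtime linker what an OFED binary actually pulls in, e.g. with the stock verbs utility:

    ldd $(which ibv_devinfo)   # print the shared objects this verbs tool links against

As far as I can tell the verbs stack is plain ELF with no X11 or a.out involvement, so any XFree86 / a.out baggage would come from the base system's compat packages rather than from OFED. Happy to be corrected.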
The whole reason to go OFED is kernel bypass (or at least a much-reduced kernel code path per request).
That's a bit far afield, yet how does XFree86 interoperate with the OFED drivers / libraries, and do I want XFree86 and a.out binaries at all? (No!!) How much a.out / ELF baggage rides along with OFED, and is there a way to replace the old binaries (like using Firedragon, or a CentOS 7+ distro)?
"opensm also now contains an experimental version of a performance manager as well." OK, sounds good, but how much extra load does this put on the network stack? The whole idea of ASICs is to reduce stack chatter (I believe the counters in the ConnectX-4 NICs go a long way; pity they are $1000). Any advice on the above quandaries is keenly appreciated. Great stuff brewing!
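On the counters: if I'm reading the docs right, the standard IB port counters live in the HCA and switch silicon and are read via Performance Management MADs, so even QDR-era ConnectX-2/3 hardware exposes them, no $1000 card required. The infiniband-diags tools that ship with OFED can dump them, e.g.:

    perfquery   # read the local port's hardware counters via a PerfMgt MAD

My understanding is that opensm's experimental perfmgr simply sweeps the fabric polling these same counters, so its cost is periodic MAD traffic on the management path rather than anything in the host data path; whether that sweep is noticeable at 8 nodes is exactly what I'd like to hear.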