I have a few servers that I'd like to connect with 10 Gbit (and later 40 Gbit) Ethernet (Mellanox ConnectX-3 cards in ETH mode), but without an 'expensive' and loud switch.
I can get dual-port cards for only a little more than single-port ones, so the initial investment is nearly the same: 6 cards in total either way.
If I connect each card to its neighbor with a DAC cable (cost effective) and give each pair of directly connected ports its own network (/24), then I can easily add static routes on each server so it knows where its neighbors are. Of course, the server that is furthest away would just have one route to it, so packets know which interface they should use.
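To make that concrete, here is a minimal sketch of the static routing I have in mind, for a hypothetical four-server chain A - B - C - D. Everything in it is an assumption for illustration: the interface names (enp1s0f0 / enp1s0f1), the 10.0.<link>.0/24 addressing, and the server names. It only prints the `ip` / `sysctl` commands instead of running them; the sysctl line reflects that the middle servers have to forward traffic that isn't addressed to them.

```python
#!/usr/bin/env python3
"""Print static-route setup for a daisy chain of servers: A - B - C - D.

Assumptions (all hypothetical, for illustration only): link i sits between
SERVERS[i] and SERVERS[i+1] and uses subnet 10.0.<i>.0/24, with the left end
getting .1 and the right end .2; enp1s0f0 is the "left" port and enp1s0f1
the "right" port of the dual-port card. Commands are printed, not executed.
"""

SERVERS = ["A", "B", "C", "D"]   # chain order, left to right
LINKS = len(SERVERS) - 1         # link i connects SERVERS[i] and SERVERS[i+1]

def subnet(i: int) -> str:
    return f"10.0.{i}.0/24"

def addr(i: int, side: str) -> str:
    # side "left" = SERVERS[i] end of link i, side "right" = SERVERS[i+1] end
    return f"10.0.{i}.{1 if side == 'left' else 2}"

for s, name in enumerate(SERVERS):
    print(f"# --- {name} ---")

    # Address the directly attached link(s).
    if s > 0:                    # link to the left neighbor
        print(f"ip addr add {addr(s - 1, 'right')}/24 dev enp1s0f0")
    if s < LINKS:                # link to the right neighbor
        print(f"ip addr add {addr(s, 'left')}/24 dev enp1s0f1")

    # Middle servers must forward traffic that is not addressed to them.
    if 0 < s < LINKS:
        print("sysctl -w net.ipv4.ip_forward=1")

    # Static routes to every non-adjacent subnet, via the nearest neighbor.
    for i in range(LINKS):
        if i < s - 1:            # subnets further to the left
            print(f"ip route add {subnet(i)} via {addr(s - 1, 'left')}")
        elif i > s:              # subnets further to the right
            print(f"ip route add {subnet(i)} via {addr(s, 'right')}")
    print()
```

Running it shows, for example, that A routes both 10.0.1.0/24 and 10.0.2.0/24 via B (10.0.0.2), while B only needs one extra route, to 10.0.2.0/24 via C (10.0.1.2).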
Downsides:
- bandwidth is shared, so when server A is sending to D all data goes through B and C
- higher latency due to the multiple hops
- CPU usage on the intermediate servers for forwarding traffic that isn't meant for them
- if one cable breaks, half of the network is down
- if one server is down, half of the network is down/unreachable
- routing is slightly more complicated to set up
- cannot leverage the dual network ports for redundancy or bonding
Upsides:
- no need to buy a switch
- no electricity costs for a switch
- no noise (network cards are notoriously silent)
- can keep adding servers without running out of ports on the switch
This is for a 'home lab', so these aren't really mission-critical servers. All servers would be running services like a file server, machine learning, VMs, etc.
Has anyone done this with 10 Gbit or 40 Gbit Ethernet, and do you have any suggestions, remarks, or advice?