Hi guys
I need some advice about 40Gbps networking.
I have a small cluster made up of 3 EPYC 2 ESXi servers acting as compute nodes, each with some NVMe disks on board, and 1 storage node with hybrid storage (the CPU there is an E5-2680 v2).
The VMs on the compute nodes need to exchange data frequently with low latency, and sometimes I also need to dynamically move VMs from one node to another.
At first I thought about connecting them directly with dual-port 25Gbps Ethernet NICs, but that technology is fairly new, so it is not cheap. From what I can see, I can probably buy used 40Gbps NICs for less money. But there seem to be multiple choices here.
The first decision is whether to 1) connect the 3 servers directly to each other at 40Gbps in a full mesh, which the two ports per NIC make possible, and add another 10Gbps NIC for the storage node (or simply use the existing 1Gbps link on the motherboard), or 2) use a 40Gbps switch to connect all 4 servers, and maybe more later (of course this will cost more).
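For context, here is a rough sketch of what option 1 would look like on each ESXi host. Everything here is an assumption for illustration: the vmnic names, the vSwitch/port group names, and the IP plan (one tiny /30 subnet per point-to-point link) are placeholders, not a recommendation.

```shell
# Sketch only: full-mesh direct connect between 3 ESXi hosts.
# vmnic2/vmnic3 are assumed to be the two 40Gbps ports; each
# host-to-host link lives in its own /30 subnet.
# Commands shown for host A (link to host B on vmnic2):

# vSwitch and port group dedicated to the A<->B link
esxcli network vswitch standard add --vswitch-name=vSwitch40-B
esxcli network vswitch standard uplink add --vswitch-name=vSwitch40-B --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch40-B --portgroup-name=pg-40g-B

# vmkernel interface on that link, usable for vMotion/storage traffic
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=pg-40g-B
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.40.1.1 --netmask=255.255.255.252 --type=static

# Repeat with vmnic3 / vSwitch40-C / a second /30 for the A<->C link.
```

The trade-off this makes visible: with the mesh, every link is its own subnet, and two hosts cannot reach each other through the third without static routes, whereas a switch keeps all four servers on one flat segment.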
The second decision is which NIC to use. My personal preference is for Mellanox cards because of their broad compatibility with ESXi, FreeBSD and Linux. But there are several models and several OEM variants (like the HP ones) available. In your opinion, which models should I look at?
Thanks!