I'm planning a high-performance ZFS iSER SAN as storage for 4 ESXi hosts, and I have a couple of questions. This does not have to be a zero-latency high-availability cluster; if the primary node fails and the SAN is back online within about 10 seconds, that is fine. I will use pcsd/corosync/pacemaker for this.
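For context, the failover I have in mind would be managed roughly like this with pcs. This is only a sketch: the resource names, IP, and pool name are placeholders, and it assumes the ocf:heartbeat:IPaddr2 and ocf:heartbeat:ZFS resource agents from the resource-agents package are available.

```shell
# Hedged sketch, not a tested config. Names (san_vip, san_pool, tank) and the
# IP are hypothetical; adjust to your environment.

# Floating IP that the ESXi initiators connect to; it moves with the active node.
pcs resource create san_vip ocf:heartbeat:IPaddr2 \
    ip=192.168.100.10 cidr_netmask=24 op monitor interval=5s

# Import/export the ZFS pool on failover (ocf:heartbeat:ZFS from resource-agents).
pcs resource create san_pool ocf:heartbeat:ZFS pool=tank op monitor interval=10s

# Keep the VIP with the pool, and bring the pool up before the VIP.
pcs constraint colocation add san_vip with san_pool INFINITY
pcs constraint order san_pool then san_vip
```

With short monitor intervals like these, a failover in the ~10 second range seems realistic, though the ZFS pool import time on 8-12 NVMe SSDs would need to be measured.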
First, the planned configuration:
- Debian, Ubuntu, or CentOS - not decided yet; it will probably be Ubuntu 18.04
- Supermicro SuperStorage Server SSG-2029P-DN2R24L (2-node in a box system)
- 2x Intel Xeon Bronze 3104 (6 cores @ 1.7GHz, no HT, no Turbo) per node
- 12x 16GB RAM (primary node) + 4x 16GB (failover node)
- 8-12 Samsung SSD PM1725a 3.2TB U.2
- dual-ported 40GbE NIC in each node - not decided yet
- dual-ported 10GbE NIC in each ESXi host - not decided yet
- some 10GbE + 40GbE switches - not decided yet
My question is about NICs and switches. For iSER I need RDMA-capable NICs and switches.
I'm looking at the CAVIUM FastLinQ QL45412HLCU-CI (2x 40GbE per SAN node) and the CAVIUM FastLinQ QL41132HLCU (2x 10GbE per ESXi host), but I'm totally unsure which switch to pick. I know DCB and PFC are a must, but I'm confused about the need for ETS - do the switches really need it for iSER?
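Once the NICs are in, I assume the RDMA capability can be verified on the Linux side with the standard tooling before even touching the switches. A sketch of what I'd run (package names assume a Debian/Ubuntu-style system):

```shell
# Diagnostic sketch: verify the NIC exposes an RDMA device to the kernel.
# On Debian/Ubuntu:
apt install -y rdma-core ibverbs-utils
# (on CentOS roughly: yum install rdma-core libibverbs-utils)

# List RDMA-capable link devices and their state (iproute2 rdma tool).
rdma link show

# Verbs-level details; for iSER over Ethernet the transport should show
# RoCE (InfiniBand transport over Ethernet) or iWARP, depending on the NIC mode.
ibv_devinfo
```

If `ibv_devinfo` shows no devices, iSER will not work regardless of the switch, so this seems like a useful sanity check when evaluating the Cavium cards.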
Because of some budget limitations (well, there is still enough budget) we will reuse some existing hardware such as CPUs and RAM, so I don't want to spend another 20k just on the two switches.
Does anyone have a suggestion for which switches I should buy (used hardware from eBay is fine too)? A switch with 4 SFP+ (or maybe RJ45?) 10GbE ports and 2 QSFP+ 40GbE ports would be enough throughput-wise; more 10/40GbE ports would just be a nice-to-have.
I'm looking forward to your (hopefully many) suggestions.
Regards