Now that I’ve built a pair of all-solid-state iSCSI targets using TrueNAS CORE, I’d like to upgrade my network to get the lowest possible latency from these boxes to my 3-node ESXi 7.0.3 cluster, and I’m looking for recommendations.
PCIe slots are at a premium on my boxes, so I’m thinking one dual-port HBA per server and a managed switch. My rough budget is ~$50 per NIC and ~$300 for the switch, but that’s flexible.
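For what it’s worth, here is my back-of-the-envelope math, assuming one dual-port NIC in each of the three ESXi hosts and both TrueNAS boxes (cables/transceivers not counted):

```python
# Rough budget sketch -- quantities and prices are just my working assumptions.
esxi_hosts = 3        # 3-node ESXi cluster
truenas_boxes = 2     # pair of all-flash iSCSI targets
nic_price = 50        # ~$50 per dual-port NIC
switch_price = 300    # ~$300 for a managed switch

nics_needed = esxi_hosts + truenas_boxes          # one dual-port NIC per server
total = nics_needed * nic_price + switch_price    # DACs/optics not included

print(f"{nics_needed} NICs + 1 switch ~= ${total}")   # 5 NICs + 1 switch ~= $550
```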
I’d like to go with RDMA over InfiniBand, but I’d consider RoCE if someone made a good argument for it.
My current setup is a Force10 S4810 10GbE SFP+ switch and ConnectX-3 dual-port SFP+ NICs, with one port carrying VM traffic and the other carrying iSCSI traffic. Right now I’m toying with the idea of a standalone RDMA network dedicated to iSCSI traffic, while keeping the existing 10GbE fiber network for user traffic.
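To make the split concrete, here’s roughly how I’m picturing the two networks; the subnets and port assignments below are placeholders, not anything I’ve committed to:

```python
# Hypothetical layout for keeping storage and user traffic fully separate.
# Subnets and port assignments are illustrative placeholders.
networks = {
    "user/VM": {
        "fabric": "existing Force10 S4810 (10G SFP+)",
        "nic_port": "ConnectX-3 port 1",
        "subnet": "192.168.10.0/24",   # placeholder
        "carries": ["VM traffic", "management"],
    },
    "storage": {
        "fabric": "new RDMA switch (InfiniBand or RoCE, TBD)",
        "nic_port": "new dual-port HBA",
        "subnet": "192.168.20.0/24",   # placeholder, non-routed
        "carries": ["iSCSI (over RDMA)"],
    },
}

for name, cfg in networks.items():
    print(f"{name}: {cfg['fabric']} via {cfg['nic_port']} on {cfg['subnet']}")
```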
Does anyone know of a parts list and/or guide to make this happen?