Hello all,
I was thinking of doing a custom 10GbE IPoIB setup for my living room, where I have quite a few servers running (silently!). In my lab room I could use the Voltaire 4036 (non-E version, unfortunately), which I still need to find a way to silence; it's way too noisy even for the server room.
Anyway... back to the topic... I was thinking of using a few "Mellanox ConnectX-2 VPI Dual Port Networking Card MHRH2A-XSR" to interconnect these servers. Because the servers also need to be connected to the lab room, I was thinking of the following setup (rough config sketch after the lists):
Router/Switch:
- Proxmox VE (Debian-based) virtualization platform doing the switching/routing
- Intel/Dell X540-T2 uplink (yeah, because unfortunately the cabling is already in place, so I really need RJ45)
- Some other Mellanox dual QSFP+ cards (I have a bunch of Mellanox HCA-30024s) connected to the clients using QSFP+ to 4x SFP+ adapters
Other servers/clients (about 5-10 of those):
- Proxmox VE (Debian-based) virtualization platform
- 1 x MHRH2A-XSR card (found them on eBay at $10 each)
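To make it a bit more concrete, this is roughly what I have in mind for the router/switch box. Just a sketch: the interface names, addresses, and the connected-mode/MTU bits are my assumptions, not a tested config, and since there's no managed IB switch in this setup one node on the fabric would still have to run a subnet manager (opensm).
```
# /etc/network/interfaces on the router/switch (sketch only, addresses made up)

# Existing RJ45 uplink to the lab room via the X540-T2, bridged for the Proxmox guests
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports eth0
    bridge-stp off
    bridge-fd 0

# IPoIB interface on the first QSFP+ port, on its own subnet
auto ib0
iface ib0 inet static
    address 10.10.10.1/24
    pre-up echo connected > /sys/class/net/ib0/mode   # connected mode allows a large MTU
    mtu 65520

# Somewhere on the fabric a subnet manager has to run, e.g.: apt install opensm
```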
Could this setup run? Every server/client will need to use IPoIB. On top of that, the master router/switch will also have to do either routing or bridging (which, according to some posts here on ServeTheHome, may be difficult or even impossible).
Could this setup work relatively easily?
I had considered other alternatives, but for one reason or another none of them turned out to be really convenient. All of these, of course, without a switch:
- Intel/Dell X540-T1 or -T2 -> expensive (~$140 off eBay with shipping + duties + customs) but cheap cables
- Mellanox 10GbE ConnectX-2 EN SFP+ -> expensive dual-port cards (~$80 second hand) + expensive cables
The biggest problem with this setup is that I'm also limited in the number of PCIe slots available on the motherboards: a maximum of 2 x8 PCIe 3.0 slots are usable on each platform. Therefore "just" using a bunch of cheap single-port SFP+ cards is not really an option once you factor in the cost of PCIe risers, extenders, splitters, ...
The IB solution would only require a QSFP+ card in the master switch. A dual-port Mellanox 700Ex2-Q (Voltaire HCA-30024) would allow me to directly connect up to 8 servers using two QSFP+ to 4x SFP+ adapters (one per port). I would say this is hard to beat for the price.
What kind of performance may I expect on an Intel Xeon E3-1231 v3 platform (16 GB / 32 GB RAM)?
For me, the biggest open point is if/how IPoIB can be bridged. Or, if routed, how much of a performance penalty should I expect? I don't need to achieve full 10GbE; even, say, 3-4 Gbit/s would be good.
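My current understanding (please correct me if I'm wrong) is that an IPoIB interface has no Ethernet header, so it can't simply be enslaved to a normal Linux bridge like vmbr0, which would leave routing on the master box. Something like this is what I'd try, with the same made-up addresses as in the sketch above:
```
# On the router/switch: forward between ib0 (10.10.10.0/24) and vmbr0 (192.168.1.0/24)
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ipoib-router.conf
sysctl --system

# On each client: IPoIB address plus a route to the Ethernet LAN via the router
ip link set ib0 up
ip addr add 10.10.10.11/24 dev ib0
ip route add 192.168.1.0/24 via 10.10.10.1
```
Then I'd just run iperf3 between a client and the router (and across the routed hop) to see where it actually lands between my 3-4 Gbit/s target and line rate.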
Thank you guys