I wonder if we can flash those to stock Mellanox firmware as well.

I followed these instructions and flashed my cards with the stock firmware: Flash OEM (IBM) Mellanox ConnectX-3 EN with stock firmware?

I hear good things about the forums you found those instructions on.

I crossflashed mine, no issues whatsoever :-D
@T_Minus All Mellanox ConnectX-3 cards use the same silicon and support the same features. The only differences are the speeds (FDR, FDR10 or QDR) and the port counts.

I would say the most interesting feature is RDMA over Converged Ethernet (RoCE). With RoCE the NICs can write directly to RAM without involving the CPU or OS, freeing up those resources.

I think they should all do RoCE, as long as they are capable of Ethernet at all and not IB-only.
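For a feel of how that direct-to-RAM write works in practice, here is a minimal, untested sketch of posting an RDMA write with libibverbs, the standard verbs API these cards speak. All the connection setup is omitted (creating the CQ/QP, moving the QP to RTS, exchanging the peer's buffer address and rkey out of band), so remote_addr and remote_rkey below are placeholders for values the peer would have to send you.

/* Minimal RDMA-write sketch with libibverbs (link with -libverbs).
 * QP setup and the out-of-band exchange of the peer's address/rkey
 * are omitted; qp, remote_addr and remote_rkey are placeholders. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register the buffer so the NIC is allowed to DMA it directly. */
    char *buf = malloc(4096);
    strcpy(buf, "hello over RoCE");
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096, IBV_ACCESS_LOCAL_WRITE);

    /* ... create CQ/QP and bring the QP to RTS here (omitted) ... */
    struct ibv_qp *qp = NULL;     /* placeholder, would be a connected QP */
    uint64_t remote_addr = 0;     /* peer's registered buffer, sent out of band */
    uint32_t remote_rkey = 0;     /* peer's rkey, sent out of band */

    struct ibv_sge sge = {
        .addr = (uintptr_t)buf, .length = 4096, .lkey = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .opcode = IBV_WR_RDMA_WRITE,   /* the NIC writes into remote RAM */
        .sg_list = &sge, .num_sge = 1,
        .send_flags = IBV_SEND_SIGNALED,
        .wr.rdma.remote_addr = remote_addr,
        .wr.rdma.rkey = remote_rkey,
    };
    struct ibv_send_wr *bad = NULL;
    if (qp && ibv_post_send(qp, &wr, &bad))   /* no-op in this sketch */
        perror("ibv_post_send");

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}

The point of the wr.rdma fields is that the NIC itself moves the payload into the remote machine's registered memory; the remote CPU and OS never see the transfer.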
Yes, the 4300 is EN, so RoCE would be the way to go on these for RDMA. I haven't compared RoCE to IB yet, as EN switches with higher port counts are quite rare, but in theory RoCE should be offloaded to the NIC and come very close to IB in terms of latency, performance and overhead on the CPU.
Funny, I was looking at the red tab ones: LOT OF 6 IBM 45W9392 Mellanox Infinaband 40GbE QSFP+ Passive Copper Cable | eBay and LOT OF 6 IBM 45W9377 Mellanox Infinaband 40GbE QSFP+ Passive Copper Cable | eBay
Are those OK? They cost under $20.
With regard to "latency, performance and overhead on the CPU", I should be able to shed some light in about 7-8 months, as I'm writing a thesis on this topic. I have access to two such adapters in two servers I'm using for practical experiments.
One important aspect with 40GbE RoCE is IRQ affinity. This can be tuned using the vendor's own utility.
Other things like NUMA, Intel's HT, Turbo etc. also affect performance (to be confirmed).
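If anyone wants to see what the vendor's affinity utility boils down to: as far as I know, the Mellanox OFED helper scripts essentially write CPU bitmasks into /proc/irq/<n>/smp_affinity for each of the NIC's queue interrupts. A minimal sketch of that mechanism follows; the IRQ number and CPU below are made-up example values, and it needs root.

/* Pin one IRQ to one CPU by writing a hex CPU mask to
 * /proc/irq/<irq>/smp_affinity (requires root). The vendor scripts
 * do essentially this for every queue interrupt of the NIC.
 * IRQ 90 and CPU 2 are made-up example values. */
#include <stdio.h>

static int set_irq_affinity(int irq, int cpu)
{
    char path[64];
    snprintf(path, sizeof path, "/proc/irq/%d/smp_affinity", irq);

    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }

    /* smp_affinity takes a hexadecimal bitmask of allowed CPUs. */
    fprintf(f, "%x\n", 1u << cpu);
    return fclose(f);
}

int main(void)
{
    /* Example: steer IRQ 90 (one NIC queue) to CPU 2. */
    return set_irq_affinity(90, 2) ? 1 : 0;
}

On boxes with more than 32 CPUs the mask is written as comma-separated 32-bit words, which this sketch glosses over.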
Cool topic for a thesis!
Will you compare plain IB vs. RoCE?
For what type of application?
(I would be strongly interested in SRP and iSER.)
Also, it would be really cool to know how latency is affected when
a) two nodes / HCAs are connected directly
b) two nodes are connected via a switch, both IB and EN/RoCE
In the end, it's a shame that EoIB seems to be dead; it would have eliminated the IPoIB oddities regarding bridges...
Cool topic for a thesis. Don't hesitate to share it with us once it's done. We deal with high-traffic video servers pushing tens of Gbps each, and the ability to push several Gbps more from each of the servers is valuable to us. Of course, the biggest problem at the moment is IRQs.