I think the newer CPUs are more efficient with I/O than the older E5-2450 v1s we use. On the other hand, we paid less than $100 each second hand, and I saw 37 Gbit, so I'm far from complaining.

Interesting! My little home setup is able to hit about 37 Gbit with 4 iperf connections pretty consistently.
E5-1650 v3 on the client side and an E5-2628L v4 (12c, 1.7 GHz) on the NAS/server side.
And all that over a 10m QDR DAC cable with everything in ETH mode.
Though I couldn't test "real world" performance, since the fastest storage currently installed is two WD Reds. FreeNAS caching goes up to 800 MB until it has to write that off to disk, lol.
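For anyone wanting to repeat the multi-stream test: a single TCP stream often stalls well below line rate, so iperf's parallel option is the usual trick. A minimal sketch, assuming iperf3 and a made-up server address of 192.168.100.2:

```shell
# On the server/NAS side:
#   iperf3 -s
# On the client side, 4 parallel TCP streams for 30 seconds:
#   iperf3 -c 192.168.100.2 -P 4 -t 30
# iperf3 prints a SUM line; the per-stream rates simply add up.
# Illustrative per-stream numbers (assumed, not measured):
echo "9.1 9.4 9.3 9.2" | awk '{for(i=1;i<=NF;i++)s+=$i; printf "total %.1f Gbit/s\n", s}'
```

Four streams in the ~9 Gbit range each is how a setup like this lands near 37 Gbit total.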
As far as I have read, the HPC people seem to put two cards into their machines, one for each CPU, to avoid crossing the QPI link between CPUs.
So having it on CPU 1 seems like a good idea. Another option might be to set which CPU the software for the NIC runs on, but I have no clue how that would be done, or if it is even possible. It should be, I think.
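Pinning the software to the NIC's socket is possible on Linux. A minimal sketch, assuming an interface name of ens1 (adjust to your system); sysfs reports which NUMA node the card hangs off, and numactl can bind a process to it:

```shell
IFACE=ens1   # assumed interface name for the ConnectX port
if [ -r "/sys/class/net/$IFACE/device/numa_node" ]; then
  node=$(cat "/sys/class/net/$IFACE/device/numa_node")
  echo "NIC $IFACE sits on NUMA node $node"
  # Run the benchmark/server process, and its memory, on that node:
  #   numactl --cpunodebind="$node" --membind="$node" iperf3 -s
fi
```

Interrupts can be steered the same way; Mellanox OFED ships a set_irq_affinity.sh helper for that.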
Gonna have to take a look at my setup this evening.
> Now we're trying to understand why we see 50 MB/sec from ramdisk to ramdisk SCP copy over a 40 Gbit connection

Have you tried other protocols? I read somewhere that scp has static flow-control buffers that could become bottlenecks.
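The static-buffer idea is easy to sanity-check with the bandwidth-delay product: a single SSH channel can carry at most window / RTT regardless of link speed. A rough sketch, with illustrative numbers (both the window size and the RTT below are assumptions, not measurements from this setup; on a direct DAC link the cipher/CPU overhead is the more likely limiter):

```shell
# Throughput ceiling of one flow-controlled channel: window / RTT
window_bytes=2097152   # assumed 2 MiB static channel window
rtt_seconds=0.04       # assumed 40 ms round-trip time
awk -v w="$window_bytes" -v r="$rtt_seconds" \
    'BEGIN { printf "max throughput ~ %.1f MB/s\n", w / r / 1e6 }'
```

With those numbers the ceiling lands in the ~50 MB/s range, which is why a fixed window shows up as a hard cap no matter how fat the pipe is.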
> Does FDR need a different cable?

Yes, it requires QSFP14 cables; "normal" 40 Gbit Ethernet uses QSFP+. QSFP14 transceivers built into DACs use a higher clock than the QSFP+ cables.
> Yes, it requires QSFP14 cables; "normal" 40 Gbit Ethernet uses QSFP+. QSFP14 transceivers built into DACs use a higher clock than the QSFP+ cables.

Would normal IPoIB benefit from a 56G link vs a 40G link, or would it max out at the regular "lower" speed, e.g. 19/20/21 Gbit?
> Would normal IPoIB benefit from a 56G link vs a 40G link

Yes, if your application can max out 40 Gbit/s.
I found a great article about iSCSI, iSER and SRP performance over ETH, IB, etc. iSER uses IPoIB, but will not be limited by IPoIB speed, because of RDMA.
Long time lurker, first time poster... So I have just recently installed HP 649281-B21 cards into a Linux/Debian server and a FreeBSD server, direct link, no switch.
I followed the flashing tutorial here:https://forums.servethehome.com/ind...x-3-to-arista-7050-no-link.18369/#post-178015
And I have the following cable: Mellanox MFS4R12CB-003 Infiniband Cables
But I only get 10G, not 40G.
Is this a firmware config error, or did I get the wrong QSFP cable?
EDIT: The flash tutorial above is outdated. Use this instead: https://forums.servethehome.com/ind...net-dual-port-qsfp-adapter.20525/#post-198015
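When a 40G port comes up at 10G, it helps to confirm what speed actually negotiated and what the cable identifies itself as before re-flashing anything. A quick sketch of the usual checks on the Linux side (the interface name and mst device path are assumptions for this example):

```shell
IFACE=ens1   # assumed interface name for the ConnectX port
if command -v ethtool >/dev/null && ethtool "$IFACE" >/dev/null 2>&1; then
  ethtool "$IFACE" | grep -i speed        # expect "Speed: 40000Mb/s"
  ethtool -m "$IFACE" 2>/dev/null | head  # what the QSFP module/DAC reports itself as
fi
# Port type (IB vs ETH) and link settings live in firmware config, e.g.:
#   mlxconfig -d /dev/mst/mt4099_pci_cr0 query   (device path is an assumption)
```

If the module readout shows a 10G-only or wrong-type cable, it's the cable; if the port type or firmware settings are off, it's config.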