LR4 transceivers usually do not have a minimum length and do not require an attenuator for short distances.

Wonder if the fiber optic cable is not long enough, because that Arista QSFP-LR4-40G is meant for Long Range, multi-kilometer distances.
Device #1:
----------
Device type: ConnectX3
Device: /dev/mst/mt4099_pci_cr0
Configurations: Next Boot
SRIOV_EN False(0)
NUM_OF_VFS 8
LINK_TYPE_P1 ETH(2)
LINK_TYPE_P2 ETH(2)
LOG_BAR_SIZE 3
BOOT_PKEY_P1 0
BOOT_PKEY_P2 0
BOOT_OPTION_ROM_EN_P1 False(0)
BOOT_VLAN_EN_P1 False(0)
BOOT_RETRY_CNT_P1 0
LEGACY_BOOT_PROTOCOL_P1 None(0)
BOOT_VLAN_P1 1
BOOT_OPTION_ROM_EN_P2 False(0)
BOOT_VLAN_EN_P2 False(0)
BOOT_RETRY_CNT_P2 0
LEGACY_BOOT_PROTOCOL_P2 None(0)
BOOT_VLAN_P2 1
IP_VER_P1 IPv4(0)
IP_VER_P2 IPv4(0)
CQ_TIMESTAMP True(1)
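For anyone wanting to reproduce or change a dump like the one above, here is a minimal sketch using the Mellanox MFT tools; the device path is taken from the output above, and the values shown (e.g. forcing both ports to Ethernet) are only illustrative.

# Start the MST driver and list detected devices (requires the MFT package)
sudo mst start
sudo mst status

# Query the current firmware configuration of the ConnectX-3
sudo mlxconfig -d /dev/mst/mt4099_pci_cr0 query

# Example: set both ports to Ethernet (2 = ETH); changes apply on next reboot
sudo mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2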
I connected my NAS to the same ports, with the same cables.

In my experience the Mellanox NICs work with everything that you throw at them, and usually it's the other side or the cables that are problematic.
How did you test the ports on the Ubiquiti switch? -> Can you use the other server to test a direct link with the Mellanox card and the cable?
I was able to get it working: I had to get QSFP-to-SFP+ adapters and use SFP+s, per this article (Using a 40GbE (QSFP+) NIC with a 10GbE Switch (SFP+) (servethehome.com)). I am not terribly familiar with 40Gb Ethernet, however I was surprised that they don't auto-negotiate back to 10Gb.

Try loop linking your ports to see if Ethernet indeed works?
In Linux you can use network namespaces and assign one port to a separate namespace, which will allow you to have both ports on the same subnet to test with something like iperf.
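A minimal sketch of that namespace trick, assuming the two ports show up as the hypothetical interfaces enp3s0f0 and enp3s0f1 and that iperf3 is installed; adjust names and addresses to your system.

# Put the second port into its own network namespace
sudo ip netns add looptest
sudo ip link set enp3s0f1 netns looptest

# Address both ends of the loop on the same subnet
sudo ip addr add 10.99.0.1/24 dev enp3s0f0
sudo ip link set enp3s0f0 up
sudo ip netns exec looptest ip addr add 10.99.0.2/24 dev enp3s0f1
sudo ip netns exec looptest ip link set enp3s0f1 up
sudo ip netns exec looptest ip link set lo up

# iperf3 server inside the namespace, client in the default namespace
sudo ip netns exec looptest iperf3 -s -1 &
iperf3 -c 10.99.0.2 -P 4 -t 30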
Only the first 10GbE channel will link up; the rest won't work because the CX3 doesn't support breaking out 40GbE into 4 separate 10GbE links.

With these cards, if I buy a QSFP+ breakout cable to 4 SFP+s, how do the connections work? Do I set up a LAGG bond on the switch with the four SFP+s? Does the NIC see all four SFP+ breakouts as separate connections? I don't have a cable yet, and have never used one, so forgive my ignorance.
Thanks, I didn't know that. I knew it consists of 4x10, but I faced the same problem trying to connect various transceivers, MM and SM, in a Ruckus 7150 (10GbE) to an Arista 40GbE-UNIV transceiver in the MCX354. The reason I couldn't is likely that those use fairly unique wavelengths and probably only link up with each other, so even if they had otherwise been perfectly compatible, they might not have linked up because of that. I do have a QSFP+-to-SFP+ adapter now, but I already solved it the other way, using a longer loop while still pairing Arista to Arista transceivers. Thanks for the insight.

Only the first 10GbE channel will link up; the rest won't work because the CX3 doesn't support breaking out 40GbE into 4 separate 10GbE links.
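If you want to confirm what a plugged-in module actually is (vendor, part number, wavelength) before blaming compatibility, ethtool can usually dump the transceiver EEPROM; a sketch, assuming the port is a hypothetical enp3s0f0 and the driver supports module info.

# Show link state and negotiated speed for the port
ethtool enp3s0f0

# Dump the module's EEPROM (vendor, part number, wavelength, etc.)
ethtool -m enp3s0f0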
They can work like that in switches, but not in NICs.

With these cards, if I buy a QSFP+ breakout cable to 4 SFP+s, how do the connections work? Do I set up a LAGG bond on the switch with the four SFP+s? Does the NIC see all four SFP+ breakouts as separate connections? I don't have a cable yet, and have never used one, so forgive my ignorance.
PCI-E gen 3 is 8 Gb/s per lane, so in a x4 link you're down to 32 Gb/s max theoretical right there. And even then it won't be quite that high due to a little bit of coding overhead.
I don't recommend changing the MTU unless it's a direct computer to computer link.
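If you do raise the MTU on a direct link, it has to be done on both ends, and it's worth verifying that jumbo frames actually pass; a quick sketch, with enp3s0f0 and the peer address 10.99.0.2 as placeholders.

# Raise the MTU on this end (repeat on the other machine)
sudo ip link set dev enp3s0f0 mtu 9000

# Verify: 8972 bytes of ICMP payload + 28 bytes of headers = 9000, fragmentation forbidden
ping -M do -s 8972 -c 4 10.99.0.2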
I also recommend being honest about what your NAS can do read/write-wise. Unless it can really do more than 16 Gb/s (2 GB/s), don't worry about your link performance - it's good enough. ;-)
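One way to sanity-check that is to benchmark the storage itself rather than the network; a rough sketch with fio, where the target path and sizes are only placeholders (use a test file larger than RAM so caching doesn't inflate the number).

# Sequential read test: sustained ~2 GB/s is what it takes to saturate 16 Gb/s
fio --name=seqread --filename=/mnt/tank/fio.test --rw=read --bs=1M --size=64G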
Hiya.. well I am running Linux-based RouterOS on both machines.. and doing a bandwidth test I cannot go over 16Gbps aggregate on both sides.. as if hardware was limiting it somewhere.. attached pictures below.. on TCP I cannot get past the 16Gbps aggregate barrier, RX and TX together.. but on UDP I can get over 19Gbps aggregate.. as in pic2 below

IPerf on Windows is not the same as iperf
That's because the Windows implementation of iperf is not optimized.
Boot Linux from a live CD/USB on the Windows host and try iperf, or boot Windows on both machines and use ntttcp for network testing.
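For reference, a typical multi-stream iperf3 run looks something like the following; a single TCP stream often can't fill a 40Gb link, so -P helps. The address is a placeholder, and on Windows Microsoft's ntttcp is the usual equivalent.

# On the receiving machine
iperf3 -s

# On the sending machine: 8 parallel TCP streams for 30 seconds
iperf3 -c 192.168.10.2 -P 8 -t 30

# Test the reverse direction without swapping roles
iperf3 -c 192.168.10.2 -P 8 -t 30 -R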
PCIe 3.0 is 8Gbps *per lane*. x8 (which these cards are) is 8 lanes. That's 64Gbps, and PCIe is full duplex, so that's 64Gbps in and out simultaneously.

So I confess I am now confused... if the PCIe 3.0 x16 slot is hardware limited to 16Gbs, how can we achieve 40Gbs on these cards?
Hiya. Just a guess: you may have one card in a system's PCIe slot that's only gen 2 x4 lanes, which would net you a max of 16 Gb/s even though the 40Gb link is active.
Hiya, strange.. I can only get 32Gbps now.. only 16Gbps down and 16Gbps up at the same time.. only these cards plugged into PCIe 3.0 x16, no other fiber NIC cards.. will try to play with the MTU now, will raise it to 9000 just to check

PCIe 3.0 is 8Gbps *per lane*. x8 (which these cards are) is 8 lanes. That's 64Gbps, and PCIe is full duplex, so that's 64Gbps in and out simultaneously.
I am confused.. because the docs I have found only stated 16Gbps. Is that per lane? Where do I find out how many lanes I have: is it based on the CPU model, or based on the server hardware spec for the slots?

PCIe 3.0 is 8Gbps *per lane*. x8 (which these cards are) is 8 lanes. That's 64Gbps, and PCIe is full duplex, so that's 64Gbps in and out simultaneously.
You are calculating based on GIGABYTES for the PCIe lanes; PCIe 3.0 is 8 GIGABITS per lane.

I am confused.. because the docs I have found only stated 16Gbps. Is that per lane? Where do I find out how many lanes I have: is it based on the CPU model, or based on the server hardware spec for the slots?
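To put rough numbers on that, a back-of-the-envelope check, treating each lane as its raw signalling rate times encoding efficiency (usable throughput is a bit lower still because of packet overhead):

# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~7.88 Gb/s per lane
echo "scale=2; 8 * 128 / 130 * 8" | bc    # x8 link: ~63 Gb/s each direction
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding -> 4 Gb/s per lane
echo "scale=2; 5 * 8 / 10 * 4" | bc       # x4 link: 16 Gb/s each direction (the earlier guess)
# The "16 GB/s" spec-sheet figure for x16 is GigaBYTES: ~7.88 Gb/s * 16 lanes / 8 bits ~= 15.8 GB/s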
Hi DaveLTX, thanks, I will give it another go tomorrow, as I am using MikroTik RouterOS installed on both servers.. I have made all the custom BIOS options available, I have upgraded to the latest BIOS on both servers, and I have removed all the other NICs from the servers, leaving just the 100G MCX455 installed on the R620 and the MCX354A connected on the other R420 server.

You are calculating based on GIGABYTES for the PCIe lanes; PCIe 3.0 is 8 GIGABITS per lane.
All networking equipment is based on GIGABITS
x16 is 16GB/s; a two-port 40GbE card needs at most a x8 slot. There is no limitation from the PCIe slots, it's something else.
hope that actually clears it up for you
The 2697 v2 has 40 PCIe lanes; how it's split up depends on the motherboard. The 2470 v2, on the other hand, has only 24 lanes. But both systems have plenty of PCIe lanes with dual CPUs (2x individual CPUs).
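And to answer the "where do I find out how many lanes I have" question directly: on Linux you can read the negotiated link width and speed straight from the card; a sketch, where the PCI address is a placeholder you'd take from the first command's output.

# Find the Mellanox card's PCI address
lspci | grep -i mellanox

# Compare what the slot advertises (LnkCap) with what was actually negotiated (LnkSta)
sudo lspci -s 03:00.0 -vv | grep -E 'LnkCap|LnkSta'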