Dual port 10GbE PCIe cards for $88 OBO, free ship


PigLover

Moderator
Jan 26, 2011
Great reference here: http://blogs.technet.com/b/josebda/...tx-3-using-10gbe-40gbe-roce-step-by-step.aspx
The blog is written for ConnectX-3, but if the ConnectX-2 supports RDMA the steps should be the same.

Summary:

Windows RDMA over Converged Ethernet:

- Update to latest Mellanox firmware on both ends
- RDMA is only supported on Windows Server. You get no RDMA if either the server or the client is running Windows 7/8/8.1.
- You must have Ethernet Flow Control enabled on the adapter cards and on the Ethernet Switch.
- (a technicality) You need Data Center Bridging (DCB) support on your switch
----> it works without it, but plain Ethernet Flow Control may cause performance issues for non-RDMA traffic without DCB
- Use PowerShell to confirm Network Direct (RDMA) is active globally and on the network adapters:

Get-NetOffloadGlobalSetting | Select NetworkDirect

Get-NetAdapterRDMA

Get-NetAdapterHardwareInfo
Transfer some files and check the Windows performance counters 'RDMA Activity' and 'SMB Direct Connection'.
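
If those checks come back negative, a couple of generic fixes are worth trying first - just a sketch, nothing specific to these cards, and the adapter names are only examples (they happen to match the ones used later in this thread):

# Turn Network Direct back on globally if it reports Disabled
Set-NetOffloadGlobalSetting -NetworkDirect Enabled

# Re-enable RDMA on the ports if Get-NetAdapterRDMA shows Enabled = False
Enable-NetAdapterRdma -Name "Ethernet 4","Ethernet 5"

# List what lives under the two performance counter sets mentioned above
Get-Counter -ListSet "RDMA Activity","SMB Direct Connection" | Select-Object -ExpandProperty Counter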
 

rum1k

New Member
Jan 10, 2014
I picked up a couple of cards to try 10G speeds and so far everything is working thanks to this post. The only thing I cannot confirm is whether RDMA is working on them. Does anyone know if RDMA is supported on this model?
Yes they have RDMA support, but...

1) ConnectX-2 cards did not fully support Priority Flow Control (PFC) in Windows 2012 (Win2008 is OK)

2) SMB Direct RoCE does not work without DCB/PFC

3) This means that you need a switch with DCB for RDMA even if you have ConnectX-3 cards.
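
If you want to see what the card and driver actually report for DCB/PFC on the Windows side, this is a quick check (the adapter name is just an example, and it may need QoS/DCB enabled on the adapter first):

# Shows the operational and local QoS/PFC state the adapter reports
Get-NetAdapterQos -Name "Ethernet 4"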
 

PigLover

Moderator
Jan 26, 2011
Yes they have RDMA support, but...

1) ConnectX-2 cards did not fully support Priority Flow Control (PFC) in Windows 2012 (Win2008 is OK)

2) SMB Direct RoCE does not work (right) without DCB/PFC

3) This means that you (probably want a) switch with DCB for RDMA even if you have ConnectX-3 cards.
Quote edited slightly. RoCE does not work (right) without DCB - correct - but having DCB in your switch is not enforced by Windows 2008/2012/2012 R2, so you can run RoCE without it.

What do you risk? DCB implements per-flow flow control over Ethernet. If your switch does not support DCB, the per-flow flow controls are ignored and your host may be allowed to transmit a packet that cannot be delivered due to queue overflows in the switch. That translates loosely into "packet loss", and RDMA is more affected by packet loss than normal IP traffic. It still works - but you may see performance problems on a network that otherwise appears to be operating "normally".

In a small-ish network or lab, the chance of link congestion on your (probably single layer-2 switch) Ethernet is quite low - low enough to be ignorable for the most part - and RoCE will work normally even without a DCB switch. It's almost guaranteed to work right in the 10GbE point-to-point configurations being tested by STH readers. I wouldn't recommend this for production or even a complex lab, but the blanket statement that "it won't work without DCB" is misleading.

See here for a supporting view: SMB Direct RoCE Does Not Work Without DCB/PFC | Working Hard In IT
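
For completeness, if you do have PFC end to end, the Windows-side DCB setup usually looks roughly like this - a sketch assuming priority 3 is used for SMB Direct and the same example adapter names as above; your switch has to be configured to match:

# Install the DCB feature on Windows Server
Install-WindowsFeature Data-Center-Bridging

# Tag SMB Direct traffic (port 445) with 802.1p priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control only for that priority, leave the rest lossy
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Apply QoS/DCB on the adapters
Enable-NetAdapterQos -Name "Ethernet 4","Ethernet 5"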
 

mrkrad

Well-Known Member
Oct 13, 2012
It is like FCoE: the SCSI protocol was not designed to lose packets, so it is the wrong stack to be losing packets in, and with VLANs there is no other way to set up class of service per VLAN to guarantee your data gets where it is going while another VLAN transmits slowly and jams up the buffers.

DCBX is mandatory for protocols that need guaranteed lossless Ethernet. SCSI is an area where this is important. iSCSI lets TCP/IP handle packet loss and retransmission, so you end up with poor performance and congestion-management overhead while your data is being put back together; but even iSCSI will go much faster with DCBX.

Per-VLAN flow control is just not possible without DCBX, and port-based flow control is not going to work out so well with multiple VLANs!
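
On the Windows side, that per-priority bandwidth guarantee maps to an ETS traffic class - a sketch, assuming the same priority-3/SMB setup as above and a switch configured to match:

# Reserve a share of the link for the SMB/RoCE priority via ETS
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Use the local DCB settings instead of accepting whatever the switch advertises over DCBX
Set-NetQosDcbxSetting -Willing $false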
 

nickmade

New Member
Mar 8, 2011
6
0
1
I appreciate all the feedback. From the outputs below I don't think RDMA is working for me. This is a direct connection between two 2012 R2 servers. With disk I get about 330 MB/s, and ATTO against a RAM disk averages about 500 MB/s. Not sure if this is normal or whether it should be higher with 10G.

PS C:\Users\Administrator> Get-SmbServerNetworkInterface

Scope Name  Interface Index  RSS Capable  RDMA Capable  Speed                IpAddress
----------  ---------------  -----------  ------------  -----                ---------
*           20               True         False         10 Gbps              192.168.10.2
*           19               True         False         18446744073.7096...  192.168.10.1


PS C:\Users\Administrator> Get-SmbConnection

ServerName    ShareName  UserName             Credential           Dialect  NumOpens
----------    ---------  --------             ----------           -------  --------
192.168.10.4  e$         FILER\Administrator  FILER\administrator  3.02     3
192.168.10.4  ramdisk    FILER\Administrator  FILER\administrator  3.02     1


PS C:\Users\Administrator> Get-SmbMultichannelConnection

Server Name   Selected  Client IP     Server IP     Client Interface Index  Server Interface Index  Client RSS Capable  Client RDMA Capable
-----------   --------  ---------     ---------     ----------------------  ----------------------  ------------------  -------------------
192.168.10.4  True      192.168.10.2  192.168.10.4  20                      22                      True                False


PS C:\Users\Administrator> Get-NetOffloadGlobalSetting | Select NetworkDirect

NetworkDirect
-------------
Enabled
PS C:\Users\Administrator> Get-NetAdapterRDMA

Name        InterfaceDescription                     Enabled
----        --------------------                     -------
Ethernet 4  Mellanox ConnectX-2 Ethernet Adapter     True
Ethernet 5  Mellanox ConnectX-2 Ethernet Adapter #2  True


PS C:\Users\Administrator> Get-NetAdapterHardwareInfo

Name        Segment  Bus  Device  Function  Slot  NumaNode  PcieLinkSpeed  PcieLinkWidth  Version
----        -------  ---  ------  --------  ----  --------  -------------  -------------  -------
Ethernet 4  0        2    0       0                         5.0 GT/s       4              1.1
Ethernet 5  0        2    0       0                         5.0 GT/s       4              1.1
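
Since Get-NetAdapterRDMA shows Enabled but SMB still reports RDMA Capable = False, a few things seem worth checking next - none of this is specific to these cards, and the adapter name is just an example:

# Client-side view of the interfaces (the server-side output above already shows RDMA Capable = False)
Get-SmbClientNetworkInterface

# See whether the driver exposes a NetworkDirect/RDMA advanced property and whether it is turned on
Get-NetAdapterAdvancedProperty -Name "Ethernet 4" | Format-Table DisplayName, DisplayValue

# Make SMB re-evaluate its multichannel interfaces after changing anything
Update-SmbMultichannelConnection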