Brocade 1020 CNA 10GbE PCIe Cards


TuxDude

Well-Known Member
Sep 17, 2011
It seems that's an alternative to me as well.
But I'm not into the cable stuff.

I would like to establish a direct connection between 2 Server 2012 R2 systems.
Is it possible to use these cables? I'm not sure. It reads like a card-to-switch solution.

QSFP to SFP+?
QSFP is a quad SFP+ in one port. To extend my post above: where SFP is good for 1G and SFP+ for 10G, QSFP gets you a 40G link. I have heard that you can get QSFP breakout cables to split a single 40G port into 4x 10G ports, but I have never seen or used one. That may also be a feature supported by relatively few devices.
 

PGlover

Active Member
Nov 8, 2014
Since you are using Windows 2008 and Windows 7, this shouldn't be an issue anymore. SMB Multichannel only exists in Server 2012 and Windows 8. For my tests I simply disabled it.
I will have some Windows 8 machines. Can you provide instructions on how to disable Multichannel?
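
Edit: from what I've read, the switch for this lives in PowerShell on the Windows 8 / Server 2012 side (the cmdlets don't exist on Windows 7 / 2008 R2). Something like this from an elevated prompt should do it; untested on my boxes, so take it as a sketch:

PS C:\> Set-SmbClientConfiguration -EnableMultiChannel $false
PS C:\> Set-SmbServerConfiguration -EnableMultiChannel $false
PS C:\> Get-SmbClientConfiguration | Select EnableMultiChannel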
 

PGlover

Active Member
Nov 8, 2014
Need help... I can't find any article on the internet about how to change the TCP window size on Server 2008 R2 and Windows 7.
 

legen

Active Member
Mar 6, 2013
Sweden
Need help... I can't find any article on the internet about how to change the TCP window size on Server 2008 R2 and Windows 7.
If I'm not mistaken, the window size is set per application. You can only enforce the maximum size in Windows, not force it for all applications. I.e., you don't fix the Windows receive window size globally, you fix it for CIFS/SMB/NFS etc.
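What you can tune globally on 2008 R2 / Windows 7 is the receive-window auto-tuning level; the window itself you pass to the application, e.g. iperf's -w flag (which only affects iperf, not your file copies). Roughly like this, as a sketch:

C:\> netsh interface tcp set global autotuninglevel=normal
C:\> netsh interface tcp show global
C:\> iperf.exe -c 192.168.x.x -w 1M -P 10 -f MBytes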
 

PGlover

Active Member
Nov 8, 2014
If I'm not mistaken, the window size is set per application. You can only enforce the maximum size in Windows, not force it for all applications. I.e., you don't fix the Windows receive window size globally, you fix it for CIFS/SMB/NFS etc.
So how do I fix it for the things you mentioned...
 

ServeMe

New Member
Nov 9, 2014
Berlin, Germany
I had a Brocade Twinax Cable 10Gig 3M Active FCoE 58-1000027-01 that worked just fine on the Mellanox cards. So a longer Brocade cable may be a viable option.

The reason I bought the Mellanox cable was just in case the Brocade cable didn't work when the Mellanox cards came in.

Thank you. This helps a lot. :)

What can you tell me about the active/passive cables?
 

ServeMe

New Member
Nov 9, 2014
Berlin, Germany
QSFP is a quad SFP+ in one port. To extend my post above: where SFP is good for 1G and SFP+ for 10G, QSFP gets you a 40G link. I have heard that you can get QSFP breakout cables to split a single 40G port into 4x 10G ports, but I have never seen or used one. That may also be a feature supported by relatively few devices.

I found some of these cables in the Mellanox online shop. It seems there are some switches out there that, as you mentioned, can take the 4 pipes and aggregate them / fail over between switches.
Too much for me at the moment. :)
 

ServeMe

New Member
Nov 9, 2014
Berlin, Germany
Hey guys,

Thanks to you and this thread I bought two cards and connected them directly with an original active Brocade cable.
Brocade 10G 5M Twinax Active FCoE 5m Cable 58-1000023-01

I just tested a little bit, but I got up to 650 MB/s between 2 Windows 2012 R2 servers.
It's maybe not the best, but the smaller server is only an HP MicroServer Gen8 and not a high-performance machine.

That's enough for me for the moment. It's only a PoC for my small testing environment.

Update:

With iperf I could get close to 10 Gb/s:

C:\iperf-2.0.5-2-win32>iperf.exe -c 192.168.x.x -P 30 -f MBytes -t 1000000
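
The other side just needs the plain iperf listener, something like:

C:\iperf-2.0.5-2-win32>iperf.exe -s -f MBytes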



I'm going to hunt for some Cisco ACU cables longer than 5 meters, probably 7 and 10.
I'll let you know what comes around.

As I understood Kristian, they should do it.

Btw.

For a rough test you should also consider using netio from Kai Uwe Rommel.
It's similar to iperf and it's worth taking as a second shot/opinion. ;)
 

ServeMe

New Member
Nov 9, 2014
Berlin, Germany
Can you please explain your comment about "not to mix with 1Gbps nic, with multipath there were more restriction in speed"? I am using the 1Gbps onboard NIC in my server as well as the 10Gbps 1020 card. What can be done so that the 2 NICs (1 and 10Gbps) can coexist?

What he probably meant is not to use multipath (via iSCSI, for example) with different speeds.

Usually, if you take two pipes, there is a simple round robin in use.
For mixed speeds you should change that to failover, so the primary active path is the fast pipe (the 10G) and the secondary, aka failover, link is the 1G pipe.

For SMB Multichannel it usually doesn't matter. Especially with Server 2012 R2, the system will always take the biggest pipe and drop the slowest connections.
There were heaps of demos going around where guys added a 10G connection to a 1G connection during simple file transfer tests, and you could clearly see that. :)
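
If you want to see which links SMB Multichannel is actually using, a quick check from PowerShell on the 2012 side looks roughly like this (run it while a transfer is going):

PS C:\> Get-SmbMultichannelConnection
PS C:\> Get-SmbClientNetworkInterface

The second one also shows the link speed and the RSS/RDMA capability per NIC.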
 

Dk3

Member
Jan 10, 2014
SG
What he probably meant is not to use multipath (via iSCSI, for example) with different speeds.

Usually, if you take two pipes, there is a simple round robin in use.
For mixed speeds you should change that to failover, so the primary active path is the fast pipe (the 10G) and the secondary, aka failover, link is the 1G pipe.

For SMB Multichannel it usually doesn't matter. Especially with Server 2012 R2, the system will always take the biggest pipe and drop the slowest connections.
There were heaps of demos going around where guys added a 10G connection to a 1G connection during simple file transfer tests, and you could clearly see that. :)
I remember I previously tested with a virtual switch and SMB dropped to 1G NIC speed even on the 10G NIC. IIRC vEthernet supports neither RDMA nor RSS, so it will not just go through the largest pipe.

Please do correct me if I'm wrong, as I performed the test long ago. For now I'm only using it to run iSCSI.
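
Something like this should show what the vSwitch adapter reports (the adapter name is just whatever your Hyper-V switch is called, so adjust it):

PS C:\> Get-NetAdapterRss -Name "vEthernet (External)"
PS C:\> Get-NetAdapterRdma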
 

ServeMe

New Member
Nov 9, 2014
Berlin, Germany
I remember I previously tested with a virtual switch and SMB dropped to 1G NIC speed even on the 10G NIC. IIRC vEthernet supports neither RDMA nor RSS, so it will not just go through the largest pipe.

Please do correct me if I'm wrong, as I performed the test long ago. For now I'm only using it to run iSCSI.

You're probably right. I could see it myself with the Brocade cards yesterday.
Maybe it's about the RDMA, but I'm only guessing right now.
But there have to be circumstances where the big pipe wins after all.
I can't figure out when that happens.

Update:

After playing around a bit with 10G / 1G combinations I can say it's as I described above.

The SMB Multichannel takes the biggest pipe sooner or later.

I have a couple of servers with different connection speeds, and every time I transfer some big stuff from one to another there's a point where the transfer changes (if it isn't there already) to the 10G connection.

The exact time until the handover varies, but it always happened.

Technical details:

2 x Intel 1G connections
1 x Intel X540/X520 or Brocade 1020 10G pipe

None of them has (k)RDMA features.
 

PGlover

Active Member
Nov 8, 2014
I am about to give up on the Brocade 1020 cards. Based on my iperf tests, in order to get anywhere near 10G I need to change the TCP window size, and even then I am only getting 4 to 6G. Running the iperf test with the default window size (64K), I get terrible results.

What to do next? I have spent so much time and money trying to get a 10G connection from my VM hosts to my SAN server. InfiniBand did not work out either. Need help.
 

ServeMe

New Member
Nov 9, 2014
Berlin, Germany
I am about to give up on the Brocade 1020 cards. Based on my iperf tests, in order to get anywhere near 10G I need to change the TCP window size, and even then I am only getting 4 to 6G. Running the iperf test with the default window size (64K), I get terrible results.

What to do next? I have spent so much time and money trying to get a 10G connection from my VM hosts to my SAN server. InfiniBand did not work out either. Need help.

As someone in this thread already mentioned, you should consider running your iperf tests with more threads.
I'm not sure, but in the screenshot you posted it looks like you only used the default (1 thread), didn't you? Maybe I'm wrong.

I always have to use around 10, 20 or more threads (the -P option for iperf) to get close to 10G, without changing TCP window sizes at all.
I can't give you sources, but I'm pretty sure changing window sizes isn't as useful as you might think.

It didn't matter what kind of card/technique I was using: 10G Intel Ethernet stuff vs. Brocade.


In the last tests I ran with the Brocade cards between 2 Windows Server 2012 R2 machines, I chose this to get to 10G:

C:\iperf-2.0.5-2-win32>iperf.exe -c 192.168.x.x -P 30 -f MBytes -t 1000000


Let us know what happened.


Update:


You're using 2008 R2 and Windows 7?

Maybe there are limitations.

Another problem can be the hardware itself.

Maybe there are limitations of the PCIe slot, RAM or CPU in your systems.

PCIe 2.0 with at least 4 lanes should be fast enough, though.
The Brocades work with PCIe 2.0 and 8 lanes, so the total slot throughput is well above 10G.
Windows 7 also runs with more energy-saving options enabled.
Did you try turning all of that off in the hardware and in Windows?

Maximum power plan in Windows and hardware SpeedStep, for example?
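
A quick way to check and force the power plan (if I remember right, SCHEME_MIN is the alias for the High performance plan; verify with the list first):

C:\> powercfg /list
C:\> powercfg /setactive SCHEME_MIN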

It would be interesting to know exactly what you use on both sides.
 

bleomycin

Member
Nov 22, 2014
I'm very interested in picking up some of these cards for use between my NAS (Ubuntu currently, but I don't care what distro it ends up being so long as ZoL runs fine) and my desktop (Windows 8.1). I want to verify that others have been able to run this combo with SFP+ over fiber between the two without issue and with good performance. I've noticed that a lot of you need many iperf threads to achieve near-10GbE speeds; that doesn't sound ideal, especially since Samba between my Windows client and NAS is going to be single-threaded.

I see that this sfp has been recommended: Brocade 10G-SFPP-SR 10GBASE-SR SFP+ 850nm 300m Transceiver - $18

As well as this sfp: SFP Transceivers Brocade IBM Compatible 57 0000075 01 10G850NM500M 2yrs Warraty | eBay

I'm more interested in the eBay model because Fiberstore takes so long to get here due to shipping. Thanks for any help!
 

DarkOrb

New Member
Jul 24, 2014
Wellington, New Zealand
I'm very interested in picking up some of these cards for use between my NAS (Ubuntu currently, but I don't care what distro it ends up being so long as ZoL runs fine) and my desktop (Windows 8.1). I want to verify that others have been able to run this combo with SFP+ over fiber between the two without issue and with good performance. I've noticed that a lot of you need many iperf threads to achieve near-10GbE speeds; that doesn't sound ideal, especially since Samba between my Windows client and NAS is going to be single-threaded.
To be honest, the limiting factor is likely to be how quickly your disks can shift information to you and the CPU grunt of the device you are using for the NAS. The Samba service isn't the best for CPU utilisation (in my opinion), and given that 10Gbps is around 1,250 MB/sec, you'd need a bunch of disks that could keep up with it at those speeds.

Just making sure the expectations are set right before you get disappointed; I don't recall seeing that mentioned before in the thread (but it has been a while since I last read over it) :)
 

mrkrad

Well-Known Member
Oct 13, 2012
10Gb Ethernet is basically a great way to make many small pipes (VMs) go faster than gigabit, but it is not at all easy to make one single pipe go 10Gbit!
 

Dk3

Member
Jan 10, 2014
SG
To me, a dual-port 10Gb card at this price is definitely more worth it than getting a quad-port 1Gb NIC, especially for storage. For my lab I'm simply putting 2 Brocades in my SAN, which gives it 4 ports supporting 4 ESXi hosts.
 

TuxDude

Well-Known Member
Sep 17, 2011
10Gb Ethernet is basically a great way to make many small pipes (VMs) go faster than gigabit, but it is not at all easy to make one single pipe go 10Gbit!
It is hard to make a single pipe go 10G - there are a lot of other factors that could limit your performance. Probably storage bandwidth for a lot of us in here, but it could be other things too.

But as the price keeps coming down, it is becoming cost-effective to use a 10G card simply to make a single pipe go faster than 1G. Bonding multiple connections together adds complexity and often still has limits on how the traffic is spread across the multiple links. A single 10G link is by far the easiest solution, even if you only need 2G or 4G or whatever bandwidth between a few things.