Brocade 1020 CNA 10GbE PCIe Cards


mattlach

Active Member
Aug 1, 2014
Actually, the paravirtual adapter may even have lower overhead than a passthrough adapter in some scenarios. It would be interesting to test.
I have never tried this with the Brocade adapter, but I have a ton of Intel Pro/1000 PT dual and quad adapters that I have used both ways. For those adapters, just using simple pings to test, a native adapter always had the lowest latency, and a DirectPath I/O passthrough adapter was almost as good. Using the VMware e1000 emulated NIC to connect to a vSwitch and then out through the Intel adapter added about 100 µs on average (note: microseconds, not milliseconds, so ~0.1 ms). That is about the same amount of latency I have found a good-quality gigabit switch adds.

So I guess that, unless your application is EXTREMELY sensitive to latency, either configuration will probably work fine, provided the Brocade adapter doesn't behave very differently when passed through than the Intel gigabit adapters do, which it very well might.
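If anyone wants to repeat the comparison, I wasn't doing anything fancier than pinging at a short interval and reading the average off the rtt summary line. A rough sketch (the address is a placeholder, and sub-0.2 s intervals need root on Linux):
Code:
# 1000 pings, 10 ms apart, then read the avg from the "rtt min/avg/max/mdev" line
sudo ping -c 1000 -i 0.01 192.168.2.10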
 

mattlach

Active Member
Aug 1, 2014
So it looks like this may be a problem with my install, probably due to how I upgraded it.

Mint recommends against doing distribution upgrades and insists on clean installs instead. (IMHO the biggest downside to Mint; otherwise it would be perfect.)

I upgraded it from 16 -> 17 -> 17.1, and something was obviously left behind.

When I booted the 17.1 Mint live install CD, the Brocade just worked, like others reported, but my install behaves like Ubuntu 13.04 and 13.10.

Seems like it might be time for a fresh install.
Silly me, it seems I misunderstood how Mint selects kernel versions. I was still on the 3.11 kernel, which explains it.
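In case anyone else trips over the same thing, checking and bumping the kernel only takes a minute. This is from memory and assumes a Mint 17.x install on the Ubuntu 14.04 base, so treat the package name as a suggestion:
Code:
# see which kernel the install is actually booting
uname -r
# pull in a newer HWE kernel series, then reboot
sudo apt-get install linux-generic-lts-utopic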
 

snazy2000

New Member
Jan 28, 2014
I've been looking at these cards for a while; every time I saw some going cheap I didn't have the funds to get them. :( I just want to make sure that what I'm thinking is possible. I don't have a switch and don't want a 10Gb switch (well, I would like one, but they're too expensive). I have 3 servers and basically want a ring layout: HV1 -> fileserver -> HV2 -> HV1. Would this work with these cards, and is it easy to set up? Thanks
 

firenity

Member
Jun 29, 2014
I have 3 servers and basically want a ring layout: HV1 -> fileserver -> HV2 -> HV1. Would this work with these cards, and is it easy to set up?
This should be possible, yes.
It doesn't even have to be a ring network, though; with three machines you could simply connect each node directly to the other two, which is more reliable.
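Setup-wise there isn't much to it either: each direct link just gets its own tiny subnet. A rough sketch with made-up addresses and interface names (run the matching command on the far end of each link, and put the equivalent in your distro's network config so it sticks across reboots):
Code:
# on HV1, assuming the 10GbE port shows up as eth2: link to the fileserver
sudo ip addr add 10.10.1.1/30 dev eth2
sudo ip link set eth2 up
# on the fileserver's port facing HV1
sudo ip addr add 10.10.1.2/30 dev eth2
sudo ip link set eth2 up
# repeat with a different /30 (e.g. 10.10.2.0/30) for each of the other links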
 

snazy2000

New Member
Jan 28, 2014
This should be possible, yes.
It doesn't even have to be a ring network, though; with three machines you could simply connect each node directly to the other two, which is more reliable.
I wanted it like that because I was going to have two different sets of traffic going between different servers, so:

HV1 -> HV2 would be migration traffic
HV1 -> F1 would be iSCSI or whatever
HV2 -> F1 would be iSCSI or whatever
 

nry

Active Member
Feb 22, 2013
I have been doing some testing of the Brocade 1020 with my existing kit thanks to @TallGraham for lending me a Brocade 1020 and Brocade 3M active 58-1000027-01 leads!

Here are the cables/SFP+ modules I tried with the Brocade 1020:
- Brocade 3M active 58-1000027-01 - OK
- Dell SFP-H10G DAC - FAIL
- Intel AFBR-703SDZ-IN2 SFP+ module - FAIL
- Myricom 10G-SFP-SR SFP+ module - FAIL
- HP 10Gb SR SFP+ module 455885-001 - FAIL

I was slightly disappointed that the SFP-H10G leads failed to work with the NIC, as I was really hoping to pick up a bunch of these cards and stick to Dell DAC leads throughout my whole setup.

Also tested the Brocade 3M active 58-1000027-01 with various kit with no issues:
- Intel X520-DA2 - OK
- Myricom PCIe - OK
- Dell 8024F - OK
- Dell 5524 - OK
- HP NC552SFP - OK

Performance-wise (out-of-the-box configuration) it was on par with the Intel X520 NICs:
Intel x520 (linux) > Intel SFP+ > Fibre 7M > Intel SFP+ > 8024F > Brocade 3M > Intel x520 (win 7) - 2.66Gbits/sec
Intel x520 (linux) > Intel SFP+ > Fibre 7M > Intel SFP+ > 8024F > Brocade 3M > Intel x520 (linux) - 9.41Gbits/sec
Intel x520 (linux) > Intel SFP+ > Fibre 7M > Intel SFP+ > 8024F > Dell DAC 3M > Intel x520 (linux) - 9.47Gbits/sec
Intel x520 (linux) > Intel SFP+ > Fibre 7M > Intel SFP+ > 8024F > Brocade 3M > Brocade 1020 (linux) - 9.41Gbits/sec
 

snazy2000

New Member
Jan 28, 2014
Intel x520 (linux) > Intel SFP+ > Fibre 7M > Intel SFP+ > 8024F > Brocade 3M > Intel x520 (win 7) - 2.66Gbits/sec

Any idea why the Windows 7 performance was so poor?
 

nry

Active Member
Feb 22, 2013
Any idea why the Windows 7 performance was so poor?
I'm not too sure, to be honest. Note that that test is with an X520 card, not the Brocade 1020; I was simply testing the Brocade 3M active twinax lead's compatibility with the X520 cards.

Also, going back to the Brocade's compatibility with Cisco/Dell DAC leads: looking at http://www.qlogic.com/Resources/Doc.../brocade-adapters-interoperability-matrix.pdf (page 2), the Cisco 10Gb SFP+ direct attach (twinax) cables at 7m (SFP-H10GB-ACU7M) and 10m (SFP-H10GB-ACU10M) are listed, so it seems the active twinax cables are supported but the passive ones I use are not. What a pain!
 

mattlach

Active Member
Aug 1, 2014
I'm not too sure, to be honest. Note that that test is with an X520 card, not the Brocade 1020; I was simply testing the Brocade 3M active twinax lead's compatibility with the X520 cards.

Also, going back to the Brocade's compatibility with Cisco/Dell DAC leads: looking at http://www.qlogic.com/Resources/Doc.../brocade-adapters-interoperability-matrix.pdf (page 2), the Cisco 10Gb SFP+ direct attach (twinax) cables at 7m (SFP-H10GB-ACU7M) and 10m (SFP-H10GB-ACU10M) are listed, so it seems the active twinax cables are supported but the passive ones I use are not. What a pain!

The thing is, the twinax cables are surprisingly pricey.

With SFP+ transceivers for these things at $18 a piece at fiberstore.com and fiber cables as cheap as they are, you might as well just go the fiber route.

I bought two transceivers and two 15m (50ft) LC-LC duplex OM3 cables for a total of $50 plus shipping. (Shipping is a little higher, as it comes from China, but the total is still very competitive with twinax, and you get much longer maximum runs out of it.)

This way you can pick a transceiver that is guaranteed to work with the Brocade adapter on one end, and one that is guaranteed to work with Cisco/Dell on the other.
 
Last edited:

mattlach

Active Member
Aug 1, 2014
The cables I lent @nry for the test only cost me £20 each, with free delivery from eBay.

They are the proper Brocade Active Twinax ones too
Wow, you found them a lot cheaper than I have seen them anywhere, but granted, I was pricing mine out in the 5 to 7m range. Were they the 3M (length, not the company :p) versions?

As they get longer they get more expensive; I was seeing prices at the $150 level for those, at which point the transceivers and fiber were much cheaper.
 

TallGraham

Member
Apr 28, 2013
Hastings, England
Wow, you found them a lot cheaper than I have seen them anywhere, but granted, I was pricing mine out in the 5 to 7m range. Were they the 3M (length, not the company :p) versions?

As they get longer they get more expensive; I was seeing prices at the $150 level for those, at which point the transceivers and fiber were much cheaper.
This is where I got mine. I offered £20 each for three and it was accepted.

Brocade 58-1000027-01 10G SFP FCoE 3m FCoE Active Cable | eBay
 

nry

Active Member
Feb 22, 2013
I find that over here in the UK, going the SFP+ module and fibre route is way more expensive. On eBay I have picked up SFP-H10G leads ranging from £5-15, both new and used.
 

mattlach

Active Member
Aug 1, 2014
Still haven't received my transceivers or fibre.

Apparently (and I didn't realize this when ordering) they build at least the fiber cabling to order.

Anyway, I figured I'd ask just to make sure: the transceivers in these Brocade adapters are hot-swappable, right?

I don't want to shut the server down to pop one of them in unless I absolutely have to.
 

mattlach

Active Member
Aug 1, 2014
I've never seen an SFP(+) module that didn't support hot-swap.
I figured as much, but I just wanted to make sure, as these will be my first fiber transceivers (gigabit OR 10-gig), so I don't have any experience with them.

My ProCurve managed switch has a couple of mini-GBIC gigabit fiber "dual personality" ports, but I've never used them, as there is no speed advantage over copper gigabit and everything I run is within the 100m reach of copper.

(I'm a home hobbyist, not an IT professional)
 

mattlach

Active Member
Aug 1, 2014
Alright, finally got my fibers and transceivers from the Fiber Store.

I may have slightly kinked the fiber when I was running it through the hole to the basement, but it doesn't seem to be impacting it (at least not much). I have never dealt with these fibers before, so I have no idea how sensitive they are to kinking.

Anyway, I ran it between my Linux Mint 17.1 workstation on one end and my ESXi 5.5U2 server on the other. vSwitch1 is a dedicated network for storage traffic, so I keep it away from everything else; this is where I connected the Brocade. All of the guests connected to this vSwitch are using vmxnet3.



So I just popped in the transceivers, connected the fiber, and everything just worked. No troubleshooting or anything; off to the races. I wish my other current project (a MythTV backend with XBMC/Kodi frontends) were behaving this well.

First test: SSH into my Ubuntu Server guest, start iperf in server mode, then run the client from my workstation.
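The invocations were more or less the stock ones (this is iperf2; the addresses are placeholders), with the client output below:
Code:
# on the Ubuntu Server guest
iperf -s
# on the workstation
iperf -c xxx.xxx.xxx.xxx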

Code:
Client connecting to xxx.xxx.xxx.xxx, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local xxx.xxx.xxx.xxx port 54545 connected with xxx.xxx.xxx.xxx port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  10.76 GBytes  8.02 Gbits/sec
Not quite where I had hoped, but not too shabby for a first try using default settings (no -P 30).

Tried again with -P 30 and 120 seconds instead:

Code:
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-119.0 sec  4.46 GBytes   322 Mbits/sec
[  5]  0.0-119.0 sec  2.13 GBytes   153 Mbits/sec
[  6]  0.0-119.0 sec  4.96 GBytes   358 Mbits/sec
[  9]  0.0-119.0 sec  4.96 GBytes   358 Mbits/sec
[ 10]  0.0-119.0 sec  4.62 GBytes   333 Mbits/sec
[ 11]  0.0-119.0 sec  1.98 GBytes   143 Mbits/sec
[ 19]  0.0-119.0 sec  1.96 GBytes   142 Mbits/sec
[ 20]  0.0-119.0 sec  3.95 GBytes   285 Mbits/sec
[ 23]  0.0-119.0 sec  4.19 GBytes   303 Mbits/sec
[ 27]  0.0-119.0 sec  5.51 GBytes   398 Mbits/sec
[  8]  0.0-119.0 sec  2.07 GBytes   149 Mbits/sec
[ 26]  0.0-119.0 sec  1.64 GBytes   118 Mbits/sec
[ 21]  0.0-119.0 sec  2.02 GBytes   146 Mbits/sec
[ 22]  0.0-119.1 sec  1.97 GBytes   142 Mbits/sec
[ 24]  0.0-119.1 sec  2.12 GBytes   153 Mbits/sec
[ 12]  0.0-119.1 sec  4.31 GBytes   311 Mbits/sec
[ 25]  0.0-119.1 sec  1.79 GBytes   129 Mbits/sec
[  7]  0.0-119.1 sec  1.83 GBytes   132 Mbits/sec
[  4]  0.0-119.1 sec  1.81 GBytes   131 Mbits/sec
[ 14]  0.0-120.0 sec  4.88 GBytes   350 Mbits/sec
[ 13]  0.0-120.0 sec  4.80 GBytes   343 Mbits/sec
[ 16]  0.0-120.0 sec  4.99 GBytes   357 Mbits/sec
[ 18]  0.0-120.0 sec  4.66 GBytes   333 Mbits/sec
[ 17]  0.0-120.0 sec  2.55 GBytes   182 Mbits/sec
[ 30]  0.0-120.0 sec  5.04 GBytes   361 Mbits/sec
[ 28]  0.0-120.0 sec  5.49 GBytes   393 Mbits/sec
[ 31]  0.0-120.0 sec  1.91 GBytes   137 Mbits/sec
[ 29]  0.0-120.0 sec  5.37 GBytes   384 Mbits/sec
[ 32]  0.0-120.0 sec  4.82 GBytes   345 Mbits/sec
[ 15]  0.0-120.1 sec  2.15 GBytes   154 Mbits/sec
[SUM]  0.0-120.1 sec   105 GBytes  7.51 Gbits/sec
Hmm, a little bit puzzling and a little bit disappointing. But I have other (albeit light) traffic going over the network, so that could be throwing it off; or I could just be reaching the capacity of my hardware; or maybe that kink damaged the fiber after all (I do have another I can try).

Then I did iperf against my FreeNAS guest, on the same network on the same ESXi box and this is where it gets interesting.

Code:
------------------------------------------------------------
Client connecting to xxx.xxx.xxx.xxx, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local xxx.xxx.xxx.xxx port 36620 connected with xxx.xxx.xxx.xxx port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  3.46 GBytes  2.97 Gbits/sec
OK, so this is a little alarming, especially since NFS transfers to FreeNAS were one of the main reasons I got the adapters, transceivers and fiber.

Tried again with -P 30 and -t 120

Code:
[ ID] Interval       Transfer     Bandwidth
[ 14]  0.0-120.0 sec  1.78 GBytes   127 Mbits/sec
[  3]  0.0-120.0 sec  1.38 GBytes  98.6 Mbits/sec
[  5]  0.0-120.0 sec  1.27 GBytes  90.7 Mbits/sec
[  4]  0.0-120.0 sec  1.37 GBytes  98.1 Mbits/sec
[  7]  0.0-120.0 sec  1.23 GBytes  88.2 Mbits/sec
[  6]  0.0-120.0 sec  1.82 GBytes   130 Mbits/sec
[ 10]  0.0-120.0 sec  1.34 GBytes  95.8 Mbits/sec
[ 15]  0.0-120.0 sec  1.77 GBytes   127 Mbits/sec
[ 17]  0.0-120.0 sec  1.25 GBytes  89.4 Mbits/sec
[ 19]  0.0-120.0 sec  1.32 GBytes  94.2 Mbits/sec
[ 20]  0.0-120.0 sec  1.33 GBytes  95.1 Mbits/sec
[ 22]  0.0-120.0 sec  1.30 GBytes  92.9 Mbits/sec
[ 27]  0.0-120.0 sec  1.22 GBytes  87.5 Mbits/sec
[ 31]  0.0-120.0 sec  1.13 GBytes  81.0 Mbits/sec
[  9]  0.0-120.0 sec  1.67 GBytes   120 Mbits/sec
[ 11]  0.0-120.0 sec  1.29 GBytes  92.5 Mbits/sec
[ 13]  0.0-120.0 sec  1.29 GBytes  92.7 Mbits/sec
[ 16]  0.0-120.0 sec  1.29 GBytes  92.5 Mbits/sec
[ 23]  0.0-120.0 sec  1.77 GBytes   127 Mbits/sec
[ 25]  0.0-120.0 sec  1.19 GBytes  84.9 Mbits/sec
[ 26]  0.0-120.0 sec  1.28 GBytes  91.5 Mbits/sec
[ 28]  0.0-120.0 sec  1.36 GBytes  97.4 Mbits/sec
[ 30]  0.0-120.0 sec  1.50 GBytes   107 Mbits/sec
[ 32]  0.0-120.0 sec  1.33 GBytes  94.9 Mbits/sec
[  8]  0.0-120.0 sec  1.33 GBytes  95.3 Mbits/sec
[ 18]  0.0-120.0 sec  1.32 GBytes  94.1 Mbits/sec
[ 24]  0.0-120.0 sec  1.29 GBytes  92.3 Mbits/sec
[ 29]  0.0-120.0 sec  1.32 GBytes  94.2 Mbits/sec
[ 12]  0.0-120.0 sec  1.28 GBytes  91.8 Mbits/sec
[ 21]  0.0-120.0 sec  1.29 GBytes  92.6 Mbits/sec
[SUM]  0.0-120.0 sec  41.3 GBytes  2.96 Gbits/sec
Hmm. Rather disappointing and very surprising, considering that last week (before I got the fiber) I ran an iperf between the same Ubuntu Server and FreeNAS guests (vmxnet3 -> vSwitch -> vmxnet3) and got 18 Gbit/s with a single connection...

Just to rule out any Brocade/transceiver/fiber problems, I did the same again, Ubuntu Server to FreeNAS.

Code:
------------------------------------------------------------
Client connecting to xxx.xxx.xxx.xxx, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local xxx.xxx.xxx.xxx port 45525 connected with xxx.xxx.xxx.xxx port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  3.21 GBytes  2.75 Gbits/sec
Yep, definitely something up with FreeNAS, not the Brocade.

So strange, since the results just last week were phenomenal, and I don't think I've changed anything.


As a final test, I dd'ed a VirtualBox drive image stored on FreeNAS, via NFS, to /dev/null on my workstation:
Code:
$ dd if=./Windows\ 7\ x64.vdi of=/dev/null
65120744+0 records in
65120744+0 records out
33341820928 bytes (33 GB) copied, 78.3183 s, 406 MB/s
Yeah, not meeting my expectations for FreeNAS at this point, though it is definitely faster than gigabit, so I guess that's not bad.
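One thing I should probably redo: with no bs= argument, dd reads in 512-byte records, which can itself hold the number back. A large-block run over the same file would be a fairer read test, something like:
Code:
dd if=./Windows\ 7\ x64.vdi of=/dev/null bs=1M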

Off to troubleshoot FreeNAS and figure out what went wrong since last week, I guess....
 
Last edited:

mattlach

Active Member
Aug 1, 2014
Turns out my issues are due to FreeNAS behaving oddly under ESXi.

When my Linux workstation is the server, I practically max out the 10GBASE-SR link.

Code:
------------------------------------------------------------
Client connecting to xxx.xxx.xxx.xxx, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[  3] local xxx.xxx.xxx.xxx port 62948 connected with xxx.xxx.xxx.xxx port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  10.8 GBytes  9.30 Gbits/sec
But when FreeNAS is the server I'm back down to slow speeds.

Code:
------------------------------------------------------------
Client connecting to xxx.xxx.xxx.xxx, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local xxx.xxx.xxx.xxx port 40512 connected with xxx.xxx.xxx.xxx port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  3.64 GBytes  3.13 Gbits/sec
I see the same thing when I run iperf back and forth between my Ubuntu Server guest and FreeNAS guest on the ESXi host using vmxnet3.

It's odd, and I can't explain it. And apparently I am not the only one; someone on HardForum has the exact same issue with FreeNAS under ESXi.
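Next on my list is forcing a much bigger TCP window on both ends, in case FreeBSD's autotuning is part of the problem; roughly something like this (the 1M window is just a guess):
Code:
# on the FreeNAS guest
iperf -s -w 1M
# from the workstation
iperf -c xxx.xxx.xxx.xxx -w 1M -t 60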
 

TuxDude

Well-Known Member
Sep 17, 2011
Sounds like maybe the problem is in FreeNAS's vmxnet3 driver. Have you tried using a different type of NIC in the FreeNAS guest (say, e1000)?
 

mattlach

Active Member
Aug 1, 2014
Sounds like maybe the problem is in FreeNAS's vmxnet3 driver. Have you tried using a different type of NIC in the FreeNAS guest (say, e1000)?
That's next on my list of troubleshooting steps. I've been avoiding e1000 due to its lower efficiency, but if vmxnet3 doesn't work, then I may have to use it.

It's odd, because for the driver I am currently using, I manually added the vmxnet3.ko module from the VMware Tools FreeBSD package, so if there is a bug in the driver, it is a current VMware thing, not a "what's included with FreeNAS" thing.
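For anyone following along, the quick sanity check I use on the FreeNAS guest (assuming the VMware Tools vmxnet3 module, which should show up as a vmx3f interface) is:
Code:
# confirm the vmxnet3 module is actually loaded
kldstat | grep -i vmx
# the paravirtual NIC should appear as vmx3f0 (an e1000 vNIC would be em0)
ifconfig vmx3f0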

Another possibility is something I keep hearing: that FreeBSD is rather poorly suited to being a guest in a virtual environment. (I also hear a lot of this has been fixed in FreeBSD 10, but FreeNAS just released their version based on 9.3.)

Something about years/decades of machine-specific code and hacks working their way into the system that don't play nice with virtualization.

It's possible that this is somehow related to that.

This is kind of a home "production" system, if you will, so I haven't had the downtime (when people are out of the house and I'm there) to troubleshoot. When I do, I will test and report here for posterity.