Brocade 1020 CNA 10GbE PCIe Cards


kirlcheah

Member
Jan 24, 2015
Hi,

I have written this in the VMware thread: https://forums.servethehome.com/index.php?threads/best-way-to-share-nfs-for-esxi.5321/
I was wondering how I can run this card to use 10GbE for NFS?

I have been wanting to share ZFS storage to ESXi natively using NFS. I have two X8SIL-F boxes and want to connect them to a datastore using Brocade 1020 cards. I have them connected in the following way:
A, B = X8SIL-F (ESXi)
C = ESXi (VM passthrough, IBM M1015 to 24 HDDs, running OmniOS)

A (ESXi) - B (ESXi)
A (ESXi) - C (ESXi, ZFS)
B (ESXi) - C (ESXi, ZFS)

All of these are connected using active twinax cables, part no. 58-1000026-01 (1 meter active twinax).

Is this configuration supported? How do I get the ESXi hosts in boxes A and B to get the data from C, since the twinax links do not have an IP assigned for vmxnet3?
Thanks.

Regards.
 

mattlach

Active Member
Aug 1, 2014

Yes, you can use NFS shares using this adapter.

The Brocade BR-1020s are what they call CNA adapters, or converged network adapters. That means they can be both a traditional 10Gbit Ethernet adapter AND an FCoE storage adapter at the same time.

For what you want to do, you want to skip the second part and just treat them like Ethernet adapters.

Statically assign some IP addresses to the adapters for your storage, and then set up NFS shares.
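
On the ESXi side that is only a couple of commands from the shell (a rough sketch; the portgroup name, addresses and export path are placeholders for whatever you use):

# add a VMkernel port with a static IP on the portgroup used for storage traffic
esxcfg-vmknic -a -i 172.16.1.4 -n 255.255.255.0 "NFS-Storage"

# mount the NFS export from the storage box as a datastore
esxcli storage nfs add -H 172.16.1.6 -s /tank/esxi -v nfs-datastore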

As for your specific setup, I can't tell exactly what it is you are trying to do from your description.

A popular way of doing this is to run a bare metal ZFS appliance (FreeNAS, for instance) as your storage and share it with ESXi via NFS. The problem with the Brocade adapters, however, is that as best I can tell there is no BSD support for them at all, and FreeNAS is based on BSD.

Another appliance often used for this is napp-it, which runs on OmniOS (a Solaris/OpenIndiana derivative). I am not sure whether the Brocade adapters have drivers for OmniOS; they might. Someone else here may be able to comment.

If you want to go the bare metal route, you could just use a Linux install for your storage. The Brocade driver is included in the Linux kernel, and it works well. You'd have to set up your ZFS pools and configure your NFS share manually from the command line, as I don't think napp-it or any other GUI configuration tool will run on top of Linux (though I could be wrong here as well), but it really is not that difficult.
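
For a rough idea of what that looks like on the Linux side (only a sketch; the pool name, disk devices and allowed subnet are placeholders, and the raidz2 layout is just an example):

# create a pool from the disks behind the HBA
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# create a filesystem for ESXi and export it over NFS
# (ESXi mounts NFS as root, so root squashing has to be off on the export)
zfs create tank/esxi
zfs set sharenfs='rw=@172.16.1.0/24,no_root_squash' tank/esxi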
 

kirlcheah

Member
Jan 24, 2015
Hi Mattlach,

Thanks for the advice. It seems I have only been able to fix half the problem.

I have done the following;

Reinstalled ESXi A, B and C, put in the correct drivers, and am not running Fibre Channel. Now I can get IPs on them.

Here is what I have in the VMkernel ports (meaning I added switches):
ESXi A - VMkernel NIC 1 - 172.16.1.5 (for vMotion)
ESXi A - VMkernel NIC 2 - 172.16.1.4 (purely NFS)

ESXi B - VMkernel NIC 1 - 172.16.1.3 (for vMotion)
ESXi B - VMkernel NIC 2 - 172.16.1.2 (purely NFS)

ESXi C - Standard Switch 1 - 172.16.1.6, connected to Napp-IT vmxnet3s0
ESXi C - Standard Switch 2 - 172.16.1.7, connected to Napp-IT vmxnet3s1

Now, my problem is: ESXi A managed to mount the NFS export as a datastore, connecting to 172.16.1.7 with path /esxi/esxi and datastore name test. But ESXi B cannot mount the same datastore, connecting to 172.16.1.6 with path /esxi/esxi and datastore name test, no matter whether I mount it as read only or read/write.

Have I done anything wrong? I thought an NFS datastore could be shared?

Regards.
 

mattlach

Active Member
Aug 1, 2014
Ahh, so you are running your storage OS as a guest in ESXi. That changes things a little.

Firstly, make sure you have direct access to your drives (a SAS/SATA controller forwarded with direct I/O passthrough is one way to do this). Using ZFS on top of virtual drive images is a good way to ensure catastrophic data loss.
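
A quick sanity check from inside the storage VM (assuming an OmniOS guest with the controller passed through) is to list the disks it actually sees:

# on OmniOS this should list the physical drives behind the M1015,
# not VMware virtual disks
echo | format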

As far as network configuration goes, I have lots of FreeNAS experience but very little Napp-IT experience, so I may have to defer to someone with Napp-IT experience.

That being said, I can think of two things to check:

1.) Make sure all of your adapters are on the same subnet. Can they ping each other? This will help determine whether you have a network problem or an NFS problem.

2.) Storage server file system users and permissions. NFS relies on the local file system for permissions. If the user account NFS accesses the system as does not match the ownership of the files, everything can stop working. It might be a good idea to go in and chown/chmod the files and directories to make sure this is not the case (a couple of example commands below).
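
For example, something along these lines from the storage VM (a sketch only; the export path and the ESXi VMkernel address are placeholders):

# can the storage VM reach the NFS VMkernel port on the ESXi host at all?
ping 172.16.1.4

# open up ownership and permissions on the exported filesystem to rule out an NFS permission problem
chown -R root:root /tank/esxi
chmod -R 755 /tank/esxi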
 

kirlcheah

Member
Jan 24, 2015
Hmm... It seems that I can't ping. I can only ping from ESXi A to 172.16.1.7...
Strange. That is the only one I can use.

But all my other links show as connected.

Hmm... Can anyone else recommend other things to test?
 

gea

Well-Known Member
Dec 31, 2010
DE
You must not set the two NICs at 172.16.1.6 and 172.16.1.7 to the same subnet.
Use two different networks like

172.16.1.6
172.17.1.6

so that there is a single route for each NIC.
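
On the ESXi side that means the NFS VMkernel ports end up something like this (a sketch; interface names and addresses are placeholders):

# ESXi A: NFS VMkernel port on the 172.16.1.x network
esxcli network ip interface ipv4 set -i vmk2 -I 172.16.1.1 -N 255.255.255.0 -t static

# ESXi B: NFS VMkernel port on the 172.17.1.x network
esxcli network ip interface ipv4 set -i vmk2 -I 172.17.1.1 -N 255.255.255.0 -t static

The storage VM then gets one address in each of those networks on its two vNICs.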
 

kirlcheah

Member
Jan 24, 2015
Aha, thanks Gea. I managed to change the networks and it's working perfectly.
Now to stress test the box with NFS.
 

Kristian

Active Member
Jun 1, 2013
Thank you for your help. The price is very good. Very pleasant shopping!
+1

I have purchased 2x4 transceivers at fiberstore and both transactions were impressively smooth.
In fact I can say that I have never been more impressed with the kind and supportive way that my sales representative managed both orders.
 

Fitsum Taye

New Member
Apr 28, 2015
Can anyone help me?
I have a Dell PowerEdge R720 server installed with ESXi 5.5, and I also have a Brocade-1020 with a Dell-branded SFP "FTLX8571D3BCL". When I run the command
~#esxcli brocade bcu --command="port --query 1/0"
The output looks like this:

function id: 1/0/0
port type: 10G Eth
port mode: CNA
port instance: 0
port name:
Media: Unsupported SFP
Speed: ---
CNA/DCB state: Linkdown
Beacon status: Off
FCoE:
MAC: 8c:7c:ff:70:78:14
PWWN: 10:00:8c:7c:ff:70:78:14
NWWN: 20:00:8c:7c:ff:70:78:14
state: Linkdown
supported classes: Class-3
symbolic name: Brocade-1020 | 3.2.3.0 | | |
maximum frame size: 2112
receive bb credits: 48
transmit bb credits:1
QOS: Disabled
TRL: Disabled
TRL default speed: 1G
SCSI queue Depth: 0
Vlan: 0
Eth:
MAC: 8c:7c:ff:70:78:16
Factory MAC: 8c:7c:ff:70:78:16
state: Linkdown
OS Eth Device: vmnic4

Please, can anyone help?
 

Entz

Active Member
Apr 25, 2013
Canada Eh?
+1 to fiberstore (and Shirley). I had a few issues with my SFPs (mainly a couple were DOA) and they were always happy to swap them out and help diagnose issues, etc. Brocade cards seem to be very picky, and if things are not quite right they won't work.

@Fitsum Taye with the latest firmware it should work with unsupported SFPs; IIRC it will still show up as unsupported but will bring up a link. Make sure they are working on both ends.
 

Axel Mertes

New Member
Apr 30, 2015
Germany
I picked up a few of these as well. Can confirm they don't support passive DACs. I grabbed a few of the previously mentioned 3m Brocade active cables from Brocade 10g SFP Fcoe 3M Fcoe Active Cable 58 1000027 01 | eBay for $35 a piece (so far the best price that I've seen).

I'll let you know if they work when they get here. So far the cables are installed and happy with Win 7 and 8. Like s0lid, I had no luck with Ubuntu so far, and no love on FreeBSD.

Once I get these working I'm probably going to move my FreeNAS setup to ESXi. I always love how experiments grow so naturally out of each other!
Hi Jan,

just to get this right:

You are using the Brocade CNA 1020B along with Windows 7 and Windows 8 / 8.1 systems and it runs well?
I am considering buying a few of those and want to be sure beforehand that we can use them not only on Windows server OS, but also Windows 7 and Windows 8 / 8.1.
Did you need to install drivers?
If so, where do you get them?
The QLogic driver page does not list Windows 7 and Windows 8, so you can't get a driver there, and Brocade's pages are gone. So do the right drivers come from Windows itself?

I'd be glad if you could share some experience here!

Best regards
Axel Mertes
 

kirlcheah

Member
Jan 24, 2015
It seems that I have spoken too soon...

The NFS storage is working fine now, but I cannot get both ESXi boxes to share the same NFS datastore in a datastore cluster.

ESXi A - 172.16.1.1, 50.50.50.1 (255.255.255.252)
ESXi B - 172.17.1.1, 50.50.50.2 (255.255.255.252)
ESXi C - 172.16.1.2 and 172.17.1.2 (255.255.255.252); both are connected via VMkernel directly into Napp-It-15a.

All of this is confirmed working in Napp-It. In Napp-It, I can ping 172.16.1.1 / 172.16.1.2 / 172.17.1.1 / 172.17.1.2 / 50.50.50.1 and 50.50.50.2.

When I want to share the NFS datastore, it doesn't work; I end up with two names for the NFS datastore.

Is there any way I can circumvent this?

Thanks
 

Psycho_Robotico

Active Member
Nov 23, 2014

In my setup there is a Brocade/QLogic BR 1020 with Fiberstore SFP+ modules running smoothly under Win 7 x64 Professional. As far as I remember, Win 7 didn't include any drivers, but those supplied for Windows Server 2012 R2 are functional. However, I can't comment on how well Fibre Channel works, as my use case doesn't require this functionality.
 

TallGraham

Member
Apr 28, 2013
Hastings, England
Hi All

I have three of these cards that I am trying to use in a triangle sort of setup.

I am using Windows Server 2012 R2 as an iSCSI target and ESXi 5.5 to try to connect to it as the initiator. I have one card in the Windows server and one card in the ESXi server. Both are set up properly with IPs and can ping each other. I then went to configure the software iSCSI adapter in ESXi, but I just cannot get it to connect to the Windows server iSCSI targets.
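
(For reference, the equivalent steps from the ESXi command line are roughly the following; the adapter name and target address are only placeholders for my setup.)

# enable the software iSCSI initiator
esxcli iscsi software set --enabled=true

# point it at the Windows target and rescan
esxcli iscsi adapter discovery sendtarget add -A vmhba37 -a 192.168.10.20:3260
esxcli storage core adapter rescan -A vmhba37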

If I look in the events tab I see the following message:-

"Login to iSCSI target <iqn address> on vmhba37 @ vmk1 failed. The iSCSI initiator could not establish a network connection to the target."

I know the iSCSI targets are set up properly on the Windows server, as I can connect to them via a Windows 7 box. I also know that the iSCSI link between ESXi and the Windows server works if I use another HP 1GbE NIC that I have in the server with the software iSCSI adapter. So it has got to be something to do with the Brocade BR-1020 card; I just can't figure out what. Any help would be greatly appreciated.


Many Thanks in advance
 

TallGraham

Member
Apr 28, 2013
Hastings, England

Well, I was playing about with some things trying to work out what was going on, following this VMware KB for a start.

VMware KB: Connection not made to iSCSI storage target

This then led me on to this KB: VMware KB: Troubleshooting iSCSI LUN connectivity issues on ESX/ESXi hosts

I was testing with the ping and vmkping commands and all seemed fine. So I tried pinging with the MTU set high, and it didn't work. I had the MTU set to 9000 in both Windows Server and ESXi 5.5. I dropped it on both ends to the lowest possible size of 1514, and all of a sudden the BR-1020 card using the software iSCSI adapter could see the LUNs on the Windows server. Super happy about that :)
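
For anyone else trying to verify jumbo frames end to end, the vmkping test boils down to something like this (the target address is just an example; 8972 leaves room for the IP and ICMP headers within a 9000-byte MTU):

# from the ESXi shell: send a full-size jumbo payload with fragmentation disallowed
vmkping -d -s 8972 192.168.1.20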

But obviously I don't want to be running iSCSI over a 10GbE network connection with an MTU of 1514.

Has anybody else got this running? What is the maximum MTU you have found to work properly?
 

TallGraham

Member
Apr 28, 2013
Hastings, England
A little further update. I have noticed that every time I start up the servers, the storage is not visible on the ESXi host. However, if I change the MTU setting to any other figure on the Windows 2012 R2 server and then rescan on the ESXi host, the LUNs appear. Changing the MTU settings on the ESXi host makes no difference at all. It is almost like I have to "wake up" the Brocade BR-1020 in the Windows server before it realises it should be on. Has anybody else come across this, please? I am running the 3.2.5.0 QLogic driver on the Windows Server 2012 R2 box.
 

mattlach

Active Member
Aug 1, 2014
Has anyone upgraded an ESXi host from 5.5u2 to 6.0 with these installed?

I'm trying to figure out if I should expect everything to just work, or expect problems.

Thanks,
Matt