Bridging question for solaris/omnios?


dswartz

Active Member
Jul 14, 2011
610
79
28
So I have two vSphere hosts connected to a new storage server, which at the moment runs OmniOS. Each server has a 2-port X520-DA2 card. I have no switch yet, so each vSphere host connects to a separate port on the storage server.

I had been playing with Linux, where I could use brctl to create a bridge device, put the two ixgbe NICs into the bridge, and give the bridge an IP address. Apparently you can't do that with Solaris bridges (I can't find docs that say you can or can't, but attempting to create an address on the newly created bridge throws an error). Giving each ixgbe NIC its own IP is no good either: the vSphere hosts have to pick one IP to use, and if the other vSphere host is down (maintenance, crash, whatever), that 10GbE link will be down, and the IP on that NIC won't respond to traffic coming through the bridge :(

I'm wondering what the best way to address (no pun intended) is? One possibility seems to be to add the IP I want to the 1GbE management NIC as an alias, and put all three NICs in the bridge. Does this seem reasonable, or is there a better way? Thanks!
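For reference, the Linux setup described above looks roughly like this (a sketch only; the interface names eth1/eth2 and the address are placeholders standing in for the two ixgbe ports and the storage subnet):

```shell
# Create a bridge, enslave both 10GbE NICs, and put the IP on the bridge itself
brctl addbr br0
brctl addif br0 eth1
brctl addif br0 eth2
ip addr add 192.168.2.1/24 dev br0
ip link set br0 up
```

Because the address lives on br0 rather than on either physical NIC, it keeps answering even when one of the two links is down, which is exactly the behavior being sought on the Solaris side.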
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
I'm wondering what the best way to address (no pun intended) is?
Add a 10G switch, bwa hahah j/k I KNOW that was not the answer you were looking for or wanted to hear :-D wink wink

You're charting into unknown waters, bud. I gave up on direct connect when I pushed past two hosts a couple years back and ponied up for the switch.
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,184
113
DE
I have not played with the Solaris internal bridging and etherstub setups, but would expect:
- a bridge is an OSI layer 2 device, just like a hub or a switch; you cannot assign an IP to a bridge
- a bridge enables traffic between two links; you should be able to assign IP addresses to the links

- besides a bridge between two links, you can create VNICs and etherstubs (virtual switches) in Solaris. Check for Crossbow. This should allow you to create a virtual switch in Solaris with NICs and VNICs as members - similar to the ESXi virtual networking concept.
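The two pieces above can be sketched with dladm/ipadm. This is only an outline under assumptions: the link names (ixgbe0/ixgbe1), the address, and the exact ipadm subcommand spelling vary between Solaris 11 and illumos/OmniOS releases, so check the local man pages before running anything.

```shell
# Layer-2 bridge over the two physical 10GbE links (no IP on the bridge itself)
dladm create-bridge -l ixgbe0 -l ixgbe1 bridge0

# Crossbow-style virtual switch: an etherstub with a VNIC on top,
# and the IP address assigned to the VNIC rather than a physical NIC
dladm create-etherstub stub0
dladm create-vnic -l stub0 vnic0
ipadm create-if vnic0
ipadm create-addr -T static -a 192.168.2.1/24 vnic0/v4
```

Whether the etherstub/VNIC piece can be combined with the physical links into one failure-tolerant switch is exactly the open question in this thread; the sketch just shows the documented building blocks.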

Bridging Overview - Managing Oracle Solaris 11.1 Network Performance
Configuring Components of Network Virtualization in Oracle Solaris - Oracle Solaris Administration: Network Interfaces and Network Virtualization
 

dswartz

Active Member
Jul 14, 2011
610
79
28
Thanks, I was reading some of those docs. Like I said, Linux bridges are full-fledged devices, so you can assign them IP addresses. I did originally have an IP on each side, but for some reason, if one link goes down, that NIC will not respond to traffic entering it from the bridge (whereas traffic from the host itself is seen). For example, 'ping 192.168.1.100' from the host itself does work even though the NIC is unplugged, but if the ping enters from the other NIC (say 192.168.1.101) and goes through the bridge, it will NOT get a response :( I may try the etherstub trick. I also have a spare 1GbE interface on the motherboard, and a stub Ethernet cable that loops back to trick the interface into staying up. Might give that a try too. Thanks...
 

dswartz

Active Member
Jul 14, 2011
610
79
28
What. A. Freaking. Disaster. I don't know if this is the fault of OmniOS or vsphere 6.0, but I got about half the VMs svmotioned to the NFS datastore and then... APD! And all kinds of messages being spewed from vsphere about NFS lock problems. And also, the datastore going constantly in and out of APD state. I finally got the NFS stuff to settle down by only mounting it on one of the hosts, and am svmotioning everything back to the iSCSI datastore. Grrrr....
 

dswartz

Active Member
Jul 14, 2011
610
79
28
Not sure what happened, but it seemed eerily similar to this thread:

ESXi 6.0 NFS with OmniOS Unstable - consistent APD on VM power off

although in my case, the OmniOS NFS server was NOT virtualized. I do suspect OmniOS is at fault, since I have a backup datastore that IS on a VM with a passed-through HBA, but running CentOS 7. It's been doing fairly constant work for months now without a hiccup. Oh well...
 

dswartz

Active Member
Jul 14, 2011
610
79
28
The other disadvantage of NFS is that I can't have redundant access to the datastore (not that I know of, anyway). With iSCSI, each host has a 10GbE link to the storage server. In vSphere, I set up the software iSCSI initiator to access the LUN via 192.168.2.x (the 10GbE storage subnet) and also 10.0.0.x (the LAN subnet). I use 'VMware fixed' path selection mode, with the 10GbE path marked 'preferred'. I have spent a lot of time on Google and haven't seen any way to do this with NFS, but if someone does know, any tips gratefully accepted :)
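For anyone wanting to reproduce the fixed-path setup from the ESXi shell, it can be sketched with esxcli roughly as below. The device ID (naa.XXXX) and the path name (vmhba33:C0:T1:L0) are placeholders; substitute the real values from 'esxcli storage nmp device list'.

```shell
# Set the LUN's path selection policy to Fixed
esxcli storage nmp device set -d naa.XXXX -P VMW_PSP_FIXED

# Mark the path over the 10GbE storage subnet as the preferred one
esxcli storage nmp psp fixed deviceconfig set -d naa.XXXX -p vmhba33:C0:T1:L0
```

With Fixed, all I/O goes over the preferred 10GbE path while it is up, and ESXi fails over to the 1GbE LAN path only if it dies; NFSv3 datastores have no equivalent multipathing mechanism, which is the gap being described.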