I ordered one. We'll see if it helps. Looks like it would.

Yep, that's the way mine is mounted. No more floppy switch, lol.
I was considering the same switch for vSAN as well. Were you able to overcome the latency issues with it?

I wish that was it... but no. One thing I found was that the UPS was being overloaded and making the switch reboot every 3 hours. Since that's been resolved, I decided to trunk my VM hosts to a vSwitch and throttle the networks per VLAN. I gave vSAN 5Gb, with a max of 8Gb burst, out of the whole switch. One thing I learned: there's a provisioning setting for cold storage moves (not vMotion), and I turned that on. Well, even with all that speed from a 10G switch, I'm basically still not seeing any improvement at all on the backplane for these systems. I was thinking of moving away from FC and using iSCSI for the backup NAS, but I'm not sure I could recommend that at this time. Does VMware offer any tools to diagnose the bottlenecks?
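On the question of diagnostic tooling: yes, there are a couple of built-in options that might help isolate where the latency is coming from. A rough sketch below (the cluster path and credentials are placeholders; vsan.observer requires the Ruby vSphere Console, which ships with the 6.x vCenter appliance):

```
# On an ESXi host (SSH): batch-capture 10 samples at 5-second intervals,
# then inspect the NIC columns (MbTX/s, %DRPTX) and storage adapter latency
# columns (DAVG/cmd, KAVG/cmd) in the resulting CSV.
esxtop -b -d 5 -n 10 > /tmp/esxtop-capture.csv

# From the vCenter appliance: live VSAN statistics via the Ruby vSphere
# Console (RVC) and the VSAN Observer web UI.
rvc administrator@vsphere.local@localhost
> vsan.observer ~/computers/<your-cluster> --run-webserver --force
```

DAVG is latency at the device, KAVG is time spent in the VMkernel; a high KAVG with a low DAVG usually points away from the disks themselves.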
Were you ever able to get vSAN working with this switch?

I have learned that Broadcom offered two different versions of the FASTPATH software, FASTPATH-SMB and FASTPATH-ENT. Some documents also refer to the "Advanced" feature set. I am not sure if that means FASTPATH-ENT or not. So far, I have not seen any keys or information on how to upgrade from -SMB to -ENT, or to add individual features. It seems like you get one or the other. Did all LB6M switches come with the same FASTPATH feature set, or are there different switch versions?
For vSAN I really just need Multicast support. Has anyone besides me tried to set up vSAN on one of these switches? Could upgrading the firmware help my cause?
Has anyone had any luck with 1G copper SFPs in the SFP+ slots? I have tried Finisar FCLF-8521-3, eNet GLC-T-ENC, and Mikrotik S-RJ01, but none of them are working.
Admin Physical Physical Link Link LACP Actor
Intf Type Mode Mode Status Status Trap Mode Timeout
--------- ------ --------- ---------- ---------- ------ ------- ------ --------
0/8 Enable 10G Full Down Enable Enable long
Interestingly, I never see any Phy mode other than 10G Full, and the usual Brocade "speed-duplex 1000-full" command is not accepted at the interface level.
"interface speed" only lists 10 and 100, and those are not valid modes for the 10G ports.
I have some 10G DAC cables that are working fine. But I have some older gear as well I'd like to connect at 1G.
Edit: Went back and re-read the entire thread. Looks like nobody else has gotten this working either.
I have VSAN up and running with this switch, both VSAN 6.2 and VSAN 6.5. My setup appears to have the same feature set yours does:
Sorry...got a little click happy with my first post ;-)

Were you trying to say something? :-/
That's awesome to hear it indeed works! If you don't mind, what type of performance are you seeing from vSAN in your proactive tests in vCenter?

I have VSAN up and running with this switch, both VSAN 6.2 and VSAN 6.5. My setup appears to have the same feature set yours does:
Additional Packages............................ FASTPATH QOS
FASTPATH Routing
I enabled IGMP querier on my VSAN VLAN, and confirmed multicast functioned as expected using tcpdump-uw (vSphere 6.5 has a nifty VSAN tester in the GUI that will test multicast too)
[ESXHOST:~] tcpdump-uw -i vmk3 -n -s0 -t -c 20 udp port 12345   <<- Command to verify the vmKernel port
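For anyone trying to replicate the querier setup described above on one of these FASTPATH-based switches, the configuration is roughly the sketch below. VLAN 20 and the querier address are invented examples, and the exact syntax varies between FASTPATH releases, so check it against the CLI reference for your firmware:

```
(Switch) #vlan database
(Switch) (Vlan)#set igmp 20                        ! IGMP snooping on VLAN 20
(Switch) (Vlan)#exit
(Switch) #configure
(Switch) (Config)#set igmp                         ! IGMP snooping globally
(Switch) (Config)#set igmp querier                 ! enable snooping querier
(Switch) (Config)#set igmp querier address 10.0.20.1
(Switch) (Config)#exit
(Switch) #show igmpsnooping                        ! verify admin mode / VLANs
```

With a querier active on the VLAN, the hosts' IGMP reports keep the multicast group membership alive and the switch stops flooding VSAN traffic to every port.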
My issue currently is that I'm using an unsupported SAS controller and only a single 10Gb NIC in each host. So when I apply a load with other VLANs on the same port, I notice a spike in write latency. Read latency is almost always in the 1ms-or-less range (I'm running an all-flash setup), but I've seen write latency hit 100ms. The only place I really feel it is in a couple of VMs running Docker containers that consume storage via NFS across that same port. I have 7 active VLANs going in and out of the single port.
I have ordered additional multiport 10GbE adapters, but have not installed them yet to see if that resolves the issue. Local disk latency isn't a problem; that's why I've focused on the single NIC port as the culprit. I've been able to measure up to 1GB/s (~8Gb/s) read and write into and out of the VSAN datastore, so sequential access (storage vMotion) can easily saturate the link.
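Before the extra NICs go in, one way to confirm the shared uplink is the choke point is to watch the NIC counters while reproducing the write-latency spike. A sketch (vmnic0 is an example name; substitute whichever uplink carries the VSAN VLAN):

```
# Interactive: press 'n' in esxtop for the network view and watch MbTX/s,
# MbRX/s and the drop columns (%DRPTX / %DRPRX) for the shared uplink.
esxtop

# Or batch-capture for two minutes during a test run and inspect the
# vmnic0 columns in the CSV afterwards.
esxtop -b -d 2 -n 60 > /tmp/net-capture.csv
```

Non-zero %DRPTX on the uplink while write latency spikes would be fairly strong evidence that the single 10Gb port, rather than the switch or the disks, is the bottleneck.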
BTW, this is just a lab setup.
Nice that it's $260 and you can say that.

So this switch, I can safely say, is NOT a bottleneck for a VSAN setup.