Recommendations for a 10GbE PCIe card for ESXi 7+


forx197

New Member
Feb 18, 2023
I found that the ConnectX cards I have won't work if I upgrade to ESXi 7. Wondering if anyone has a recommendation for a single- or dual-port card that works well?

I'm still running old HP Z600 dual-Xeon workstations in my lab. On my first upgrade attempt it didn't recognize any of my NICs (onboard or the quad-port card), didn't like my CPUs, and didn't recognize my storage controller or any drives. It's been a pretty crazy time.
I ended up replacing my quad 1GbE NIC with one from the HCL that I found at a good price. My storage controller was on the HCL but for some reason didn't work in AHCI mode; I changed that in the BIOS and my drives came back. The CPUs were addressed by pressing Shift+O at boot and appending allowLegacyCPU=true.
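
For anyone else hitting the CPU block, the workaround looks roughly like this (a sketch of what I did; double-check the option name against current docs before relying on it):

# At the ESXi boot screen, press Shift+O and append to the boot options:
allowLegacyCPU=true

# To keep it across reboots, the same option can apparently be appended to the
# kernelopt= line in /bootbank/boot.cfg, e.g.:
#   kernelopt=allowLegacyCPU=true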

The only thing still hanging me up is my 10Gb NIC.

I'm starting to think it might be time to upgrade. I've heard of folks having good luck with Dell OptiPlex 5050/5060 machines. I bet those modern small-form-factor units use way less power than these Z600s and take up a lot less space, so maybe I should start looking at something like that soon. I still need a 10GbE card option though. Bummed my old ConnectX cards won't work; I got a great deal on those quite a few years back.
 

CyklonDX

Well-Known Member
Nov 8, 2022
The Intel X520-DA2 is a cheap option;
(the X710 is newer, and more expensive.)

Broadcom BCM57810S,
or a P210P.

(For desktop use I prefer Broadcom since it has fans, though in servers I've always had problems with Broadcom NICs losing packets in RTTP use cases; normal use cases should be fine.)
 

forx197

New Member
Feb 18, 2023
Thanks! That is a great deal. I'll look into all of these. Can't wait to be up and running again.
 

forx197

New Member
Feb 18, 2023
I picked up a few of the Supermicro AOC-STGN-i1S cards.

They worked fine in two hosts, but the third one won't pick it up in ESXi no matter what I do. I pulled a known-working card from one of the other hosts: same result. I reset the BIOS to factory defaults: same result. All three hosts are the same make and model. I've tried all the open slots in the machine: same result.

If I run lspci -v | grep -A1 -i ethernet, it sees the card just fine as:
Intel(R) 82599 10 Gigabit Network Connection [vmnic0]

If I run esxcfg-nics -l, it only shows the four 1Gb ports (vmnic1-4) on my Broadcom card.

This doesn't make sense. I've never heard of every PCIe slot going bad without the whole board going bad, so that can't be it.

I can see the Intel boot ROM option at startup, and the NIC has link when it's plugged in.

The DCUI (F2) interface under Configure Management Network also only shows vmnic1-4, which are the 1Gb Broadcom ports.
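
In case it helps, here is roughly what I've been running from the shell to compare the hosts (the ixgben driver name is my assumption for an 82599-based card):

# Does the host enumerate the card at the PCI level?
lspci -v | grep -A1 -i ethernet

# Which NICs were actually claimed by a driver and given a vmnic?
esxcli network nic list

# Is the native Intel 10GbE driver installed and loaded?
esxcli software vib list | grep -i ixgben
esxcli system module get -m ixgben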
 

zack$

Well-Known Member
Aug 16, 2018
Is the management network configured to use the AOC-STGN-i1S on the host with the problem?
 

CyklonDX

Well-Known Member
Nov 8, 2022
Those are SFP+ cards; make sure you're using SFP+ modules compatible with both your card and your switch. (Also make sure it's not in FCoE mode.)
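
Something like this should show whether the port got grabbed as an FCoE adapter (rough sketch, haven't re-checked the exact flags on 7.x; vmnic5 is just a placeholder name):

# list ports currently activated for FCoE
esxcli fcoe nic list
esxcli fcoe adapter list

# if the port shows up there, disable FCoE on it (vmnic5 = example name)
esxcli fcoe nic disable -n vmnic5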
 

forx197

New Member
Feb 18, 2023
The management network is bound to another vmnic, so no... but it's interesting that it shows as vmnic0 in the lspci -v | grep -A1 -i ethernet output. vmnic0 used to be the onboard Realtek NIC, whose driver is no longer supported in ESXi 7.
I'm not sure how to change modes on those cards. SFP-wise, all hosts and cards are using the same Twinax cables, and I'm getting link, so I know the card is getting power. If I swap a card from a working machine into this machine, same result, so the problem follows the host, not the card. I guess I can try to reinstall from scratch and see if it picks it up.
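
One more thing I want to compare between the working hosts and the broken one is how vmnic names map to PCI addresses, since vmnic0 was the dead onboard NIC (assuming the device alias namespace is available on 7.x):

# how vmnic aliases map to PCI addresses on this host
esxcli device alias list

# the PCI address and details of the 82599 card for comparison
esxcli hardware pci list | grep -B 2 -A 20 -i 82599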
 

zack$

Well-Known Member
Aug 16, 2018
forx197 said:
The management network is bound to another vmnic, so no... but it's interesting that it shows as vmnic0 in the lspci -v | grep -A1 -i ethernet output. vmnic0 used to be the onboard Realtek NIC, whose driver is no longer supported in ESXi 7.
I'm not sure how to change modes on those cards. SFP-wise, all hosts and cards are using the same Twinax cables, and I'm getting link, so I know the card is getting power. If I swap a card from a working machine into this machine, same result, so the problem follows the host, not the card. I guess I can try to reinstall from scratch and see if it picks it up.
I'm stumped. You only need supported hardware if you're running it on the hypervisor. If you're running hardware through the hypervisor (as passthrough to a VM) then, as far as I'm aware, there is no real compatibility issue (aside from limitations in the guest or on the card itself for passthrough, as we've had with certain NVIDIA GPUs).

Why did you need the new 10GbE cards if you weren't going to use them for the ESXi management network?
 

forx197

New Member
Feb 18, 2023
I didn't need them; I wanted to have my storage, vMotion, and VM traffic on the 10GbE interface and keep management traffic on a separate 1Gb vmnic isolated to its own vSwitch. I guess I could put everything on the 10GbE interface if the card would show up.
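
The layout I had in mind would be roughly this from the shell (just a sketch; vSwitch1, vmnic5, the vMotion port group name, and the IP are all placeholders for my setup):

# dedicated vSwitch for 10GbE traffic, uplinked to the SFP+ port
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic5

# port group + vmkernel interface for vMotion/storage traffic
esxcli network vswitch standard portgroup add -v vSwitch1 -p vMotion
esxcli network ip interface add -i vmk1 -p vMotion
esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.0.10.11 -N 255.255.255.0
esxcli network ip interface tag add -i vmk1 -t VMotion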
 

zack$

Well-Known Member
Aug 16, 2018
Maybe others on the forum can advise on the drawbacks of heading in this direction, but in my own experience I haven't had any issues with multiple VMs using this particular card at the same time when it's presented as a vNIC.