ESXi server - 4-port NIC recommendation?


thecoffeeguy

Member
Mar 10, 2016
Hey folks,

So I was just going to go out and buy an Intel 4-port NIC for my ESXi server, but I had this feeling that I should ask first.

The last one I bought was an I350-T4 off of eBay, and it works just fine. I might buy another one, but before I do, should I consider something else?

I don't see the need for 10G at my house at the moment (I don't even have a 10G switch). This is just to add more NICs to my ESXi host for some VDS and DPG stuff.

Just wanted to double check.
Thanks folks

TCG
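For what it's worth, once a card like that is installed, a short pyVmomi sketch can confirm ESXi actually sees all four ports before wiring them into a VDS. This is only a sketch: the host name and credentials are placeholders, and it assumes pyVmomi is installed (pip install pyvmomi).

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical lab host and credentials; substitute your own.
ctx = ssl._create_unverified_context()  # lab box with a self-signed cert
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for pnic in host.config.network.pnic:
            # linkSpeed is None when the port has no link
            speed = f"{pnic.linkSpeed.speedMb} Mb" if pnic.linkSpeed else "down"
            print(f"  {pnic.device}  {pnic.mac}  driver={pnic.driver}  {speed}")
finally:
    Disconnect(si)
```

On an I350-T4 the four ports should show up as consecutive vmnicN entries with the igbn (or igb) driver; from there they can be assigned as VDS uplinks.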
 

Evan

Well-Known Member
Jan 6, 2016
The I350 is the newest 1G card you'll mostly see; I would just stick with that.
 

spali

Member
Nov 4, 2018
thecoffeeguy said:
I don't see the need for 10G at my house at the moment (I don't even have a 10G switch). This is just to add more NICs to my ESXi host for some VDS and DPG stuff.
If you're sure you'll stick with one server, you should be fine, but from my own experience, as soon as you have more you'd do anything for 10Gbit (at least between the servers). I ran my 3 servers with 6Gbit (2 onboard and 4 on an I210). Access to the servers from the clients was good, but between the servers (VM migration, replication, backup, etc.) it was horrible.
I just upgraded my servers to 40Gbit (even 10Gbit would have been enough) and it's a dream. Now I'm shopping for a switch for them, just for flexibility and to simplify the setup.

But if you stay with one server, keep going and take an I350. Or if your switch has at least one 10Gbit uplink port, I would go for a cheap 10Gbit card and use that uplink for now. Then you'd be ready on the server side for a network upgrade.
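To put rough numbers on that, here's a back-of-envelope sketch (my own illustration, not spali's figures) of how long moving a VM takes at different link speeds. It assumes roughly 80% of line rate is usable payload, which varies a lot with protocol overhead and storage speed; note also that bonding several 1G links rarely speeds up a single vMotion stream, which may be part of why an aggregate 6Gbit still felt slow.

```python
# Rough math behind "1Gbit between hosts is painful": time to push a
# given amount of data at various link speeds, assuming ~80% of line
# rate is usable payload. Real numbers depend on protocol overhead,
# storage speed, and vMotion's own pacing.
def transfer_minutes(size_gb: float, link_gbit: float, efficiency: float = 0.8) -> float:
    usable_gbit_s = link_gbit * efficiency
    return (size_gb * 8) / usable_gbit_s / 60

for link in (1, 6, 10, 40):
    print(f"{link:>2} Gbit: ~{transfer_minutes(64, link):.1f} min to move a 64 GB VM")
```

At 1Gbit a 64 GB VM takes on the order of ten minutes to move; at 10Gbit it's about a minute, which matches the "it's a dream" experience above.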
 

thecoffeeguy

Member
Mar 10, 2016
spali said:
If you're sure you'll stick with one server, you should be fine, but from my own experience, as soon as you have more you'd do anything for 10Gbit (at least between the servers). I ran my 3 servers with 6Gbit (2 onboard and 4 on an I210). Access to the servers from the clients was good, but between the servers (VM migration, replication, backup, etc.) it was horrible.
I just upgraded my servers to 40Gbit (even 10Gbit would have been enough) and it's a dream. Now I'm shopping for a switch for them, just for flexibility and to simplify the setup.

But if you stay with one server, keep going and take an I350. Or if your switch has at least one 10Gbit uplink port, I would go for a cheap 10Gbit card and use that uplink for now. Then you'd be ready on the server side for a network upgrade.
Thanks everyone for the help.
Kind of funny you mention that. I have (2) ESXi hosts and it seems to be growing. Currently sitting at 196GB of RAM between the two, and I have been working on the storage part at the same time (locally attached for VMs, working on a NAS for backups).

The 10gig stuff keeps hovering around in my mind. Like I should strongly think about it, not just forget about it...

EDIT: That said, what type of card should I be looking at? Recommendations? Thx