Buy a $100 1G NIC, or try go 10G?


VirtualBacon

Member
Aug 21, 2017
First post here, the forum looks great so I thought this was a good place to ask!

Backstory:

I just switched from my virtual WS2016/StableBit NAS to a Synology DS1817. This means my VMs can no longer access files at vmxnet3 10G speeds internally; everything has to go over the gigabit physical network. I need to increase the available bandwidth between the NAS and the network, and between the ESXi server and the network.

Hardware involved:

NAS: Synology DS1817 - 2 x 10G ports and 2 x 1G ports
Switch: Cisco SG300-28P - All 1G, no 10G
ESXi Server: ESXi 6.5 Enterprise, Supermicro board with 2 x 1G LAN + Dedicated IPMI

The question comes down to whether I should drop $100 on an Intel I350-T4 and set up LACP between my ESXi server and the switch, then also set up LACP on the 4 NICs in the Synology, giving me 4Gb of total bandwidth between the server and the switch, and between the NAS and the switch. The total cost here is $100. (Yes, I know it's a 4-lane gigabit highway, not a single 4-gigabit road.)
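The "4-lane highway" point can be made concrete. LACP pins each flow to one member link by hashing its headers; the exact hash varies by switch (the SG300's policy will differ from this sketch, and the MD5 here is purely illustrative), but the flow-pinning behaviour looks roughly like this:

```python
# Sketch of LACP transmit hashing: a flow's headers pick one bond member,
# so every packet of that flow uses the same 1Gb link. Four links give you
# four lanes, not one 4Gb lane. Hash choice here is illustrative only.
import hashlib

NUM_LINKS = 4

def pick_link(src_mac: str, dst_mac: str, src_ip: str, dst_ip: str) -> int:
    """Return the bond member (0..3) this flow is pinned to."""
    key = f"{src_mac}|{dst_mac}|{src_ip}|{dst_ip}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % NUM_LINKS

# A single NFS/iSCSI flow between the ESXi host and the NAS
# (hypothetical addresses) always hashes to the same link,
# so it tops out at 1Gb/s no matter how many links are bonded:
flow = ("00:25:90:aa:bb:cc", "00:11:32:dd:ee:ff", "10.0.0.10", "10.0.0.20")
assert all(pick_link(*flow) == pick_link(*flow) for _ in range(1000))
```

Many concurrent flows from different clients will spread across the links; one big storage flow will not.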

Or, should I try and go 10G?

I could get a 10G NIC and do a point-to-point with the NAS, then bond the three remaining NAS connections to the switch. This would give me even more bandwidth to the ESXi server while still leaving enough bandwidth for the NAS on the 1G network, and I would be ready for when a decent 10G switch comes down into a decent price range. I would then bond the two built-in 1G ports in ESXi for the rest of the network.
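For what it's worth, the point-to-point setup on the ESXi side is just a dedicated vSwitch and VMkernel port on its own subnet, so storage traffic never touches the 1G network. A rough sketch (the uplink name `vmnic2`, the `vmk1` interface, and the 10.10.10.0/24 subnet are all assumptions; adjust for your host, and give the Synology 10G port the matching address, e.g. 10.10.10.2):

```shell
# Hypothetical ESXi point-to-point storage link config sketch.
# Create a vSwitch that contains only the 10G uplink:
esxcli network vswitch standard add --vswitch-name=vSwitch10G
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch10G
esxcli network vswitch standard portgroup add --portgroup-name=Storage10G --vswitch-name=vSwitch10G

# VMkernel port on a dedicated, non-routed subnet for NFS/iSCSI:
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Storage10G
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=10.10.10.1 --netmask=255.255.255.0 --type=static
```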

I guess the other question is whether there are any power-efficient 10G switches out there that don't cost crazy amounts? If there are, then just going 10G all out is probably a good idea.
 

cheezehead

Active Member
Sep 23, 2012
723
175
43
Midwest, US
IMO, get the 10Gb NIC for your VMware host and direct-connect it to the array. Unless your workloads have changed, you had 1Gb links for client connectivity before and this wouldn't change, but any bandwidth going directly to the VMs would shift over to the 10Gb interfaces on the Synology box.

10Gb switches are out there and can be picked up sub-$150 (some sub-$100) if you don't mind the noise or power draw. More power-efficient models run around $200-500.
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
Go 10Gb. You will have issues bonding multiple interfaces together using normal LACP: traffic isn't distributed across all 4 links equally, so you won't see the expected benefit without a lot of hassle. It also sets you up better for the future, when you want more things connected at 10G.
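The uneven-distribution complaint is easy to simulate: with only a handful of flows, a hash-based split over 4 links frequently doubles some links up and leaves others idle. This sketch uses an illustrative MD5 hash and made-up client addresses, not any particular switch's algorithm:

```python
# Sketch of why LACP disappoints with few flows: hashing a small number of
# flows onto 4 links rarely produces a perfect 1/1/1/1 split.
import hashlib
from collections import Counter

NUM_LINKS = 4

def pick_link(flow: tuple) -> int:
    """Hash a (src_ip, dst_ip) flow onto one of the bond members."""
    key = "|".join(flow).encode()
    return int(hashlib.md5(key).hexdigest(), 16) % NUM_LINKS

# Four hypothetical clients talking to the NAS at 10.0.0.20:
flows = [(f"10.0.0.{i}", "10.0.0.20") for i in range(1, 5)]
load = Counter(pick_link(f) for f in flows)
print(dict(load))  # often uneven: some links carry 2 flows, others 0
```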
 

VirtualBacon

Member
Aug 21, 2017
IMO, get the 10Gb NIC for your VMware host and direct-connect it to the array. Unless your workloads have changed, you had 1Gb links for client connectivity before and this wouldn't change, but any bandwidth going directly to the VMs would shift over to the 10Gb interfaces on the Synology box.

10Gb switches are out there and can be picked up sub-$150 (some sub-$100) if you don't mind the noise or power draw. More power-efficient models run around $200-500.
Which switches are you seeing for $200-500? I know the Quanta switches are out there cheap, but they just use too much power. The power itself isn't a concern for me, but my entire lab sits in a closet with no active cooling, so I really need to watch what I put in there or it will end up too warm. (I would LOVE to move it somewhere better, but I just don't have the space in my apartment.)

Go 10Gb. You will have issues bonding multiple interfaces together using normal LACP: traffic isn't distributed across all 4 links equally, so you won't see the expected benefit without a lot of hassle. It also sets you up better for the future, when you want more things connected at 10G.
I have never had a problem with LACP before; what kind of problems do you see?

SMB file copying in Windows 8+ can even utilize the extra links and give you over 1Gb on a single file copy. I have never done 4 links in ESXi with meaningful traffic; the last time I did it was with 2, and I didn't have a whole lot going on, so the bond was never really fully utilized.

My main concern is that if something is pulling 1Gb through ESXi currently, there is no bandwidth left for anything else. Having some extra lanes would help with that.
 

cheezehead

Active Member
Sep 23, 2012
Midwest, US
Which switches are you seeing for $200-500? I know the Quanta switches are out there cheap, but they just use too much power. The power itself isn't a concern for me, but my entire lab sits in a closet with no active cooling, so I really need to watch what I put in there or it will end up too warm. (I would LOVE to move it somewhere better, but I just don't have the space in my apartment.)
There are quite a few; on several of them, 10Gb ports can be added via modules.

MikroTik CRS226-24G-2S+IN
Procurve 3500yl (via module)
Cisco 3750E (via module)
Cisco Nexus 5k
PowerConnect 5524/5524P
Netgear GS752TXS
Avaya/Nortel 5650TFD (2x XFP ports)
Avaya/Nortel 4526gtx/4526GTX-PWR (2x XFP ports)

There's also another thread where people are finding 10Gb switches under $550: Gigabit + 10Gb Switches under $550
 

Craash

Active Member
Apr 7, 2017
You might look at the TP-Link JetStream 24-Port Gigabit Ethernet Smart Switch with 4 10GE SFP+ slots. I have it and it's great: 4 10GE ports plus 24 regular 1Gb ports. The best thing? It's fanless. The NEXT best thing? It can be had for under $300, to your door, if you are patient. It's sitting at $375 right now, but just last week it was $308, shipped, no tax.
 

Michael Hall

Member
Oct 9, 2015
No need to spend $100 on a quad-port gigabit NIC. I've got new and used cards for sale over here, for C$35 and C$25 plus shipping, respectively. They're based on the older Intel 82571EB chip, but they work just fine.