Silly question


gaking

New Member
Mar 25, 2011
So, I'm just starting to build a home server based on proper server components rather than consumer components. One thing I have noticed on this site is that the test servers and server motherboards often have more than one NIC, generally 2 on the motherboard and in some instances a few extra via add-in cards.

Whilst I can understand using two different networks in a live environment, e.g. splitting external/internet access from internal file-sharing, I'm battling to understand why anyone would want so many NICs in one server. Can someone please enlighten me as to their purpose and how they are used in practice? Right now I'm battling just to use two, and may land up teaming them or just disabling one. However, I've also got a third (an add-in PRO/1000 PT single-port) now gathering dust, so if there's a way to use even three connections beneficially I'd like to do so - but how? I also understand, from what I've read on the 'net, that some of the Windows Server flavours don't support more than one NIC anyway.
 

PigLover

Moderator
Jan 26, 2011
It's not a silly question. It's actually quite a good question.

I think most of the multi-NIC examples you've seen here and on other server-related forums are related to running virtual machines on your server. Both ESXi and Hyper-V recommend, as a best practice, giving a separate physical NIC to each VM. Both fully support sharing NICs across multiple VMs, so this is really more of an enterprise/datacenter management practice than something required by the hypervisor. These practices are about reliability and simplicity of management (i.e., if you use separate physical NICs per VM, you don't need to worry about one VM taking up all the bandwidth and impacting the application running in another, etc.). Not exactly major concerns for most home applications.

Outside of VM best practices, I see no real advantage to running multiple NICs in a home server environment other than simple teaming. In practice, even teaming is of limited value, because you don't really get a bandwidth increase unless you have multiple clients hitting your server - something rather rare at home or even in a SOHO. You do get a bit of a reliability gain from running two NICs on the same LAN (in case one fails) - but really, when was the last time a NIC failed on a server-class motherboard? And if it did, does it matter at home? You can always just walk over and move the cable to the other RJ45 jack...
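
To see why a single client doesn't go faster over a team: LACP-style teaming hashes each source/destination pair to exactly one member link, so one flow never spans two NICs. Here's a toy Python sketch of that idea - the hash inputs and function are made up for illustration, not any particular driver's actual algorithm:

```python
# Toy sketch of LACP-style transmit hashing (illustrative only - real
# drivers hash on their own choice of MACs, IPs, or ports).
import hashlib

def pick_member(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Pin a src/dst flow to one team member, as a layer-2 hash would."""
    digest = hashlib.md5(f"{src_mac}->{dst_mac}".encode()).digest()
    return digest[0] % num_links

SERVER = "00:1b:21:aa:bb:cc"
clients = [f"00:1b:21:00:00:{i:02x}" for i in range(4)]

for mac in clients:
    # Every packet from a given client lands on the same NIC.
    print(f"{mac} always uses NIC {pick_member(mac, SERVER, 2)}")
```

One client is pinned to one link forever; the second link only starts carrying traffic once several clients happen to hash onto different members. That's why teaming pays off with many clients and does nothing for one.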
 

gaking

New Member
Mar 25, 2011
Thanks, that makes sense. I haven't played around with VMs yet, so perhaps I can save those spare NICs for if and when I get that far.
 

mrkrad

Well-Known Member
Oct 13, 2012
Flow control is PER PORT, or with PFC (802.1Qbb, part of DCB) you can pause per priority class instead of stalling the whole port.
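
A rough way to picture that difference, as a Python toy model (no real DCB stack involved - the class layout here is invented purely for illustration):

```python
# Toy model of 802.3x pause (whole port stops) vs. 802.1Qbb PFC pause
# (only one priority class stops). Not how any real NIC/switch does it.
from dataclasses import dataclass, field

@dataclass
class Port:
    port_paused: bool = False                          # 802.3x: one flag per port
    paused_classes: set = field(default_factory=set)   # PFC: per class 0-7

    def can_send(self, priority: int, use_pfc: bool) -> bool:
        if use_pfc:
            return priority not in self.paused_classes
        return not self.port_paused

port = Port()
port.port_paused = True                          # classic pause frame arrives
print(port.can_send(priority=3, use_pfc=False))  # False - this class stops
print(port.can_send(priority=0, use_pfc=False))  # False - so does everything else

port.port_paused = False
port.paused_classes.add(3)                       # PFC pause for class 3 only
print(port.can_send(priority=3, use_pfc=True))   # False - class 3 paused
print(port.can_send(priority=0, use_pfc=True))   # True - other classes keep flowing
```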

So even if you had ten gigabit NICs teamed and one VM, you'd never come close to the speed of one 10GbE or InfiniBand port. For most folks, you'd get 1-gigabit speed at best without a spiffy physical switch and a very spiffy virtual switch (i.e. a Nexus 1000V dvSwitch with ESXi).

Plus, 10GbE/InfiniBand is too expensive for most folks to run around the house, cable- and switch-wise.


The switch market is super inflated in pricing - those 10GbE switches are really only worth $1,000 tops, yet they sell for nearly $10K.