10G NIC Bonding Revisited


sonoracomm

New Member
Feb 10, 2017
Hello and thanks in advance for all comments.

Background
==========

I have two dedicated Napp-IT + OmniOS storage servers with free (unused) Intel 10GbE Ethernet ports. Both servers are fully up to date (OmniOS and Napp-IT).

One storage server is for various backup duties, and the other is almost entirely used for XCP-ng (XenServer) virtual disk storage, all for multiple customers.

Both storage servers are on Supermicro enterprise hardware, located in a datacenter, and have Intel X540-T2 10GbE NICs with Jumbo Frames implemented.

There is a dedicated 10GbE storage network plus OOB management.

I had initially planned to use one 10GbE NIC for NFS and the other for iSCSI. However, I never actually implemented iSCSI because NFS is just so easy. So, I'm left with unused 10GbE ports.

On the backup storage server, I have very successfully run 4x1GbE Intel NICs bonded in OmniOS for years. Perfectly stable, AFAIK.
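
For reference, the aggregation on that box is built with dladm/ipadm. Below is a minimal sketch of the kind of commands involved, wrapped in a small Python helper since I script these setups; the link names, aggregation name, MTU, and address are placeholders, not my exact config:

```python
#!/usr/bin/env python3
"""Sketch: create a 4x1GbE LACP aggregation on OmniOS with dladm/ipadm.

Link names, aggregation name, MTU and IP address are placeholders --
adjust to the actual hardware and storage subnet (and matching switch config).
"""
import subprocess

LINKS = ["igb0", "igb1", "igb2", "igb3"]   # hypothetical 1GbE link names
AGGR = "aggr1"                             # aggregation (bond) name
ADDR = "10.0.0.10/24"                      # hypothetical storage-network address


def run(cmd):
    """Echo a command and run it, failing loudly on error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def main():
    # Build the aggregation over all four links: LACP active,
    # hashing on L3,L4 so separate TCP flows can land on different members.
    cmd = ["dladm", "create-aggr", "-L", "active", "-P", "L3,L4"]
    for link in LINKS:
        cmd += ["-l", link]
    cmd.append(AGGR)
    run(cmd)

    # Jumbo frames on the aggregation (switch ports must match).
    run(["dladm", "set-linkprop", "-p", "mtu=9000", AGGR])

    # Plumb an IP interface and a static address on the aggregation.
    run(["ipadm", "create-if", AGGR])
    run(["ipadm", "create-addr", "-T", "static", "-a", ADDR, f"{AGGR}/v4"])


if __name__ == "__main__":
    main()
```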

Question
=======

In the past, GEA has not recommended bonding 10GbE NICs. Has that recommendation changed, particularly for critical VHD storage?

Is anyone successfully using bonded 10GbE NICs for primary VHD storage in production?

The reliability of my primary storage system is paramount.

Thanks in advance,

G
 

vangoose

Active Member
May 21, 2019
Canada
In the past, the rule was to use bonding for SMB and NFS, and MPIO for iSCSI on non-bonded interfaces.
However, with SMB 3 multichannel and NFS 4.1, the MPIO approach applies to NFS and SMB as well, as long as they are supported on the client side.
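
For example, on a Linux client the NFSv4.1 side is just mount options. A minimal sketch, with a hypothetical server address, export, and mount point, and assuming the client kernel supports nconnect (Linux 5.3+) for multiple TCP connections; verify what your XCP-ng dom0 kernel actually offers before relying on it:

```python
#!/usr/bin/env python3
"""Sketch: mount an NFSv4.1 export from a Linux client.

Server address, export path and mount point are hypothetical; whether the
client kernel supports options like nconnect must be verified first.
"""
import subprocess

SERVER = "10.0.0.10"     # hypothetical storage-server address
EXPORT = "/tank/vhd"     # hypothetical NFS export
MOUNTPOINT = "/mnt/vhd"

# vers=4.1 requests NFSv4.1; nconnect=4 opens multiple TCP connections to the
# same server address, one way to spread load without bonding the NICs.
OPTIONS = "vers=4.1,nconnect=4,hard,timeo=600"

subprocess.run(
    ["mount", "-t", "nfs", "-o", OPTIONS, f"{SERVER}:{EXPORT}", MOUNTPOINT],
    check=True,
)
```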