Hello and thanks in advance for all comments.
Background
==========
I have two dedicated Napp-IT + OmniOS storage servers with free (unused) Intel 10GbE Ethernet ports. Both servers are fully up to date (OmniOS and Napp-IT).
One storage server is for various backup duties, and the other is almost entirely used for XCP-ng (XenServer) virtual disk storage, all for multiple customers.
Both storage servers are on Supermicro enterprise hardware, located in a datacenter, and have Intel X540-T2 10GbE NICs with Jumbo Frames implemented.
There is a dedicated 10GbE storage network plus OOB management.
I had initially planned to use one 10GbE NIC for NFS and the other for iSCSI. However, I never actually implemented iSCSI because NFS is just so easy. So, I'm left with unused 10GbE ports.
On the backup storage server, I have very successfully run 4x1GbE Intel NICs bonded in OmniOS for years. Perfectly stable, AFAIK.
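For context, a 1GbE aggregation like the one on the backup server is typically created on OmniOS with `dladm`/`ipadm` along these lines. This is only a sketch: the `e1000g0`-`e1000g3` link names, the `aggr0` name, and the address are illustrative, and LACP (`-L active`) requires a matching port-channel configuration on the switch side.

```shell
# Sketch of an LACP aggregation on OmniOS (illumos dladm/ipadm).
# Link names, aggregation name, address, and MTU are example values.

# Bond four physical links into one aggregation with LACP active mode,
# hashing flows across members on L3+L4 headers:
dladm create-aggr -L active -P L3,L4 \
    -l e1000g0 -l e1000g1 -l e1000g2 -l e1000g3 aggr0

# Enable jumbo frames on the aggregated link (must match the switch/peers):
dladm set-linkprop -p mtu=9000 aggr0

# Plumb the aggregation and assign a static address:
ipadm create-if aggr0
ipadm create-addr -T static -a 192.168.10.5/24 aggr0/v4

# Verify member state and LACP negotiation:
dladm show-aggr -x aggr0
```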
Question
=======
In the past, GEA has not recommended bonding 10GbE NICs. Has this recommendation changed, particularly for critical VHD storage?
Is anyone successfully using bonded 10GbE NICs for primary VHD storage in production?
The reliability of my primary storage system is paramount.
Thanks in advance,
G