Questions on 10GbE vs Infiniband for ESXi SAN

Jun 10, 2015
Toronto, ON, CAN
Hello all,

I've been doing some research on 10GbE and InfiniBand lately and was hoping to get some better advice from anyone on the forums. Currently I have two VMware ESXi 5.5 servers, with about 15-20 VMs between both, and have been using an NFS share from my NAS to store copies of my VMs. I've been looking at setting up a small SAN so that I can run my VMs off of a shared network location to centralize management and avoid needing to move VMs between hosts when resources get low. From my understanding, 10GbE costs a hell of a lot more than IB. Doing a general search for Mellanox ConnectX-3 cards on eBay shows me that the cards can run from $150-$400 depending on the seller and card model.

I would like to know what the advantages and disadvantages of 10GbE and InfiniBand are, as well as the cost of setting up either system. If I go 10GbE, would it be best to buy a prebuilt system such as the QNAP TS-470 Pro, or to do a DIY system? Preferably I would like a system that can start with a minimum of 4 drives and then grow as I add more storage to it. I would also like to know whether using onboard SATA III or a dedicated HBA card would be beneficial, since I will be running the VMs off of the networked location.

My requirements are:
-Solution must be supported by ESXi 5.5 and 6.0
-Must be networked storage that can be monitored from a web interface
-Solution must support SSD caching (a 512GB-1TB SSD will be used)

Current Setup:
2 ESXi 5.5 hosts
-Tyan S5512GM4NR
-Xeon E3-1230 V2
-32GB Kingston ECC UDIMM
-1 Crucial M500 960GB SSD (one in each host)
-1 Intel ET dual-port network card (one in each host)

1 QNAP TS-469L (with NFS folder shared)
-4 WD 3TB Reds in a RAID 5 config

1 Zyxel GS1910-24 switch

Will be adding:
1 Juniper EX2200-48T-4G (virtualization networking will be moved to this switch once installed)

EDIT 1: Forgot to mention this is for a home lab environment.

EDIT 2: Forgot to also mention I'd like this to operate as quickly as possible when running the VMs, as some of them are also accessed by friends from their homes or schools.
 

EluRex

Active Member
Apr 28, 2015
Los Angeles, CA
A ZFS pool + InfiniBand sounds like it suits your needs and requirements. A SAN should operate at the FC or RDMA level; translating everything to TCP/IP throws a lot of performance down the drain, even though iSCSI is quick and easy, so I would avoid plain 10GbE solutions. However, InfiniBand is harder to set up, an IB switch that supports a subnet manager is more expensive, and driver support/availability is limited compared to 10GbE.
 

gea

Well-Known Member
Dec 31, 2010
DE
You have not said whether this should be a cheap home system or a more professional solution. It also depends on whether you are looking for a cheap homemade solution (any cheap IB) or a solution that just works.

If you mainly look for centralized storage that is quite easy to use,
I would avoid block storage like FC, IB or iSCSI and stay with NFS and Ethernet.

Some suggestions for high-quality ZFS storage:
- an NFS NAS with 16-32 GB RAM (skip the idea of SSD caching; you need RAM caching for performance, but use SSDs for your VMs), e.g. an HP MicroServer Gen8 with an additional 2-port 10 GbE Intel adapter

or a new system based on this high-end server-class mainboard:
Supermicro | Products | Motherboards | Xeon® Boards | X9SRH-7TF
This board offers 2 x 10 GbE and a high-end HBA disk controller, and supports up to 512 GB RAM.

Add a 10 GbE adapter like an Intel X540-T1 to your VM machines and connect both directly via 10 GbE to your storage. Use a web-managed ZFS appliance software, e.g. FreeNAS, or my napp-it on Oracle Solaris or a free fork like OmniOS.
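
For concreteness, here is a minimal sketch of what that NFS route could look like on a Solaris/OmniOS-style appliance; the pool name, disk IDs, addresses and datastore name are illustrative placeholders, not details from this thread.

#!/usr/bin/env python3
"""Sketch: build a ZFS pool, share it over NFS, mount it on an ESXi host.
All names and addresses below are placeholders."""
import subprocess

def run(cmd):
    # Print and execute one command, stopping on the first failure.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# --- on the OmniOS/Solaris storage appliance ---
run(["zpool", "create", "tank", "mirror", "c1t0d0", "c1t1d0"])   # mirrored pool
run(["zfs", "create", "tank/vmstore"])
run(["zfs", "set", "sync=always", "tank/vmstore"])               # honor ESXi sync writes
run(["zfs", "set",
     "sharenfs=rw=@10.10.10.0/24,root=@10.10.10.0/24",
     "tank/vmstore"])                                            # export to the storage subnet

# --- on each ESXi host ---
run(["esxcli", "storage", "nfs", "add",
     "--host=10.10.10.10",
     "--share=/tank/vmstore",
     "--volume-name=zfs-vmstore"])

With a direct 10 GbE link per host there is no switch involved at all; each host simply points --host at the address of the storage NIC it is cabled to.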

Or use a virtual SAN on both of your VM servers.
This requires an additional HBA like an LSI 9207 (or an IBM M1015 reflashed to IT firmware).
In that case, you can use the shared SAN storage locally or from any other machine.

Both machines then serve as a backup option for each other, or you can add a dedicated NFS backup system.
 
Jun 10, 2015
Toronto, ON, CAN
This setup is for a home lab environment. I had not thought about using NFS as the connection method for the SAN. I used to work at a data centre in Toronto, and as far as I can remember most of the SAN connections they used for ESXi were either FC or iSCSI over 10GbE. What is the difference between using NFS versus iSCSI for running the VMs?
 

gea

Well-Known Member
Dec 31, 2010
DE
NFS is a file-sharing protocol, which means that more than one computer can connect concurrently.
FC or iSCSI are methods of offering disk space to a single server, where it is treated like a local disk.
Concurrent use is only possible with cluster software.

Performance-wise, NFS, iSCSI and FC are quite similar with comparable write settings
(the sync write setting with NFS and the write-back cache setting with iSCSI when using ZFS).
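
As a rough illustration of those two settings (the pool/volume names are made up, and the iSCSI part assumes an OmniOS/Solaris appliance with COMSTAR):

#!/usr/bin/env python3
"""Sketch of the write settings mentioned above: sync writes for an NFS
datastore vs. a zvol exported over iSCSI with write-back cache enabled."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# NFS datastore: force synchronous writes so ESXi's sync requests are honored.
run(["zfs", "set", "sync=always", "tank/vmstore"])

# iSCSI alternative: a zvol exported through COMSTAR with write-back cache
# enabled (wcd=false means "write cache disable" is off).
run(["zfs", "create", "-V", "500G", "tank/vm_lun"])
run(["stmfadm", "create-lu", "-p", "wcd=false", "/dev/zvol/rdsk/tank/vm_lun"])

Either way the idea is the same: writes that ESXi expects to be stable should actually be committed before they are acknowledged, which is why the protocols end up performing quite similarly once the write settings are comparable.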

btw
Why do you want to buy a quite expensive 1G switch?
I would buy a 10G-capable switch like an HP 4800 (12 or 24 x 1G, 4 x 10G SFP+), a Netgear XS708 (8 x 10G) or, for home, a D-Link DGS-1510 (22 x 1G, 2 x 10G SFP+).
 
Jun 10, 2015
Toronto, ON, CAN
The Juniper EX2200 is for additional 1GbE equipment. Currently my 24-port Zyxel is maxed out and I have some additional devices waiting for a free port to open up. Initially I plan on using a direct-connect setup from my two ESXi hosts to the SAN, and then add a switch when I add more hosts to the SAN network. The benefit of using the Juniper is that I'm somewhat familiar with the Junos software.