Custom Made SAN for VMware (Linux/FC)

Hordak

New Member
May 26, 2017
Hi!

I want to build a 3-node VMware Essentials cluster for our developers as a playground. So it's commercial use, but non-mission-critical.

While we have spare VMware licenses and some Dell servers for the ESXi hosts, plus a Dell server for a SAN including flash devices, I'm thinking about which software to use and the connection type. I only have experience with iSCSI, and I would like to use other technologies for this project.

I thought about FC because I can get those HBAs for a decent price and run a plain Linux system for the SAN. But, as I said, I don't have any experience with this technology. My dumb thought is to buy enough HBAs and make direct connections from each ESXi host to the SAN server, which will run a plain Linux system, possibly with multipathing, which would require four 2-port HBAs on the SAN side.
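For what it's worth, on the ESXi side direct-attached FC paths are claimed by the native multipathing plugin just like switched ones, so multipathing is mostly a path-selection-policy question. A rough sketch (the `naa.*` device identifier below is a placeholder, not a real LUN):

```shell
# Make round-robin the default path selection policy for the
# default active/active SATP, so new FC LUNs use both paths.
esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR

# Or set it per device; naa.6001405... is a placeholder LUN identifier.
esxcli storage nmp device set --device naa.60014051111111111111111111111111 --psp VMW_PSP_RR

# Verify the claimed devices and their active policy.
esxcli storage nmp device list
```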

I know this is an uncommon topology, but because of the budget I can't invest in FC switches or FC-capable SAN software for commercial use.

What do you think about this? And are there any special requirements for the target HBAs (e.g. additional licenses to enable target mode)?
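For context: on a plain Linux box, target mode on QLogic HBAs is typically a driver setting plus the in-kernel LIO target (tcm_qla2xxx), not a paid license. A rough sketch, assuming qla2xxx HBAs and the targetcli tool; all WWNs and device names are placeholders:

```shell
# Disable initiator mode so the HBA ports come up in target mode.
echo "options qla2xxx qlini_mode=disabled" > /etc/modprobe.d/qla2xxx.conf
modprobe -r qla2xxx && modprobe qla2xxx

# Export a flash device as a LUN via the LIO target (targetcli-fb).
targetcli /backstores/block create name=vmfs1 dev=/dev/nvme0n1
targetcli /qla2xxx create naa.21000024ff000001            # local HBA port WWN
targetcli /qla2xxx/naa.21000024ff000001/luns create /backstores/block/vmfs1
# Allow a specific ESXi initiator port (its WWPN).
targetcli /qla2xxx/naa.21000024ff000001/acls create naa.21000024ff00aaaa
targetcli saveconfig
```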

Regards
 

Lix

Member
Aug 6, 2017
Can you live with iSCSI/NFS?

TrueNAS Core or a Windows server with Starwind SAN.
 

Hordak

New Member
May 26, 2017
Maybe I didn't get my intention across clearly enough, but I will not use iSCSI for this project. I have enough experience with that technology, and we have already been running EMC iSCSI SANs and StarWind iSCSI clusters for years.

NFS doesn't sound like an alternative either; it's just another TCP/IP technology.
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
Maybe I didn't get my intention across clearly enough, but I will not use iSCSI for this project. I have enough experience with that technology, and we have already been running EMC iSCSI SANs and StarWind iSCSI clusters for years.

NFS doesn't sound like an alternative either; it's just another TCP/IP technology.
The reason to use FC over an IP-based transport is to lower storage latency, but it adds significant complexity to the initial implementation and to ongoing management. On regular Ethernet you could achieve similar latency reductions with iWARP/RoCE or FCoE. I'd also say that latency-reduction measures make little sense unless you have NVMe-based flash storage (vs. SATA/SAS).

Speaking of Essentials: without an Essentials Plus license you don't get vMotion/HA, so you will be missing out on the most significant benefits of building a VMware vSphere cluster.

Consider the alternative, Nutanix CE - it's 100% free for internal business operations and non-production use, and I believe use as a developers' playground qualifies. The most significant things to note are:

a) Nutanix CE is limited to 4 nodes max. Maximum storage isn't limited, despite what Nutanix's outlines vaguely state.
b) Nutanix controllers (CVMs) are fairly resource-hungry, especially on memory. Expect to dedicate at least 16-24 GB of RAM on each host just for them.
c) You won't be using VMware to run virtual machines, but the Acropolis KVM-based hypervisor supports running the same workloads as VMware.

With Nutanix (similar to vSAN), you won't need a dedicated "SAN/NAS" box; instead, you'll be using the hosts' local storage in a software-defined cluster.

 

audiophonicz

Member
Jan 11, 2021
Maybe I didn't get my intention across clearly enough, but I will not use iSCSI for this project. I have enough experience with that technology, and we have already been running EMC iSCSI SANs and StarWind iSCSI clusters for years.

NFS doesn't sound like an alternative either; it's just another TCP/IP technology.
You did, but you're not getting the answer you want, because you're asking about hardware requirements and building a network topology without the network... in a software sub-forum. Your questions and the terminology you use show your experience level with such things. Going into specifics is going to make me come off as a dick, so I'll just say I don't think you're going to get the answers you want. You should look into the super common, thoroughly vetted, and very widely available NAS software, since that's essentially what you're using now, and it's what your budget can support.
 

LaMerk

Member
Jun 13, 2017
Maybe I didn't get my intention across clearly enough, but I will not use iSCSI for this project. I have enough experience with that technology, and we have already been running EMC iSCSI SANs and StarWind iSCSI clusters for years.
StarWind added support for FC - see "StarWind Virtual SAN for vSphere: Configuring Fibre Channel" in their resource library. Such a setup eliminates the SPOF compared to a standalone Linux storage server. With replicated storage servers you'll need a dual-port HBA in each compute host, 3 FC ports in each storage server, and 10/25 GbE Ethernet for synchronization between the storage servers.
 