I have added a sample configuration for a 2000 Euro/USD ZFS 6TB SAS HA Cluster (complete hardware and software) with a step-by-step setup manual to http://www.napp-it.org/doc/downloads/z-raid.pdf
Cluster supports HA iSCSI failover (besides NFS and SMB)
Cluster agent: syncs/restores users and Comstar settings to/from HA pool zraid-1
Comstar: save/restore all settings without reboot (Cluster aware)
Cluster control functionality can be activated on a backup server
Appliance group add: allows adding a Cluster (enables replication from a HA ip)
Solaris 11.4: beadm destroy -s problem fixed
This sounds like a fun project that I would love to play with. However, what is the benefit of having this virtualized on one piece of hardware?
My understanding is that the HA setup is meant to mitigate hardware failures, but with all VMs running on the same hardware, that advantage is moot. The only thing that comes to my mind that can be optimized with this is the downtime during an update. Is there any other advantage?
A ZFS Cluster consists of two servers (active/passive head) and a common shared ZFS pool. Hardware failures (other than disks), which a HA Cluster is meant to protect against for better availability, happen maybe once in many years. Software updates, either a new stable OmniOS release (every 6 months) or bug/security fixes (every few weeks), happen far more often and each may introduce problems.
If you mainly want to test updates or evaluate new features prior to production use, with a failover time of around 20s after evaluation, you can virtualise the two servers to reduce hardware and costs.
If you want a HA Cluster that can survive a full storage server failure, you need two hardware/barebone servers. If you additionally want to survive a full disk jbod failure, you need two jbods (each connected to both heads) in a HA/mirror setup.
The outage is short enough to avoid a storage timeout. If an application holds a lock on a file, that lock is no longer valid after the failover. In general, the behaviour after a failover is much the same as after an NFS (or SMB) service restart on a normal NFS system.
Support for minIO is planned. Until then, you can start it via the post-failover script functionality.
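As a rough illustration, a post-failover script could look like the sketch below. The install path, pool mountpoint, and port are assumptions for illustration only; the thread only states that a user script can be run after failover, not its exact hook or location.

```shell
#!/bin/sh
# Hypothetical post-failover script: start minIO on the freshly imported
# HA pool. All paths below are assumed examples, not napp-it defaults.
MINIO_BIN=/opt/minio/minio   # assumed minIO binary location
POOL=/zraid-1                # assumed mountpoint of the HA pool

# Only start minIO if the binary exists and the pool is mounted on this head.
if [ -x "$MINIO_BIN" ] && [ -d "$POOL" ]; then
  "$MINIO_BIN" server --address :9000 "$POOL/s3data" &
fi
echo "post-failover script finished"
```

The guard matters because the same script may be present on both heads; it only starts the service on the head that actually imported the pool.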
I would expect ESXi to handle an NFS service restart without problems, and this is what happens from the ESXi view. A different question is whether you have VMs where special operations rely on a file-locking mechanism, but I would consider this a very special case.
In fact I use virtualization much more under KVM.
And I don't know whether virtual machines use special operations that rely on a file-locking mechanism.
Is it possible, on the head1 and head2 nodes as well as on the cluster control node, to have one interface dedicated to the LAN and another to management, and therefore a dedicated network for management?
This is the default setup. All three nodes are connected via a management interface. The two heads have an additional interface connected to the LAN, either with a static IP or a HA failover IP.
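On an OmniOS/illumos head, the split described above could be configured roughly as follows. The interface names and addresses are assumptions for illustration; only `ipadm` as the configuration tool is a given on that platform.

```shell
#!/bin/sh
# Sketch: dedicated management and LAN addresses on one head node.
# Interface names and subnets below are assumed examples.
MGMT_IF=igb0    # assumed management network interface
LAN_IF=igb1     # assumed LAN-facing interface

# ipadm is the illumos/OmniOS tool for persistent IP configuration;
# skip if it is not available (e.g. when testing on another OS).
if command -v ipadm >/dev/null 2>&1; then
  ipadm create-addr -T static -a 10.0.0.11/24    "${MGMT_IF}/mgmt"
  ipadm create-addr -T static -a 192.168.1.11/24 "${LAN_IF}/lan"
fi
```

The HA failover IP itself would be managed by the cluster software and moved between the heads on failover, so only the static per-head addresses are set up this way.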