ZFS vCluster in a Box


gea

Well-Known Member
Dec 31, 2010
3,141
1,184
113
DE
info
napp-it 18.12 p2 (next free) is ready and includes all the latest fixes,
including the Cluster auto failover service.
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,184
113
DE
napp-it 19.06 Pro

Cluster supports HA iSCSI failover (besides NFS and SMB)
Cluster agent: sync/restore users and Comstar settings to/from the HA pool zraid-1
Comstar: save/restore all settings without a reboot (Cluster aware)
Cluster control functionality can be activated on a backup server
Appliance group add: allows adding a Cluster (enables replication from an HA IP)
Solaris 11.4: beadm destroy -s problem fixed
 
  • Like
Reactions: Evan

Alestrix

New Member
Jan 29, 2020
1
0
1
This sounds like a fun project that I would love to play with. However, what is the benefit of having this virtualized on one piece of hardware?
My understanding is that the HA setup is meant to mitigate hardware failures, but with all VMs running on the same hardware, that advantage is moot. The only thing that comes to mind that this could optimize is the downtime during an update. Is there any other advantage?
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,184
113
DE
It depends.

A ZFS Cluster consists of two servers (an active and a passive head) and a common shared ZFS pool. Hardware failures (aside from disks), which are what an HA Cluster protects against for better availability, happen perhaps once in many years. Software updates, either a new stable OmniOS version (every 6 months) or bug and security fixes (every few weeks), happen far more often, and each one may introduce problems.

If you mainly want to test updates or evaluate new features prior to production use, with a failover time of about 20 s after evaluation, you can virtualise the two heads to reduce hardware and cost.

If you want an HA Cluster that can survive a full storage server failure, you need two physical/barebone servers. If you additionally want to survive a full disk JBOD failure, you need two JBODs (each connected to both heads) in an HA/mirror setup.
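
For illustration only: napp-it's cluster service automates and monitors the failover, but conceptually a takeover on an illumos head comes down to importing the shared pool and moving the HA service IP. The pool, NIC, and address names below are placeholders, not napp-it defaults.

  # on the surviving/passive head, after the active head is confirmed down
  zpool import -f zraid-1                                       # force-import the shared HA pool
  ipadm create-addr -T static -a 192.168.1.10/24 vmxnet3s1/ha   # bring up the HA service IP
  svcadm restart svc:/network/nfs/server:default                # re-publish NFS shares to clients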
 
  • Like
Reactions: SRussell

rchristophe

New Member
Aug 29, 2016
29
0
1
Hello. What behaviour should we expect from a VM that has its disk on an NFS share during the 20-second break caused by a server failover?

Are there any plans to add support for MinIO?
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,184
113
DE
The outage is short enough to avoid a storage timeout. If an application holds a lock on a file, that lock is no longer valid after the failover. In general, the behaviour after a failover is much the same as after an NFS (or SMB) service restart on a regular NFS server.

Support for MinIO is planned. Until then, you can already start it via the post-failover script functionality.
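
As a rough sketch only (not the exact napp-it hook; the binary path, data directory, and port are assumptions), a post-failover script could simply start MinIO on the freshly imported HA pool:

  #!/bin/sh
  # started by the cluster after a successful failover on the new active head
  /usr/local/bin/minio server --address :9000 /zraid-1/minio-data &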
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,184
113
DE
I would expect ESXi to handle an NFS service restart without problems, and that is what happens from the ESXi point of view. A different question is whether you have VMs where special operations rely on a file locking mechanism, but I would consider that a very special case.
 

rchristophe

New Member
Aug 29, 2016
29
0
1
In fact I mostly use virtualization under KVM, and I don't know whether virtual machines perform special operations that rely on a file locking mechanism.

Is it possible, on the head1 and head2 nodes as well as on the cluster control node, to have one interface dedicated to the LAN and another to management, and therefore a dedicated network for management?
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,184
113
DE
This is the default setup. All three nodes are connected via a management interface. The two heads have an additional interface connected to the LAN, either with a static IP or with an HA failover IP.

A setup with only one NIC per node is also possible.
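
As an example of what the two-interface layout could look like on an OmniOS head (NIC names and addresses are placeholders, not defaults), the interfaces can be set up with ipadm; the HA failover IP itself is then raised on the LAN NIC of whichever head is currently active:

  # management network
  ipadm create-if vmxnet3s0
  ipadm create-addr -T static -a 10.0.0.11/24 vmxnet3s0/mgmt
  # LAN network (the cluster adds the HA IP on this NIC on the active head)
  ipadm create-if vmxnet3s1
  ipadm create-addr -T static -a 192.168.1.11/24 vmxnet3s1/lan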