First post on here so hi everyone!
I tried quite hard to build such a redundant storage solution for VMware and couldn't make it work with NFS on Linux. Failover would fail because of open file handles, and network multipathing is not possible (AFAIK). I also tried NFSv4 with multiple sessions but couldn't get the Linux NFS server to play nicely with ESXi. My next option would have been nfs-ganesha.
I finally went down the iSCSI path with SCST and custom Pacemaker resource agents for ZFS pools and iSCSI targets (happy to share).
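To give an idea of how the pieces hang together, here is a rough sketch of the cluster configuration, assuming hypothetical custom agents named `ocf:custom:zpool` and `ocf:custom:scst-target` (the actual agent names and parameters in my setup differ; this just shows the ordering/colocation idea):

```
# Import the pool on the active node, then bring up the target on top of it.
pcs resource create tank ocf:custom:zpool pool=tank
pcs resource create san-tgt ocf:custom:scst-target \
    iqn=iqn.2021-01.local.san:tank

# Group them so they always run together, in order: pool first, target second.
pcs resource group add san-group tank san-tgt
```

On failover, Pacemaker stops the target, exports the pool on the old node, imports it on the new node, and restarts the target there; the ESXi initiators just see a path go down and come back.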
I also first tried exporting zvol block devices, and performance was abysmal. I eventually settled on plain files on ZFS datasets exported via fileio, and I will probably never look back.
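For reference, a minimal `scst.conf` fragment for this kind of setup looks roughly like the following (device name, file path, and IQN are made up for illustration):

```
HANDLER vdisk_fileio {
    DEVICE vm-store {
        # Plain file on a ZFS dataset instead of a zvol
        filename /tank/vmware/vm-store.img
        nv_cache 1
    }
}

TARGET_DRIVER iscsi {
    TARGET iqn.2021-01.local.san:tank {
        LUN 0 vm-store
        enabled 1
    }
}
```

With `nv_cache 1`, SCST ignores write-ordering/sync commands and leaves data integrity to ZFS; whether that trade-off is acceptable depends on your power/UPS situation, so treat it as a tuning knob rather than a default.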
Finally, network multipathing is trivial with iSCSI.
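On the ESXi side, multipathing amounts to binding two VMkernel ports to the software iSCSI adapter and switching the path policy to round robin. A sketch, assuming the adapter is `vmhba64`, the ports are `vmk1`/`vmk2`, and `naa.xxx` stands in for your device ID:

```
# Bind two VMkernel NICs (on separate subnets/VLANs) to the iSCSI adapter
esxcli iscsi networkportal add -A vmhba64 -n vmk1
esxcli iscsi networkportal add -A vmhba64 -n vmk2

# Use round-robin across both paths for the SAN device
esxcli storage nmp device set -d naa.xxx -P VMW_PSP_RR
```

Each bound port gives you an independent path to the target, so a NIC or switch failure on one leg doesn't interrupt I/O.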
I have been running this on CentOS 7 / ZFS 0.7.x / SCST 3.3 for years. I am now in the process of building the next iteration on Rocky Linux 8.4 / ZFS 2.0 (or 2.1) and the latest SCST.
I am therefore also looking for optimizations.
Cheers. Patrick