My configuration:
Server: Supermicro X9SCL+-F, Xeon E3-1240 V2, 16GB RAM
USB stick: ESXi
2 x 160GB 2,5" disks: 20GB VM datastore on each, OMNIOS installed in one and then mirrored
2 x 320GB 2,5" disks: on separate disk controller mounted in OMNIOS as ZFS mirror, one file system, stores all the vm images, mounted in ESXi via NFS
2 x 3TB 3,5" disks: datastore, used by various applications
on ESXi there are several servers installed, all having their datastore on the NFS mounted volume
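For reference, the two ZFS pools were created roughly as sketched below. The pool and device names are from memory and may not match the actual system, and the 3TB pool layout is assumed here to be a mirror as well:

    # 320GB mirror that backs the NFS-exported VM datastore (names assumed)
    zpool create vmpool mirror c2t0d0 c2t1d0
    zfs create vmpool/vmimages
    zfs set sharenfs=on vmpool/vmimages

    # 3TB pool that holds /data (pool name and mirror layout assumed;
    # a pool named "data" mounts at /data by default)
    zpool create data mirror c3t0d0 c3t1d0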
This morning no ESXi VM was running except the OmniOS one; all the others showed "not available". I was able to get a console on OmniOS, but the ZFS file systems did not respond: an "ls /data" simply hung, where "/data" sits on the 3TB ZFS pool.
I rebooted OmniOS and then found that one of the 320GB disks, which hold the VM datastore, was unavailable. Prior to the reboot I had no access to the web interface.
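In case it helps, these are the kinds of commands I can run from the OmniOS console to check the state (the pool name vmpool is the same assumption as in the sketch above):

    # overall pool health; a degraded mirror should show up here
    zpool status -x
    zpool status vmpool
    # fault management error log, in case the disk reported errors before dropping out
    fmdump -e
    # non-interactively list the disks the controllers currently see
    format </dev/null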
My questions now are:
Shouldn't this setup have prevented exactly this kind of failure?
And is there a log file somewhere where we can find out what went wrong?