Scenario:
-iSCSI Target is provided by a VM
-then mounted as a host's Datastore
-ESXi gets rebooted, and because the target is not present during ESXi start-up (that VM hasn't even booted yet), the iSCSI Datastore never gets mounted
/*I know the set-up is less than ideal, but for a home environment it must do for now. The iSCSI datastore is for testing and less critical stuff. The main Datastore is local NVMe.*/
In that scenario the system behaves as expected, but not as desired. Manually re-scanning and re-mounting after every reboot is not feasible.
Tuning the iSCSI Advanced Parameters (timeouts, retries) is useless IMHO; I've done some testing on that.
The solution that works for me is to send
esxcli storage core adapter rescan --all
from the iSCSI Target VM (once it has fully booted) to the ESXi host. The Datastore then gets re-mounted and everybody's happy.
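The trigger from the target VM can be sketched as a small boot-time script. This is a minimal sketch, assuming SSH is enabled on the ESXi host with key-based auth set up from the target VM; ESXI_HOST is a placeholder for your environment, and the appended vmkfstools -V is an extra step that refreshes VMFS volumes after the adapter rescan.

```shell
#!/bin/sh
# Run on the iSCSI target VM after the target service is up
# (e.g. from a systemd unit or rc.local).
# Rescan all storage adapters on the host, then refresh VMFS volumes.
RESCAN_CMD='esxcli storage core adapter rescan --all && vmkfstools -V'

if [ -n "$ESXI_HOST" ]; then
    # Send the rescan to the ESXi host over SSH (key-based auth assumed).
    ssh "root@$ESXI_HOST" "$RESCAN_CMD"
else
    # No host configured: just show what would be sent.
    echo "$RESCAN_CMD"
fi
```

Wiring this into the target VM's startup (after the iSCSI target service) automates the re-mount without touching the ESXi side beyond enabling SSH.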
Is anyone here using a similar setup? Any tricks for re-mounting the iSCSI datastore?