HA iSER Target with ESXi Test Lab using StarWind Free vSAN


indigo

New Member
Dec 9, 2015
HA iSER Target with ESXi Test Lab using StarWind Free vSAN - Part1

Hi STH

I am writing to share my experience building an iSER storage setup a few months ago.
I hope it helps STH users.

Test Lab System

1. Storage Specifications (2x HA Storage Box)

  • CPU : Intel Xeon E3-1240v6
  • MB : Supermicro X11SSM-F
  • RAM : 4x16GB DDR4-2133 ECC/U (Total 64GB)
  • RAID Card : LSI SAS 9271-4i with Cache Protection
  • Boot Disk : ADATA 128GB M.2 SATA SSD
  • Data Disk : 24x Seagate ST3000DM001 (3TB)
  • FlashCache Disk : Samsung 960 EVO 1TB NVMe M.2
  • Chassis : Supermicro SC846E16-R920
  • NIC : 2x Mellanox ConnectX-3 VPI (dual port), used in Ethernet mode
  • Etc : PCI-E x4 M.2 Converter
  • OS : Windows Server 2012 R2

Note : The Supermicro X11SSM-F motherboard has four expansion slots: two run at PCI-E 3.0 x8 and two at PCI-E 3.0 x4. I'm using these slots as follows.

  • PCI-E x8 : LSI RAID Card
  • PCI-E x8 : Mellanox ConnectX-3 VPI (iSCSI Channel-A, iSCSI Channel-B)
  • PCI-E x4 : Mellanox ConnectX-3 VPI (Sync Channel-A, Sync Channel-B)
  • PCI-E x4 : PCI-E x4 M.2 Converter

front side : 2x M.2 SATA SSD


rear side : 1x M.2 NVMe SSD

2. Host Specifications
  • CPU : 2x Intel Xeon E5-2670
  • MB : Supermicro X9DRi-F
  • RAM : 16x16GB DDR3-1600 ECC/REG (Total 256GB)
  • Boot Disk : None
  • Chassis : Supermicro SC825TQ-R740
  • NIC : 2x Mellanox ConnectX-3 VPI (dual port)
  • OS : ESXi 6.0 Update 3
Note : ESXi 6.5 does not yet support an iSER initiator at the time of writing

3. Network Specifications

  • 2x Arista DCS-7050QX-32s 40G Switch
  • 2x Dell X1052 1G Switch with 10G Uplink

Prepare Hosts

1. Remove the inbox drivers using the ESXi SSH shell

- list inbox drivers
[root@x9dri1:~] esxcli software vib list | grep mlx
net-mlx4-core 1.9.7.0-1vmw.600.0.0.2494585 VMware VMwareCertified 2017-06-10
net-mlx4-en 1.9.7.0-1vmw.600.0.0.2494585 VMware VMwareCertified 2017-06-10
nmlx4-core 3.0.0.0-1vmw.600.0.0.2494585 VMware VMwareCertified 2017-06-10
nmlx4-en 3.0.0.0-1vmw.600.0.0.2494585 VMware VMwareCertified 2017-06-10
nmlx4-rdma 3.0.0.0-1vmw.600.0.0.2494585 VMware VMwareCertified 2017-06-10

- remove the inbox drivers (they must be removed in this order, dependent VIBs before the core VIBs)
[root@x9dri1:~] esxcli software vib remove -n net-mlx4-en
Removal Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed:
VIBs Removed: VMware_bootbank_net-mlx4-en_1.9.7.0-1vmw.600.0.0.2494585
VIBs Skipped:
[root@x9dri1:~] esxcli software vib remove -n net-mlx4-core
Removal Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed:
VIBs Removed: VMware_bootbank_net-mlx4-core_1.9.7.0-1vmw.600.0.0.2494585
VIBs Skipped:
[root@x9dri1:~] esxcli software vib remove -n nmlx4-rdma
Removal Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed:
VIBs Removed: VMware_bootbank_nmlx4-rdma_3.0.0.0-1vmw.600.0.0.2494585
VIBs Skipped:
[root@x9dri1:~] esxcli software vib remove -n nmlx4-en
Removal Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed:
VIBs Removed: VMware_bootbank_nmlx4-en_3.0.0.0-1vmw.600.0.0.2494585
VIBs Skipped:
[root@x9dri1:~] esxcli software vib remove -n nmlx4-core
Removal Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed:
VIBs Removed: VMware_bootbank_nmlx4-core_3.0.0.0-1vmw.600.0.0.2494585
VIBs Skipped:

2. Reboot the host, then download and install the Mellanox iSER driver (MLNX-OFED-ESX-1.9.10.5)

LINK : http://www.mellanox.com/page/products_dyn?product_family=29

[root@x9dri1:~] esxcli software vib install -d /tmp/MLNX-OFED-ESX-1.9.10.5-10EM-600.0.0.2494585.zip
Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: MEL_bootbank_net-ib-addr_1.9.10.5-1OEM.600.0.0.2494585, MEL_bootbank_net-ib-cm_1.9.10.5-1OEM.600.0.0.2494585, MEL_bootbank_net-ib-core_1.9.10.5-1OEM.600.0.0.2494585, MEL_bootbank_net-ib-mad_1.9.10.5-1OEM.600.0.0.2494585, MEL_bootbank_net-ib-sa_1.9.10.5-1OEM.600.0.0.2494585, MEL_bootbank_net-ib-umad_1.9.10.5-1OEM.600.0.0.2494585, MEL_bootbank_net-memtrack_1.9.10.5-1OEM.600.0.0.2494585, MEL_bootbank_net-mlx4-core_1.9.10.5-1OEM.600.0.0.2494585, MEL_bootbank_net-mlx4-en_1.9.10.5-1OEM.600.0.0.2494585, MEL_bootbank_net-mlx4-ib_1.9.10.5-1OEM.600.0.0.2494585, MEL_bootbank_net-rdma-cm_1.9.10.5-1OEM.600.0.0.2494585, MEL_bootbank_scsi-ib-iser_1.9.10.5-1OEM.600.0.0.2494585
VIBs Removed:
VIBs Skipped:

3. Reboot the host and verify the driver installation

[root@x9dri1:~] esxcli software vib list | grep mlx
net-mlx4-core 1.9.10.5-1OEM.600.0.0.2494585 MEL PartnerSupported 2017-06-10
net-mlx4-en 1.9.10.5-1OEM.600.0.0.2494585 MEL PartnerSupported 2017-06-10
net-mlx4-ib 1.9.10.5-1OEM.600.0.0.2494585 MEL PartnerSupported 2017-06-10

[root@x9dri1:~] esxcli network nic list
Name PCI Device Driver Admin Status Link Status Speed Duplex MAC Address MTU Description
------------ ------------ ------- ------------ ----------- ----- ------ ----------------- ---- -------------------------------------------------
vmnic0 0000:02:00.0 igb Up Up 1000 Full 00:00:00:00:00:00 1500 Intel Corporation I350 Gigabit Network Connection
vmnic1 0000:02:00.1 igb Up Up 1000 Full 00:00:00:00:00:00 1500 Intel Corporation I350 Gigabit Network Connection
vmnic1000202 0000:83:00.0 mlx4_en Up Up 40000 Full 00:00:00:00:00:00 1500 Mellanox Technologies MT27500 Family [ConnectX-3]
vmnic1000302 0000:04:00.0 mlx4_en Up Up 40000 Full 00:00:00:00:00:00 1500 Mellanox Technologies MT27500 Family [ConnectX-3]
vmnic2 0000:83:00.0 mlx4_en Up Up 40000 Full 00:00:00:00:00:00 1500 Mellanox Technologies MT27500 Family [ConnectX-3]
vmnic3 0000:04:00.0 mlx4_en Up Up 40000 Full 00:00:00:00:00:00 1500 Mellanox Technologies MT27500 Family [ConnectX-3]
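
You can also confirm from the shell that the new iSER storage adapter has registered. A minimal check (output omitted; the adapter name, e.g. vmhba40, varies per host):

[root@x9dri1:~] esxcli storage core adapter list

The Mellanox iSCSI over RDMA (iSER) adapter should appear in the list alongside the local HBAs.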

Prepare Storage

1. Install Windows Server OS and update system

2. Download the Mellanox WinOF driver (MLNX-VPI-WinOF-5.35) and install it

LINK : http://www.mellanox.com/page/products_dyn?product_family=32

*** Important Note ***

RoCE encapsulates the IB transport in one of the following Ethernet packet formats:
  • RoCE MAC based - dedicated EtherType (0x8915) (called RoCEv1 in WinOF)
  • RoCE IP based - IP with a dedicated IP protocol number (0xfe) (called RoCEv1.25 in WinOF)
  • RoCE over IP (called RoCEv1.5 in WinOF)
  • RoCEv2 - UDP with a dedicated UDP port (4791)
* The ESXi 1.9.10.5 driver uses only RoCEv1 (MAC based), so the storage target must use MAC-based RoCEv1 as well

* Starting with the WinOF 5.20 driver, the default RoCE mode is RoCEv2, so you have to change the RoCE mode back to RoCEv1 for use with the ESXi iSER initiator

* The Mellanox ConnectX-3 VPI supports only RoCEv1, while the ConnectX-3 Pro VPI supports both RoCEv1 and RoCEv2




3. Change the RoCE mode with a PowerShell command

PS C:\Users\Administrator> Set-MlnxDriverCoreSetting -RoceMode 1

PS C:\Users\Administrator> Get-MlnxDriverCoreSetting

 

indigo

New Member
Dec 9, 2015
HA iSER Target with ESXi Test Lab using StarWind Free vSAN - Part2

4. Assign IP addresses and configure driver parameters

After changing the RoCE mode with PowerShell, assign the IP addresses (a scripted sketch follows the list below).

In my case, I set them up as follows:

  • iSCSI-A (192.168.110.xx/24)
  • iSCSI-B (192.168.120.xx/24)
  • Sync-A (192.168.130.xx/24)
  • Sync-B (192.168.140.xx/24)
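
If you prefer to script the addressing, here is a minimal PowerShell sketch. The interface aliases (matching the role names above) and the .11 host octet are illustrative assumptions, not taken from the original setup:

# assumes the Windows NICs were renamed to match their roles (illustrative aliases)
New-NetIPAddress -InterfaceAlias "iSCSI-A" -IPAddress 192.168.110.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "iSCSI-B" -IPAddress 192.168.120.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Sync-A" -IPAddress 192.168.130.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Sync-B" -IPAddress 192.168.140.11 -PrefixLength 24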


Configure the device driver properties.

In general, the defaults are kept and the virtualization-related options are disabled (see the PowerShell sketch after this list):

  • SR-IOV : Disabled
  • Virtual Machine Queues : Disabled
  • VMQ VLAN Filtering : Disabled
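
These options can also be turned off with the inbox NetAdapter cmdlets; a minimal sketch, again assuming the illustrative interface aliases above. The exact display name of the VMQ VLAN filtering property varies between driver versions, so list the advanced properties first:

# list advanced properties to find the exact VMQ VLAN Filtering entry for this driver
Get-NetAdapterAdvancedProperty -Name "iSCSI-A"

# disable SR-IOV and VMQ on each Mellanox port
"iSCSI-A","iSCSI-B","Sync-A","Sync-B" | ForEach-Object {
    Disable-NetAdapterSriov -Name $_
    Disable-NetAdapterVmq -Name $_
}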




5. Download and install the StarWind vSAN Free version

LINK : Software Defined Storage for the HCI • StarWind Virtual SAN ® Free


The StarWind vSAN Free version has no restrictions except for VTL and management capabilities.

The Management Console is usable for 30 days, but you can manage StarWind vSAN with PowerShell commands permanently.


Refer to this document : https://www.starwindsoftware.com/whitepapers/starwind-virtual-san-free-vs-paid.pdf
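
To give a taste of the PowerShell route, here is a minimal connection sketch modeled on the StarWindX sample scripts bundled with the installer; treat the cmdlet and object members (New-SWServer, Connect, Devices) as assumptions to verify against those samples:

Import-Module StarWindX

# connect to the local StarWind service (default port and credentials)
$server = New-SWServer -host 127.0.0.1 -port 3261 -user root -password starwind
$server.Connect()

# list the devices currently defined on this node
$server.Devices | ForEach-Object { $_.Name }

$server.Disconnect()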

Once the installation is complete, reboot the system, run the StarWind Management Console, and add your server.



After connecting to the server, go to the [Configuration] tab and click the Network section on the left side.

Select the IP address of the NIC, then click the [Modify] button. There you can check the [Enable iSER for this Interface] checkbox.


For the rest of the procedure, refer to one of the following documents to create a vDisk:


By PowerShell command line : Free version (permanent use)

LINK : https://www.starwindsoftware.com/te...-ha-device-with-starwind-virtual-san-free.pdf


By StarWind Management Console : Free version (usable for 30 days)

LINK : https://www.starwindsoftware.com/te...ating_HA_device_with_StarWind_Virtual_SAN.pdf



6. Configure the network settings on the ESXi host

Log into the ESXi host and go to the Configuration tab >> Networking >> Add Networking >> select VMkernel.

Create a vSwitch and a VMkernel port group for iSER, add a VMkernel interface, configure the IP addresses, and assign the network adapters.



This process is identical to the normal iSCSI setup process for ESXi
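
The same network setup can also be scripted from the ESXi shell. A minimal sketch for one iSER path; the vSwitch name, port group name, vmnic, vmk number, and IP address are illustrative choices:

[root@x9dri1:~] esxcli network vswitch standard add -v vSwitch-iSER-A
[root@x9dri1:~] esxcli network vswitch standard uplink add -v vSwitch-iSER-A -u vmnic2
[root@x9dri1:~] esxcli network vswitch standard portgroup add -v vSwitch-iSER-A -p iSER-A
[root@x9dri1:~] esxcli network ip interface add -i vmk1 -p iSER-A
[root@x9dri1:~] esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.110.21 -N 255.255.255.0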



Go to the Configuration tab >> Storage Adapters.

If the driver installed successfully, you can see the iSER interfaces as follows:


Select the device under Mellanox iSCSI over RDMA (iSER) Adapter, and click [Properties]


On the Network Configuration tab, click Add to create a new VMkernel port binding.


On the Dynamic Discovery tab, add the StarWind iSER interface IP addresses.

If you have implemented HA with two storage boxes, enter both IP addresses.

After a rescan at the end of the process, you can see the StarWind iSER device.
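
For reference, the port binding, discovery, and rescan steps have esxcli equivalents as well. A minimal sketch, where vmhba40 stands in for whatever name the iSER adapter received on your host, and the target IPs follow the addressing plan above (two HA boxes at .11 and .12, illustrative):

[root@x9dri1:~] esxcli iscsi networkportal add -A vmhba40 -n vmk1
[root@x9dri1:~] esxcli iscsi adapter discovery sendtarget add -A vmhba40 -a 192.168.110.11:3260
[root@x9dri1:~] esxcli iscsi adapter discovery sendtarget add -A vmhba40 -a 192.168.110.12:3260
[root@x9dri1:~] esxcli storage core adapter rescan -A vmhba40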

 

SIlviu

Member
May 27, 2016
83
8
8
Hi, how did you connect the LSI SAS 9271-4i to the backplane? Through an expander? Or am I missing something...
 

NISMO1968

[ ... ]
Oct 19, 2013
These are pretty amazing numbers! 5+ GB/sec with sub-20% CPU load and a 30 ms response time.

Any chance to see 4KB 100% random reads & 4KB 100% random writes?

BTW, an iSCSI vs iSER head-to-head would be nice to have as well :)

P.S. Great job!!!
 

balkony

New Member
Oct 19, 2017
And what's the name of that M.2 to PCIe adapter?
I wonder if the X11SSM can boot from it.