Windows Server 2012 R2 and iSCSI Target and MPIO


tjk

Active Member
Mar 3, 2013
I'm doing a bunch of lab testing with Windows Server 2012 R2 as a storage server, attached to VMW clusters with 40Gb IPoIB.

So far have tested NFS with caching and such, looking for guidance on using 2012 R2 as an iSCSI target with MPIO, since you cannot team IPoIB nics. Is there anything special I need to do to get MPIO running on the target side? I know the VMW side and have MPIO going to a bunch of other targets, just not finding much for the Windows Target side.

Any help much appreciated!

Tom
 

tjk

Active Member
Mar 3, 2013
Oh, o_O
Put your Windows 2012 R2 server iSCSI network on two subnets.
Got that. For example, the target side is 10.11.12.100 and 10.11.13.100; on the initiator side, do I have to put in each target address? Do I have to do anything special on the target side?

For example, I use a lot of EqualLogics, where I have a floating address of, say, 10.10.10.10, and each of the interfaces is 10.10.10.1, .2, .3, .4; for the initiator, I tell it to connect to target 10.10.10.10, which then load balances to the .1, .2, .3, and .4 addresses.

Tom
 

TuxDude

Well-Known Member
Sep 17, 2011
Got that. For example, the target side is 10.11.12.100 and 10.11.13.100; on the initiator side, do I have to put in each target address? Do I have to do anything special on the target side?

For example, I use a lot of EqualLogics, where I have a floating address of, say, 10.10.10.10, and each of the interfaces is 10.10.10.1, .2, .3, .4; for the initiator, I tell it to connect to target 10.10.10.10, which then load balances to the .1, .2, .3, and .4 addresses.

Tom
With proper SCSI MPIO (whether over iSCSI, FC, FCoE, or some other transport), you should be adding every target to every initiator. What your EqualLogic is doing (and many other iSCSI arrays, like the P4000/LeftHand I deal with) is iSCSI login redirect, where the target (usually via a floating virtual IP) redirects you to one of its other addresses on connect.
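
To illustrate the difference, here's a minimal sketch of the "add every target to every initiator" style using the Windows initiator cmdlets (just as an example platform; the IQN and initiator-side addresses are made up). With the EqualLogic-style redirect you would only ever point the initiator at the group IP instead.

```powershell
# Sketch: explicit MPIO-style login, assuming a Windows initiator with a NIC in
# each iSCSI subnet. IQN and initiator IPs are placeholders.
Enable-MSDSMAutomaticClaim -BusType iSCSI    # requires the Multipath-IO feature

New-IscsiTargetPortal -TargetPortalAddress 10.11.12.100
New-IscsiTargetPortal -TargetPortalAddress 10.11.13.100

# One session per portal, each sourced from the NIC in the matching subnet
foreach ($pair in @(@{t='10.11.12.100'; i='10.11.12.50'}, @{t='10.11.13.100'; i='10.11.13.50'})) {
    Connect-IscsiTarget -NodeAddress 'iqn.1991-05.com.microsoft:svr1-lun0-target' `
        -TargetPortalAddress $pair.t -InitiatorPortalAddress $pair.i `
        -IsMultipathEnabled $true -IsPersistent $true
}
```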
 

tjk

Active Member
Mar 3, 2013
With proper SCSI MPIO (whether over iSCSI, FC, FCoE, or some other transport), you should be adding every target to every initiator. What your EqualLogic is doing (and many other iSCSI arrays, like the P4000/LeftHand I deal with) is iSCSI login redirect, where the target (usually via a floating virtual IP) redirects you to one of its other addresses on connect.
TuxDude,

Thanks for the reply, a couple points/questions if I may.

So on the target side of 2012 R2, nothing special, just each interface on a different subnet? And on the VMW side, enter each target address and then enable MPIO, and it's that simple?
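
For reference, this is roughly the direction I'm heading on the target side, a minimal sketch assuming the built-in FS-iSCSITarget-Server role (the path, target name, and ESXi IQN below are placeholders):

```powershell
# Minimal 2012 R2 target sketch - nothing MPIO-specific on this side; the target
# should listen on all of its IPs (both subnets) by default.
Install-WindowsFeature FS-iSCSITarget-Server

New-IscsiVirtualDisk -Path 'D:\iSCSIVirtualDisks\lun0.vhdx' -SizeBytes 500GB
New-IscsiServerTarget -TargetName 'vmw-cluster' `
    -InitiatorIds @('IQN:iqn.1998-01.com.vmware:esx01-xxxxxxxx')
Add-IscsiVirtualDiskTargetMapping -TargetName 'vmw-cluster' -Path 'D:\iSCSIVirtualDisks\lun0.vhdx'
```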

On another note, I used LeftHand before HP bought them out; I used to love that stuff, and we ran mostly the VSAs. Are you running hardware or VSAs, and how do you like them now, many years later?

On the licensing side, I know they license by usable capacity, or used to. Can I run as many VSAs as I want and just license the total capacity, or do I still have to have a license for each VSA with the necessary capacity?

Thanks again!
 

TuxDude

Well-Known Member
Sep 17, 2011
TuxDude,

Thanks for the reply, a couple points/questions if I may.
No problem.

So on the target side of 2012 R2, nothing special, just each interface on a different subnet? And on the VMW side, enter each target address and then enable MPIO, and it's that simple?
Afraid I can't really help you on the Windows target part. I've never set up an MS-based iSCSI target, and most of the time I do my best to not use any of their products at all.

As for the VMware side, MPIO is always on; you just have to do whatever setup is required for it to find all of the available paths to the storage. If you are using full hardware-offload iSCSI cards/CNAs (they show up as storage adapters in the VI client/web client), then on each of them you have to enter all of the target addresses. If you are using the software iSCSI client, then you need to set up one vmkernel interface for every physical NIC that you want to use for iSCSI, each connected to a vswitch/port-group that is configured to use only that pNIC, and then add all of them to the SW-iSCSI vmhba. Once all of that is complete, you only need to add each target once to the vmhba, and the paths out through all of the vmknics will be discovered.

I can write up a guide on setting up ESXi software iSCSI with NIC-binding if there's demand for it; I just converted a 6-host cluster to use it at work (previously it was Emulex CNAs in iSCSI personality, now the CNAs are FCoE personality and using SW-iSCSI on top).
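
In the meantime, here's a compressed sketch of that setup via PowerCLI. The host name, vmk numbers, and target IPs are examples; it assumes the two iSCSI vmkernel ports already exist on port groups pinned to a single pNIC each, and the port-binding step needs a reasonably recent PowerCLI for the -V2 esxcli interface:

```powershell
# Sketch: bind both iSCSI vmknics to the software iSCSI adapter, then add each
# send-target address once - paths through every bound vmknic get discovered.
Connect-VIServer -Server vcenter.lab.local

$vmhost = Get-VMHost -Name esx01.lab.local
$hba    = Get-VMHostHba -VMHost $vmhost -Type iScsi | Where-Object { $_.Model -match 'Software' }

# Network port binding (one line per iSCSI vmkernel port)
$esxcli = Get-EsxCli -VMHost $vmhost -V2
$esxcli.iscsi.networkportal.add.Invoke(@{ adapter = $hba.Device; nic = 'vmk1' })
$esxcli.iscsi.networkportal.add.Invoke(@{ adapter = $hba.Device; nic = 'vmk2' })

# Dynamic discovery: each target portal added once to the SW-iSCSI vmhba
New-IScsiHbaTarget -IScsiHba $hba -Address '10.11.12.100' -Type Send
New-IScsiHbaTarget -IScsiHba $hba -Address '10.11.13.100' -Type Send

# Rescan to pick up the new paths
Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null
```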

I should also note that you probably don't have to have your interfaces in different subnets (though it may be required for correct routing on the MS platform). I had two subnets when I started with iSCSI but ended up changing the subnet mask to a /23 and putting it all into the same subnet. The thing about the P4000/LeftHand platform is that each node only gets one IP plus the one floating IP, and they all have to be in the same subnet. So both initiators on each host need to be in the same subnet to get redundancy/MPIO to the P4000. It goes against most documentation, but our P2000 G3 arrays kept working just fine when I reconfigured all of their IPs to be in the same subnet too. It's really a conflict between arrays that like to use SCSI MPIO for path failover/load-balancing, which want a pair of subnets (analogous to a pair of FC fabrics or dual-path SAS), vs. arrays that like to use Ethernet technologies (NIC bonding, virtual/floating IPs, etc.) for their failover/load-balancing. It gets interesting when you need a single server accessing both types of arrays at the same time in a redundant way.

On another note, I used LeftHand before HP bought them out; I used to love that stuff, and we ran mostly the VSAs. Are you running hardware or VSAs, and how do you like them now, many years later?
We've had a pair of VSAs at work for 2, maybe 3 years now. I'm not a huge fan - layering the network RAID (which has to be mirroring for us with only 2 nodes) on top of each node's local RAID results in really low usable capacity. And as a storage guy with way more experience on the SCSI/FC side of things, I really don't like the whole floating/virtual IP way of doing node failover/multipath.

On the licensing side, I know they license by usable capacity, or used to. Can I run as many VSAs as I want and just license the total capacity, or do I still have to have a license for each VSA with the necessary capacity?

Thanks again!
I'm honestly not entirely sure. I believe you need to have a license for each VSA but might be wrong.
 

Darkytoo

Member
Jan 2, 2014
Marsh, thanks! I think this is for setting up MPIO on the initiator/client side; I'm trying to see if I have to do anything special on the Target/server side, since I am using multiple interfaces on the Target/server side and you cannot team InfiniBand/IPoIB interfaces.

Tom
1. Make sure to unbind everything except TCP/IPv4 on all iSCSI interfaces
2. Disable the Nagle algorithm
3. Enable jumbo frames
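
Something like this covers all three on the Windows side. The adapter names are examples, and the jumbo value and delayed-ACK/Nagle registry tweaks are driver-dependent, so treat it as a sketch:

```powershell
# Sketch only - adapter names are examples; verify keywords/values for your NICs.
foreach ($nic in 'iSCSI-A', 'iSCSI-B') {

    # 1. Unbind everything except TCP/IPv4 on the iSCSI interfaces
    'ms_tcpip6', 'ms_msclient', 'ms_server', 'ms_lltdio', 'ms_rspndr', 'ms_pacer' |
        ForEach-Object { Disable-NetAdapterBinding -Name $nic -ComponentID $_ }

    # 2. Disable Nagle / delayed ACK via the per-interface TCP registry keys
    $guid = (Get-NetAdapter -Name $nic).InterfaceGuid   # GUID string includes braces
    $key  = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\$guid"
    New-ItemProperty -Path $key -Name TcpAckFrequency -Value 1 -PropertyType DWord -Force | Out-Null
    New-ItemProperty -Path $key -Name TCPNoDelay      -Value 1 -PropertyType DWord -Force | Out-Null

    # 3. Jumbo frames (keyword/value vary by driver; 9014 is common)
    Set-NetAdapterAdvancedProperty -Name $nic -RegistryKeyword '*JumboPacket' -RegistryValue 9014
}
```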
 

Radian

Member
Mar 1, 2011
Having problems with a redundant iSCSI setup. I thought it was correctly configured until we lost an iSCSI switch and all VMs went down, even though cluster validation passed and both servers could see duplicate paths to the storage. Given that it passed and MPIO was showing two connections to the target, I didn't think otherwise.

All IPs are addressable on both iSCSI switches, so why would VMs fail on the loss of one switch?
 

Radian

Member
Mar 1, 2011
Turns out that on one of my cluster nodes a third connection to the target had been configured. Removed the third connection and now failover works as expected.
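
For anyone else hitting this, a quick way to audit it from PowerShell on each node (a sketch; adjust the expected count to your number of paths):

```powershell
# Sessions per target IQN - with two paths you'd expect exactly two per target
Get-IscsiSession | Group-Object TargetNodeAddress | Select-Object Count, Name

# The individual connections behind them, to see which portal each one uses
Get-IscsiSession | Get-IscsiConnection |
    Select-Object TargetAddress, InitiatorAddress | Sort-Object TargetAddress
```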