Solaris-based OSs as iSCSI initiators?

jtreble

Member
Apr 16, 2013
93
10
8
Ottawa, Canada
Question: can Solaris-based OSs (e.g., OpenIndiana ...) act as iSCSI initiators against Solaris-based iSCSI storage targets running on other nodes? I'm curious.
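From what I've read, Solaris-based systems do ship a native initiator, driven by `iscsiadm`. A sketch of what connecting to a remote target might look like (untested here; the target address is a placeholder):

```shell
# Enable the native iSCSI initiator service (OpenIndiana/Solaris)
svcadm enable network/iscsi/initiator

# Point discovery at the storage node (address is a placeholder)
iscsiadm add discovery-address 192.168.10.20:3260
iscsiadm modify discovery --sendtargets enable

# Discovered LUNs then appear as ordinary local disks
iscsiadm list target -S
format
```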

John Treble
Ottawa, Canada

jtreble

If that's the case, why would one not scale a ZFS iSCSI file system using a "one head node + high-bandwidth storage network (e.g., 10 GbE, InfiniBand ...) + multiple iSCSI nodes" approach as opposed to a "one head node + HBAs + multiple directly connected JBOD expansion nodes" approach? Is it just the cost, or is there more to it than that?
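Part of the appeal of the iSCSI-node approach, as I understand it, is that once the remote LUNs are visible on the head node, ZFS treats them like any other disk. A hedged sketch (device names are hypothetical):

```shell
# On the head node: build a pool whose vdev is a mirror of two
# iSCSI LUNs exported by two different storage nodes
# (c5t0d0 and c6t0d0 are hypothetical device names)
zpool create tank mirror c5t0d0 c6t0d0

# Verify pool health; losing one storage node degrades the mirror
# rather than taking the pool offline
zpool status tank
```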

John Treble
Ottawa, Canada

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Like Windows 2012 or HP StoreVirtual (LeftHand), you could just network-RAID some iSCSI nodes and scale out performance.

Most folks put Windows machines in front of iSCSI. Hell, you could probably even run Windows 2012 in a VM next to OI/Nexenta.

Most people run Nexenta in a VM and use VT-d to directly connect the LSI controllers to the VM. I just assumed it was because the ESXi drivers were far better than those that came with Solaris.

Then run a Windows 2012 VM to mount the iSCSI and serve it back out as compliant SMB3 with full NTFS attributes. I believe that is the primary weakness of Solaris: it is incompatible with the latest 2012 SMB3 metadata.

jtreble

mrkrad,

We currently run both Windows Server 2012 and Nexenta (ZFS iSCSI backend) as VMs. My server, however, is now capacity bound. The decision I'm struggling with is how to scale the backend going forward. The options I'm looking at are: (i) adding HBAs and external enclosures to the existing server or (ii) upgrading the storage network and adding new iSCSI boxes.

John Treble
Ottawa, Canada

gea

Well-Known Member
Dec 31, 2010
3,163
1,195
113
DE
Like Windows 2012 or HP StoreVirtual (LeftHand), you could just network-RAID some iSCSI nodes and scale out performance.

Most folks put Windows machines in front of iSCSI. Hell, you could probably even run Windows 2012 in a VM next to OI/Nexenta.

Most people run Nexenta in a VM and use VT-d to directly connect the LSI controllers to the VM. I just assumed it was because the ESXi drivers were far better than those that came with Solaris.
I do not think so.
When I introduced this all-in-one idea, my main intention was to include high-end ZFS SAN features (which formerly needed a dedicated SAN server) in the ESXi machine itself via a SAN VM. It is also not a matter of drivers, because ESXi drivers are not used with pass-through. In the beginning this was mainly an OpenIndiana solution rather than a Nexenta one, because Nexenta lacked fast ESXi (vmxnet3) drivers.

Then run a Windows 2012 VM to mount the iSCSI and serve it back out as compliant SMB3 with full NTFS attributes. I believe that is the primary weakness of Solaris: it is incompatible with the latest 2012 SMB3 metadata.
This is not a question of Solaris but of when it gets included in Samba (on any platform) or in the Solaris CIFS server. The newest Samba includes this on any platform. But I would not call it a "must have now" feature, more a "nice to have" in some Windows-only environments. I would not move back to a Windows server to gain SMB3 and lose the Solaris CIFS server features.

The idea of a storage head connected to different nodes is fine if you plan to do some sort of HA or mirroring. If you only need performance or capacity, you can achieve this with a lot of HBAs and vdevs in one box (my current Chenbro case has 50 bays; the disks are connected to 6 x LSI HBAs for up to 200 TB raw). If you only need capacity, a simple expander solution may be easier to handle.

jtreble

... The idea of a storage head connected to different nodes is fine if you plan to do some sort of HA or mirroring. If you only need performance or capacity, you can achieve this with a lot of HBAs and vdevs in one box ...

OK, a "one head node + HBAs + multiple directly connected JBOD expansion nodes" solution it is then. Thanks for the info.
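For completeness, my understanding is that the growth path with this approach is one `zpool add` per new vdev once the JBOD's disks show up (disk names hypothetical; sketch only):

```shell
# Extend the existing pool with a raidz2 vdev built from six disks
# in a newly attached JBOD shelf (hypothetical device names)
zpool add tank raidz2 c7t0d0 c7t1d0 c7t2d0 c7t3d0 c7t4d0 c7t5d0

# Confirm the added capacity
zpool list tank
```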


John Treble
Ottawa, Canada