iSCSI transport for production SAN storage


maze

Active Member
Apr 27, 2013
We are in the process of designing our new hosting setup. It's not big, it's not fancy.. hence why I'm considering using iSCSI over 10G to handle storage.

We will most likely have 4 hosts connecting to a Lenovo V3700 SAN. I'm thinking we'll have some flash for cache and then run the rest on 10k spinners.

Does anyone have experience running a V3700 on 10G fiber with iSCSI? Our load/IO requirements aren't really big, I just wanna keep it simple and stable. And IMO using the existing switch infrastructure would be a great option.
 

Evan

Well-Known Member
Jan 6, 2016
FCoE an option?
Depends on the use case, but it's my preference where possible, for performance and reliability.

Either way, do you have a redundant network? Separate VLANs and split the traffic onto different interfaces.
Do you have iSCSI-accelerated 10G cards, or is the host doing all the work? (Certainly not an issue if the load is low though.)
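
For reference, on ESXi with the plain software initiator the usual pattern is a dedicated VMkernel port on its own VLAN, bound to the software iSCSI adapter. A minimal sketch - the port group name, vmk/vmhba numbers, VLAN ID and target address below are just placeholders, not anything from this thread:

Code:
# Enable the ESXi software iSCSI initiator
esxcli iscsi software set --enabled=true

# Put the iSCSI port group on its own VLAN (names/IDs are examples)
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-A --vlan-id=100

# Bind the dedicated VMkernel port (vmk1) to the software iSCSI adapter (vmhba65)
esxcli iscsi networkportal add --adapter=vmhba65 --nic=vmk1

# Point the initiator at the array's iSCSI portal and rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.100.10:3260
esxcli storage core adapter rescan --adapter=vmhba65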
 

maze

Active Member
Apr 27, 2013
FCoE could be an option, yes.

Good point regarding the NICs; they should have an FCoE accelerator/offloader.

The network isn't really redundant as it is now. Would just do one VLAN, connect both NIC interfaces to the switches and have dual links between the switches. Don't think we have budgeted for redundant switches - sadly.
 

Evan

Well-Known Member
Jan 6, 2016
I always try to get the traffic onto dedicated ports, with failover to shared ports if needed.
The most important thing is reliability, and since FCoE costs more, it seems like iSCSI is what you need.

Lots of success using iSCSI as a transport for ESX (the storage was mostly NetApp). Are your hosts ESX or something else?
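
To illustrate the dedicated-ports-with-failover idea on ESXi: you can override the uplink order per port group, so iSCSI normally rides its own NIC and only falls back to a shared one. A rough sketch, with the port group name and vmnic numbers assumed:

Code:
# "iSCSI-A" uses vmnic2 as its dedicated uplink and only fails over to the
# shared vmnic0 if vmnic2 goes down (all names here are examples)
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic2 --standby-uplinks=vmnic0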
 

gea

Well-Known Member
Dec 31, 2010
If you do not need iSCSI multipath options, NFS is an alternative.
Similarly fast, but much easier to maintain, and you can have additional regular file access, e.g. via SMB.

If you use a snapshot-capable filesystem like ZFS, you can then use Windows "Previous Versions" for snapshot access for hot clones, backup or restore.
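
A rough sketch of that setup - pool/dataset names, the NFS server address and the share path are made up, and the exact share syntax differs a bit between ZFS platforms:

Code:
# On the storage box: one ZFS dataset, shared over NFS for the hosts
# and over SMB so Windows "Previous Versions" can browse the snapshots
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore
zfs set sharesmb=on tank/vmstore

# Periodic snapshots (normally driven by a snapshot job, shown manually here)
zfs snapshot tank/vmstore@daily-2024-01-01

# On an ESXi host: mount the same dataset as an NFS datastore
esxcli storage nfs add --host=192.168.100.20 --share=/tank/vmstore --volume-name=nfs-vmstore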
 

bds1904

Active Member
Aug 30, 2013
In a production environment FCoE is much easier to manage than iSCSI. ESXi handles FCoE better too. Typically I get far lower latency on FC or FCoE because of the hardware offloading handled by the card.

FC and FCoE also bring virtual adapters to the table; giving a VM "direct" access to a raw target can be quite nice.
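
For the raw-target part, on ESXi that usually means a raw device mapping; a minimal sketch, where the device ID and paths are placeholders:

Code:
# Create a physical-mode RDM pointer file for an existing LUN, then attach the
# resulting .vmdk to the VM like any other disk (device ID and paths are examples)
vmkfstools -z /vmfs/devices/disks/naa.60050760000000000000000000000001 /vmfs/volumes/datastore1/myvm/myvm_rdm.vmdk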
 

wildchild

Active Member
Feb 4, 2014
bds1904 said:
In a production environment FCoE is much easier to manage than iSCSI. ESXi handles FCoE better too. Typically I get far lower latency on FC or FCoE because of the hardware offloading handled by the card.

FC and FCoE also bring virtual adapters to the table; giving a VM "direct" access to a raw target can be quite nice.

Beg to differ - at our corp we actually removed the Fibre Channel infra about 4 years ago, as it was much more expensive, harder to manage properly, and had no additional value over properly sized and maintained iSCSI.
 

bds1904

Active Member
Aug 30, 2013
wildchild said:
Beg to differ - at our corp we actually removed the Fibre Channel infra about 4 years ago, as it was much more expensive, harder to manage properly, and had no additional value over properly sized and maintained iSCSI.

Well, to each their own. We manage 1,500 servers with 60,000 VMs, running everything from virtual desktops to OTT content. FCoE and FC are really simple for us because they're configured on the target side instead of the client side. With our setup this is easier because there's no IP to enter for the target. As you can imagine we have multiple SANs, and things can get really complicated really quickly.
 

wildchild

Active Member
Feb 4, 2014
Well, we manage about 7 DCs globally on 5 continents, with about the same number of hosts and VMs, but let's not get into a pissing contest :)

We mainly use auto-provisioning for the hosts. It takes some time to do well, but especially in South America and Asia it's easier to get hands on guys that understand IP :)

Indeed, to each their own, but we for one are very happy with our current setup, and VMware supports their latest and greatest just as well on iSCSI as they do on FC..
NFS is a bit different.
 

Evan

Well-Known Member
Jan 6, 2016
VMware supports it well, which is why I asked what the OP was running; iSCSI is mostly as good as anything in this instance.

Outside of that, if you want to SAN boot enterprise Unix systems running Oracle or whatever, you run FC :)
At least in our experience, and after lots of testing of different things, FC is still the best performing, most reliable, etc.

One thing not mentioned here: you would never run iSCSI across a security boundary - it is, after all, an Ethernet network and easy to traverse.
With FC I would happily serve storage to an internet DMZ or an internal server at the same time with more or less no security concerns.
 

vrod

Active Member
Jan 18, 2015
iSCSI should do fine for you, but go with VMFS6 (ESXi 6.5+) as it will reclaim space automatically for your VMs on the iSCSI array (given that it has VAAI).
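
If it helps, both pieces can be checked from the host once the LUN is presented; a quick sketch, with the device ID and datastore label as placeholders:

Code:
# Check that the array advertises VAAI primitives ("Delete Status: supported"
# is what space reclamation relies on); the device ID is an example
esxcli storage core device vaai status get -d naa.60050760000000000000000000000001

# On a VMFS6 datastore, confirm automatic unmap/reclaim is enabled (label assumed)
esxcli storage vmfs reclaim config get --volume-label=iscsi-ds01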
 

maze

Active Member
Apr 27, 2013
Okay, so I think we've narrowed it down a bit.

Gonna go with iSCSI via 10G, on HPE FlexFabric 10G switches. That gives a good 40x 10G ports and the possibility of 40G uplinks - so we'll have plenty of room to grow! :)

Will hopefully get this approved within the next 2 weeks so we can start grabbing gear :)