Suggested appliance / Linux as SAN+NAS solution?


voodooFX

Active Member
Jan 26, 2014
247
52
28
I'm replacing my Synology RS814 with this more powerful and flexible solution:

- Supermicro X10SLH-F
- Core i3-4130T
- 16GB DDR3 1333 ECC
- 4x WD RE4 1TB (RAID10) (or more...)
- 2x Crucial MX100 512GB (RAID1)
- Mellanox ConnectX-2 IB (40 Gbit)

The hard drives will store all my "NAS" data (docs/videos/music/ISOs etc.) and should be exported as Samba shares, plus NFS shares for some folders.
The solid-state drives will be an iSCSI target for my ESXi cluster and should be exported via IB at 40 or 10 Gbit (I don't need more than 10).

That said, I'm pretty confused about how I could accomplish my requirements.
Should I go with FreeNAS? Does it support IB, or IP over IB, or something that will give me at least 10 Gbit?
OpenFiler looks discontinued; is it?

And what about a "pure" Linux installation with manual service (iSCSI, NFS, SMB...) deployment?
 

Mike

Member
May 29, 2012
482
16
18
EU
I would run it on Linux, as that lets you align it with your update and configuration management, and you have more options than the typical web GUI would expose. Appliances always end up as the oddballs in your network, but that's just my opinion.

Having said that, OpenFiler really has an odd base and has lacked updates over the past few years. Running without these updates is potentially (very) insecure.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,516
5,811
113
I went through this debate when I did the initial configuration of a new storage/backup server for the colo. I ended up installing NAS4Free because it had the ConnectX-2 EN drivers built in. The good news is that there are lots of "free" ZFS solutions out there to play around with.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
I've always been a fan of doing it all myself from scratch. You can always add a web GUI or something on top of it if you want, though it seems the easy-to-use GUIs never expose the full functionality and flexibility that the underlying OS/app can provide.

I'm not sure about getting clients set up to access it, as I have no experience with IB at all, but I can tell you that Linux kernel 3.10 got iSER target support (iSCSI using RDMA over IB or 10GbE; see "Linux SCSI Target - iSCSI Extensions for RDMA"), and the iSCSI target has been in the kernel since 3.1. The LIO stack will also give you full MPIO and VAAI support for your ESX hosts.
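If it helps, exporting a volume through the LIO stack with targetcli looks roughly like the following. This is just a sketch - the device path, IQNs, portal IP and initiator name are made-up placeholders, not anything specific to your build:

# create a block backstore from the SSD mirror and export it over iSCSI
targetcli /backstores/block create name=ssd_vol dev=/dev/md1
targetcli /iscsi create iqn.2014-12.local.storage:ssd-vol
targetcli /iscsi/iqn.2014-12.local.storage:ssd-vol/tpg1/luns create /backstores/block/ssd_vol
targetcli /iscsi/iqn.2014-12.local.storage:ssd-vol/tpg1/portals create 10.10.1.1 3260
targetcli /iscsi/iqn.2014-12.local.storage:ssd-vol/tpg1/acls create iqn.1998-01.com.vmware:esxi1
targetcli saveconfig

(Newer targetcli versions may auto-create a default 0.0.0.0:3260 portal that you can delete if you want to bind to a specific address.)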
 

voodooFX

Active Member
Jan 26, 2014
247
52
28
Thank you TuxDude
This is very interesting!

Just one question: is the LIO stack a direct part of the Linux iSCSI implementation? Does this mean I will get VAAI and MPIO automagically just because my iSCSI target is provided by Linux with a 3.10 (or newer) kernel?!

P.S. Any suggestions about the Linux distribution?
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
Thank you TuxDude
This is very interesting!

Just one question: is the LIO stack a direct part of the Linux iSCSI implementation? Does this mean I will get VAAI and MPIO automagically just because my iSCSI target is provided by Linux with a 3.10 (or newer) kernel?!

P.S. Any suggestions about the Linux distribution?
There are multiple iSCSI targets available on Linux; some have more features or better support than others, and different distributions still bundle different ones as well. LIO is one of the newer options and is the one that was chosen to be merged into the mainline kernel. It is also the software stack used by at least QNAP (I went hacking around in a QNAP to verify it) and, I'm sure, by many other NAS/SAN vendors who have paid to get their devices certified with VMware and MS. You won't get the certification on a home-built device - but it should work, as it's the same software.

Just having a 3.1 (or newer) kernel does not guarantee you are using the LIO iSCSI target. Like most of the rest of the kernel, it is an option that can be toggled at configuration/compile time, and some distributions will likely disable it to keep using the same software and management interface they had previously used with other iSCSI target software. I know from experience that RHEL 6 and clones used a different stack - I just did a quick check, and a CentOS 7 box I have here (kernel 3.10) does have the LIO iSCSI target enabled.
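If you want to check your own box, a quick way is something like this (a sketch - it assumes the usual /boot config file location, which varies by distribution):

grep -E 'CONFIG_TARGET_CORE|CONFIG_ISCSI_TARGET' /boot/config-$(uname -r)
# =y or =m means it is built; if built as modules, load them and the
# configfs tree that targetcli manages should appear:
sudo modprobe target_core_mod
sudo modprobe iscsi_target_mod
ls /sys/kernel/config/target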

Also, VAAI is a feature that was added in kernel 3.12, so if you want that you need to be slightly newer yet again.

As for which distribution to use, I like Gentoo, but I'm not normal. Especially if you have little to no Linux experience, Gentoo will either teach you a damn lot or drive you back to MS OSes. You get to pick your kernel yourself, configure it yourself, and compile it yourself - if it's your first time, plan a couple of days just for the OS install on Gentoo.

My next choice would be Fedora - it will give you a leading/bleeding edge system in terms of software updates (my Fedora 20 desktop is now on kernel 3.17), and it has the LIO stack enabled in the kernel, but it will require you to upgrade the OS more often, which is not quite optimal for a NAS appliance. It's also targeted at power users and not the most friendly distribution out there. Fedora 21 was also just released in the last couple of days and for the first time has a 'Server' install option - I haven't played with that yet.

I haven't used an Ubuntu system in a very long time now, but 14.04 will be on a kernel new enough to have all the features (assuming they are enabled, I'm not sure there), and it is a long-term-support release, so you can let your NAS just sit and run for 5 years without having to do a major upgrade, just the odd security patch, most of which won't require rebooting. Ubuntu will be the most newbie-friendly distribution.

The only reason I don't have CentOS/RHEL near the top of the list is the slightly older kernel - but if VAAI support is not an issue, I would take it over Fedora for a server role.

Also, here is a chart from their wiki on features by kernel version, might come in handy. It appears to be a bit out of date now as 3.18 was released 4 days ago.

LIO™ and fabric modules have gone upstream into the Linux kernel as follows:
  • Linux 2.6.38 (2011-03-14[5]): LIO™ engine[6]
  • Linux 2.6.39 (2011-05-18[7]): tcm_loop (SCSI support on top of any raw hardware)
  • Linux 3.0 (2011-07-21[8]): FCoE (by Cisco)
  • Linux 3.1 (2011-10-24[9]): iSCSI[10]
  • Linux 3.3 (2012-03-18[11]): InfiniBand/SRP[12] (Mellanox HCAs)
  • Linux 3.5 (2012-07-21[13]): Fibre Channel (QLogic HBAs),[14] USB Gadget[15] and IEEE 1394[16]
  • Linux 3.6 (2012-10-01[17]): vHost (QEMU virtio and virtio-scsi PV guests)[18]
  • Linux 3.9 (2013-04-28[19]): 16 GFC (QLogic HBAs)
  • Linux 3.10 (2013-06-30[20]): InfiniBand/iSER (Mellanox HCAs and CNAs)
  • Linux 3.12 (2013-11-03[21]): VAAI
  • Linux 3.14 (planned): T10 DIF core, T10 Referrals, NPIV
  • Linux 3.15 (planned): T10 DIF iSER, user-space backend
  • Linux 3.16 (planned): Mellanox FCoE support
 

voodooFX

Active Member
Jan 26, 2014
247
52
28
Yesterday I played with the LIO iSCSI target on an Ubuntu 14.04 test server (kernel 3.13), and it's fantastic!
I was able to create a hardware-accelerated iSCSI target for my ESXi cluster pretty easily :)

The next step will involve the InfiniBand network; my biggest concern is that I don't have a switch, so I will have a single card (with 2 ports) in the storage server and the same in the ESXi nodes.
As far as I know, for now I will need two separate subnets (one for each port) on the storage server, and I'm not sure this will work as shared storage for the ESXi cluster.
 

Dk3

Member
Jan 10, 2014
67
21
8
SG
It will work, but if you need vMotion or HA you need to use the other port in each ESXi host to interconnect them, on another subnet.

For these direct connections you can make full use of the link with VLANs, so that you can carry shared local storage between VMs, management traffic, etc., while leaving the iSCSI traffic on its own.
 

tjk

Active Member
Mar 3, 2013
481
199
43
Yesterday I played with the LIO iSCSI target on an Ubuntu 14.04 test server (kernel 3.13), and it's fantastic!
I was able to create a hardware-accelerated iSCSI target for my ESXi cluster pretty easily :)
Assuming you mean VAAI, right?

Also, how was the performance? Did you back it with any SSD, using bcache etc.? Any docs or URLs you followed for the configuration?

I've been using 2012 R2 with iSCSI and MPIO over InfiniBand/IPoIB, but there is no VAAI. I'm also using HW RAID and not Storage Spaces, so the performance is OK; however, I'd like to find something I can do SSD caching with. There are a few different options for Linux, but only Storage Spaces for Windows unless I want to pay big bucks for something commercial.
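(For the bcache route mentioned above, the setup is roughly along these lines - just a sketch with example device names, not a tested recipe:)

# SSD as cache device, HDD RAID array as backing device, in one step
# (may need 'modprobe bcache' first)
make-bcache -C /dev/sdb -B /dev/md0
# a /dev/bcache0 device shows up; put the filesystem or iSCSI backstore on that
echo writeback > /sys/block/bcache0/bcache/cache_mode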
 

voodooFX

Active Member
Jan 26, 2014
247
52
28
It was a very quick test to practice a little with the LIO command line.
I'm working on a dedicated storage server with an SSD volume to be exposed via iSCSI over an IPoIB (40G) network to the ESXi nodes.
I will report results ASAP :)
 

tjk

Active Member
Mar 3, 2013
481
199
43
It was a very quick test to practice a little with the LIO command line.
I'm working on a dedicated storage server with an SSD volume to be exposed via iSCSI over an IPoIB (40G) network to the ESXi nodes.
I will report results ASAP :)
Do you plan on doing anything with HA/DRBD/etc.? Or a single standalone server?
 

voodooFX

Active Member
Jan 26, 2014
247
52
28
This is the plan:

esxi1+opensm (hca0,ib0) <------------------------> (hca0,ib0) storage
esxi2+opensm (hca0,ib0) <------------------------> (hca0,ib1) storage

HA + vMotion on the Ethernet network, at least until I get an IB switch...
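For reference, the addressing on the storage side could look roughly like this (made-up subnets, Ubuntu-style ifupdown config; the interface names and the IPoIB connected-mode toggle depend on the driver stack):

# /etc/network/interfaces fragment on the storage server
# ib0 <-> esxi1 on 10.10.1.0/24, ib1 <-> esxi2 on 10.10.2.0/24
auto ib0
iface ib0 inet static
    address 10.10.1.1
    netmask 255.255.255.0
    pre-up echo connected > /sys/class/net/ib0/mode

auto ib1
iface ib1 inet static
    address 10.10.2.1
    netmask 255.255.255.0
    pre-up echo connected > /sys/class/net/ib1/mode

Each ESXi host would then get a vmkernel port in the matching subnet (e.g. 10.10.1.2 and 10.10.2.2), and the LIO target needs a portal on each storage-side address so both hosts can log in.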
 

voodooFX

Active Member
Jan 26, 2014
247
52
28
Hey, does anybody know if LIO iSCSI supports MPIO? There is absolutely no info about this in the docs.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
I'm reasonably certain it should work, though I can't find any confirmation in the docs either. But the documentation for LIO's ALUA support does mention that it works across all target fabrics (iSCSI, FC, etc.), and you can't have ALUA without MPIO, so I take that as meaning that MPIO is also supported everywhere.
 

voodooFX

Active Member
Jan 26, 2014
247
52
28
It works.
I reached 230 MB/s read/write on a simple iSCSI fileio LUN (16 GB) exposed over two 1 Gbit Ethernet links (on different subnets), with multipath I/O configured on the ESXi server.

The result was confirmed by nload, which was showing about 940 Mbps on each interface during the benchmark :)

Hardware Acceleration was also marked as "Supported" on the ESXi host.
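(For reference, the multipath policy and VAAI status can be checked from the ESXi shell roughly like this - the naa identifier below is just a placeholder for the LIO-backed LUN:)

# show current paths / path selection policy for the device
esxcli storage nmp device list -d naa.6001405xxxxxxxxxxxxxxxxxxxxxxxx
# use round robin so both links carry I/O
esxcli storage nmp device set -d naa.6001405xxxxxxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR
# confirm which VAAI primitives the target reports
esxcli storage core device vaai status get -d naa.6001405xxxxxxxxxxxxxxxxxxxxxxxx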

P.S. I think I will start a blog about things like this, because some of them look very "unexplored" based on the Google results...
 