Best NAS software as VM guest to host iSCSI SAN

dwright1542

Active Member
Dec 26, 2015
I've got a production hyperconverged ESXi system and a fairly loaded (but out-of-warranty) HP SAN that I'd like to use for Veeam backups.

However, I don't want to direct-mount it as a Windows drive; too much risk with Windows. SMB/CIFS would be fine.

It's already an iSCSI datastore. I could just load FreeNAS and use it as storage, but that seems like overkill. This is 10GbE, so whatever OS I load needs to be quick and simple to manage. I'd rather have something pre-packaged, since I'm not the only one who will maintain it.
 

gea

Well-Known Member
Dec 31, 2010
DE
A production-proven iSCSI framework is COMSTAR on Solaris-based OSes:
Configuring Storage Devices With COMSTAR - Oracle Solaris Administration: Devices and File Systems

About the smallest "just enough storage" system offering no more than FC/iSCSI, NFS and SMB is OmniOS, an open-source Solaris fork for production use. There is a long-term-stable release with bug and security fixes every few weeks, optionally with commercial support. It is also a perfect fit for ESXi due to its very low resource/memory needs: OmniOS Community Edition

For management you can use my napp-it, free or Pro (with support and some extras), where iSCSI sharing is a ZFS filesystem property, more or less iSCSI sharing on/off:
napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana and Solaris : Downloads

Under ESXi, you can use my free ZFS storage server template with OmniOS.
Just import and use it: https://napp-it.org/doc/downloads/napp-in-one.pdf

For a barebone setup, see napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana and Solaris : Manual
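
As a rough sketch of what COMSTAR sharing involves under the hood (a hedged example; pool/volume names are invented, and napp-it wraps these steps in its GUI):

Code:
svcadm enable stmf                            # COMSTAR framework
svcadm enable -r svc:/network/iscsi/target:default
zfs create -V 100G tank/lun0                  # a zvol to export (names are examples)
sbdadm create-lu /dev/zvol/rdsk/tank/lun0     # register it as a logical unit
stmfadm add-view 600144f0...                  # GUID printed by sbdadm; exposes the LU to all hosts
itadm create-target                           # iSCSI target with an auto-generated IQN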
 

dwright1542

Active Member
Dec 26, 2015
I'm nervous about ZFS in a VM in general. There are all kinds of caveats there, since I have several layers between the VM and the actual disks, and I can't shut off caching on the SAN.
 

gea

Well-Known Member
Dec 31, 2010
DE
I was about the first to propagate a virtualized ZFS storage appliance, around 10 years ago. The key element of my solution was hardware pass-through of a disk controller, giving the ZFS appliance native disk access without any driver or cache in between.

I remember the furious attacks, mainly from some FreeNAS folks who insisted that you cannot virtualise storage. I agree for the disk part but not for the OS part, so no problem. The idea, now known as All-in-One, has become quite common for a wide range of storage appliances, especially those with a small memory need for the storage VM, e.g. FreeNAS vs. OmniOS / Napp-It | b3n.org

In my server room all ESXi machines (around a dozen) are All-in-One. Each server has a dedicated storage VM for its local VMs, with access to all other storage VMs and to common backup servers over NFS.
 

dwright1542

Active Member
Dec 26, 2015
This is an already-existing 10G iSCSI SAN; I just need some sort of frontend on it to make it SMB/CIFS capable.
 

acquacow

Well-Known Member
Feb 15, 2017
Just spin up a simple CentOS VM, mount the iSCSI LUNs there with the initiator, and export them over NFS/SMB.
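
A rough sketch of that on CentOS (hedged; the portal IP, device name and share path are placeholders, and mkfs is only for an empty LUN):

Code:
yum install -y iscsi-initiator-utils samba
iscsiadm -m discovery -t sendtargets -p 10.0.0.10   # SAN portal IP is an example
iscsiadm -m node --login
lsblk                                               # identify the new LUN, e.g. /dev/sdb
mkfs.xfs /dev/sdb                                   # destroys data; skip if the LUN is in use
mkdir -p /srv/backups && mount /dev/sdb /srv/backups
# add a share to /etc/samba/smb.conf:
#   [backups]
#   path = /srv/backups
#   read only = no
systemctl enable --now smb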
 

gea

Well-Known Member
Dec 31, 2010
DE
ZFS has been called the last word in filesystems.

A ZFS appliance that offers SMB (and iSCSI) directly, with the newest features and updates, would be my suggestion. If you currently use a hardware RAID adapter, you may need to replace it with an HBA for ZFS software RAID when installing a new OS directly.

You can use Windows Server as a frontend, without the superior ZFS features but with the best SMB support of all.

You can of course use your current iSCSI LUNs with a ZFS appliance: create a ZFS pool on them and share it via SMB. Check whether ZFS data security is affected by your SAN. As a frontend you can use the fast, multithreaded Solarish SMB server with the best support of NTFS ACLs, running ZFS on top of your current LUNs. Look at the commercial Solaris with native ZFS or the free Solaris fork OmniOS with Open-ZFS.

You can also use a FreeBSD appliance with Open-ZFS and SAMBA: a lot of features, but mostly slower and with restrictions regarding NTFS ACL support and ZFS snaps as Windows "previous versions".

You can also use Open-ZFS on Linux with SAMBA, but without the "it just works" experience of Solarish or, with some restrictions, FreeBSD.
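
To give a sense of scale for the Solarish route, a hedged sketch (pool and filesystem names are invented, and a pool is assumed to exist on the LUN already):

Code:
zfs create -o sharesmb=name=backups tank/backups   # SMB sharing is a ZFS property
svcadm enable smb/server                           # illumos kernel SMB server
smbadm join -w WORKGROUP                           # or join an AD domain: smbadm join -u admin mydomain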
 
Last edited:

dwright1542

Active Member
Dec 26, 2015
This is a specific re-use case for an existing 40TB SAN that we already own. I'm very familiar with ZFS on HBA-based systems; ZFS has all kinds of issues when there are lots of layers in between (SAN / cache / datastore is a lot). I specifically don't want Windows, for virus/ransomware protection. I wanted something quick and easy from an ISO, like FreeNAS / napp-it without the ZFS.
 

gea

Well-Known Member
Dec 31, 2010
DE
iSCSI LUNs are like raw disks.
You cannot share raw disks via SMB.

Every solution must mount the LUN with an initiator to use it like a local disk and put a filesystem on it (APFS, ext4, btrfs, NTFS or ZFS) where you can store your files and where you can share folders.

If you avoid ZFS, you do not avoid write-cache-related issues or other problems. You only give up a mechanism that informs you of problems (via data and metadata checksums), double metadata, superior read caches, write-cache protection at the OS level (Slog), crash-safe copy-on-write behaviour and ransomware-safe read-only snaps.

Using the ESXi initiator, creating a VMFS filesystem and consuming it via vdisk is of course a bad idea for a storage server. This is why you should use the LUN directly from a storage VM.

If your SAN does not offer its own secure write behaviour to protect its caches, e.g. hardware RAID + BBU, nothing can protect the LUN perfectly in the end. ZFS remains the best user filesystem. You do not need ZFS RAID: just use the whole SAN as a single LUN, mount the LUN via the included initiator, create a pool on it and share it. You lose only the self-healing feature of ZFS RAID over individual disks.

Just try it, e.g. with my ESXi template that is ready to use within a few minutes: enable the initiator, create a pool and filesystems, and share them (via iSCSI/FC, NFS or SMB). If you need secure sync-write behaviour to store VMs on it, enable sync on the share. If your SAN is slow on sync writes, add a fast disk (e.g. Intel Optane) to your storage VM and attach it as an Slog to the pool. This will protect your write cache locally on the storage VM.
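
A hedged sketch of those steps on OmniOS/Solaris (portal address and device names are placeholders; napp-it offers the same steps via menus):

Code:
iscsiadm add discovery-address 10.0.0.10:3260   # SAN portal, example address
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi                               # create device nodes for the new LUN
format                                          # note the new disk, e.g. c5t...d0
zpool create tank c5t600144F0AABBCCD0d0         # whole SAN as a single-LUN pool
zfs create -o sharesmb=on tank/vms              # share a filesystem via SMB
zfs set sync=always tank/vms                    # optional: secure sync writes for VMs
# zpool add tank log c6t1d0                     # optional Slog, e.g. an Optane vdisk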
 
Last edited:

dwright1542

Active Member
Dec 26, 2015
I'd specifically disable the ZIL and SLOG in that setup.

Hence my need for a frontend. My SAN doesn't have SMB/CIFS. It's an enterprise SAN, so it does offer cache, cache protection and a BBU.

So yes, I was simply looking for an easy *nix frontend to share out an iSCSI SAN via SMB/CIFS.
 
Last edited:

_alex

Active Member
Jan 28, 2016
Bavaria / Germany
Have a look at OpenMediaVault - pretty straightforward. You would just need to connect to the iSCSI target; that should be easy, since it's based on Debian.
 

Vit K

Member
Feb 23, 2017
I ran OMV 4 on my NAS but was never able to set up two features: SSD caching and stable iSCSI. Maybe the new OMV 5 on Debian 10 has improved things.
 

gea

Well-Known Member
Dec 31, 2010
DE
You can use COMSTAR in initiator mode to consume the iSCSI LUNs and build a full-featured ZFS storage server on top, via the napp-it GUI or COMSTAR and iSCSI Technology (Overview) - Oracle Solaris Administration: Devices and File Systems

This would add full ZFS protection and features like caching on top of your iSCSI targets, plus the unique features of the Solarish SMB server (NTFS-alike permissions, local SMB groups, working snaps = Windows "previous versions", etc.). You can also reshare not only via SMB but also via NFS or S3 (Amazon-S3-compatible storage, e.g. via minIO).
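
For the reshare part, a minimal hedged sketch (paths and ports are examples; the minIO invocation may differ between versions):

Code:
zfs set sharenfs=on tank/backups                # NFS is a ZFS property, like SMB
minio server /tank/backups/s3 --address :9000   # S3-compatible endpoint via minIO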
 
Last edited: