Bare-metal, single node iSCSI SAN software with web-based UI that doesn't use ZFS, costs nothing and is actively developed


nabsltd

Well-Known Member
Jan 26, 2022
Does this exist?

My thoughts:
  1. I can do everything but the "web based UI" using pretty much any Linux distribution and tgtd. This is my current setup (a rough sketch follows this list), but I would prefer having a GUI, even if all it really adds is easier monitoring.
  2. ZFS sitting below a virtualized block storage layer does not give great performance. In addition, I have very capable hardware RAID which protects my data enough. I don't need the checksum, snapshot, etc., features of ZFS.
  3. Even without ZFS, I don't need something like TrueNAS, etc., because I want nothing but storage. I don't need a hypervisor or containers. I know I could sort of ignore those features, but sometimes those projects assume you are going to use those features, and configure some settings based on that. And, sometimes the project uses those features internally, and you can't disable them.
  4. QuantaStor can do block storage without ZFS, but only in a 3-node Ceph cluster (at least according to the docs). If I'm wrong, this is probably my solution of choice.
  5. StarWind SAN and NAS free version could work, but it's legacy software, which means it might not support the hardware I have (or get in the future).
  6. StarWind Virtual SAN free does not appear to have the option to install on bare metal, although the paid version seems to have this option.
  7. I just learned of XigmaNAS, which I had not heard of before. It seems to be another possibility.
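For anyone curious what item 1 looks like in practice, here is a minimal sketch of a tgt target definition; the IQN, device path, and subnet are placeholders rather than my real config:

  # /etc/tgt/conf.d/san.conf -- placeholder names throughout
  <target iqn.2024-01.lab.example:store1>
      # export a raw block device (e.g. a hardware-RAID volume) as LUN 1
      backing-store /dev/sdb
      # only accept initiators from the storage subnet
      initiator-address 192.168.10.0/24
  </target>

Reload with "tgt-admin --update ALL" and the LUN shows up on the initiators. It works fine, it just gives me no GUI or monitoring.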
Thanks in advance.
 

RTM

Well-Known Member
Jan 26, 2014
So I have a few thoughts on the topic, in no particular order:

I could be (and likely am at least to an extent) wrong here, but it is my understanding that if you are talking about solutions based on FreeBSD and Solaris, you do not have many options when it comes to filesystems (afaik you have ZFS and UFS - the latter to my knowledge isn't great). So if you insist on not using ZFS, you are likely limited to solutions based on Linux and Windows (I guess...)

If you are only looking to have a single node, then I doubt you would want cluster solutions like Ceph or Gluster (the latter I have not heard great things about).

Regarding your #3, I would not be too worried about TrueNAS Core, as it started out not having virtualization, so I assume you can more or less safely ignore the features you don't want. (but it is FreeBSD based, and I am not sure you can avoid ZFS)

To give you some more options, here are a few Linux based suggestions:
  • RHEL/CentOS/Rocky Linux/etc with Cockpit (appears to do iSCSI targets too according to this blog post - UPDATE: I was wrong here, the blog post does not cover configuring a system as a target, but rather how to connect to one...)
  • Unraid
  • Xpenology
I have not tried any of the above, but if I had your requirements, I would probably go with #1. The latter two are more fully fledged NAS solutions.

EDIT: it seems you can also use Cockpit with non RHEL-related distros, such as Debian and Ubuntu.
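For what it's worth, on Debian/Ubuntu it should be as simple as the following (I have not tried it myself):

  sudo apt install cockpit
  sudo systemctl enable --now cockpit.socket
  # the web UI then listens on https://<host>:9090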
 
Last edited:
  • Like
Reactions: nabsltd

louie1961

Active Member
May 15, 2023
I believe you can do this with Openmediavault, but I have not personally tried it.

 

nabsltd

Well-Known Member
Jan 26, 2022
EDIT: it seems you can also use Cockpit with non RHEL-related distros, such as Debian and Ubuntu.
I had not considered a "manage the whole system" GUI.

I believe you can do this with Openmediavault, but I have not personally tried it.
From what I have read, the tgtd plugin in openmediavault is not very complete. In addition, you cannot share block devices directly; instead you create a file on disk, which is then shared as a block device.
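As far as I can tell (paths here are just for illustration), the plugin does something like this rather than exporting a raw device:

  # create an image file on an existing filesystem...
  fallocate -l 500G /srv/iscsi/lun0.img
  # ...and use that file as the LUN's backing store,
  #     backing-store /srv/iscsi/lun0.img
  # instead of letting you point backing-store straight at /dev/sdX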
 

tsteine

Active Member
May 15, 2019
Does cockpit-storaged support configuring an iscsi target? I thought it was primarily for managing the linux host as an initiator connecting to iscsi targets?
 

ano

Well-Known Member
Nov 7, 2022
really wondering what your hwraid setup is, as most are slower than zfs
 

RTM

Well-Known Member
Jan 26, 2014
Does cockpit-storaged support configuring an iscsi target? I thought it was primarily for managing the linux host as an initiator connecting to iscsi targets?
I was going to quote the link I posted earlier, but I must have only skimmed the blog post (it was not about configuring the system as a target, but about connecting to one), so it turns out I may well be wrong here, my bad... :(
 

ano

Well-Known Member
Nov 7, 2022
QuantaStor allows you to export whatever block device you have (hwraid) over iSCSI, btw.

They just don't recommend it.
 

BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
olavgg.com
ZFS as a virtual block storage layer performs excellently; I have no idea why you would think otherwise. I use it with TGT, NVMe-oF, virtual machines, and vectorized databases, where I separate ZFS as the storage device from the compute nodes, which connect to the ZFS storage over the network and read at up to 100 Gbit per second.
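A rough sketch of that kind of setup (pool, size, and IQN are just examples):

  # carve a zvol out of an existing pool and export it with tgt
  zfs create -V 500G -o volblocksize=16k tank/vmstore
  tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2024-01.lab:vmstore
  tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/zvol/tank/vmstore
  tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL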
 

nabsltd

Well-Known Member
Jan 26, 2022
ZFS as a virtual block storage layer performs excellently; I have no idea why you would think otherwise. I use it with TGT, NVMe-oF, virtual machines, and vectorized databases, where I separate ZFS as the storage device from the compute nodes, which connect to the ZFS storage over the network and read at up to 100 Gbit per second.
Reading really isn't the test...writing is. And ZFS is great at that if you can give it a lot of RAM and really fast SSDs, or dozens of spindles of spinning rust, or use nothing but mirror vdevs. Even with all that, though, once a pool gets fragmented, you can lose performance.

For the same hardware, tossing out the ZFS middleman will give you much higher iSCSI performance, and might give you more storage space as well, since you don't pay much of a penalty for using a RAID level more complex than mirroring, while RAIDZ tanks performance on simulated block devices.

 

BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
olavgg.com
I am not sure you will get faster performance by tossing out ZFS. The biggest performance benefit compared to other filesystems is compression, and it makes a seriously BIG difference when you have data that compresses well.

If we're talking about using spinning rust and RAID for iSCSI, I can tell you that you will get bad performance no matter what. Hardware RAID solutions can mitigate this as they perform all IO operations in memory. You can use hardware RAID with ZFS; just present each disk as a single device. Then you get the performance benefit from a hardware RAID card. You can also put metadata on SSDs, which will significantly speed up metadata operations that involve transactional modifications of the btree. This will significantly lessen the load on the spinning rust.
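Something along these lines, with placeholder device names: each spinning disk exposed by the RAID card as its own single-disk volume, ZFS providing the redundancy, and a mirrored SSD special vdev holding the metadata:

  zpool create -o ashift=12 tank \
        raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg \
        special mirror /dev/nvme0n1 /dev/nvme1n1
  zfs set compression=lz4 tank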

Using something else just means each TB of expensive NVMe storage gets even more expensive. Getting 50-100% more disk space with compression alone is very achievable. And backups are a lot more painful and resource intensive when you do not have snapshots. Just imagine all the CPU and disk IOPS wasted on old, wasteful backup solutions. I have a full backup of 100+ VMs with history every 15th minute dating back a year. It hardly costs me any CPU/IOPS.

ZFS is very fast with iSCSI, NVMe-oF, or zvols backing virtual machines. If it is slow, it is a configuration issue. Is Ext4 or XFS faster? It depends, but most likely they have similar performance.
 
  • Like
Reactions: Pakna and ano

zer0sum

Well-Known Member
Mar 8, 2013
Why not just choose an OS that you love and then install webmin?

 
  • Like
Reactions: nabsltd

fops

New Member
Jan 29, 2023
StarWind SAN and NAS free version could work, but it's legacy software, which means it might not support the hardware I have (or get in the future).
StarWind Virtual SAN free does not appear to have the option to install on bare metal, although the paid version seems to have this option.
I know this might not be the pure bare-metal way you're trying to configure that, but you can use StarWind VSAN for free with Proxmox: Configuration Guide for Proxmox: StarWind VSAN - Resource Library.
 

nabsltd

Well-Known Member
Jan 26, 2022
Why not just choose an OS that you love and then install webmin?
That could work. Thanks.

I know this might not be the pure bare-metal way you're trying to configure that, but you can use StarWind VSAN for free with Proxmox: Configuration Guide for Proxmox: StarWind VSAN - Resource Library.
StarWind VSAN requires more than one node.

In theory, multiple StarWind VMs on a single Proxmox host accessing local storage as if it was a shared disk could work, but that would mean everything would be virtual.
 

NPS

Active Member
Jan 14, 2021
In theory, multiple StarWind VMs on a single Proxmox host accessing local storage as if it was a shared disk could work, but that would mean everything would be virtual.
That would be just plain idiotic except for learning purposes.
 

tubs-ffm

Active Member
Sep 1, 2013
I don't need something like TrueNAS, etc., because I want nothing but storage. I don't need a hypervisor or containers. I know I could sort of ignore those features, but sometimes those projects assume you are going to use those features, and configure some settings based on that. And, sometimes the project uses those features internally, and you can't disable them.
I personally was looking in the same direction when searching for a storage system on top of my Proxmox VE. No need for virtualization, containers, or apps, as I am using Proxmox for those. I ended up with TrueNAS Scale, a good and easy-to-manage storage platform, and I just ignore the features I am not using.
 

LaMerk

Member
Jun 13, 2017
That could work. Thanks.


StarWind VSAN requires more than one node.

In theory, multiple StarWind VMs on a single Proxmox host accessing local storage as if it was a shared disk could work, but that would mean everything would be virtual.
It will work even on a single node, just without replication/HA. I was using it in my homelab, and you can use a single machine.
 

oneplane

Well-Known Member
Jul 23, 2021
Quite a while ago I was looking for a similar thing for a project, and came to the conclusion that there simply isn't enough interest and engineering capacity for something like this. Almost all projects and implementations that have something to do with SAN or object storage systems instead of block systems assume the same thing: you are moving your data over the network to a dedicated setup because it's important enough to have it be much more durable and available, and often much more manageable. With those assumptions, hardware RAID and non-ZFS single-node setups all go out the window instantly. It's also why you'll see most, if not all, commercial products require a quorum of 3, since you can't really trust it with fewer nodes.

This leaves you with only two real options:

- Bite the bullet, just use TrueNAS

- Get an end-user NAS and use that, since they forgo almost all durability and availability principles to enable stand-alone mode

The second option also includes doing things like running their software on your own hardware; the result is the same (e.g. Xpenology).

An upcoming project that does this in a more separated way (so not using Synology/QNAP/Asustor) appears to be in the works, mostly based around the ideas of Unraid. The downside is the same as what has been mentioned before: they are all trying to make it look like a hard problem can be solved by cutting corners, but you can't; the corners really are just gone when you cut them off, and that has a price (on your data).
 
  • Like
Reactions: Pakna

nabsltd

Well-Known Member
Jan 26, 2022
Almost all projects and implementations that have something to do with SAN or object storage systems instead of block systems assume the same thing: you are moving your data over the network to a dedicated setup because it's important enough to have it be much more durable and available, and often much more manageable.
And I want some of that, but I don't want the storage device to be the one that solves all the problems.

I want to have a pair of computers that each serve iSCSI targets to my 3 ESXi hosts. I want these two "SAN" boxes so that I can migrate VMs to the other one when I need to do maintenance on the underlying OS running the SAN. Unlike a NAS serving files, a running VM doesn't tolerate even a few seconds of downtime during a patch. I also want every ESXi host to see the SAN so that I can migrate VMs to other hosts when I need to do maintenance on an ESXi host.

Because of this, I do want some kind of redundancy on the SAN disks, so that losing a disk doesn't bring down a VM. I don't really care if that redundancy is hardware RAID, ZFS, etc., but I don't want the underlying redundancy system to require me to think about how I use the storage after the initial creation. For example, when I evacuate all VMs over to one of the SAN boxes, that might push the use of the disk on that box up to 70%. This is something that can reduce performance on ZFS, which means that I have to think about it, so that I get the maintenance on the other box done quickly. Now, I could solve the issue with ZFS slowdown on full pools by just having a lot more raw storage than I really need, but since I'd only need it a few days a year, it seems like a waste of money. Essentially, one of the problems I do want the storage device to quietly take care of is something that ZFS does not do well without the raw/usable storage ratio being 4:1 or higher.

OTOH, I don't really need anything to protect the actual data on an iSCSI LUN. The only thing on the iSCSI LUNs will be VMFS datastores holding the VMs' disks. All of the VMs are backed up at the VM level using the free version of Veeam, so I can restore a copy of a VM if something goes wrong. Yes, ZFS checksums could help in some edge cases by protecting the data so I didn't have to restore, but I suspect that's a pretty rare event.
 

BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
olavgg.com
I do not understand where you get these numbers from. My ZFS-based zvols, even on a pool that is 80% full, exported as an iSCSI LUN, fully support live migration of VMs without any downtime. I have done this myself with KVM. If you need a redundant SAN, I would look at another solution, like Ceph.
 
  • Like
Reactions: rubylaser