My new OSNexus QuantaStor HA SAN


gcs8

New Member
Sep 27, 2022
So I have always wanted a ZFS HA SAN for the house, and a project at work introduced me to OSNexus QuantaStor, as well as their amazing CEO/CTO Steven Umbehocker, who ended up handling my sales call during December a year or so ago while his sales staff was, I guess, doing holiday stuff. After talking with him about my personal setup, he informed me that they have a free/community edition of their full-featured product. This includes HA, FC, scale up and out, and you can get up to 4 licenses to play with it kinda however you want. It is a bit of a learning curve coming from TrueNAS for the past 10+ years, but overall I am very happy with it. The community edition only supports 40T (soon more) of raw storage under a single license, but for home use that's probably more than I am going to use for how I have things set up.

Anyway tl;dr, here is a look at my new ZFS HA SAN.

I am using a SuperMicro SBB 2028R-DE2CR24L. It is a dual-node server where both nodes have access to all 24 SAS bays up front, and it even has room for SAS expansion out of the back. I have replaced the 10G NICs that came with it with 25G Broadcom P225P cards.

I am using 6 Samsung PM1643 3.84T SAS SSDs for the pool.

I have a RAM upgrade on the way to take this from 64G per node to 256G per node, and for now the dual E5-2650 v3s are enough for the load that I am pushing.

I am running the QuantaStor Technology Preview version of the software so I can use ZSTD compression and be on a rather new build of OpenZFS, and it gives me an easy upgrade path from QuantaStor 5 to QuantaStor 6 when it comes out.
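
For anyone curious, enabling it is plain OpenZFS under the hood; a minimal sketch, assuming a pool called tank with a dataset vols (QuantaStor normally drives this from its own UI, so this is illustration only):
Code:
# check the pool has the zstd feature (OpenZFS 2.0+)
zpool get feature@zstd_compress tank
# turn on zstd for a dataset and see what it is saving
zfs set compression=zstd tank/vols
zfs get compression,compressratio tank/vols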

Open to questions and comments.

[attached screenshots]

3,218 MB/s read, 3,134 MB/s write

[attached screenshot]
 

Rand__

Well-Known Member
Mar 6, 2014
I had been looking at OSNexus a while ago, but their alleged performance deficiency (https://forums.servethehome.com/ind...-solution-for-vmware.28463/page-2#post-266824) kept me from diving deeper into it.

How are those 4 VMs connected?
I.e. where do they run (container, VM...), are they connected via SAS directly or NFS/iSCSI, and is this a distributed setup (3 disks each node + control plane) or hot standby (6 drives on one node, failover on error)?

Always looking for higher performance :)
 

gcs8

New Member
Sep 27, 2022
Rand__ said:
How are those 4 VMs connected? I.e. where do they run (container, VM...), are they connected via SAS directly or NFS/iSCSI, and is this a distributed setup (3 disks each node + control plane) or hot standby (6 drives on one node, failover on error)?
I have done a lot of performance tuning, but it's not much more than I have had to do for a Pure or even a PowerMax array. There are some things, like messing with C-states and the CPU governor, that I can do on here that I can't on a "real" array. Some of the big stuff is replacing the inbox NIC driver with the vendor driver; that about doubled my speed on 25G under iperf3. Other things are changes on the ESXi side: I set up proper dual paths, did network port binding, changed the multipathing policy to round robin, and then changed it to 1 IO per path.
Code:
# set the round robin path selection policy to switch paths every IO on all QuantaStor (naa.620000*) devices
for i in $(esxcfg-scsidevs -c | awk '{print $1}' | grep naa.620000); do
  esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=$i
done
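
And roughly what two of the other tweaks look like; the adapter/vmkernel names and the governor path are generic examples, not my exact config:
Code:
# ESXi side: bind both iSCSI vmkernel ports to the software iSCSI adapter (example names)
esxcli iscsi networkportal add --adapter vmhba64 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba64 --nic vmk2
esxcli iscsi networkportal list --adapter vmhba64

# QuantaStor node (Ubuntu) side: pin the CPU frequency governor to performance
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo performance > $g; done
# deeper C-states can be limited in the BIOS or with intel_idle.max_cstate= on the kernel cmdline
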
The 4 VMs live on 2 iSCSI datastores; I put one fio VM on each ESXi host.

The disks are in a "RAID 10" (striped mirrors). Each node has access to all the disks at the same time, and QS5 uses IO fencing so the active node takes ownership of the drives for the pool.
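
At the ZFS level the layout is basically a pool of striped mirrors, i.e. something along these lines (device names are placeholders; QuantaStor builds the pool itself, this is just to show the shape):
Code:
zpool create tank \
  mirror sdb sdc \
  mirror sdd sde \
  mirror sdf sdg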

[attached screenshot]
 

i386

Well-Known Member
Mar 18, 2016
gcs8 said:
The community edition only supports 40T (soon more) of raw storage under a single license.
2x 22TB WD Red Pro and you're over that limit... This reminds me of Nexenta (or whatever it's called). They have (had?) a limitation of 10TB in their community version.
Kinda weird when they advertise products that rely on the "zettabyte file system" ._.
 

gcs8

New Member
Sep 27, 2022
i386 said:
2x 22TB WD Red Pro and you're over that limit... Kinda weird when they advertise products that rely on the "zettabyte file system" ._.
So talking with Steven Umbehocker, his view of the community edition is that you should be able to take 5-6 of the largest drives of the day and make a pool, so expect that limit to change with QS6 to something like 100TiB. With SSD storage it starts to get cost prohibitive anyway. I have also heard tales that you can just email support, tell them what you have going on, and they may cut you a bigger license at no cost.

But yes, for large uncapped storage I would still stick to TrueNAS, but when I need HA and a solid platform, I would pick QS.
 

Rand__

Well-Known Member
Mar 6, 2014
gcs8 said:
The 4 VMs live on 2 iSCSI datastores; I put one fio VM on each ESXi host.
Are those sync or async writes? On TNC, iSCSI would run async out of the box, which o/c is significantly faster...


gcs8 said:
I have done a lot of performance tuning... Some of the big stuff is replacing the inbox NIC driver with the vendor driver; that about doubled my speed on 25G under iperf3.
And that's no problem for them? Changing drivers et al? Or do you not mind if this puts you on unsupported?
 

Rand__

Well-Known Member
Mar 6, 2014
Edited another question in since I had not seen that you had replied in the meantime.

No idea if 'no delayed ack' is the same as sync, to be honest ;)
 

gcs8

New Member
Sep 27, 2022
Rand__ said:
And that's no problem for them? Changing drivers et al? Or do you not mind if this puts you on unsupported?
I was more or less told "it's just Ubuntu", aka don't touch our stuff, but the OS is kinda fair game. Though, this was in response to me wanting to put the Splunk forwarder on it to get logs off of it.

This is the set of version-locked packages that you really, really should not touch.
[attached screenshot: version-locked package list]
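
If they are implemented as standard apt holds (I have not checked exactly how they pin them), you can list them on the node with:
Code:
# lists packages held back from upgrades, if the locks are plain apt holds
apt-mark showhold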
 

gcs8

New Member
Sep 27, 2022
Here is a 512-byte IO test for IOPS: ~30-45K per node. I think that has more to do with the ESXi NICs though, since the hosts are not all identical (copy-pasta) hardware.

[attached screenshot: 512-byte IOPS test results]
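
For reference, the kind of fio job that produces that sort of number; the parameters and target device here are illustrative, not my exact job file:
Code:
fio --name=randread-512 --filename=/dev/sdb --direct=1 \
    --ioengine=libaio --rw=randread --bs=512 --iodepth=32 \
    --numjobs=4 --runtime=60 --time_based --group_reporting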
 

Rand__

Well-Known Member
Mar 6, 2014
Are there any other limits than size on the community edition?
Do you happen to know if NVDIMM support is limited to HPE firmware only?
Do you have an ETA on v6?

Might at least have a go at it at some point... will see if it can beat TNC/NFS with NVDIMMs as SLOG... (o/c HA is a compelling factor, but not worth losing 50% perf)
 

gcs8

New Member
Sep 27, 2022
Rand__ said:
Are there any other limits than size on the community edition? Do you happen to know if NVDIMM support is limited to HPE firmware only? Do you have an ETA on v6?
Size is the only limit. In QS6 we are getting multitenancy and a bigger community license.

In theory, anything that presents in storage direct mode should work?

"by the end of the year" so around then.

I personally use Optane and it works great. Though if you just want NFS to be fast, just set sync=disabled.
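
I.e. something like this on the dataset backing the share (dataset name made up), with the usual caveat that async means you can lose in-flight writes on a crash:
Code:
zfs set sync=disabled tank/nfs-share
zfs get sync tank/nfs-share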

Soooo, might I interest you in something like this? SSG-2029P-DN2R24L | SuperStorage | Products | Super Micro Computer, Inc. and use Optane 2x2 (dual-port) drives so they're shared between the nodes?
 

Rand__

Well-Known Member
Mar 6, 2014
If I have to use Optane I'd need to move to a PCIe 4 system to really make use of it ;)
But at this point I'd prolly run 2 SM X11 3647 nodes with local NVDIMMs and a SAS or NVMe JBOD.

And NFS - fast and secure, not or ;)
 

gcs8

New Member
Sep 27, 2022
That's if you can make NFS do multipath so a single client can really take advantage of it. I only get this on NFS:

[attached screenshot: NFS benchmark result]
 

Rand__

Well-Known Member
Mar 6, 2014
If that is from the same datastore as the iSCSI access, it seems like the iSCSI is running async - the difference is massive after all.

With ESX/TNC there is no problem running NFS4 multipathed (2 paths work fine, have not tried more).
 

gcs8

New Member
Sep 27, 2022
Rand__ said:
If that is from the same datastore as the iSCSI access, it seems like the iSCSI is running async - the difference is massive after all.
It's the same pool, if that's what you mean.

I will have to play with NFSv4 then, if it will do multipath in ESXi/QS.
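
If it does, my understanding is the ESXi side is just an NFS 4.1 mount with multiple server addresses, roughly like this (IPs, export path and volume name are made up):
Code:
esxcli storage nfs41 add --hosts=10.0.0.11,10.0.0.12 --share=/export/vmstore --volume-name=qs-nfs41
esxcli storage nfs41 list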
 

Rand__

Well-Known Member
Mar 6, 2014
I am fairly sure that's a sync/async scenario - you could force sync temporarily on your dataset (or pool if you don't have datasets) to see what iSCSI is capable of then...
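
I.e. something along these lines on the backing dataset (name made up):
Code:
zfs set sync=always tank/iscsi-vol    # force sync writes for the test
zfs inherit sync tank/iscsi-vol       # revert to the inherited/default setting afterwards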

Now you said you run Optane, which one?
Does the SLOG need to be shared between the nodes as well? If yes, that would put a dent in my plans for NVDIMM ;)
 

gcs8

New Member
Sep 27, 2022
Rand__ said:
I am fairly sure that's a sync/async scenario - you could force sync temporarily on your dataset to see what iSCSI is capable of... Does the SLOG need to be shared between the nodes as well?
[attached screenshot]
Sync=always is a bit rough, but better than NFS.
[attached screenshot]

As far as Optane goes, I only use it in my TrueNAS pools. I don't have any that do U.2 2x2 (dual-port) atm, so that's a no-go for HA atm.

Yes, the SLOG/ZIL has to be shared.