Need a SAN upgrade for our FreeNAS & Xen-XCP Cluster - Starwinds VSAN?


Indecided

Active Member
Sep 5, 2015
We're trying to decide if we should move our FreeNAS machines to a HA Starwinds VSAN.

This is a bit of uncharted territory for us. We've historically been a 100% Linux/BSD shop, and now that our direction is HA for our main cluster, we've been struggling to find a Linux equivalent (software only).

While Starwinds has a converged model on Linux/ESXi, it doesn't work for us since our VM nodes are all built on Xen/XCP, plus we're not really looking to go hyperconverged right now.

The eventual setup would be a 16TB, all-flash, HA storage cluster pushed over 40GbE to a cluster of about 16 VM nodes, each connected at 10GbE.

So this is what we're considering now. We had a trial setup but haven't had much time (ironically, given the lockdown) to really tinker with it, but I'm wondering if I'm missing out on any other alternatives.

We're only considering software options at this time, so that puts TrueNAS and other hardware-first solutions out of the running. The significant factor is cost at the end of the day, and we have enough hardware sitting around to cobble up pretty much whatever we really need.

VSAN will run ~10k for our 2 nodes. That's pretty much all I can get budget approval for right now, given that we've been running with zero software costs for the past 8 years, first with standalone Xen, then FreeNAS. With all that considered, I think an HA SAN solution is what we really need going forward, but I'm just not sure, or perhaps not 100% comfortable, with a Wintel platform for a SAN. Probably just need some convincing?
 

Net-Runner

Member
Feb 25, 2016
As an MSP, we have a bunch of customers using Starwind products both as hyperconverged clusters and as pure storage appliances. Most of them are Hyper-V/Windows-based and work pretty great. The hyperconverged configuration seems a bit more complicated and might have some Microsoft-related caveats. Dedicated storage, if appropriately configured, works like a machine gun. If I remember correctly, you can use their Linux version instead of the Windows-based one if you don't like the Wintel platform and want to avoid additional licensing costs: install the free ESXi on your storage servers and run their Linux version on top. We have a customer that has been running precisely this scenario for almost half a year already. As far as I know, it works and performs excellently.
 

Indecided

Active Member
Sep 5, 2015
Net-Runner said: "As an MSP, we have a bunch of customers using Starwind products both as hyperconverged clusters and as pure storage appliances ..."
I was told by Starwind support that the ESXi version isn't applicable for "storage separate" solutions. That is, they only support it in the HCI model, where your VMs are on ESXi and storage is passed through a la Nutanix, but not as an active-active HA iSCSI SAN where compute is separate.

Having said that, I'm probably more favorable toward having the SAN software sit directly on the host versus being virtualized and then passed through; IMHO it feels like another layer of complexity to diagnose when something goes wrong.

I guess I'm just trying to validate that over the long run Starwinds will indeed be the right choice for our SAN "cluster", if you can call it that, or to consider other solid software platforms if there are any. I don't think we can go back to FreeNAS or any platform without "hot" HA capability, because we can't rely on restoring backups for business continuity.
 

Net-Runner

Member
Feb 25, 2016
At first glance, a Windows-based bare-metal installation, where the virtual SAN software sits directly at the physical level, might seem more straightforward. It probably is. If that is a reasonable justification for you to purchase two Windows Server licenses, so be it. I can only reassure you that I have seen multiple configurations of this kind in the wild, and they work great; no worries here.

I personally like the virtualized approach much more. First of all, it gives you an excellent storage abstraction layer, which means that if one of the storage controllers fails for some reason, it is much simpler and quicker to restart just the storage virtual machine instead of rebooting the whole server. This takes maybe 30 seconds and can be automated in ESXi using a heartbeat sensor, which makes the entire solution even more redundant. ESXi (or KVM) as the foundation for this approach is much more credible to me than Windows Server. Just think about the regular updates and patches that require a server reboot. This is not an issue with Starwinds, since it keeps working while one of the controllers is rebooted, but the procedure still requires manual effort almost every week.
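To illustrate the restart idea: free ESXi has no vSphere HA, so this is just a crude do-it-yourself stand-in run from the ESXi shell, not Starwind's own tooling. The VM id (42) and log path are placeholders.

```bash
#!/bin/sh
# Crude watchdog for a storage VM on a standalone ESXi host (illustrative only).
# Find your VM's numeric id first with: vim-cmd vmsvc/getallvms
VMID=42   # placeholder id of the storage VM

while true; do
    # power.getstate prints "Powered on" / "Powered off" for the given VM id
    STATE=$(vim-cmd vmsvc/power.getstate "$VMID" | tail -1)
    if [ "$STATE" != "Powered on" ]; then
        echo "$(date): storage VM not running, restarting" >> /var/log/storage-watchdog.log
        vim-cmd vmsvc/power.on "$VMID"
    fi
    sleep 10
done
```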

I know a large ISP here in Germany, with whom we cooperate, that currently runs this scheme on top of free ESXi, feeding storage to 10+ other hosts with various OSes/hypervisors. You should probably ask Starwind support about this case once again.
 

Rand__

Well-Known Member
Mar 6, 2014
I assume you are using none of the inherent ZFS goodies like checksumming, snapshots, or replication? And I also assume you don't mind changing to hardware RAID?

I mean, it's all fine, but FreeNAS and Starwind are actually quite different beasts in my opinion, so it's surprising to be so casual about switching over :)
 

Indecided

Active Member
Sep 5, 2015
@Net-Runner : I would have preferred this simply due to familiarity despite the VM passthrough, but they said for my use-case it wasn't possible. I'll speak to them on this.

@Rand__ : Yes, we have used snapshots and tried replication before, but frankly we haven't _yet_ seen the need to restore over the last several years, which speaks volumes for FreeNAS's reliability. However, we recently had several failures, not in the SAN layer but at the VM nodes, which for some reason did _not_ trigger the Xen HA mechanism and resulted in us needing to manually detach, destroy, and rebuild the node itself.
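(For anyone following along, this is roughly what sanity-checking the pool's HA arming looks like on the XCP side; a sketch against the xe CLI, with UUIDs as placeholders.)

```bash
# Check whether HA is actually enabled on the pool
xe pool-list params=uuid,name-label,ha-enabled

# Arm HA against a shared SR used for heartbeating, then set the failure budget
xe pool-ha-enable heartbeat-sr-uuids=<shared-sr-uuid>
xe pool-param-set uuid=<pool-uuid> ha-host-failures-to-tolerate=1

# Make sure each VM is actually protected, otherwise HA won't restart it
xe vm-param-set uuid=<vm-uuid> ha-restart-priority=restart
```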

This got us thinking about other parts of the system: while we could have a cold spare, or even a warm FreeNAS spare kept in sync via replication, that still wouldn't let us expect a warm-to-hot failover to complete within 5 minutes, and we would also have to restart the VMs. That's pretty much a hard NO for us now; 5 years ago it was quite different, with less money and less reliability expected or needed.

Exploring Starwinds, we are able to set up iSCSI multipathing to two or more nodes. We've not done super in-depth testing of our POC cluster yet, but our understanding and expectation is that if one node goes down, everything should continue to work, as Xen et al. will simply pick up the second SAN node via the other iSCSI path.
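For reference, this is roughly what that plumbing looks like on the XCP side. A sketch of the classic XenServer-style recipe only: the IPs, IQN, and SCSIid are placeholders, and newer XCP-ng releases expose multipathing a bit differently.

```bash
# Enable multipathing per host (put the host in maintenance mode first)
xe host-param-set uuid=<host-uuid> other-config:multipathing=true
xe host-param-set uuid=<host-uuid> other-config:multipathhandle=dmp

# Create the iSCSI SR with both SAN node portals listed;
# XAPI probes both paths and dm-multipath handles failover
xe sr-create name-label="starwind-ha" type=lvmoiscsi shared=true \
    device-config:target=10.10.10.1,10.10.10.2 \
    device-config:targetIQN=<iqn> \
    device-config:SCSIid=<scsi-id>
```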

Basically, we need to build out resilience at every layer: not just spares, but continuity without needing to hop in the car at 5 AM and drive 45 minutes to our colo to swap something out or run takeover scripts. It should simply keep going, with a short pause. A la HA.

As for hardware RAID, I've never been a big fan, since we've always been software-RAID people: early on our physical nodes were MD-RAID based, and then we moved on to FreeNAS with ZFS. I couldn't say if there are any big pros to using HW RAID (besides being required to do so) and losing all the fancy FreeNAS features, but with our current business alignment, reliability, not speed and features, is considered paramount above all.

So - that's why we're considering a HA SAN solution.
 

tjk

Active Member
Mar 3, 2013
Take a look at TrueNAS appliances, they have some killer pricing specials right now, and you get HA.

I love Starwinds, however I just did a POC using SW, iSER, and a VMware 6.7 cluster... In my testing it worked well for about 5 days while I did I/O stress testing. I then stopped the testing and just let things sit for 2 days; when I came back, SW had crashed on each node, I wasn't able to restart the process, and reboots didn't help. Ended up losing the test VMs.

I need to re-create this and get support involved, just haven't had time to play around with this again.
 

Indecided

Active Member
Sep 5, 2015
The main reason we've looked away from traditional storage, and even TrueNAS, is as mentioned: we have an essentially perpetual surplus of hardware in our testing labs. We probably have 3 or 4 of the same machine sitting around.

We would need to go with the X20, as we've moved off spinning rust for VM nodes altogether. At ~700/TB... I would probably get shouted at, as we have several hundred TB of SAS SSDs that we go through every week at a fraction of that price. So it wouldn't quite make sense to purchase new storage or hardware.

Also, our core DC isn't in the US, which doesn't help with AHR/RMA at all unless the hardware happens to be Dell/HP OEM'd/white-labeled with manufacturer support.

Hence, a software-only solution is what we're looking at. I guess we really have to give it a full-on POC test run; right now we're more or less doing hardware burn-in testing while the nodes are idle. I guess we'll spin up some complex workloads and keep failing the cluster nodes over, along the lines of the sketch below.
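To be clear about what I mean by failing nodes over, here's a rough drill script. The BMC hostnames, credentials, VIP, and health check are all placeholders; treat it as a sketch, not the actual harness.

```bash
#!/bin/bash
# Repeatedly fail one SAN node while a workload runs, then bring it back.
# Everything here (hosts, credentials, health check) is a placeholder.
SAN_BMCS=(san1-bmc san2-bmc)

for i in $(seq 1 20); do
    node=${SAN_BMCS[$((i % 2))]}          # alternate which node we kill
    echo "=== round $i: powering off $node ==="
    ipmitool -I lanplus -H "$node" -U admin -P 'secret' chassis power off

    sleep 120                              # let the cluster run degraded

    # stand-in health check: the surviving iSCSI path must stay reachable
    ping -c 3 10.10.10.100 || { echo "storage unreachable, aborting"; exit 1; }

    ipmitool -I lanplus -H "$node" -U admin -P 'secret' chassis power on
    sleep 600                              # wait for resync before the next round
done
```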

@Rand__ - nope - we will give it a look. That's what we've been looking for, options :)
 

tjk

Active Member
Mar 3, 2013
Indecided said: "The main reason we've looked away from traditional storage, and even TrueNAS, is as mentioned ..."
Let me know if you find anything. I've been searching for a good bare-metal software SAN for years; I was hoping Starwinds was it, but the failure during the POC wasn't comforting.

StorPool is another one you may want to look at, and they want you to use flash/SSD for everything. ZetaVault was another decent solution I tested a year ago, but they appear to be dead, or at least appear so now. Open-E also has a ZFS-based SW solution, and you can do HA with it. I used the DSS SW as a backup target many years ago (NFS, cheap and deep storage) and it was reliable; this was before they switched to a ZFS-core offering.
 

Rand__

Well-Known Member
Mar 6, 2014
tjk said: "I used the DSS SW as a backup target many years ago (NFS, cheap and deep storage) and it was reliable; this was before they switched to a ZFS-core offering."
Is that good?
Got a license for that with a pair of servers I bought a while ago, but never ran it since I was looking at a ZFS solution instead.
 

tjk

Active Member
Mar 3, 2013
Rand__ said: "Is that good? Got a license for that with a pair of servers I bought a while ago ..."
The DSS (non-ZFS) product is good for slow storage, no caching, etc. We used it as a multi-PB repo for VM backups for years without problems.

I haven't spent a lot of time with the ZFS (Jovian) product yet; they have a bunch of different options for HA with Jovian: Eth sync, shared SAS, FC, etc.
 

Rand__

Well-Known Member
Mar 6, 2014
So not particularly fast? I thought it might be OK-ish since they use HW RAID, IIRC.
 

LaMerk

Member
Jun 13, 2017
Indecided said: "@Net-Runner : I would have preferred this simply due to familiarity despite the VM passthrough, but they said for my use-case it wasn't possible ..."
That's basically how Starwind works. Since their Linux-based version does not support the "storage-separate" converged scenario, I did a V2P and installed it bare-metal on two separate servers. From what I've seen, the storage subsystem performs the same as when passed through to the Starwind virtual appliance.

I see they added ZFS support to Starwind and allow setting up HA on top of it. I haven't tried it yet, but it's planned for this quarter. Check if it would work for you.
 

Indecided

Active Member
Sep 5, 2015
Well, a year or so has gone by and we're still teetering on this edge. We've done a few trials of Starwind, but it appears that the current hold-back (if you'd even call it that) is the hypervisor's feature set, not so much the SAN.

Our goal has always been to set up HA shared storage (a la Starwind) with XCP. Xen, and now XCP, has been working merrily for us over the years (albeit not with HA shared storage). Ultimately, when running our trials, we realized there was a rather glaring showstopper:

1) Xen/XCP only supports thick provisioning on iSCSI, and 2) it doesn't support NFS multipathing (if that's the correct term for it).
With our current workload of ~50 VMs and thick provisioning, I would need 4x more storage than I currently have to run iSCSI.
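To put illustrative numbers on that (assumed figures, not our exact ones): if each of the ~50 VMs carries a 400GB virtual disk but only ~100GB is actually written, a thin-provisioned NFS SR holds everything in ~5TB, while thick-provisioned iSCSI reserves the full ~20TB up front; that's roughly the 4x gap.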

So it seems I'm stuck in this quest for the holy grail. It would appear that I either 1) migrate to ESXi or Hyper-V, or 2) stick with XCP and TrueNAS or equivalent, but ideally somehow find a solution that keeps a hot spare ready for the SAN so I can sleep better at night. I guess we would be able to tolerate SOME downtime in that scenario (10-30 minutes) to virtually swap a SAN out, but is there such a solution out there that doesn't cost an arm and several legs?
 

Indecided

Active Member
Sep 5, 2015
Ah... I believe I missed the earlier post by Rand__. If I were starting from scratch, that's probably what I would do, and looking at it now fills me with a bit of regret, as it certainly solves one of the bigger "concerns" I'm seeing now, which is an "HA" path for NFS that works with Xen/XCP.

Problem is, I'm stuck with a mixed bag of parts: some dual-port SAS SSDs, some nearline SAS SSDs, some SATA SSDs (those S3500/S3510s are still kicking... alive and well) and some.

So, at this juncture it seems I'm going to have to accept not having HA shared storage and operate two separate TrueNAS machines running two separate SRs. The XCP heads will then be able to migrate between SRs (roughly as sketched below); it will take a bit more time, and it doesn't account for storage-node failure, but I guess I can't have my cake and eat it too.
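For reference, moving a VM's disks between the two SRs would look something like this. A sketch only: the UUIDs are placeholders, and it assumes XCP's live VDI migration plays nicely with both TrueNAS-backed SRs.

```bash
# List the VDIs attached to the VM we want to move
xe vm-disk-list vm=<vm-uuid>

# Live-migrate each VDI to the SR on the other TrueNAS head
xe vdi-pool-migrate uuid=<vdi-uuid> sr-uuid=<destination-sr-uuid>
```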

I probably need to act on this node setup in the next month or so. Well... I guess it's time to put it into production and figure out how to make the business justification for a true HA storage upgrade in... 2022? Heh. I guess we'll have to wait for storage pricing to drop; Chia seems to have made supply scarcer for disk shelves and good ol' SAS SSDs.
 

Indecided

Active Member
Sep 5, 2015
Since it seems I'm now on a future path to a dual-head JBOD, does anyone have recommendations for a disk shelf that supports SAS3 (and is likely to come off lease/liquidation in the market in the next 0-12 months)? Yes, there are plenty of SAS2 shelves around, but I've been getting weary of always running equipment 2-3 generations behind the current market. I think I'd rather fight for the budget to spend a bit more and get equipment that's more up-to-date, and not have to undergo these constant upgrade cycles.