Looking for a good SAN solution for VMware

hlhjedsfg

Member
Feb 2, 2018
38
8
8
34
Well, let me say: my (untuned except HW) ZFS filer (no RDMA) has better performance with SAS3 SSDs & SLOG vs. my all-NVMe vSAN (single disk group) for my low-user-count workload... but to each their own :)
What SLOG do you use? A 900P?
 

XeonSam

Active Member
Aug 23, 2018
159
77
28
Well, let me say: my (untuned except HW) ZFS filer (no RDMA) has better performance with SAS3 SSDs & SLOG vs. my all-NVMe vSAN (single disk group) for my low-user-count workload... but to each their own :)
Single disk group... How many nodes?
For FreeNAS: are you using NFS or iSCSI? Is this 10GbE? Any LACP/multipathing?

I think this really is a matter of tuning - it's amazing how much performance can differ.
I thought FreeNAS was rubbish for shared storage and always assumed hyperconverged would outperform it.

Amazing.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
4 nodes, P4800X as cache, Intel 4510/P3600 as capacity

I am using NFS at this time, 56 GBit, no RDMA unfortunately, or performance would be significantly better
There is next to no tuning on FreeNAS since the pool is fast enough - I lose most on the network (async) and the SLOG (sync at low QDs)
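To make the "sync at low QDs" point concrete: ESXi issues its NFS writes as stable/sync writes, so at QD1 the achievable write IOPS are roughly 1 / (SLOG write latency). Here is a minimal Python sketch of the async-vs-sync gap as seen from a client (not my actual benchmark; the path is a placeholder for a file on the NFS mount, and os.O_DSYNC needs a Linux/FreeBSD client):

import os, time

PATH = "/mnt/nfs-datastore/synctest.bin"  # placeholder: a file on the NFS export
BLOCK = os.urandom(4096)                  # one 4K block of random data
ITERS = 2000

def write_iops(sync: bool) -> float:
    # 4K writes at queue depth 1, either buffered (async) or O_DSYNC (sync)
    flags = os.O_CREAT | os.O_WRONLY | (os.O_DSYNC if sync else 0)
    fd = os.open(PATH, flags, 0o644)
    start = time.perf_counter()
    for i in range(ITERS):
        os.pwrite(fd, BLOCK, (i % 1024) * 4096)  # stay inside a 4 MiB region
    elapsed = time.perf_counter() - start
    os.close(fd)
    return ITERS / elapsed

print(f"async (buffered) ~{write_iops(False):,.0f} IOPS")  # mostly measures the page cache
print(f"sync  (O_DSYNC)  ~{write_iops(True):,.0f} IOPS")   # each write waits on the SLOG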
 

XeonSam

Active Member
Aug 23, 2018
159
77
28
4 nodes, P4800X as cache, Intel 4510/P3600 as capacity

I am using NFS at this time, 56 GBit, no RDMA unfortunately, or performance would be significantly better
There is next to no tuning on FreeNAS since the pool is fast enough - I lose most on the network (async) and the SLOG (sync at low QDs)
EXCELLENT!!

How did you get 56 GBit to work? Are you using Mellanox ConnectX-3 Pro cards?
As I understand it, RDMA doesn't work with FreeNAS (or at least isn't supported in the base install).

What sort of use case are you using your NAS box for?
For a small implementation, what are the IOPS like?
Significantly better than a 10G link?

We're not interested in sequential, but rather random write speeds, which is why we're looking into RDMA.
Thanks
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
56 GBit works out of the box, IIRC.
And correct, FreeNAS does not do RDMA - nor does any other 'ready-made' ZFS appliance (or I haven't found one). The only option would have been to roll my own Linux-based one, but I'm not comfortable with that at this time.
Also, vSphere does not do NFS over RDMA, and I don't like iSCSI too much.

The NAS box is for my SOHO environment only; it's just me having a high expectation set that drives the requirements ;)

If you want me to run a few tests (re IOPS), let me know - just specify the settings (or see the fio sketch at the end of this post).
I can exceed 10G speeds with my current use case, but not by much due to the limited client count; with a different use case it would certainly help. Especially at the price of 56G vs 10G cards...

I also considered 100G (I have all the HW), but in the end it was not beneficial since I can't run RDMA, nor get an order of magnitude more speed from the box with the current setup. YMMV.
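For reference, the kind of run I'd specify for the random-write case is 4K random, sync, QD1 - roughly the worst case an ESXi NFS client generates. A rough sketch only (it assumes fio is installed on the test client/VM; TARGET is a placeholder and the result keys are per fio 3.x JSON output):

import json, subprocess

# Placeholder target: a file on the NFS datastore (or on the pool under test)
TARGET = "/mnt/nfs-datastore/fio-test.bin"

cmd = [
    "fio", "--name=randwrite-sync-qd1",
    f"--filename={TARGET}", "--size=4G",
    "--rw=randwrite", "--bs=4k",        # 4K random writes
    "--ioengine=psync", "--sync=1",     # O_SYNC, one outstanding IO per job
    "--iodepth=1", "--numjobs=1",
    "--direct=1",
    "--time_based", "--runtime=60",
    "--output-format=json",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
write = json.loads(out)["jobs"][0]["write"]
print(f"IOPS: {write['iops']:.0f}, mean latency: {write['lat_ns']['mean'] / 1e3:.0f} us")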
 

NISMO1968

[ ... ]
Oct 19, 2013
87
13
8
San Antonio, TX
www.vmware.com
This statement from OSNEXUS means their virtual storage is so fundamentally slow that it doesn't benefit from the low latency RDMA networking can provide! VMware vSAN is not much different here: vSAN doesn't do RoCE(v2) / iWARP because it's slow as a pig, but... this is V2 of their design, and the new version we'll see with vSphere V8 will use RDMA :)

Thanks for all the feedback!

I tried vSAN with all SSDs backed by NVMe for the cache layer. Performance was amazing considering it was a 10G setup.
The IOPS were basically local-storage speeds and outperformed a regular setup with iSCSI and FC using ZFS.

iSER (RoCEv2) is supported in ESXi, which was why I was leaning towards a Linux distro like OSNEXUS. I contacted their APAC contact and they said they no longer support RDMA, as the need didn't justify the investment.


[ ... ]
 

ano

Well-Known Member
Nov 7, 2022
634
259
63
vSAN 6.6? 7.0u3g? etc. is so slow it doesn't matter whether you use all-NVMe Optane or not. You max out at around 2 GB/s per host span where the VMs are in a "RAID1", regardless of whether you run, say, a fast SAS SSD as cache/capacity or all P4800X for both, and whether you have a single 100G link or an LACP bond. It does, however, scale, so multiple 2 GB/s streams become available whenever you double the nodes, which is apparent when you change a vSAN policy and watch the usage on the 100G links, and also in vSAN reporting. I actually got it to pass 10 GB/s across the cluster in a 12-node stretched cluster (6+6) the other day when changing a policy.

Stretched vSAN also takes away all the "fun/good" stuff like RDMA.

Looking forward to testing v8, but all-NVMe is required for it to run properly, and we currently have a lot of enterprise SAS since most current-gen HW doesn't have enough NVMe slots, and the U.3 stuff has been delayed. Honestly, U.3 NVMe through a controller has also not been that impressive, but with direct lanes and Genoa it will be a game changer.

I have tried to order Gen11 HPE - no luck... heck, they are still delivering the Gen10 Plus and Gen10 Plus v2 (AMD) boxes we ordered last year...

I do have Supermicro promising delivery on Genoa this month... we will see if that is true :D A few days left...

When running ZFS on SAS/NVMe and presenting it to VMware over iSCSI, most of the time ZFS is too slow and it eats CPU/RAM. There are no winning super-fast, free/cheap, good solutions. Ironically, even ZFS will be much faster than most hardware RAID solutions on a host directly these days, as we are only now seeing some 24G SAS/tri-mode HW RAID controllers. Even a "cheap" enterprise SSD does 200k IOPS at 4K "all day"; the controllers and filesystems etc.... not so much.