ESXi: Do you use NFSv4 or iSCSI for your multipath datastores? What's your stack look like?


AveryFreeman

consummate homelabber
Mar 17, 2017
Near Seattle
averyfreeman.com
Hey,

So I want to set up some MPIO datastores for ESXi. They'd be for several different workloads: general file storage, video recording on VMs, and the VMs themselves.

I looked long and hard at several cluster file systems, but let's face it: Ceph is complicated, OCFS2 is obscure and outdated, DRBD gave me major split-brain issues when I tested it, and Gluster is a glorified overlayFS. StarWind vSAN would mean learning something completely different, and it's only controllable via PowerShell once the trial ends. They all kind of suck in their own ways.

Plus, with MPIO I can use whatever FS I want, and I'm kind of stuck on ZFS. What other FS will cache a workload in however much memory you throw at it, without having to configure anything else?
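
To show what I mean by "without having to configure anything": the ARC just grows into free RAM on its own, and on Linux you can watch it or cap it with a couple of commands. A rough sketch from my notes (the 64GiB cap is a made-up example value, not a recommendation):

  # watch what the ARC is doing (arc_summary ships with zfsutils-linux)
  arc_summary | head -n 40
  # raw counters, if you prefer
  grep -E '^(size|c_max) ' /proc/spl/kstat/zfs/arcstats
  # cap the ARC so the hypervisor side keeps some RAM (example: 64GiB)
  echo 68719476736 > /sys/module/zfs/parameters/zfs_arc_max
  echo 'options zfs zfs_arc_max=68719476736' > /etc/modprobe.d/zfs.conf   # persist across reboots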

But I still have a lot of questions. It's hard to approach this subject in a way that keeps it simple, so I figured I'd just ask about the protocol. If you choose to respond, you're free to keep it to a simple "Pepsi vs Fanta", "LSU vs ASU" argument if you want, but let's be honest: this subject touches a myriad of different technologies, each with its own (huge) set of considerations.

There's:
  1. Outboard storage servers, or hyperconverged? A combination of the two?
  2. Storage OS - which OS are we starting with? Why? What advantages might a contending OS have? This could affect several different areas: iSCSI or NFS drivers/utilities, the network stack, file system options and the depth of their reliability/longevity, overall OS stability, ease of use / familiarity, etc. Which of these is most important to you, and how are your results?
  3. File system - what filesystem do YOU use? (I'm pretty set on ZFS, but am open to rebuttal)
  4. Block size for the FS - with ZFS this is basically recordsize. What size should we use to avoid wasting space, or running into write amplification and reduced throughput? (There's a quick example of the knobs I mean just below this list.)
  5. And of course, the original question - NFSv4 or iSCSI? Why'd you choose one over the other? How's that working out?
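
Just to make #4 concrete, here's the kind of per-dataset tuning I'm asking about - the dataset names and the 16K figure are only placeholder examples, not recommendations:

  # NFS-backed VM dataset: smaller records to cut read-modify-write
  zfs set recordsize=16K tank/vmstore
  # iSCSI path: the equivalent knob is volblocksize, fixed when the ZVOL is created
  zfs create -V 200G -o volblocksize=16K tank/vm-lun0
  # bulk file/video storage can stay at the default (or go bigger)
  zfs set recordsize=1M tank/media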

My current cluster consists of three Supermicro E5 v3/v4 servers with 40Gb, 10Gb and 1Gb networking and 8 or more 4TB drives each; two of them have 128GB of memory plus a few SATA SSDs and PCIe/NVMe drives. Here's my basic stack plan:

  1. Hyperconverged - it uses less power (this is a homelab, after all), and it's "easier" for me to put it all in one box. The one issue I have is that hosts are super slow to boot when they're looking for a share that won't be available until after they boot, but MPIO should fix that in most situations.
  2. OS: Ubuntu, since I like ZFS, it ships version 2.1.1 packaged by the maintainers, the OS gets a lot of attention from developers, and it seems stable and easy to maintain. Plus, setting up NFSv4 datastores in ESXi with it was super easy (I was pleasantly surprised - rough sketch of what I did after this list). I also like OmniOS, but I haven't explored it for NFSv4 yet, so I'm not even sure it'll work. It's a close second though, since it's rock solid and uses ZFS for everything by default. Plus, COMSTAR.
  3. ZFS: Since it's fast and has a lot of features. Plus I know it well.
  4. iSCSI for VM workloads, but I'm torn because it'd have as many as 5 layers: ZFS-->ZVOL-(iSCSI)-->VMFS6-->VMDK-->NTFS (etc.). With NFS I could skip the ZVOL and put the VMDK straight on the FS, removing both the ZVOL and VMFS6 layers (see the second sketch after this list for what the iSCSI path involves). But iSCSI totally smoked NFSv3 in my testing, and I haven't tried VMs on NFSv4 yet. If anyone has a way to make NFS faster, or iSCSI less complicated, I'm def open to suggestions.
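
For #2, this is roughly what the Ubuntu-side export and the ESXi-side mount looked like for me - IPs, dataset and datastore names are placeholders, and I'm going from memory:

  # Ubuntu storage box: create the dataset and export it (nfs-kernel-server does v4 out of the box)
  zfs create tank/vmstore
  echo '/tank/vmstore  10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
  exportfs -ra
  # ESXi host: mount it as an NFS 4.1 datastore (comma-separate multiple server IPs for multipathing)
  esxcli storage nfs41 add --hosts 10.0.0.10 --share /tank/vmstore --volume-name zfs-vmstore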
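And for #4, the ZFS-->ZVOL-(iSCSI) half of that layer cake on Ubuntu would be something like this with LIO/targetcli (IQNs, names and sizes are placeholders; the LUN still gets formatted VMFS6 from the ESXi side afterwards):

  # carve out a ZVOL and export it as a block-backed iSCSI LUN
  zfs create -V 500G -o volblocksize=16K tank/esxi-lun0
  targetcli /backstores/block create name=esxi-lun0 dev=/dev/zvol/tank/esxi-lun0
  targetcli /iscsi create iqn.2024-01.com.example:esxi-lun0
  targetcli /iscsi/iqn.2024-01.com.example:esxi-lun0/tpg1/luns create /backstores/block/esxi-lun0
  targetcli /iscsi/iqn.2024-01.com.example:esxi-lun0/tpg1/acls create iqn.1998-01.com.vmware:esxi-host-1
  targetcli saveconfig
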
So what's your stack look like? Why'd you decide to go with it, and how's it working out for you? And of course, iSCSI or NFSv4? Or something different altogether, like vSAN or SDRS? What are you mainly using it for? Is it something you're happy with and would recommend to others, or have you soured on it and want to warn others not to make your mistakes?
 

dswartz

Active Member
Jul 14, 2011
You can actually use preallocated files as the backing for iSCSI virtual disks (depending on the server OS); OmniOS in particular makes it trivially easy. I avoid ZVOLs, since their performance is not as good as NFS.
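
From memory it goes something like this on OmniOS (pool/file names and the size are just examples):

  zfs create tank/iscsi
  mkfile 200g /tank/iscsi/vm-lun0                       # preallocated backing file
  svcadm enable -r svc:/network/iscsi/target:default    # COMSTAR iSCSI target service
  sbdadm create-lu /tank/iscsi/vm-lun0                  # prints the LU GUID
  stmfadm add-view <GUID-from-sbdadm>                   # expose the LU to initiators
  itadm create-target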
 

dswartz

Active Member
Jul 14, 2011
One advantage (not one I cared about that much): if you have a high-speed link (50Gb?) between the hypervisor and the storage appliance, you can set up a second path using iSCSI on the LAN (1Gb?) and select the first path as preferred, so if your high-speed link fails, you're not hosed.
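
On the ESXi side that's just the Fixed path selection policy with the fast path marked preferred, something like this (the device ID and runtime path name are placeholders):

  esxcli storage nmp device set --device naa.6589cfc0000001234 --psp VMW_PSP_FIXED
  esxcli storage nmp psp fixed deviceconfig set --device naa.6589cfc0000001234 --path vmhba64:C0:T1:L0
  esxcli storage nmp device list --device naa.6589cfc0000001234   # confirm the preferred path took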
 

Rand__

Well-Known Member
Mar 6, 2014
Been there, done that (-> https://forums.servethehome.com/ind...-up-or-the-history-of-my-new-zfs-filer.28179/)

I run a vSAN cluster for the bulk of my VMs, which don't have particularly high performance requirements.
Ok, let's rephrase that.
I run a vSAN cluster for most of my VMs that don't have 'extreme' requirements (I say that b/c I ran 4800X's as cache until last week and now P5800X's, and for vSAN, cache drive performance is the only thing that matters for writes).

For the VMs that *really* have high performance requirements (few as there are at this point) I run a 2x2-drive NVMe TrueNAS ZFS filer with an NVDIMM SLOG, dual-pathed NFS 4.1 over 100GbE (because I can ;)).
At this point I don't use RoCE since TrueNAS can't speak it, but it's the next step up that I'm going to look at. Or maybe iWARP instead, since that *is* working on TrueNAS.
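
The dual-path bit is just ESXi's NFS 4.1 session trunking: mount the same share against both filer IPs in one command (addresses, share path and datastore name below are made up):

  esxcli storage nfs41 add --hosts 192.168.10.2,192.168.20.2 --share /mnt/nvme/vmstore --volume-name nvme-filer
  esxcli storage nfs41 list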

Why NFS over iSCSI? I like being able to access individual files whenever I need to, and I don't like ZVOLs.

Pretty happy with it except for the RoCE part: since ESXi 7 is now officially capable, I think it's a must for the storage OS to speak it too. Or NVMe-oF, but that's even more out of reach.