Hey,
So I want to set up some MPIO datastores for ESXi. They'd serve several different workloads: general file storage, video recording on VMs, and the VMs themselves.
I looked long and hard at several cluster file systems, but let's face it: Ceph is complicated, OCFS2 is obscure and outdated, I tested DRBD and had major split-brain issues, and Gluster is a glorified OverlayFS. StarWind vSAN would mean learning something completely different, and it's only controllable via PowerShell once the trial ends. They all kind of suck in their own ways.
Plus, with MPIO I can use whatever FS I want, and I'm kind of stuck on ZFS. What other FS will cache a workload in however much memory you throw at it, without having to configure anything else?
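(To be clear about the "without configuring anything" bit: ZFS's ARC just grows into whatever free RAM is there. The only knob I ever touch is capping it so the VMs keep some headroom - rough sketch below for OpenZFS on Linux; the 96GiB number is just a placeholder for my 128GB boxes.)

```sh
# Check current ARC size, cap, and hit/miss counters (OpenZFS on Linux kstats)
awk '$1 ~ /^(size|c_max|hits|misses)$/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# Cap the ARC at ~96GiB (placeholder value) so the VMs keep some headroom; run as root
echo $((96 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# Persist across reboots (assumes no existing /etc/modprobe.d/zfs.conf to merge with)
echo "options zfs zfs_arc_max=$((96 * 1024 * 1024 * 1024))" > /etc/modprobe.d/zfs.conf
```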
But I still have a lot of questions. It's hard to approach this subject in a way that keeps it simple, so I figured I'd just ask about the protocol. If you choose to respond, you're free to keep it to a simple "Pepsi vs Fanta", "LSU vs ASU" argument if you want, but let's be honest - this subject touches a myriad of different technologies, each with its own (huge) set of considerations.
There's:
- Outboard storage servers, or hyperconverged? A combination of the two?
- Storage OS - which OS are we starting with? Why? What advantages might a competing OS have? There are several different areas this could affect, such as: iSCSI or NFS drivers/utilities, network stack, file system options and the depth of their reliability/longevity, overall OS stability, ease of use / familiarity, etc. Which of these is most important to you, and how are your results?
- File system - what filesystem do YOU use? (I'm pretty set on ZFS, but am open to rebuttal)
- Block size for the FS - with ZFS this is basically recordsize - what size should we use to avoid wasting space or running into write amplification/throughput reduction? (There's a rough sketch of what I'm leaning toward just after this list.)
- And of course, the original question - NFSv4 or iSCSI? Why'd you choose one over the other? How's that working out?
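For what it's worth, here's the direction I'm leaning on the block size question - just a sketch with placeholder pool/dataset names and sizes, which is exactly the part I'd like feedback on:

```sh
# File/video datasets: large recordsize suits big sequential writes
zfs create -o recordsize=1M tank/media      # video recordings
zfs create -o recordsize=128K tank/files    # general file storage (128K is the default)

# NFS datastore for VMs: recordsize applies to the VMDKs sitting on the dataset
zfs create -o recordsize=64K tank/vmstore-nfs

# iSCSI path uses a zvol instead, where volblocksize plays the same role
# (it has to be chosen at creation time and can't be changed later)
zfs create -s -V 500G -o volblocksize=16K tank/vmstore-iscsi
```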
My current cluster consists of three Supermicro E5 v3/v4 servers with 40GbE, 10GbE, and 1GbE networking and 8 or more 4TB drives each; two of them have 128GB of memory plus a few SATA SSDs and PCIe/NVMe drives. Here's my basic stack plan:
- Hyperconverged - it uses less power (this is a homelab, after all), and it's "easier" for me to put it all in one box. The one issue I have is that hosts are super slow to boot when they're looking for a share that won't be available until after they've booted, but MPIO should fix that in most situations.
- OS: Ubuntu, since I like ZFS and it ships ZFS 2.1.1 packaged by the maintainers; the OS gets a lot of attention from developers, seems stable, and is easy to maintain. Plus, setting up NFSv4 datastores in ESXi with it was super easy (I was pleasantly surprised - rough sketch of that setup below, after this list). I also like OmniOS, but I haven't explored it for NFSv4 yet, so I'm not even sure it'll work. It's a close second though, since it's rock solid and uses ZFS for everything by default. Plus, COMSTAR.
- ZFS: Since it's fast and has a lot of features. Plus I know it well.
- iSCSI for VM workloads, but I'm torn because it'd have as many as five layers: ZFS --> zvol (iSCSI) --> VMFS6 --> VMDK --> NTFS (etc.). With NFS I could skip the zvol and put the VMDK straight on the filesystem, removing the zvol and VMFS6 layers. But iSCSI totally smoked NFSv3 in my testing, and I haven't tried VMs on NFSv4 yet. If anyone has a way to make NFS faster, or iSCSI less complicated, I'm definitely open to suggestions.
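For reference, the NFSv4 datastore setup I called "super easy" above boiled down to roughly this - sketch from memory, with placeholder IPs, dataset, and datastore names; in reality I lock the export down to the storage subnet instead of leaving sharenfs wide open:

```sh
# On the Ubuntu/ZFS box: create and export a dataset over NFS
zfs create tank/vmstore-nfs
zfs set sharenfs=on tank/vmstore-nfs   # wide open; restrict with exportfs-style options in practice

# On each ESXi host: mount it as an NFS 4.1 datastore
esxcli storage nfs41 add -H 10.0.0.10 -s /tank/vmstore-nfs -v nfs-vmstore
```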
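And the iSCSI path, for comparison - this is roughly what the extra zvol layer looks like with LIO/targetcli on Linux, plus the ESXi round-robin bit that makes MPIO actually use all the paths. Again just a sketch: the IQN, device ID, and sizes are placeholders, and portals/ACLs/CHAP are left out.

```sh
# On the storage box: back an iSCSI LUN with a sparse zvol
zfs create -s -V 1T -o volblocksize=16K tank/vmstore-iscsi

# Export it with LIO (targetcli); portals, ACLs, and auth omitted for brevity
targetcli /backstores/block create name=vmstore dev=/dev/zvol/tank/vmstore-iscsi
targetcli /iscsi create iqn.2024-01.lab.example:vmstore
targetcli /iscsi/iqn.2024-01.lab.example:vmstore/tpg1/luns create /backstores/block/vmstore

# On each ESXi host (after adding the target and rescanning): round-robin across paths
esxcli storage nmp device set -d naa.XXXXXXXXXXXXXXXX -P VMW_PSP_RR
```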