VMware vSAN Performance :-(


Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
I always use thin provisioning, as that's the normal VM use case I have.
I don't believe in getting great numbers from unrealistically optimized environments ;)
 

fishtacos

New Member
Jun 8, 2017
23
13
3
Your experience closely resembles mine. Due to the simplicity of vSAN, however, I've decided to stick with it rather than set up storage VMs. Its performance is good enough for my uses currently.

My setup is a 2-node ROBO cluster with a witness host, so it's mirrored, and I have deduplication enabled. In DiskMark I get anywhere from 1.0 to 1.5 GB/s read and between 130 and 330 MB/s write. It's oddly inconsistent at the extremes, and I haven't figured out what that's about... maybe temperatures affecting the NVMe cache drive at different times of day (I live in the south, daytime heat can be a biatch).

I needed some temporary iSCSI space for something in the lab recently and installed a StarWind instance on top of the vSAN datastore with 8 GB of RAM and deduplication enabled. After a few seconds the inline dedup kicked in, and I was seeing better writes than on the vSAN datastore itself. If I had dedicated storage hardware I would put StarWind on it, no doubt, but I don't like its "hyperconverged" configuration.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Inconsistencies in benchmarks are often the result of power-saving features. Don't forget to disable them in the BIOS, in ESXi, and in the guest OS for max numbers.
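If you want to script the ESXi part, a minimal pyVmomi sketch along these lines can check and switch the host power policy. The hostname and credentials are placeholders, and the "static" shortName is what I've seen correspond to High Performance; verify against your own environment before relying on it:

```python
# Minimal sketch: set every ESXi host's power policy to High Performance via pyVmomi.
# Hostname/credentials are placeholders; lab-style SSL handling, not for production.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab only; use proper certs in production
si = SmartConnect(host="vcenter.lab.local",      # placeholder vCenter/host
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)

content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

for host in hosts:
    ps = host.configManager.powerSystem
    print(host.name, "current policy:", ps.info.currentPolicy.shortName)
    # Pick the "static" (High Performance) policy from the host's capability list
    for policy in host.config.powerSystemCapability.availablePolicy:
        if policy.shortName == "static":
            ps.ConfigurePowerPolicy(policy.key)

Disconnect(si)
```

On the Windows guest side, the equivalent is switching to the High performance power plan (powercfg /setactive SCHEME_MIN); the BIOS settings still have to be changed by hand.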
 

Myth

Member
Feb 27, 2018
148
7
18
Los Angeles
This was a great post. I was thinking about setting up a 3-node ESXi cluster with vSAN, but after reading this, maybe StarWind is the better option.

Also, did anyone try Dell EMC ECS Community Edition?

And finally, I have a use case with 10 users streaming uncompressed 4K media. Would StarWind give me the performance I need?
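As a rough sanity check of the bandwidth side (the frame geometry, bit depth, and frame rate below are assumptions, not from any particular codec or spec):

```python
# Back-of-the-envelope throughput estimate for uncompressed 4K streams.
# All parameters are assumptions; plug in your actual resolution, bit depth and fps.
width, height = 4096, 2160      # 4K DCI resolution
bits_per_pixel = 30             # 10-bit RGB 4:4:4; use 20 for 10-bit 4:2:2
fps = 24
users = 10

bytes_per_frame = width * height * bits_per_pixel / 8
stream_mb_s = bytes_per_frame * fps / 1e6
total_gb_s = stream_mb_s * users / 1e3

print(f"per stream: {stream_mb_s:.0f} MB/s")
print(f"{users} streams: {total_gb_s:.1f} GB/s (~{total_gb_s * 8:.0f} Gbit/s)")
```

With those assumed numbers, ten simultaneous streams come out around 8 GB/s (~64 Gbit/s), so the codec and bit depth are worth pinning down before comparing storage options.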

I'm used to physical SAN servers connected to physical desktop clients (either Mac or Windows), with special software installed on each client desktop that creates a sort of tunnel into the SAN, either over fiber-optic cabling or 10GbE Ethernet to each client.

So I would probably go out via multi-path 40GbE from each node into a switch, then use 40GbE-to-10GbE breakout cables to each desktop host. I guess the part I'm confused about is the virtual part. I'm not familiar with VMs other than running a hypervisor on my 2016 server for testing.

How would, say, a colorist working on uncompressed 4K footage connect to a VM from his workstation? I know one post house that runs Fibre Channel cabling from their SAN server to high-powered workstations, but the workstations and the SAN reside in the server room; the colorist then somehow streams the workstation's desktop image in real time to a dummy computer in an editing room. Is that RDP?

Anyway, does that make sense? Help!
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Maybe you should open a new thread for this :)
It looks like some background is needed (setup, use cases, speed requirements) to help you properly :)
 
  • Like
Reactions: T_Minus

moto316

Member
Feb 23, 2014
62
24
8
To enlighten everyone about the idea I mentioned two days ago (which actually did not work as well as I had imagined)...

I did some reading on technical deep dives into VMware vSAN and saw that in an all-flash vSAN setup the caching-tier drive is only used for read acceleration, not for writes. As far as I understood, writes go directly to the capacity drives if they are flash... which would explain why the vSAN read speeds are very high while the writes are pretty bad. So I thought I'd be clever and tag the flash drives as normal magnetic drives, so the Intel Optane would also be used to cache writes. But (at least for me) that did not work out; still very poor write speeds.

So I did some more digging into vSAN, set up my vSAN cluster again (a single node, just for testing purposes) and switched the benchmarking tool to a proper benchmarking solution (the official HCIBench from VMware), and the results are heading in the right direction...
Just wanted to clear this up, because you've got it backwards: in all-flash vSAN, writes go directly to the caching tier only, and reads come directly from the capacity tier. The caching tier then destages to the capacity tier once the buffer fills up, the data becomes cold, etc.
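As a toy illustration of that write path (the buffer size and rates below are invented for illustration, not actual vSAN internals):

```python
# Toy model of an all-flash vSAN disk group: writes land in the cache-tier
# buffer and are destaged to the capacity tier in the background.
# Sizes and rates are made up for illustration, not vSAN defaults.
buffer_gb = 600            # usable write-buffer size (illustrative)
ingest_gb_s = 1.2          # how fast the guests are writing
destage_gb_s = 0.4         # how fast cold data drains to the capacity SSDs

used = 0.0
for second in range(0, 3600, 60):
    used += (ingest_gb_s - destage_gb_s) * 60
    used = max(0.0, min(used, buffer_gb))
    if used >= buffer_gb:
        print(f"t={second}s: buffer full -> sustained writes drop to "
              f"the destage rate ({destage_gb_s} GB/s)")
        break
    print(f"t={second}s: buffer {used:.0f}/{buffer_gb} GB")
```

Once the buffer is full, sustained writes are bounded by the destage rate, which is one reason sustained-write benchmarks can look much worse than short bursts.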