Help Proofreading an ESXi AIO


ShadowFlash

New Member
Jan 7, 2018
This site has consistently come up in all of my lengthy reading and research. Thanks for being so informative! This is my first post here, so I'm going to make it a big one :) I guess I'll jump right into the build I "think" I have my brain wrapped around correctly. This list is my internal cheat sheet, so some things may not make sense, but it's a good starting point I guess.

Bare Metal:
Supermicro CSE-745 Server
H8DG6-F Motherboard
2x AMD Opteron 6366 HE (each: 16 cores, 1.8GHz base, 2.3GHz full-load turbo, 3.1GHz half-load turbo)
16x Hynix HMT31GR7BFR4C-H9 PC3-10600R ECC DDR3 (128GB total)
IPMI NIC (to Laptop) FVS318N VLAN

ESXi:
2x Cores (CPU 0)
8GB RAM
20GB partition on Kingston SSD (local SATA)
On-Board Video (32MB Matrox) = 1680 x 1050 Input 1
2x GbE NICs load balanced between FVS318N/G

VM: Napp-It / OmniOS
2x Cores (CPU 0)
32GB RAM
40GB partition on Kingston SSD (local SATA VM)
LSI2008 On-Board SAS (9211-8i IT Mode) Passthrough: 8x HUS156045VLS600 (450GB SAS, 64MB cache) = ~500GB x3 VMs usable (RAID 10; rough capacity math sketched after this block).
LSI SAS 3081E-R (Flashed IT Mode) Passthrough: 1x Intel X-25-E 32GB (20GB ZIL), 1x Intel 540s 120GB (100GB SLOG)
Chassis/Port Expansion Room: 1x X25-E, 1x 540s, 4x SAS ?TB RAIDZ array for a 2nd ZFS, SMB storage array
Web-Hosted GUI (IPMI Laptop) FVS318N VLAN
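Rough capacity math behind that "~500GB x3 VMs" figure, just as a sketch (drive count and size are from the list above; the ~20% headroom factor is my own guess, not gospel):

Code:
# Sketch of usable space for 8x 450GB SAS drives as striped mirrors ("RAID 10").
# Drive size uses decimal GB as sold; the ~20% headroom figure is my own assumption
# for ZFS free-space reserve and not filling the pool to the brim.

drives = 8
drive_gb = 450                       # HUS156045VLS600, decimal GB
mirror_width = 2                     # 2-way mirrors

raw_gb = drives * drive_gb
mirrored_gb = raw_gb / mirror_width  # capacity after mirroring
usable_gb = mirrored_gb * 0.80       # leave ~20% headroom (assumption)

print(f"raw: {raw_gb} GB, after mirroring: {mirrored_gb:.0f} GB, "
      f"with headroom: {usable_gb:.0f} GB -> ~{usable_gb / 3:.0f} GB per VM x3")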

VM Prime:
12x Cores (CPU 0)
24GB RAM Total: 16 Usable, 8GB RAM Drive for Plex Transcoding?
~500GB iSCSI
HD6450 GPU (passthrough) = 1680x1050 Input 2 + 39" 1080p TV + Chromecast Rotator
Usage:
PIA active always (VPN proxy)
Plex Server (transcoding)
Torrents
CAD/CAM
General Purpose
Basic Gaming (AoW, Etc.)

VM Media/Gaming:
8x Cores (CPU 1)
32GB RAM
~500GB iSCSI
HD6870 Hawk GPU (passthrough) = Monitor + Video Matrix
USB Sewell 7.1 sound (passthrough) to Crestron?
Usage:
Streaming TV (Spectrum, Netflix, Amazon, Plex, Etc.)
General Purpose
Gaming

VM Gaming:
8x Cores (CPU 1)
32GB RAM
~500GB iSCSI
HD6870 Hawk GPU (passthrough) = Monitor + Video Matrix
USB 7.1 Sound (passthrough) Crestron?
Usage:
General Purpose
Gaming

Now obviously this is not the best of the best... in fact it's quite minimal. Things like the GPUs are ones I had lying around to devote to the project for now. Years ago, when DX was still experimental in VMware Workstation, I ran a dual-user box with acceptable results, and I have decades of hobbyist experience with "unconventional" home RAID setups, but quite frankly, I've never gone this far before and could use some critique please. Everything listed is already owned and on the kitchen table lol, so it's not a wish list. If it was, I would have better SSDs, but... $$$ ya know!

Am I kinda sorta doing this right?
Thanks
 

CreoleLakerFan

Active Member
Oct 29, 2013
Your VM builds are massively overprovisioned with memory, and you don't need to allocate any memory to ESXi. You probably only need about 16GB for your storage VM, and 32GB for your gaming VM? If you're running enough services on your media VM that you'd need that much memory, you'd probably be better off isolating them into smaller VMs, or even containers.
 

ShadowFlash

New Member
Jan 7, 2018
Your VM builds are massively overprovisioned with memory, and you don't need to allocate any memory to ESXi. You probably only need about 16GB for your storage VM, and 32GB for your gaming VM? If you're running enough services on your media VM that you'd need that much memory, you'd probably be better off isolating them into smaller VMs, or even containers.
The two "Gaming VMs" are allocated 32GB each due primarily to the Socket (and PCIE lanes) they both run off of. Allocating that RAM elsewhere gets into node interleaving I really didn't want to do. Furthermore, the end-goal for those 2 stations is Star Citizen (obviously with GPU upgrades), and that game so far seems to eat 16GB easily alone.

8GB RAM to ESXi is just from the "best practices" guidelines from VMware. I don't know any better, so I'm just following the recommended allocation.

Only "need" 16GB for the storage VM (Nappit/OmniOS)? Everything I've read says throw as much RAM as you can at it, and 32GB is a baseline for SLOG and L2ARC to start mattering. And what else to do with the RAM anyhow? If it helps there, I'm happy to allocate it. To start, the Storage VM is only for hosting the 3 ESXi VMs, but... in a month or so, I'll be adding another pool for actual storage needs.

All of this is local, and adding more VMs really doesn't change anything because I'm limited by available GPU slots for stations. Sure I could off-load some things and go web-based access, but why?

I've read a LOT, but I really have no current experience, so I easily could be a moron :)

Edit: You say "overprovisioned"... doesn't that mean more RAM allocated than I physically have?

Edit 2: just so I don't double post...
The VM "Prime" is really the one i'm most worried about second only to the Nappit/OmniOS storage VM. The "Gaming/Media" VM is ONLY for playback to my household A/V Matrix. All the work is done on "Prime". That is the workhorse VM and it should be very busy. Transcoding with only 12 cores should still give me my personal minimum of 2-3 transcode streams + 1-2 Direct playback streams. That VM only has 16GB usable, as I plan on an 8GB RAMdrive for plex's transcode tmp folder...to offload abuse of the main file system. Can I get away with "only" 2 cores on the Nappit/OmniOS VM is a question...some in my searching say 4 cores to be safe
 

CreoleLakerFan

Active Member
Oct 29, 2013
The two "Gaming VMs" are allocated 32GB each due primarily to the Socket (and PCIE lanes) they both run off of. Allocating that RAM elsewhere gets into node interleaving I really didn't want to do. Furthermore, the end-goal for those 2 stations is Star Citizen (obviously with GPU upgrades), and that game so far seems to eat 16GB easily alone.

8GB RAM to ESXi is just from the "best practices" guildlines from VMware. I don't know any better so I'm just following the recommended allocation.
The minimum allocation that you read in the best practices guidelines is for the entire system, including ESXi and VMs. ESXi itself is very light on resources; it requires less than 1GB of RAM for itself to run, and the rest of the 8GB would be used by VMs running on ESXi.

Only "need" 16GB for the storage VM (Nappit/OmniOS)? Everything I've read says throw as much RAM as you can at it, and 32GB is a baseline for SLOG and L2ARC to start mattering.
That's for enterprise use cases, with hundreds of users accessing the system. How many users do you have planning to access your storage?

And what else to do with the RAM anyhow? If it helps there, I'm happy to allocate it. To start, the Storage VM is only for hosting the 3 ESXi VMs, but... in a month or so, I'll be adding another pool for actual storage needs.

All of this is local, and adding more VMs really doesn't change anything because I'm limited by available GPU slots for stations. Sure I could off-load some things and go web-based access, but why?

I've read a LOT, but I really have no current experience, so I easily could be a moron :)

Edit: You say "overprovisioned"... doesn't that mean more RAM allocated than I physically have?
In some cases ... what I mean is that you've allocated far more resources to those VMs than they actually require, and are likely to make use of.

Edit 2: just so I don't double post...
The VM "Prime" is really the one i'm most worried about second only to the Nappit/OmniOS storage VM. The "Gaming/Media" VM is ONLY for playback to my household A/V Matrix. All the work is done on "Prime". That is the workhorse VM and it should be very busy. Transcoding with only 12 cores should still give me my personal minimum of 2-3 transcode streams + 1-2 Direct playback streams. That VM only has 16GB usable, as I plan on an 8GB RAMdrive for plex's transcode tmp folder...to offload abuse of the main file system. Can I get away with "only" 2 cores on the Nappit/OmniOS VM is a question...some in my searching say 4 cores to be safe
Throw as many cores as you need at any of the machines, no dispute with anything there, really - it's the way you are allocating the memory.

Prime example: If you're giving Napp-IT 32GB of RAM, what do you think it's going to be using it for? Caching commonly used files. It seems like you're planning on running your VMs' filesystems off your Napp-IT instance, which is going to have 32GB to cache files, but you're planning on setting up an 8GB RAM drive in your transcoding VM to save wear and tear on the filesystem ... see the circular logic?

Anyway, it sounds like you've got 128GB just sitting around, so you're going to allocate gobs and gobs of it to just a few VMs. That's okay, I suppose; it's just not really what ESX is meant for. You may be disappointed in the performance.
 

ShadowFlash

New Member
Jan 7, 2018
The minimum allocation that you read in the best practices guidelines is for the entire system, including ESXi and VMs. ESXi itself is very light on resources; it requires less than 1GB of RAM for itself to run, and the rest of the 8GB would be used by VMs running on ESXi.



That's for enterprise use cases, with hundreds of users accessing the system. How many users do you have planning to access your storage?



In some cases ... what I mean is that you've allocated far more resources to those VMs than they actually require, and are likely to make use of.



Throw as many cores as you need at any of the machines, no dispute with anything there, really - it's the way you are allocating the memory.

Prime example: If you're giving Napp-IT 32GB of RAM, what do you think it's going to be using it for? Caching commonly used files. It seems like you're planning on running your VMs' filesystems off your Napp-IT instance, which is going to have 32GB to cache files, but you're planning on setting up an 8GB RAM drive in your transcoding VM to save wear and tear on the filesystem ... see the circular logic?

Anyway, it sounds like you've got 128GB just sitting around, so you're going to allocate gobs and gobs of it to just a few VMs. That's okay, I suppose; it's just not really what ESX is meant for. You may be disappointed in the performance.
Yup, 128GB total physical. And I most surely do not want to be disappointed in performance, so is there a better way to use my available resources?

I don't see the 8GB RAM drive as circular logic, as it's essentially a scratch drive in its purest form, so why abuse the array? Plex forums abound with "pros" of using a RAM drive for performance, and even if it's a placebo... why not in my case, as it does reduce wear?

Thanks for the discussion btw :)
Edit: You did lots of edits, I do that too ;) Do you have a suggestion for improvement on my layout? That's not snarky... that's the whole point of me posting! :D

Edit Again!:
I think the disconnect is that this is designed for multiple actual, full, real-world stations. Virtualization (enterprise shit) is so far removed from a "home" scenario that most research simply does not apply. I'm weighing independent computers (x4) vs a "mainframe"-style hub like I'm doing. Enterprise clients don't have any real per-seat demands; full-out home "gaming" stations are wayyyy beyond enterprise acceptability.

This is a home AIO multi-head gaming setup with provisions to also handle household media serving and a storage vault. It's just very different from enterprise stuff, as the demands are exponentially higher.

For example: two of us in my household do 3D solid-modeling CAD/CAM design. Look that shit up, and all of a sudden "enterprise"-level builds fold... as GPU and PCIe bandwidth/lanes/slots become more important. The same goes for gaming. Regarding 16-32GB of RAM as "far more resources" is pretty much just wrong in my use case.
 

gea

Well-Known Member
Dec 31, 2010
Regarding your storage VM
RAM is currently very expensive. If it can guarantee that most reads and writes are served by the RAM-based ZFS read and write caches, it is worth the money, but with VMs many reads must still be served by the disks, as the RAM-based ARC mainly caches metadata and small random reads. 32GB RAM will give some advantage over 16GB, but your setup has weaker parts where the money would be better spent.

If you activate sync write, the RAM-based write cache is protected by your Slog, the Intel X25-E, which means that writes are limited by that Intel. While the X25-E was a high-end SSD years ago, it is definitely not what you want nowadays; it is simply not good enough any more.

The Intel 540s as an L2ARC (not Slog) would be OK, especially as you can enable read-ahead caching for sequential workloads on L2ARC devices, but as the SAS 3081E-R reduces its performance to 3Gb/s it will be more of a limitation than a performance boost. Most probably, especially with 32GB RAM, your performance would be better if you removed the X25-E and the 540s completely.

If you want the best performance for the money with your disks, I would suggest using less RAM, e.g. 16GB, and adding an Optane 900P that you can use as a local ESXi datastore for the storage VM, with a 20GB vdisk as Slog via ESXi and optionally 50 to at most 100GB as L2ARC.

A typical AiO setup would be either an SSD ZFS pool for the VMs (no Slog or L2ARC) plus a disk-based ZFS pool for filer use or backup, or a disk-based ZFS pool with an Optane 900P as Slog, with the Optane optionally also serving as a local datastore and L2ARC (Slog and L2ARC provided as ESXi vdisks).

see http://napp-it.org/doc/downloads/optane_slog_pool_performane.pdf
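To judge whether more RAM actually helps your workload, watch the ARC statistics once the VMs are running. A small sketch for OmniOS/illumos (assumes the illumos kstat utility and the standard zfs:0:arcstats counters):

Code:
# Sketch: print ZFS ARC size and hit rate on OmniOS/illumos via kstat.
import subprocess

def arcstat(name: str) -> int:
    # kstat -p prints lines like "zfs:0:arcstats:hits<TAB><value>"
    out = subprocess.run(
        ["kstat", "-p", f"zfs:0:arcstats:{name}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return int(out.split()[-1])

hits, misses = arcstat("hits"), arcstat("misses")
print(f"ARC size:     {arcstat('size') / 2**30:.1f} GiB "
      f"(max {arcstat('c_max') / 2**30:.1f} GiB)")
print(f"ARC hit rate: {hits / (hits + misses):.1%}")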
 

ShadowFlash

New Member
Jan 7, 2018
Sure, an Optane would be nice, but for the money there are other areas I would rather apply that budget to. And I don't have an available PCIe slot to put it in without creative engineering.

Do you really think the 3Gb/s card is a handicap for the SSDs? That's no big deal, as I can just put those on the 6Gb/s ports and swap the mechanicals to the crappy one. There's no performance difference between interfaces for the mechanicals; that's been tested.

I’m still trying to wrap my brain around the less RAM advice. I can’t say I’ve ever seen that recommendation for ZFS.

Edit: I guess I should have put things in context for expected performance. The SAS drives are $11 each. Add in the SSDs, and I'm less than $250 invested in total. The server itself, including the RAM, was around $650, so I'm under $1k for the whole thing so far. I don't include GPUs in the budget simply because it's so subjective and easy to get crazy.

I was originally going to just go with a straight hardware RAID 10 like I’ve always done, but as the SSDs were so cheap trying out ZFS seemed like fun. Obviously I’m not trying to set any records here, but am I wrong to expect higher performance than just a plain hardware SAS array?
 

CreoleLakerFan

Active Member
Oct 29, 2013
I’m still trying to wrap my brain around the less RAM advice. I can’t say I’ve ever seen that recommendation for ZFS.
1. Stop reading the FreeNAS forums.

2. @gea is the developer of Napp-IT, he is a far more reliable source of information on ZFS ARC sizing than anyone with cyber or jock in their forum handle.
 

ShadowFlash

New Member
Jan 7, 2018
1. Stop reading the FreeNAS forums.

2. @gea is the developer of Napp-IT, he is a far more reliable source of information on ZFS ARC sizing than anyone with cyber or jock in their forum handle.
Lol. I wasn’t questioning it, I’m just genuinely surprised. I should have everything going by the weekend enough for some initial testing. And I know who gea is and very much appreciate both your commenting.
 

gea

Well-Known Member
Dec 31, 2010
RAM based read and write cache is a key factor for ZFS performance.

But a server consists of many performance-relevant parts, and like a chain, whose strength is defined by the weakest link rather than the strongest, you must look at the whole system. You should not spend all the money on a lot of expensive RAM when there are slow or weak parts whose replacement would give better overall performance for the money.
 

ShadowFlash

New Member
Jan 7, 2018
@gea : I absolutely understand the budgetary reasons for your advice. In my case however, with how my locally run VM stations are used, I wasn't happy with how 64GB total split out for me (which would have given 12-16GB to Nappit), hence the 128GB. The cost of the additional 64GB was far cheaper than a "good" SSD. I do realize that further down the line an upgrade is very much desired, if not "needed". While I wanted the additional RAM for uses other than Nappit, having 32GB now to allocate to it is only a side effect of other portions of the build. It was the implied excessive diminishing return on the extra RAM for the VM-hosting array that sparked my curiosity, not the best-bang-for-the-buck suggestion, which I agree with. A $400 900P, much less an entire array of SSDs, is far outside of a build that so far I'm into for less than $1,000. I'll get there lol... just not quite yet. Once the household is appeased with actual running and working stations, etc. etc., I will absolutely throw more attention at that.

As of now, I have moved the 8x SAS HDDs to the 3081E (the crappy 3Gb/s controller), which shouldn't have much if any performance impact. I've used 6Gb/s HDDs on older 3Gb/s controllers in normal RAID 10 a few times, and mechanical drive performance is still below the thresholds for my 4-seat usage. I've never noticed a difference, at least in smaller setups.
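The back-of-the-envelope numbers behind "shouldn't have much if any performance impact" (a sketch; the per-drive sequential figure is my own rough assumption for 15K SAS drives):

Code:
# Rough check: do 15K SAS mechanicals saturate a 3Gb/s SAS link?
# Per-drive sequential throughput is an assumption (~170 MB/s for a 15K SAS drive);
# a 3Gb/s SAS lane gives roughly 300 MB/s after 8b/10b encoding.

drive_seq_mb_s = 170   # assumed per-drive sequential throughput
lane_3g_mb_s = 300     # usable bandwidth of one 3Gb/s SAS lane
lanes = 8              # 3081E-R: 8 lanes, one drive per lane here

verdict = "link-limited" if drive_seq_mb_s > lane_3g_mb_s else "drive-limited"
print(f"per drive: {drive_seq_mb_s} MB/s vs {lane_3g_mb_s} MB/s per lane -> {verdict}")
print(f"aggregate across {lanes} drives: ~{drive_seq_mb_s * lanes} MB/s")
# SSDs, by contrast, can exceed 300 MB/s, which is why they go on the 6Gb/s ports.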

That freed up moving the 540s and the 2x X25-Es onto the 6Gb/s controller, with ports left for 1 more 540s and 4 additional large HDDs for a real "storage" pool (which is high on my purchase priority list), plus smarter routing to avoid the 2TB limit... thanks for that. I guess my questions are these for now... I know what I need to do in the future, but "for now", to get things going and usable for a couple of months (I need upgrades elsewhere, like GPUs, first)...

1: You mentioned the 540s would be adequate for L2ARC if it weren't on the 3Gb/s controller... which I have now corrected, so I assume using it would be a good thing now?

2: As the 2x X25-Es are pretty much useless going forward, is there a benefit to using BOTH of them for a Slog? I'm not sure if Nappit allows striping the Slog, but I know it's possible in some ZFS OSes (roughly what I mean is sketched after these questions). This would be a stop-gap only, NOT recommended obviously. From my rather limited understanding, the Slog mainly helps with small sync writes. Striped X25-Es seem borderline performance-wise to me, but it's an honest question vs no Slog at all (for an array that only hosts VMs).

3: As PCIe slots are severely limited for me, unless I hack 'n' slash a case expansion and use good-quality PCIe ribbon-cable risers (which I actually have already, oddly enough) to reposition all the double-width cards to make room for a 900P, is there a recommended Slog SSD (x2, for 2 pools) at a reasonable cost that wouldn't require this?
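Roughly what I mean by striping the Slog in question 2, for the record (pool and device names are placeholders, not my actual layout; I haven't checked whether the Napp-It GUI exposes this or if it's a shell job):

Code:
# Sketch: add two SSDs as separate log vdevs; ZFS spreads log writes across them.
import subprocess

# "tank", "c5t0d0" and "c5t1d0" are hypothetical names.
subprocess.run(["zpool", "add", "tank", "log", "c5t0d0", "c5t1d0"], check=True)
subprocess.run(["zpool", "status", "tank"], check=True)  # both should appear under "logs"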

Thanks for the help. It sounds corny and all, but this has kinda been my dream for a while... even if it's not the wisest lol. I've had moderate success previously with 2-user setups pre-passthrough. I AM a bit thrifty, but ya gotta understand this is only half of the full build. The other half integrates this with a video/audio matrix spanning the whole house with overlapping 7.1 surround zones and yada yada... so I have to look at the big picture with purchases. This kind of project nickel-and-dimes you to death!

Thanks for the comments and the help!
 

ShadowFlash

New Member
Jan 7, 2018
And yikes... now that I'm in the software phase, Imma gonna double post! ;) Does the Napp-It/OmniOS VM's boot disk actually require 32GB + 2x RAM? In my case... if I wanted to throw 32GB of RAM at it, does that mean ~100GB of disk? I'm going with 20GB for ESXi and 80GB for Napp-It for now for initial testing. Happy to wipe things clean if needed... just playing for now.
 

gea

Well-Known Member
Dec 31, 2010
OmniOS/OpenIndiana/Solaris are not stripped-down storage distributions but regular Unix operating systems. You can install a minimal version with only the storage-related features developed by Sun and maintained by Oracle/Illumos (iSCSI/FC, kernel NFS, kernel SMB, network virtualisation), or, on Solaris/OpenIndiana, a live version with an optional GUI and other services such as an AMP stack. OmniOS strictly follows the "just enough storage OS" approach to offer a production-quality storage OS without offering or supporting other services (home services, media servers, web services, etc.) that may affect stability.

In the 64-bit version, OmniOS runs with a 16GB bootdisk. If you want some space for boot environments (bootable snapshots of a former system state) or the space required for major updates, you want 20-25GB. Add the space required for the dump and swap devices.

A dump device is required to hold debugging information after a crash. Its size should be about 1/2 of RAM. Without such a device (which you can remove) you are not able to save a crash dump; if you are not a developer this is acceptable.

The swap device is there to extend RAM in special situations. Mostly you will not use it, but you should calculate around 1/4 of RAM size for the swap device. Both swap and dump are created automatically during setup (given enough disk space).

This means that with a regular setup and up to 16GB RAM, a 32GB bootdisk is quite OK. With more RAM, calculate 20GB + 3/4 of RAM as the suggested minimum bootdisk size; with 32GB RAM, a 64GB bootdisk would be fine.

see needs of a Solaris Unix:
Planning for Swap Space - Oracle Solaris Administration: Devices and File Systems
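As a worked example of the sizing rule above (a sketch; the 20GB base and the dump/swap fractions come straight from the guideline):

Code:
# Sketch: suggested OmniOS bootdisk size following the rule above:
# ~20GB base for the OS and boot environments, dump ~ 1/2 RAM, swap ~ 1/4 RAM.

def suggested_bootdisk_gb(ram_gb: int, base_gb: int = 20) -> int:
    dump_gb = ram_gb / 2
    swap_gb = ram_gb / 4
    return round(base_gb + dump_gb + swap_gb)

for ram in (16, 32):
    print(f"{ram}GB RAM -> ~{suggested_bootdisk_gb(ram)}GB bootdisk")
# 16GB RAM -> ~32GB bootdisk; 32GB RAM -> ~44GB minimum, so a 64GB disk leaves headroom.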
 

rune-san

Member
Feb 7, 2014
I suppose some of this depends on what you can get the gear for, but this does not seem like an appealing build for a gaming rig. You're talking about 8 modules per socket, 16 modules total, 32 total threads. The IPC of those Piledriver-based Opterons was barely above that of the Bulldozer ones. Intel's Sandy Bridge-based Xeons were 10-40% faster in equivalent price brackets. A 6366 HE almost keeps up with an E5-2630L, a chip with a 30% lower TDP that is down to about $25 on eBay. The clock speeds of both are simply too low to make for decent gaming CPUs.