AMD Home ESXi w/TrueNAS Build Out


TheTamago

New Member
Oct 29, 2022
Hello STH forum goers,

I've been in IT for a while, but my brain lacks the proportional linkage between cost and usage in a home environment unless I sit my rear down and watch a Kill A Watt for 8 hours. I do have a solar panel system with battery backup, so power usage is roughly a 50/50 split on cost. I am looking to retire my old mini-ITX FreeNAS/TrueNAS box from 2016 and my dusty MS Hyper-V test server of frankenscraps. I've been reading deeply on integrating VMware ESXi/vSphere with Veeam plus a cloud repo and a TrueNAS VM. I'll rely more on thin/zero-client RDP from the mobile devices in my household, such as old laptops and tablets. No plans on hosting a Plex server, since I am lazy about streaming. TrueNAS storage will be for family archiving and DVD movie backups (BD backups?). No plans to take the internal network past 1Gb, since there are rarely any 4K streaming requirements from the gaming consoles to the home theater section.

Expected Project Completion Date Q2 2023.

The grit:
Estimated Budget $2.5K US

Default Hardware:
Chassis: 4U Rosewill chassis with possible hot-swap 3.5" bays
OS/VM Primary Storage: 2x 1TB or 2x 2TB SK hynix Gold P31 (RAID 1)
TrueNAS Secondary/Archive Storage: 4x 14TB Seagate Exos or IronWolf, as mirrored vdevs or one RAIDZ2 vdev
PSU: Seasonic Focus PX-650 650W 80+ Platinum
Memory: 64GB ECC minimum

1a. Go cheap (but still expensive) on an AMD consumer platform with a Ryzen 7 5800X and ASRock Rack X570D4U ~ $700 US
PRO: Lower TDP/power usage; lighter ECC RAM requirement (2 DIMMs vs. 4)
CON: Less PCIe expansion if I ever add an HBA (not sure whether the second NVMe slot or the 8 SATA ports eat PCIe lanes)

1b. Alternatively, go bigger on an AMD enterprise platform with an EPYC 7302P and Supermicro H12SSL-i ~ $720 US (eBay chances...)
PRO: More cores for more VM instances and testing; PCIe lanes galore, with possible expansion to thin gaming clients. (Maybe a simple 3D-rendering VM for 3D-printing projects, but that is wishful thinking given my available time.)
CON: Slightly older arch; higher TDP; higher power use and idle draw; registered ECC RAM costs add up with a 4-DIMM minimum

Current Setup:
Recently completed StarTech 12U open-frame rack
StarTech 1U 8-outlet power strip
Pending hardware: 2U 16" shelf and a 2U CyberPower UPS

Gaming PC:
Rosewill RSV-R4000 with AMD Ryzen 7 7700X / ASRock X670E Pro RS / 32GB DDR5-5600 G.Skill Flare X5 / 1TB Samsung 980 Pro NVMe / Gigabyte Gaming OC RX 6650 XT / Seasonic Focus PX-650W (all Noctua cooling)

FreeNAS/TrueNAS:
BitFenix Prodigy mini-ITX tower with Intel Pentium G3258 / ASRock Rack mini-ITX board (C-something Intel chipset) / 16GB ECC DDR3 / 2x 32GB USB flash drives / 4x 3TB WD or Seagate in RAIDZ1
(Running a 400W PSU that is 7-8 years old, pulled from my frankenscrap test server after the original 550W PSU's fan seized up... *gulp*)

Network:
New stuff from 2021 but only 1Gb.


(photo attached)
 

Rand__

Well-Known Member
Mar 6, 2014
Not sure if you're looking for advice or just want to share your thoughts/build;)
 

TheTamago

New Member
Oct 29, 2022
Not sure if you're looking for advice or just want to share your thoughts/build;)
Both, please! I'm having a tough time judging the meaty part of the hardware, the CPU/motherboard combination, since that dictates the outcome. I read through the ASRock Rack X570D4U thread and am about 60% sold due to the BIOS situation and RAM setup.

Convince me! lol Thank you
 

Rand__

Well-Known Member
Mar 6, 2014
For advice, I'd recommend specifying your short-term and long-term goals a bit better (number/type of concurrently running VMs, number of RDS users, etc.), whether you want to game on the server, and whether you want a single box or multiple ones (TrueNAS separated, or all-in-one?)...
 

TheTamago

New Member
Oct 29, 2022
For advice, I'd recommend specifying your short-term and long-term goals a bit better...
Long-term: the design should have a service life of at least 7-8 years, so I don't kick myself over the high up-front spend.
Short-term: leave headroom to reach maybe 128GB total RAM (a 64GB expansion later if my VM appetite gets too high).
Storage forecast, short and long: I doubt there will be any need to increase the 2TB primary OS/VM allocation, and the estimated 28TB is more than double my current capacity. For growth, I estimate about +1TB per year on the TrueNAS side.

Here is my estimated VM allocation (a rough PowerCLI sketch follows the list):
2x Windows Server 2016/2019 for vCenter & Veeam | 2 vCPUs total | 8GB each (16GB) RAM
1x TrueNAS storage server | 2 or 4 vCPUs | 8GB or 16GB RAM
2x Win 10/11 clients for RDP + backup/thin client | 2 vCPUs total | 8GB each (16GB) RAM
==============================
5 concurrent VMs | 6-8 vCPUs | 40GB to 48GB RAM

1x backup gaming VM (debating setting this up for the under-5-year-old down the road) | 2 vCPUs | 8GB RAM
4x test VMs, various OSes | 3-4 vCPUs total | 4GB RAM each
==============================
5 non-concurrent VMs | 5-6 vCPUs | 8GB to 24GB RAM
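For reference, a minimal PowerCLI sketch of the concurrent half of that plan. The host name, datastore name, VM names, disk size, and the exact vCPU split between the two Windows Servers are placeholder assumptions, not final choices:

    # Rough PowerCLI sketch of the five concurrent VMs above.
    # Host/datastore/VM names and disk sizes are placeholders.
    Connect-VIServer -Server esxi-node1.lan

    $plan = @(
        @{ Name = 'vcenter-w2019'; Cpu = 1; MemGB = 8 },
        @{ Name = 'veeam-w2019';   Cpu = 1; MemGB = 8 },
        @{ Name = 'truenas-core';  Cpu = 4; MemGB = 16 },
        @{ Name = 'rdp-client-1';  Cpu = 1; MemGB = 8 },
        @{ Name = 'rdp-client-2';  Cpu = 1; MemGB = 8 }
    )

    foreach ($vm in $plan) {
        New-VM -Name $vm.Name -NumCpu $vm.Cpu -MemoryGB $vm.MemGB `
               -DiskGB 100 -Datastore 'nvme-mirror' -VMHost 'esxi-node1.lan'
    }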
 

F1ydave

Member
Mar 9, 2014
I am in a similar boat.

I recently had a power outage that corrupted a few of my VMDKs during a backup that apparently wouldn't let the UPS appliance shut down the server. After a week of trying 100 different suggestions, the last (2011) update of the HDD Regenerator software fixed the platter drive's 7 bad sectors, and I was able to pop it back in, have it recognized, and get one VM out of it (the one that mattered). I was also running napp-it, which had run flawlessly since 2019. I lost that VM, but I am pretty sure I can roll up a new VM and it should be able to access the old array. I haven't made it that far or messaged gea.

Anyway, I am tired of dealing with ESXi's half-baked software with no support from the community. I spent years adding device drivers into the OVFs, etc., from 5.0 to 6.7. I am currently running 6.7U3, licensed with vCenter. It feels like 2,000 people in the world know how to use their software correctly and the rest of us are throwing darts in the dark. The experience is the same with the TrueNAS community; people just want to bite your head off for asking questions.


I am currently making the switch to Unraid w/SMB, which I passed on years ago... but it has come a very long way. It's more like an all-in-one hybrid of FreeNAS/TrueNAS and vSphere with built-in automated snapshots/backups. Unraid technically uses a hybrid version of RAID 6 (dedicated parity) that doesn't require the drives to match in size. Expanding pools is extremely easy. The community is very active; even though I've never run it personally, I have been watching it over the years.

If you aren't looking for HA and just want a homelab file server that runs a low-usage Windows Server VM, I would recommend you look into Unraid. Unraid itself uses very little RAM, but Windows VMs are memory hogs. Lots of scripts are already written and available through the app store built into Unraid, which makes it extremely easy to run offsite syncs/backups, containers, hardware/software monitors, etc. Stacking out the RAM, using a cache drive, and so on should confine the bottlenecks to specific areas, like your LAN. There was a nice post someone made here recently about accessing ZFS snapshots/backups from within Windows: Tutorial: Beauty by simplicity or one zfs snapshot used by 5 layers of applications.

Software aside, I think your hardware is extreme overkill for a file server. I have the 16-bay Rosewill case, which I was using as a mining rig, but it's been off for a year or two. I am planning on gutting it and adding a cheap board with an Intel i210-series NIC; I'm pretty sure my old X10-series board would work just fine. The IPMI on the ASRock boards lets you update the BIOS without a CPU, among many other features. I may consider going that route since I have an AMD Ryzen 3 3300X sitting in a box. A $200 ASRock board would get the job done.

Anyway, here is a cool link explaining all the different styles of NAS software today: 15 FreeNAS Alternatives 2022 | Best Storage Operating System
 

TheTamago

New Member
Oct 29, 2022
I am in a similar boat. ... A $200 ASRock board would get the job done.
I appreciate your views and situation, but my requirements are based on consolidating my home environment while expanding my real-world experience and showcasing it in my job performance. That pays off in automation and in standardizing rollouts from homelab to production in a very competitive sysadmin world. VMware and Citrix used to sit near the top for enterprise, and even with Azure/AWS slapping them around, the lower gantry of on-prem designs will still be around. Sure, I'd love to learn Unraid and maybe Proxmox, but I have never been in a corporate environment that did not rely on VMware or a Windows OS. FreeNAS/TrueNAS was my learning edge with *nix, since I hadn't touched *nix since RH 6.x, and barely any basic CentOS administration between 2017 and 2022 (salutes CentOS). I'd rather be in the know than the unknown.
 

audiophonicz

Member
Jan 11, 2021
Hello!

Right off the bat, I'd like to suggest a few things:
1. You really want to have more than one ESXi host. If that one goes down, you lose access to everything.
2. Consider host overhead; you're going to run out of that 128GB real fast. You've already estimated 72GB, and RAM doesn't overprovision well. See #1.
3. Windows vCenter is deprecated; the VCSA appliance is the way to go. 2 vCPUs + 12GB for a Tiny deployment.
4. VCSA 7+ will give you NKP (Native Key Provider), so you can add vTPM to your Win11 VMs with ease. See #1.
5. If you're going with Windows repos for Veeam, consider Windows Server instead for ReFS/FastClone (see the sketch after this list).
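If you do go that route, the key prep step is formatting the repo volume as ReFS with 64KB allocation units so FastClone can use block cloning; a minimal sketch (the drive letter and label are just examples):

    # Format the Veeam repo volume as ReFS with 64KB clusters
    # (drive letter and label are placeholders)
    Format-Volume -DriveLetter R -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel 'VeeamRepo'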


...my requirements are based on consolidating my home environment while expanding my real-world experience and showcasing it in my job performance. ... (salutes CentOS). I'd rather be in the know than the unknown.
::Salutes CentOS:: (OK, teeeechnically FreeNAS was BSD)
I'm with you on the VMware and Windows reliance, but only for the front end:
Again, VCSA is the only way to move forward; it runs Photon Linux. Learn it, love it.
On the Veeam side, block storage is out, object storage is in, and with v12 coming in Jan/Feb, backups direct to S3 are going to be the official recommendation. Immutable repos, MinIO, Cloudian, Ceph; you're going to want to learn a *nix. I prefer Ubuntu bare-metal, or Photon for VMs.

Good luck with your build!
 

TheTamago

New Member
Oct 29, 2022
Small update. Way behind on my build-out due to job and life. I finally settled my next decision, deviating from the original plan: going with 2 servers for some redundancy. Went over the pond, as I couldn't find a good 2U locally to meet the needs of my design. Rosewill might have stopped making their higher-end 2U chassis, the RSV2900, that I was hoping to buy locally.

Logic Case SC-2808 2U Deep Depth

I am brainstorming the Ryzen 9 7900 with ASRock Rack's B650D4U-2L2T/BCM, but it is a tough sell because of the chipset's limited connectivity: compared to the Zen 3 boards with the higher-end X570 chipset, I lose one M.2 and 4 SATA ports. I may just settle for the Ryzen 9 5900X with the X570D4U-2L2T/BCM, since 32GB DDR4-3200 ECC UDIMMs are 50% cheaper than the current DDR5-4800 ECC UDIMMs.

Any recommendations on SFF-8087 reverse-breakout cables?

Will also be going with this 2U UPS. I don't require too much capacity, because I have solar/battery and only need the UPS to ride out the quick (~5 sec) switchover between the utility service and the solar system. Hoping to get this ordered by month's end.

Cyberpower CP1500PFCRM2U
 

TheTamago

New Member
Oct 29, 2022
Going to be a semi-busy weekend.
Will be preparing primary Node 1 for hardware installation, plus prep work in the rack to make sure all the bottom spots are spaced and racked properly.
ASRock Rack X570D4U-2L2T
AMD Ryzen 9 5900X
4x Crucial/Micron 32GB DDR4-3200 ECC UDIMMs
2x Samsung 980 Pro 2TB w/heatsinks
Dynatron A24 HSF (replacing the fan with a Noctua or Arctic)
Not shown: Seasonic SS-600H2U 600W 2U PSU

Still waiting on 5x Arctic P8 Max fans and a TPM SPI module.
Waiting to order 4x 16TB Seagate Exos or IronWolf HDDs.

Will be testing noise output and debating a switch to a Dynatron L35 H2O AIO.

(photo attached)
 

TheTamago

New Member
Oct 29, 2022
Awesome! Following to see how it all goes and how you like it :)
Been fighting the first dry-fit of the server/rails into the StarTech 12U. I've been spoiled by quick snap-in rail kits; these are about one level worse than the iStarUSA TC-RAIL-26 kit I used on my Rosewill 4U. The alignment on each side is off by about 1-2mm and requires pressure to correct. First time I've heard a rail kit screech at me. :eek:
 

TheTamago

New Member
Oct 29, 2022
Dry fits completed. Not that happy with how the rail design works, but it's done.
Server Node 1 test setup is complete, and the OEM fans and the Seasonic 2U PSU are too loud. Ordered a 500W Seasonic and hope for better results. Using a dB app on my S21 Ultra, it's picking up about 60-65 dB, and right at the rear, 70 dB average.
The AMD R9 5900X was at 83°C idle. Not ideal. Really tempted to spend the extra on the Dynatron L35 AIO.

Done for now. Will have an update by mid-week.
 

TheTamago

New Member
Oct 29, 2022
The batch of 80x80x25 Arctic P8 Max fans arrived, and I swapped them in. The server chassis fan housings come with silicone/rubber plugs. I just noticed the hot-swap(?) fan connectors aren't universal; the bracket alignment differs between the 4-pin and 3-pin versions. There are 2 separate fan lanes, one for each side of the chassis. The OEM fans look like rebranded Deltas. Attached are pics of a fan, one of the two fan lanes, and the front-panel I/O.
(photos attached)
 

TheTamago

New Member
Oct 29, 2022
The Seasonic SS-500L2U is installed, but I encountered a minor mounting issue with the internal mounts going underneath the chassis: the alignment holes aren't beveled enough for screw clearance. The 3 screw mounts at the back are stable enough considering the depth of this chassis. dB-wise it was near silent. Updated the fan settings within the IPMI to run at 55% and will test dB output once I am done.

Found a sad issue when I saw a post here that NVMe RAID will not work with VMware ESXi, which ruins things. Will be throwing in a spare 2.5" SSD for ESXi and making sure to back up the configs until Node 2 can be built.
ASRock Rack X570D4U-2L2T no NVMe RAID support

Got the Dynatron L35 H2O AIO in, and it kept the temp to almost 41°C after idling for 30 minutes, compared to the Dynatron A24, which barely kept things cool at almost 90°C average. Sadly, this chassis isn't compatible with a proper mount for the fan bracket. It sits nice and snug at least, and I may consider double-sided tape if there's vibration once I get this node into the 12U rack. Enjoy the pics.

Cable management will be completed once I feel everything is ready to be zip-tied. Placement looks good enough.
(photos attached)
 

TheTamago

New Member
Oct 29, 2022
Node 1 internals zip-tied and tidied up. Racked, and now running Memtest86 passes. Excited, and will be installing ESXi 8 first to see how it fares.
BIOS CPU power mode set to 65W ECO. All fans set to 58%; I will dial them down to maybe 50-55%. Noise is about 50 dB with my main PC and Node 1 running.
I did verify that the SFF-8087-to-SATA reverse-breakout cable seems to be working with a test SSD.

The next update may come in late November. HDD prices aren't any better than what I saw earlier in the year. Hoping for some good 16TB sales.
(screenshot attached)
 

TheTamago

New Member
Oct 29, 2022
The November sale surprise came early by sheer luck with a flash sale direct from Newegg: Seagate Exos X18 16TB at ~$240 after a $60-off coupon on each HDD. The sale was gone by around 10 AM PDT, so I was lucky!
Got my MikroTik hAP ac2 VLANs configured for ESXi and set up the vNetworks.
Loaded TrueNAS Core (TrueNAS-13.0-U5.3.iso) and used the usual guides for setup.
Almost cracked my eggshell after losing it trying to get IOMMU passthrough of the AMD FCH SATA controller working.
Thought I had done my homework for this platform; dead wrong. We'll see whether this was a godsend once the HDDs go in.
ESXi ACSCheck - Workaround
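For anyone else hitting this, the workaround boils down to overriding the reset method for the FCH SATA controller in /etc/vmware/passthru.map and rebooting the host. The IDs below are the commonly cited ones for the X570 FCH (1022:7901); verify yours with lspci before copying anything:

    # Appended to /etc/vmware/passthru.map, then reboot.
    # Columns: vendor-id device-id resetMethod fptShareable
    # 1022:7901 should be the AMD FCH SATA Controller [AHCI mode]; verify with lspci -n.
    1022  7901  d3d0  false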

4x 16TB by late Wed/Thur o_O
 

Boris

Member
May 16, 2015
Almost cracked my eggshell after losing it trying to get IOMMU passthrough of the AMD FCH SATA controller working.
Thought I had done my homework for this platform; dead wrong. We'll see whether this was a godsend once the HDDs go in.
This is one of the reasons why my combine runs on Proxmox.
In earlier ESXi versions, I passed the SATA controller through by adding lines to configuration files.
But this time everything is more complicated: I have a GPU, a PCIe sound card, and a PCIe USB card in my "HTPServer", as well as 2 separate onboard SATA controllers passed through to different VMs.
I tried ESXi this time, but quickly gave up that fight, installed Proxmox, and can pass whatever I want, wherever I want.
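On Proxmox the whole flow is just a couple of commands; a minimal sketch (the VM ID and PCI address are placeholders for whatever lspci shows on your box):

    # Find the SATA controller's PCI address (e.g. 0000:03:00.0)
    lspci -nn | grep -i sata
    # Hand it to VM 101 as a passthrough device
    # (pcie=1 assumes a q35 machine type; use hostpci1, hostpci2, ... for more devices)
    qm set 101 -hostpci0 0000:03:00.0,pcie=1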
 

TheTamago

New Member
Oct 29, 2022
This is one of the reasons why my combine runs on Proxmox. ... I tried ESXi this time, but quickly gave up that fight, installed Proxmox, and can pass whatever I want, wherever I want.
I can see your point, and I am at the tipping point of going to Proxmox if I can't get this working, but I want to harness VMware as my platform and suffer through the "Enterprise" hell that I enjoy in my career. Otherwise, it will never be a fun day to enjoy after almost 20-some years in IT. But hey, check out this screenshot!
(screenshot attached)
So I accidentally killed my ESXi install when I thought I had selected the wrong AMD FCH SATA controller and was too lazy to get SSH working. Went ahead and rebuilt; not much lost, since I like to make things hard on myself. :eek: *sniff* Anyway, the 4x 16TB showed up! Added the AMD FCH SATA controller as the PCIe passthrough device and voila! Will be running HDD burn-in tests to make sure they're stable. The big challenge is done, and more big challenges are ahead!
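The burn-in plan is nothing fancy; likely just long SMART self-tests on each drive from the TrueNAS shell before trusting them with data (device names are placeholders):

    # Start a long SMART self-test on each new disk (da0..da3 are placeholders)
    smartctl -t long /dev/da0
    # Check the results once the test finishes (many hours on a 16TB drive)
    smartctl -a /dev/da0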

In the meantime, I'm planning to grab a vSphere license from somewhere ;) and start building my VMs plus Veeam. The next hiccup might be installing the GPU for more passthrough fun, but that can wait until I can get Node 2 online in late 2024.