I am late to this party.. and the thread has been weak but alive, so I will keep it so...
I generally run my home network on e-trash, as that is all I can usually afford.. worse now after the chip shortage. ...
my current setup has some limitations and requirements and I might set up a new post to cover the whole build / decision making process.
Currently my 'rack' is a 24U short depth 'networking' b-line cabinet.. so not the traditional rack rails but screws for telco/switch stuff.. but moreover NOT deep enough for typical servers..
so the limfacs (limiting factors):
1> need a short-depth server, less than 25" deep
2> fairly low power, but hyperconverged, where storage and VM workloads live in the same box
3> I used to like staying on ESXi, but Broadcom has shit the bed... if I can run ESXi 8 I might do it long enough to learn Proxmox, but the end is near
4> I have a backup shelf with an SFF-8088 connection and would like to keep a connection to it
5> remote management (BMC)
For the last 9 years..
I have run a very inexpensive Rackable Systems (SGI) 3U hybrid that was custom built for some big datacenter, Google- or Amazon-scale.
Intel board, 2x L5640 low-power 60W TDP processors, 64GB RAM..
It's a strange case in that the board connections are up front, and it's set up as a 1U with a single 90° riser.. and the 2U drive cage sits on top of the board. So it's essentially a 1U, which makes utilizing the other PCIe slots impossible.
ESXi 6.7, where napp-it boots first off VMFS that lives on an Intel SAS/SATA daughter board in a RAID mirror. The napp-it VM is passed both the onboard SATA for one pool, a 5x 8TB RAIDZ (media), and the onboard LSI 4i/4e, which hosts 2 Intel 240GB SSDs in a ZFS stripe for VMs and also connects to the external shelf for backup when needed.. a 15-drive pool built as 3 striped 5-drive RAIDZ vdevs.
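For anyone trying to picture that layout, here's roughly what the three pools look like as zpool commands on the OmniOS side.. a sketch only, and the device names are placeholders, not my actual controller/target IDs:

```shell
# media pool: one 5-drive RAIDZ1 vdev of 8TB disks (onboard SATA)
zpool create media raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0

# VM pool: two 240GB SSDs striped (no redundancy -- it gets backed up to the shelf)
zpool create vmpool c2t0d0 c2t1d0

# backup shelf: 15 drives as 3 striped RAIDZ1 vdevs of 5 drives each
zpool create backup \
  raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 \
  raidz c3t5d0 c3t6d0 c3t7d0 c3t8d0 c3t9d0 \
  raidz c3t10d0 c3t11d0 c3t12d0 c3t13d0 c3t14d0
```

(These are admin commands against real disks, so treat them as a layout description, not copy-paste material.)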
napp-it sucks for eye candy.. the interface looks circa 1980.. but it boots quickly, and the NFS storage it feeds back to ESXi for the VMs, and the SMB shares from the media pool to the Plex VM etc., work well. napp-it reboots so quickly, in fact, that I can update OmniOS and reboot it without stunning the VMs, and they don't care.. no warnings..
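For reference, wiring the napp-it NFS export back into ESXi as a datastore is a one-liner per host.. the IP and share path here are made up for illustration:

```shell
# mount the storage VM's NFS export as an ESXi datastore
esxcli storage nfs add -H 10.10.10.2 -s /vmpool/nfs -v nappit-vms

# confirm the datastore mounted
esxcli storage nfs list
```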
So all this.. 2x L5640, 2 RAID cards, 6 Intel SSDs, and 7 iron drives.. at about 25% typical CPU load, runs just over 200W at the wall.... but it's getting old..
SO after nearly 10 years of running this board.. it's time for a change
I have a mix of Supermicro trash en route, both to replace this primary server and to replace some even older boards.. I still have some machines around here rocking socket 775 Core 2 Duo CPUs from nearly 20 years ago..
The primary server..
6028U-TR4T+ with an X10DRU-i+ and a 12-drive SAS expander backplane
2x E5-2630L low-power CPUs and 128GB RAM
might need the SAS3008 12Gb SAS card if the onboard won't run the SAS expander (so that is a question for the community, I guess.. can the onboard C612 SATA connectors drive a SAS expander?)
I also have 2x X10DRL-i ATX boards coming, one to replace the board out of a server I built back in, what, like 2008, with an E8400 Core 2 Duo, and another for a workstation/experimental machine to test on
Here is the real HACK of the project, however.. as you guys are probably already thinking.. the Supermicro 2U chassis is not short or half depth.. and you are correct.. it's not.. but I AM GOING TO TRY AND MAKE IT ONE.
I plan to cut the case between the fan wall and the backplane.. turn the fans around to suck instead of blow, put the case in the rack backward, leaving the connection headers/PCIe up front like the Rackable server, and set the drive backplane on TOP.. essentially making it a 4U server, but the compute will be living in a 2U space.
I just could not find a short depth server that had what I needed at a price I could pay...
I would love to hear thoughts on
Firmware.. some say the latest 3.5 is a mess... should I leave everything alone.. or update?
I am really fighting over ESXi or Proxmox.. I have been playing with Proxmox, and things like multiple vSwitches for internal NFS networks etc. seem more tacked-on than baked in, and learning Proxmox after just figuring out ESXi enough to be dangerous.. seems like a lot of work.. but I really detest Broadcom at the moment and would almost do it out of spite..
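To be fair to Proxmox, the "internal vSwitch" equivalent is just a host-only Linux bridge with no physical ports, defined in /etc/network/interfaces.. a minimal sketch, where the vmbr1 name and subnet are assumptions of mine:

```
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
```

Any VM NIC attached to vmbr1 then lands on an isolated NFS/storage network, roughly like an ESXi vSwitch with no uplink.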
Hyperconverge..
I can run it the way I have been doing it: a napp-it VM running natively on ESXi/Proxmox, serving NFS/SMB back to the hypervisor and VMs..
or
I have seen some try to make Proxmox and its native ZFS do everything.. I messed with napp-it cs.. not a fan.. so I don't know how easy it would be, or how it would mess with how Proxmox handles the VMs from a pool / PBS backup perspective etc.
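In the "Proxmox does everything" model, the pool just becomes a storage entry in /etc/pve/storage.cfg and Proxmox carves zvols/datasets out of it per VM.. something like this sketch, where the vmpool name is my assumption:

```
# local ZFS pool for VM disks -- Proxmox manages the datasets itself
zfspool: vmstore
        pool vmpool
        content images,rootdir
        sparse 1
```

That's the part that changes the backup story: PBS then sees the VM disks directly, rather than backing up files sitting on an NFS export from a storage VM.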
or
Instead of napp-it, a TrueNAS SCALE VM for the storage.. but it takes considerably longer to boot. I love the interface way more than napp-it's.. and we are a Mac-based house, not Windows, and while napp-it is built with great Windows sharing capabilities.. its Mac support is weak, where TrueNAS seems to cater to it.. and without a license, napp-it cuts a lot out of an already weak front end
this is a bit longer than I intended, and I hope it's not seen as a 'hijack' of the thread..
if you all want to keep replies to the technical side of the Supermicro boards, that's cool.. thinking this needs its own thread.. especially when I take the chopsaw to the case... haha.. hardware modding.. for the win..