Need help finishing out my E-2278G & Supermicro build


voip-ninja

Member
Apr 5, 2019
36
2
8
I've been waiting eons for the new 8-core Coffee Lake Xeons to become available, and now that the E-2278G is finally shipping it's time to finalize the components for my latest server build. I could really use some help from the community.

This build will run ESXi 6.7U1 (I have a spare license) and replace an aging HP workstation running Windows 10 Pro that provides a variety of services for my home, including surveillance recording, a Plex server, etc. The move to VMware is something I have been anticipating for over a year, as it will greatly simplify spinning up new services at home, making backups, doing migrations, and so on. I'm also greatly looking forward to being able to pass the new CPU's integrated Intel GPU through to a Plex VM for hardware transcoding of video.
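To sanity-check that plan once ESXi is on the box, my intent is to query the host for the PCI devices it will allow to be toggled for passthrough and make sure the iGPU is among them. Below is a minimal pyVmomi sketch of that check; the hostname and credentials are placeholders, and from what I've read some setups also need the iGPU added to /etc/vmware/passthru.map before it shows up as passthrough-capable, so treat it as a starting point rather than a recipe.

Code:
# Rough sketch: list the PCI devices ESXi considers passthrough-capable, to
# confirm the iGPU can be handed to a Plex VM. Host/credentials below are
# placeholders; requires pyVmomi (pip install pyvmomi).
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab host with a self-signed cert
si = SmartConnect(host="esxi.example.lan", user="root", pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Standalone ESXi: one datacenter -> one compute resource -> one host
    host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

    # Map PCI addresses to human-readable names
    names = {d.id: f"{d.vendorName} {d.deviceName}" for d in host.hardware.pciDevice}

    for info in host.config.pciPassthruInfo:
        if info.passthruCapable:
            print(f"{info.id}  enabled={info.passthruEnabled}  {names.get(info.id, '?')}")
finally:
    Disconnect(si)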

My primary storage solution is a Synology NAS but I do want onboard storage for at least the boot drives of the hosts for performance and reliability reasons.

Right now I have the following components picked out:

Intel Xeon E-2278G CPU
Noctua NH-L9x65 CPU cooler
Supermicro X11SCA-F motherboard
32 or 64GB ECC RAM (Crucial or similar)
2x 1TB (or similar) Samsung EVO SSDs for the boot volume

There are a couple of components I'm still having trouble with, though.

I need a case and possibly a decent power supply for it. I have a roughly 6-7 year old modular Corsair 750W power supply and an even older 500W Seasonic power supply. Will these older power supplies still work with a newer-generation motherboard?

Should I use an M.2 drive for my ESXi boot, or is that a waste of money compared to just using a decent-quality USB thumb drive?

I will need a RAID controller to run my primary volume in RAID-1 for ESXi. I'm leaning towards a used or refurbished PERC H330 from eBay, but perhaps there is a better option? Since I'm running RAID-1 primarily for fault tolerance on my boot volumes, I don't need amazing RAID-6 or RAID-10 performance.

I would prefer to build the system with a 10Gb NIC, and I will add a 10Gb NIC to my Synology NAS so I can use it as a scratch volume from ESXi. What's a good 10Gb NIC that won't break the bank?

And my final question... can anyone recommend a decent 1U or 2U case that can handle two expansion cards (RAID controller and NIC)? I'm really having a hard time with this one, as everything under a few hundred dollars seems to get terrible reviews, but I don't want to spend $500 just on a home server chassis. My current box is compact workstation size and sits on a rack-mount shelf. I don't have the space or inclination for a massive tower, but I also don't need a micro chassis.

Thanks to anyone who can take the time to offer some additional input on this!
 

TXAG26

Active Member
Aug 2, 2016
397
120
43
What about a 3U Supermicro 836 chassis? Lots on eBay in the $250-$350 range. Get one with a 920W SQ (super quiet) PSU. Tons of storage expandability (16x 3.5" bays) with different backplane options. You'll probably have better cooling options with a 3U than with a 1U or 2U, since those rely on smaller but higher-RPM fans for main cooling.
 
  • Like
Reactions: ramblinreck47

voip-ninja

Member
Apr 5, 2019
36
2
8
What about a 3U Supermicro 836 chassis? Lots on eBay in the $250-$350 range. Get one with a 920W SQ (super quiet) PSU. Tons of storage expandability (16x 3.5" bays) with different backplane options. You'll probably have better cooling options with a 3U than with a 1U or 2U, since those rely on smaller but higher-RPM fans for main cooling.
That's a good recommendation but way too big for my application.

It looks like iStarUSA makes some shorter 2U cases that might work. Not my first choice, but if I can reuse one of my existing PSUs and maybe pair it with something like an Icy Dock enclosure, it might be just what the doctor ordered, since I should be able to fit my modest setup and I could always put quiet case fans in it.
 

voip-ninja

Member
Apr 5, 2019
36
2
8
Still looking for input on the other items. I'm going to go ahead and order the CPU and motherboard though before they are out of stock.
 

voip-ninja

Member
Apr 5, 2019
36
2
8
Good information, thanks for sharing!

Parts have started arriving for my build which is quite a bit more modest.... partially because I'm using a Synology NAS as my principal storage pool.

The following parts have shown up or will be here in the next few days:
E-2278G
X11SCA-F motherboard
Noctua NH-L9x65 cooler
32GB RAM
2x 1TB Samsung 860 EVOs... these will be put in RAID-1 as the boot volume for my VMs.
LSI 9108 RAID controller

I'm having a heck of a time finding a case I can live with.

My current box lives on a standard rack shelf, but my max depth is realistically around 18-19" and I have at most 9" of height. I can fit a 3U or 4U chassis assuming it's short enough, has fans with decent filters, and isn't a power hog. Ideally it would have hot-swap bays to simplify growing the system over time or swapping out 2.5" disks... I don't really anticipate ever putting 3.5" drives in it.

I'm also trying to decide on a NIC. I have a decent 4x 1Gb Broadcom NIC I could use, but I'm toying with the idea of going 10Gb if I can find a combination card that works under ESXi and has 2x 10Gb and 2x 1Gb ports. Then I could put a 10Gb NIC in my Synology NAS and just run a cable directly between the server and the NAS, using the Synology for additional storage.
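If I do go the direct-attach route, the ESXi side is basically just a dedicated vSwitch with the 10Gb port as its only uplink, plus a VMkernel interface on a tiny private subnet (the Synology gets the other address). A rough pyVmomi sketch of what I have in mind is below; the vmnic number, the names, and the 10.10.10.0/30 addressing are all placeholders, and I'd double-check the spec fields against the vSphere API docs before actually running it.

Code:
# Rough sketch of the direct 10Gb storage link on the ESXi side: a dedicated
# vSwitch backed by the 10Gb port, plus a VMkernel NIC on its own /30 subnet.
# vmnic2, the names, and the addresses are placeholders. Requires pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.example.lan", user="root", pwd="changeme", sslContext=ctx)
try:
    host = si.RetrieveContent().rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    net = host.configManager.networkSystem

    # vSwitch whose only uplink is the 10Gb port, so no other traffic shares the link
    net.AddVirtualSwitch(
        vswitchName="vSwitchNAS",
        spec=vim.host.VirtualSwitch.Specification(
            numPorts=128,
            bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2"])))

    # Port group plus the VMkernel interface ESXi will use to reach the Synology
    net.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name="NAS-Storage", vlanId=0, vswitchName="vSwitchNAS",
        policy=vim.host.NetworkPolicy()))
    net.AddVirtualNic(portgroup="NAS-Storage", nic=vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress="10.10.10.1",
                             subnetMask="255.255.255.252")))
finally:
    Disconnect(si)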
 

voip-ninja

Member
Apr 5, 2019
36
2
8
I ended up ordering the SilverStone RM400, since it will just barely fit in the space I have for it and, being huge, has lots of room for expansion. It has dust filters across the whole front, a locking front panel (important, as I have tiny children), and is only 18.5" deep.

There were a lot of runners-up from a variety of case makers, though... some were eliminated because they had 3.5" drive docks that are useless to me, others because they would have required riser cables for my cards (and limited me to 3 cards), others were just too spendy, etc.

I ordered a six-bay 2.5" Icy Dock enclosure I can use immediately, and I can add other hot-swap bays later as needed.

I will reuse a pretty nice 500W modular Corsair PSU I have that has been collecting dust for 5+ years.
 

ARNiTECT

Member
Jan 14, 2020
92
7
8
Nice case!
Regarding the NIC, I bought a second-hand Intel X520-DA2 2x 10G SFP+ NIC, as they are supposed to work fine with ESXi/OmniOS and my switch has a couple of spare 10G SFP+ ports.
The X520 is PCIe Gen2 and I have put mine in a PCIe x4 slot, so I expect the speed would be capped at around 2 GB/sec.
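Back-of-the-envelope (assuming roughly 500 MB/s of usable bandwidth per Gen2 lane after 8b/10b encoding overhead):

Code:
# Quick sanity check on the x4 Gen2 slot: 5 GT/s per lane with 8b/10b encoding
# leaves about 500 MB/s of usable bandwidth per lane, per direction.
lanes = 4
per_lane_MBps = 5_000 * 8 / 10 / 8        # 5 GT/s * 0.8 coding efficiency / 8 bits = 500 MB/s
slot_GBps = lanes * per_lane_MBps / 1000  # ~2.0 GB/s for the whole x4 slot

port_GBps = 10 / 8                        # one 10GbE port is ~1.25 GB/s
print(f"x4 Gen2 slot : ~{slot_GBps:.1f} GB/s")
print(f"10GbE ports  : ~{port_GBps:.2f} GB/s each, ~{2 * port_GBps:.2f} GB/s for both")
# -> a single port still runs at line rate; only both ports flat out hit the ~2 GB/s cap

So in practice the x4 slot only becomes the bottleneck if both ports are saturated at once.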
 

voip-ninja

Member
Apr 5, 2019
36
2
8
Nice case!
Regarding the NIC, I bought a second-hand Intel X520-DA2 2x 10G SFP+ NIC, as they are supposed to work fine with ESXi/OmniOS and my switch has a couple of spare 10G SFP+ ports.
The X520 is PCIe Gen2 and I have put mine in a PCIe x4 slot, so I expect the speed would be capped at around 2 GB/sec.
I have looked at that same NIC... since the X11SCA mobo only has one non-shared gigabit NIC, I would likely end up needing a second NIC at that point so I could uplink more than 1Gb to my switched network.

I don't currently have a 10Gb switch but have a really nice ProCurve PoE switch and I suppose I could buy a "cheap" Netgear managed 10Gb switch as an aggregator to take multiple 1Gb ports up to my main switch.

All this stuff is fun to contemplate, unfortunately it all costs money.
 

ARNiTECT

Member
Jan 14, 2020
92
7
8
I have looked at that same NIC... since the X11SCA mobo only has one non-shared gigabit NIC, I would likely end up needing a second NIC at that point so I could uplink more than 1Gb to my switched network.
The Supermicro X11SCA-F has 2x 1G LAN ports, one of which is shared with IPMI. While I am setting things up, I am only using a single 1G port, which allows me to use IPMI and connect to the ESXi web client through the same port; the second port is free.
 

voip-ninja

Member
Apr 5, 2019
36
2
8
The Supermicro X11SCA-F has 2x 1G LAN ports, one of which is shared with IPMI. While I am setting things up, I am only using a single 1G port, which allows me to use IPMI and connect to the ESXi web client through the same port; the second port is free.
I was not aware that both 1Gb ports on the motherboard could be used with a host OS. Thanks for the information.
 

TXAG26

Active Member
Aug 2, 2016
397
120
43
I believe most SM boards come this way, which could cause issues if someone mistakenly plugs one of the shared ports into an external facing network. Luckily, SM has been pushing long random default IPMI passwords out with all recent builds since late 2019, which should help mitigate.
 

voip-ninja

Member
Apr 5, 2019
36
2
8
I believe most SM boards come this way, which could cause issues if someone mistakenly plugs one of the shared ports into an external facing network. Luckily, SM has been pushing long random default IPMI passwords out with all recent builds since late 2019, which should help mitigate.
Interesting. Well, I should have things partially put together by the middle of next week and will start updating the thread with my results.
 
  • Like
Reactions: TXAG26

voip-ninja

Member
Apr 5, 2019
36
2
8
I believe most SM boards come this way, which could cause issues if someone mistakenly plugs one of the shared ports into an external facing network. Luckily, SM has been pushing long random default IPMI passwords out with all recent builds since late 2019, which should help mitigate.
I'm curious what you did to get the Intel i219 NIC on the X11 to play nicely under VMware 6.7. I installed 6.7U3 and it does not even see the i219 NIC hardware. The i210 NIC is working normally.

Are you using the i219 NIC?
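For reference, this is roughly how I've been checking which NICs the host actually detects: a minimal pyVmomi sketch with placeholder host/credentials, essentially the same information as the physical NIC list in the host client.

Code:
# Rough sketch: list the physical NICs ESXi has claimed, with driver and link
# speed, to see whether the onboard i219 is detected at all. Host/credentials
# are placeholders; requires pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.example.lan", user="root", pwd="changeme", sslContext=ctx)
try:
    host = si.RetrieveContent().rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    for pnic in host.config.network.pnic:
        speed = f"{pnic.linkSpeed.speedMb} Mb" if pnic.linkSpeed else "no link"
        print(f"{pnic.device}: driver={pnic.driver}, link={speed}")
finally:
    Disconnect(si)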
 

voip-ninja

Member
Apr 5, 2019
36
2
8
Okay, so I rebooted the ESXi host after messing around with the cabling on the NIC and now it shows available for teaming.

*facepalm*.
 
  • Like
Reactions: TXAG26

TXAG26

Active Member
Aug 2, 2016
397
120
43
I don’t have any platforms with a i219. Just Intel i210, i350, X550, and Broadcom 10GBE adapters.
 

ARNiTECT

Member
Jan 14, 2020
92
7
8
Okay, so I rebooted the ESXi host after messing around with the cabling on the NIC and now it shows available for teaming.

*facepalm*.
Great that you got it working.
They both showed up for me and I added them to the vSwitch.
I’ve not tried passing the onboard NIC through, but it appears on the list.