My new build: Xeon E-2186G


Frank Bello

Member
Build’s Name: Eta carinae
Operating System/ Storage Platform: ESXi, FreeNAS and others
CPU: Intel Xeon E-2186G
Motherboard: Supermicro X11SCA-F
Chassis: Fractal Design Define R5
Drives: 8x HGST HUH721010ALE600-FR
RAM: Samsung M391A2K43BB1-CTD DDR4 Workstation RAM, PC4-21300 (2666), ECC Unbuffered, CAS 19, Dual Rank, 1.2V
Add-in Cards: IBM M1215, Chelsio T520-BT
Power Supply: Fractal Design ION+ 660P 660 Watt Fully Modular Platinum PSU/Power Supply
Other Bits: 280GB Intel Optane 900P, Crucial MX500 500GB CT500MX500SSD1, Noctua NF-A14 PWM 140mm Fans x3, ATX Pin-Remover Tool, Fractal Design Celsius S36, Riello NPW800.

Usage Profile: General-purpose VMs (including routing and firewall labs), FreeNAS

Objectives: general-purpose VM capability via ESXi, plus a FreeNAS instance that is permanently available. I’m running my small business on this server, so the build needs to be pretty mainstream (= boring) and not push the envelope too much. The server has to be quiet, since it shares office space with me and I really hate fan noise. I don’t have a 19” rack available and there is no space to add one, so this server needed to be a tower case design. The width of the case does mean it’s possible to stand it side-on on a wall-mounted shelf, but I’m a bit worried about the weight of the server pulling the shelf off the wall - haven’t tried this yet.

Some notes on the build:

The Fractal Design Define R5 was a joy to work with. No sharp edges, and everything went into the chassis on the first attempt. I have 8 HDDs installed internally, which is fine for now. It’s a shame this case has been discontinued.

The hard drives and the Crucial MX500 were re-purposed from the previous system. I really like the HGST Helium drives: they run cool and I had no drive failures in the previous 12 months. These were bought as refurbs (part number HUH721010ALE600-FR), costing about £200 each at the time. There seem to be a lot of these on the market - I have a mental image of some huge corporate removing tens of thousands of these perfectly serviceable drives from their SANs in order to install something larger. IMHO, if you trust ZFS to do a good job then refurbs are fine - I actually like knowing there are not going to be any DOA-type issues with these, since they have presumably already served time in a datacenter somewhere. They are still available for sale but the price is now about £250 - that’s £50 or so below the “new” price. Doesn’t sound like much, but if you are (as I was) in the market for 14 of them, that’s still a £700 saving.

For the power supply, I bought Fractal Design rather than Seasonic (my usual preference). Fractal Design make much of their flexible modular cables in their marketing materials, and indeed routing the SATA power connections from one drive to the next was much easier with their cables. The choice of a 660W supply is questionable; measured power draw at the wall outlet is much lower - 135W for a NAS load equivalent to about 1Gbit/s, and 220W under full CPU load from FreeNAS plus other VMs. I’d like to operate this power supply where it is most efficient, at about 50% of full load. With a 220W peak draw that implies roughly a 450W supply (220W / 0.5 ≈ 440W), or maybe a 250W supply for FreeNAS only (which would then run uncomfortably close to full capacity under full CPU load). Unfortunately, PSUs at those ratings don’t come with the required 8x SATA connectors. You can’t just take an old power cable off the shelf and re-use it either, because the pinouts at the PSU end are not harmonised between manufacturers, and you can fry your drives that way.

You might be wondering about the Xeon overkill. Based on my workload, the Xeon E-2146G would have been entirely adequate for FreeNAS and would have left some cores spare for other VMs. However, it all came down to availability. This Xeon or the E-2104G were the only choices available at the time (the Covid-19 lockdown was also a factor: Xeon stocks were low, with no way of knowing when they would come back into widespread availability). Sure, I could have imported a Xeon from the U.S., but that takes about 4 weeks and costs about £100 extra in import duties. In the end I just went with what I could get from UK stocks at the time. I had initially considered a Xeon-D build but really wanted the VM capability (more cores, or at least the flexibility to swap CPUs later). Socket 1151 Rev 2 supports a wide variety of CPUs including Core i9s, Core i7s, Pentiums, Celerons... and the E-2200s, once those become generally available.

Problems:

Disks would not spin up after the initial build. This is down to the wretched Power Disable feature (pin 3 of the SATA power connector, introduced in SATA 3.3); I hadn’t run into this problem before because I have been re-using my old-but-good Seasonic PSUs for many years. What idiot committee decided to change the meaning of the SATA pin assignments but keep the same connector? Anyway, the fix was simple enough: pull the offending pin out of the PSU end of each SATA loom. (Some people put tape over the pin on each drive’s power connector instead, but that sounds like a recipe for disaster when the glue dries and the tiny bits of tape fall off.) Another few days’ delay ensued while I waited for an ATX pin-removal tool to arrive - I tried shoving bits of wire into the connector as shown in various YouTube videos, with no joy. ATX pins were pulled out and all drives spun up OK. (See https://documents.westerndigital.co...h-brief-western-digital-power-disable-pin.pdf)

Second problem: fans spinning up and down at 10-second intervals. It turns out the Noctua fans idle below Supermicro’s default lower fan-speed thresholds, which triggers fan alarms in the IPMI and sends everything to full speed in a loop. This can be fixed by installing ipmitool on ESXi (there is an ipmitool 1.8.11 VIB for ESXi) and tweaking the fan thresholds.
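
A rough sketch of the kind of tweak I mean (the sensor names and RPM values here are just examples, not necessarily what your board reports - check your own sensor list first, and run ipmitool from wherever the VIB puts it):

ipmitool sensor list | grep -i fan
# lower the "lower non-recoverable / critical / non-critical" thresholds so the
# slow-spinning Noctuas no longer trip the alarm, e.g.:
ipmitool sensor thresh FAN1 lower 100 200 300
ipmitool sensor thresh FANA lower 100 200 300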

Third problem: on first boot I got an odd series of beeps, which sounded like a POST failure. I connected a monitor: no output. It turns out the Supermicro board doesn’t output anything to the DisplayPort connector by default, and the odd series of beeps is just normal POST for Supermicro, not a fault code. Once you get the password from the motherboard sticker and access the IPMI over the LAN, you can switch on normal video output - but of course, by that stage you don’t need it any more.

Could be improved:

Supermicro’s options for fan control are limited, to say the least. By contrast, my ASRock workstation board has temperature-based control with set points for fan speed at specific temperatures; the Supermicro board has nothing like this.

ESXi setup:

I had to manually install the Chelsio drivers for ESXi. No problem at all: this card is well supported and the drivers worked first time. The IBM M1215 HBA is passed through to FreeNAS. The Crucial MX500 provides boot storage for both ESXi and FreeNAS. The Chelsio card is “owned” by ESXi, and FreeNAS sees a virtual 10G interface. FreeNAS currently gets 4 vCPUs out of 12; the load with one busy LAN client reaches about 50%. Realistically, my NAS loading isn’t going to get much busier than this. I could probably drop to 2 vCPUs, but I would need to experiment to see whether that affects performance. FreeNAS currently gets 50GB of the 64GB of ECC memory. This could be tweaked downwards a bit if other VMs needed more, but there is no real need at the moment; my other VMs are not particularly memory-hungry.
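
For anyone doing the same, installing the Chelsio driver is just a matter of copying the bundle to a datastore and installing it from the ESXi shell. Roughly (the datastore path and filenames below are placeholders, not the exact package I used):

# offline bundle (.zip) copied to a datastore:
esxcli software vib install -d /vmfs/volumes/datastore1/chelsio-driver-bundle.zip
# or, for a single .vib file:
esxcli software vib install -v /vmfs/volumes/datastore1/chelsio-driver.vib
# reboot the host, then check the driver is listed:
esxcli software vib list | grep -i chelsio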

SLOG and ARC:

The Optane 900P is “owned” by ESXi and partitioned to provide 32GB (SLOG) and 200GB (L2ARC) respectively. I’m aware this Optane device doesn’t have a battery, so the whole server is on a UPS. The Riello NPW 800 has an active power rating of 480W, more than adequate for even my maximum measured load (220W); it should give about 10 minutes of run time, more than enough to flush any data and safely shut down FreeNAS.
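
For reference, attaching the two Optane-backed virtual disks to the pool from the FreeNAS shell looks roughly like this (pool name "tank" and the da device names are placeholders - the partitions are presented to the VM as separate virtual disks, so substitute your own device nodes or gptids, or do it from the GUI):

zpool add tank log /dev/da1     # the ~32GB virtual disk, used as SLOG
zpool add tank cache /dev/da2   # the ~200GB virtual disk, used as L2ARC
zpool status tank               # the "logs" and "cache" vdevs should now show up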

In use:

The three fans on the Fractal Design Celsius S36 and the three Noctuas are quiet enough that I don’t notice them most of the time. I just wish the Supermicro board would let me control the fans better; I could probably get to just about noise-free with a bit of tuning. My workstation gets about 2.5Gbit/s of throughput from FreeNAS over SMB using File Manager. I’m hoping this can be tuned a bit higher, since there isn’t any obvious bottleneck preventing better performance. I’ve been using this for a couple of months now and it has been very stable (it was soak-tested with memtest86 for 48 hours after the initial build). Overall I’m very happy with this server - I can spin up virtual routers, firewalls, pfSense etc. on the spare CPU cores as needed while FreeNAS chugs away in the background.
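
If I do get around to tuning, the first step would be to rule out the network itself with something like iperf3 between the workstation and FreeNAS (the hostname below is a placeholder):

iperf3 -s                        # on the FreeNAS box
iperf3 -c freenas.local -P 4     # on the workstation, 4 parallel streams
# if iperf3 shows close to line rate, the remaining gap is SMB/disk, not the network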

View attachment Eta carinae.jpg
 

EasyRhino

Well-Known Member
what kind of array are your spinners in?

I wouldn't sweat FreeNAS getting "too many" cores; ESXi seems to handle CPU oversubscription fine.

Also... water cooling?
 

Frank Bello

Member
Hi, it's RAIDZ2. The water cooling (triple-width radiator) works really well, doing most of the cooling by convection - as a result I only run the fans at 700rpm, and they are nearly silent at that speed. I haven't yet tried turning off the radiator fans altogether (pump only), but it might be an option. The sound level is about 37dB at 1 metre.

I've now bumped FreeNAS to 8 cores - the most supported per VM on the free ESXi license - and it does seem to run better under heavy loading.