Hello,
I've just finished my first build. I'll start with the specs and conclude with notes about my usage and part selection.
Operating System / Storage Platform: Ubuntu 16.04
CPU: 2x Intel Xeon E5-2683 v4 ES (2x 16 cores)
Motherboard: Supermicro X10DRi-o
Chassis: Phanteks Enthoo Pro (cheapest option with SSI-EEB/E-ATX support)
Drives:
- boot: SanDisk 960GB SATA
- secondary: 512GB Samsung SM951-NVMe SSD (MZVPV512HDGL)
RAM: 8x 16GB DDR4 REG ECC Crucial (CT16G4RFD424A)
Add-in Cards: Delock PCIe 1x M.2 NGFF NVMe (not bootable)
Power Supply: Corsair RMi Series RM850i (includes the second CPU/EPS connector by default)
Other Bits:
- CPU fans: 2x Noctua NH-U12DX i4
Usage Profile:
- Virtualization and cluster deployment experiments (DevOps) at home
- runs on demand, powered on/off via IPMI over LAN
It runs nicely; however, I still have some issues:
1. Fans
- I had to remove the Noctua low-noise adapters (LNA) because the fans sometimes reported 0 RPM, which made the BMC trigger full fan speed.
- I lowered the IPMI fan thresholds and used IPMI raw commands to reduce the minimum fan speeds (see the sketch after this list).
- The CPU fans now run at 300-600 RPM depending on load and are almost silent.
- I'm not fully satisfied with the case fan setup yet. They are 3-pin fans connected to a fan controller (see last picture), which in turn connects to a 4-pin PWM port on the mainboard. Due to limitations of Supermicro and those fans, I cannot control them separately. (You can set speeds for two groups, CPU and System, not per fan.) While I managed to get them down to 700 RPM (a number reported by the rear fan only), this leads to the problems below.
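For reference, a sketch of what I did. The threshold syntax is standard ipmitool; the raw commands are the Supermicro X10 ones documented in the STH fan threads. Sensor names, zone numbers, and duty values are examples from my setup, so double-check them for your board:

```bash
# Lower the fan sensors' lower thresholds (LNR/LCR/LNC) so that slow-spinning
# Noctuas don't trip the BMC into full-speed mode. List names via "ipmitool sensor".
ipmitool sensor thresh FAN1 lower 100 200 300
ipmitool sensor thresh FAN2 lower 100 200 300

# Supermicro X10 raw commands from the STH fan threads (use at your own risk):
ipmitool raw 0x30 0x45 0x01 0x01             # fan mode "Full", so manual duty cycles stick
ipmitool raw 0x30 0x70 0x66 0x01 0x00 0x20   # zone 0 (CPU) duty cycle 0x20 = ~32%
ipmitool raw 0x30 0x70 0x66 0x01 0x01 0x20   # zone 1 (system/peripheral) duty cycle ~32%
```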
2. Problems
1. The PCH temperature is rather high; it easily climbs into the 45-55°C range. While the Intel C612 datasheet gives 92°C as the maximum, Supermicro support tells me the limit is 60°C. I suspect the board also starts spinning up the system fan group at around 45°C. Supermicro support told me that in server environments there is usually enough airflow to cool the PCH, and that they have other boards for workstation use. Fair enough, but I wanted IPMI, which they don't seem to offer on their workstation boards. I've already removed the unused drive cage to improve airflow from the front fan over the PCH, but it made no difference. So I'll probably have to mount a small fan on top of the PCH heatsink; I'm open to any recommendations for solving this.
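In the meantime I'm keeping an eye on the sensor while the box is under load. A minimal sketch, assuming the BMC exposes the PCH as a temperature sensor (the exact name varies by board):

```bash
# Refresh the temperature sensors every 5 seconds and pick out the PCH reading.
watch -n 5 'ipmitool sdr type Temperature | grep -i pch'
```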
2. NVMe
It looks like the Supermicro AMI BIOS cannot boot from NVMe without an option ROM, such as the one provided by "native" PCIe NVMe cards, for example from Intel. The M.2/PCIe adapter I'm using doesn't seem to have an option ROM, and the board itself does not offer NVMe among its EFI boot targets. Yes, I've set everything to EFI in the BIOS, but it still does not recognize the NVMe SSD as a boot option.
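If you want to verify this on your own system, a hedged sketch with efibootmgr (device and loader paths are examples; adjust them to your layout):

```bash
# List the boot entries the firmware knows about; if nothing NVMe-backed
# ever shows up here, the firmware has no NVMe boot support.
sudo efibootmgr -v

# Try registering the NVMe ESP manually. Without an NVMe driver in the
# firmware, the entry won't boot even if it gets created.
sudo efibootmgr -c -d /dev/nvme0n1 -p 1 -L "ubuntu-nvme" -l '\EFI\ubuntu\grubx64.efi'
```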
However, the NVMe drive works fine once the system is booted. I now boot from a legacy SATA SSD and use the NVMe SSD to store VM data and Docker images/layers.
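Pointing Docker at the NVMe is a small daemon config change. A sketch, assuming the drive is mounted at /mnt/nvme (my path; adjust to yours). Note that the Docker version around Ubuntu 16.04 used the --graph/-g flag for this; newer releases call it data-root:

```bash
# Move Docker's storage root onto the NVMe mount.
sudo systemctl stop docker
sudo mkdir -p /mnt/nvme/docker
echo '{ "data-root": "/mnt/nvme/docker" }' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker
docker info | grep -i 'root dir'   # should now point at the NVMe
```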
Usage:
I'm a consultant for DevOps/infrastructure solutions, and I use this machine to run hundreds of Docker containers and KVM-based VMs to evaluate things like Kubernetes/OpenShift setups. While I can work from home most of the time, my upstream bandwidth is limited to 12 MBit/s, so renting a box at some cheap server provider (or AWS) wasn't an option. I also don't want customer data deployed on test VMs/machines somewhere in the cloud, outside the customer's reach. The low per-core performance (~2 GHz) is acceptable for my usage. Using the nice IPMI and KVM functionality, I can switch the box on and off from the shell on my main computer (MacBook Pro Retina, unfortunately maxed out at 16GB RAM…) and then interact over Gigabit LAN or 802.11ac. This was the main reason not to use a workstation board like the X10DAX, which offers more features (Hyper-Speed, SLI support, audio) but no IPMI, AFAIK.
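The remote power control really is just a few ipmitool calls from any shell. A sketch with placeholder BMC address and credentials:

```bash
# Wrap the connection details once (address/user/password are placeholders).
bmc() { ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P 'secret' "$@"; }

bmc chassis power on       # wake the box from my notebook
bmc chassis power status   # check the current state
bmc chassis power soft     # ACPI soft shutdown when I'm done
```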
Thanks!
Thanks a lot for all the resources on ServeTheHome about dealing with Supermicro fan settings (and ES processors).