Supermicro SC826 + Xeon D


JDM

Member
Jun 25, 2016
44
22
8
33
Tax return season is coming up here soon, and I'm laying out my plans for my new ZFS storage server. Based on the amount of data I have (and plan to have in the future), the SC826 should offer me plenty of hard drive bays while conserving space in my 12U rack. I was hoping to put a Supermicro X10SDV-7TP4F (Flex ATX 8-core Xeon D) in the chassis as I move to 10Gb. Does anyone have any experience putting Xeon D motherboards in these or similar cases? Since the chassis supports larger motherboards, I figured there would be at least some way (probably having to put in my own standoffs) to make it work. The board has six 4-pin PWM headers, so I'm hoping the chassis fans stay fairly quiet since there won't be a lot of heat coming off the Xeon D (I'll be adding a fan on top of the heatsink to aid dissipation). Does that sound reasonable? Thanks!
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
I have a SM X10SDV Flex-ATX MB in an SC216 chassis. The SC216 and SC826 use exactly the same chassis ironwork, but yours has 12x 3.5" drive bays and mine has 24x 2.5". Otherwise they are identical.

There are no real issues. IIRC, one of the mounting holes couldn't be lined up (near the rear, by the SATA ports), but it's no big deal.

You'll want to be sure you have the plastic airflow shroud in order to get airflow over the passive heatsink on the Xeon-D board. Or - better - get the SM active cooler and change it out.

You also might need 4-pin fan extension cables to have the midplane fans reach the fan headers on the MB. YMMV here - depending on age/generation of the SC826 it might have really short fan cables.
 

Brian Puccio

Member
Jul 26, 2014
70
33
18
41
I can confirm PigLover's advice. I have an SC826 with a Xeon D board and 10 of the 3.5" bays occupied, and I use the 10GbE connection to my virtualization server for quick access to the bulk storage in the SC826. I use the plastic shroud AND I have cobbled together a small Noctua fan on the stock heatsink.
 

JDM

Member
Jun 25, 2016
44
22
8
33
Thanks guys! Glad to hear others have done this successfully. Curious what either of you thinks about the noise of the system? I'm not expecting anything desktop-quiet, but I'm hoping not to have a small jet engine in the basement (I'll probably be putting in some SQ power supplies as well).
 

i386

Well-Known Member
Mar 18, 2016
4,241
1,546
113
34
Germany
My Xeon D Flex ATX mainboard in an 836B chassis:


The missing mounting hole @PigLover mentioned:


I used some Noctua extension cables and a Y-cable I had lying around for the midplane fans.
 
  • Like
Reactions: _alex

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Assuming it has 4-pin PWM fans - plug them into the MB fan headers and set the fan profile to "optimal". It will be quietish for a server - but that's not even close to silent. It won't be a "jet engine" effect.

Some older SC826 chassis come equipped with 3-pin non-PWM fans. Avoid those or plan to replace the fans.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Hey @i386 - is that an SC826? Looks like you've got rear fans and full-height PCIe brackets, so it can't be 2U. Probably a 3U SC836?

No worries - the mounting story remains the same.
 

i386

Well-Known Member
Mar 18, 2016
4,241
1,546
113
34
Germany
If noise is a problem, you could try replacing the midplane fans with the slower Supermicro 5K RPM fans.

@PigLover Yes, it's a 836 chassis.
 

j_h_o

Active Member
Apr 21, 2015
644
179
43
California, US
I have the same board in an SC846 with no serious issues. One of the holes didn't line up. And I think I used an extender for at least one of the PWM fan connectors.
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
Any numbers on power consumption for the whole 836 setup?
This looks like what I have in mind. Is that an IB HCA in the second PCIe slot?

In my Proxmox cluster I'm considering replacing a dual-2011 board in an 826 with a Xeon-D Flex ATX, as it's mainly a standby storage head that only exports some SSDs/HDDs via SRP while on standby and handles some really lightweight VMs.
If it became active in a failover, additional SRP targets + mdraid would be added - nothing the SoC couldn't handle without a massive impact on performance, I'd guess.

On PCIe I would add a dual-port IB card and maybe a 9207-4i4e if the onboard LSI doesn't feel right.
 

i386

Well-Known Member
Mar 18, 2016
4,241
1,546
113
34
Germany
any numbers on power consumption for the whole 836 setup ?
About 110 watts with 6x 6TB drives, the Pentium D1508 CPU, the VPI adapter and the RAID controller.


this looks like what i think about, is that an ib-hca in the second pcie?
Yes, it's a dual-port VPI card. I was curious how much the CPU would be a limit on a 40GbE/QDR link.
 
  • Like
Reactions: _alex

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
Sorry for the late reply - thanks for the numbers.
If those are for idle, they look quite high and don't promise much in savings from the SoC :(

I see ca. 150W idle with a single 2670 v1, 96GB RAM (16GB modules), 6x spinners + 4x SSDs, an LSI 9207-4i4e on a SAS2 expander backplane, and two ConnectX-2 HCAs - all in an 826.
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
If the system is mostly idle, then E5 v1 onwards is not that bad. If it's earlier gear, then it's pretty easy to make the case for replacement based on power usage.
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
If the system is mostly idle, then E5 v1 onwards is not that bad. If it's earlier gear, then it's pretty easy to make the case for replacement based on power usage.
To be honest, I expected the SoC to have a much bigger power advantage over my E5 v1 than this. But obviously the PSUs, backplane and fans in the chassis account for quite a bit of the total, so the savings from the SoC become marginal.

The difference between older socket 1366 and v1 was really tremendous; by comparison, 40-50W of savings would cost a massive loss of PCIe lanes, so I guess I'll stay with v1 for a while.

Interesting - based on what I've seen so far, I suspect v3 over v1 would give a similar difference to the Xeon-D. I had a single E5-2603 v3 node with 2x 10GbE, a single ConnectX-2 HCA, an LSI 2208 and 5x HDDs idling at about 80W in a FatTwin.

@i386 - could you see any limit on the IB imposed by the CPU?
I wouldn't really expect any for the transport itself, as the major work should be offloaded anyway.
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
The SoC does make a difference, but the common items like a large PSU, IB cards, drives etc. are a big portion of the usage.
Imagine you just had a few SSDs: an E5 v1 idling at 50-60 watts dropping to 25-30 on a Xeon-D is huge, but dropping 25-30 watts from 150W is a lot less as a percentage.
Fans also use a lot of watts, depending on model and case etc.

Building a really low-power system means dropping things like IB cards and using the 10G on the SoC, for example - it's not easy at all. Consolidating spinning disks to, say, 8TB helium models also helps, but you're already using 6TB drives, so you're well consolidated compared to those running SAS cards, expanders and a heap of 2TB drives.

What's your target in mind?
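The percentage argument above is easy to make concrete. A minimal sketch, using the idle figures quoted in this thread; the electricity price is an assumption for illustration, not a number from the thread:

```python
# Rough idle-power comparison, using figures quoted in this thread.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.30  # EUR - assumed tariff, adjust to your own

def annual_cost(watts):
    """Annual energy cost of a constant draw of `watts`."""
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

e5_idle = 150       # measured idle of the E5-2670 v1 system above
xeond_saving = 30   # optimistic CPU-level saving from a Xeon-D swap

saving_fraction = xeond_saving / e5_idle
print(f"absolute saving: {xeond_saving} W "
      f"({saving_fraction:.0%} of system idle)")
print(f"annual saving: ~{annual_cost(xeond_saving):.0f} EUR")
```

The same 30 W that would halve a small SSD-only box's idle draw is only a fifth of a fully loaded 826's, which is Evan's point about PSUs, fans and drives dominating.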
 

i386

Well-Known Member
Mar 18, 2016
4,241
1,546
113
34
Germany
If those are for idle, they look quite high and don't promise much in savings from the SoC :(
CPU load was 40-70% when I measured the power consumption.

@i386 - could you see any limit on the IB imposed by the CPU?
I wouldn't really expect any for the transport itself, as the major work should be offloaded anyway.
I got ~22 Gbit/s (max was 27 Gbit/s) in iperf with InfiniBand QDR and 40Gb Ethernet. The CPU utilization never maxed out, so I think I was either memory limited (just 1x 8GB DIMM in use) or used the wrong parameters in iperf.
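For what it's worth, the single-stream question is easy to rule out: iperf's `-P`/`--parallel` flag opens multiple TCP streams (e.g. `iperf -c <host> -P 4`), since one stream is often CPU-bound on a single core well below link rate. The same idea sketched in plain Python over loopback - a toy model only, not a substitute for running iperf on the actual QDR/40GbE link:

```python
import socket
import threading

STREAMS = 4           # parallel TCP streams, like iperf -P 4
PER_STREAM = 1 << 20  # 1 MiB per stream (tiny; a real test runs for seconds)

# Set up the listening socket before any client connects.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # let the OS pick a free port
srv.listen(STREAMS)
port = srv.getsockname()[1]

received = [0] * STREAMS

def serve(i):
    """Accept one connection and count the bytes it delivers."""
    conn, _ = srv.accept()
    while True:
        data = conn.recv(65536)
        if not data:
            break
        received[i] += len(data)
    conn.close()

def send(i):
    """Push PER_STREAM bytes over one connection."""
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(b"x" * PER_STREAM)
    c.close()

servers = [threading.Thread(target=serve, args=(i,)) for i in range(STREAMS)]
clients = [threading.Thread(target=send, args=(i,)) for i in range(STREAMS)]
for t in servers + clients:
    t.start()
for t in servers + clients:
    t.join()
srv.close()

print(f"received {sum(received)} bytes over {STREAMS} streams")
```

If aggregate throughput scales with the stream count but a single stream does not, the bottleneck is per-core CPU rather than the link; if neither scales, memory bandwidth (the single DIMM) becomes the more likely suspect.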
 
  • Like
Reactions: _alex

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
No specific target; I just considered Xeon-D instead of v1 before I built these systems.
So I was just curious what the difference would have been, and also whether it might be worth switching mid-term.
Power draw is by far the biggest part of the running costs, so I'm constantly re-thinking my setup with this in mind.

I also know I just can't build an extremely low-power system here, as the limits also come from the PSUs/fans and the backplane (the whole chassis).

I use 2TB spinners in RAID 10 for production and 6TB drives for backup/cold data, and I wouldn't consolidate the 2TB drives, for performance (RAID 1 of 2x 8TB vs. RAID 10 with 2TB disks...) and rebuild-time reasons - at least until I start running short on space on those volumes. The extra power draw there is something I'm perfectly OK with, as it is for the expanders that allow a single HBA in the box. Dropping IB is also not an option, as SRP targets are the main service these boxes offer, besides some lightweight VMs and quorum votes.
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
CPU load was 40-70% when I measured the power consumption.
Oh, OK - thanks, that gives the number a rather different meaning...

I got ~22 Gbit/s (max was 27 Gbit/s) in iperf with InfiniBand QDR and 40Gb Ethernet. The CPU utilization never maxed out, so I think I was either memory limited (just 1x 8GB DIMM in use) or used the wrong parameters in iperf.
Hm - maybe iperf ran a single stream, since the CPU didn't go up further. Or it really was the single DIMM.
 

JDM

Member
Jun 25, 2016
44
22
8
33
Thanks for the help everyone, I completed this build today (for the most part... still going to try a couple of different middle-row fan options, but that's just tinkering). The board worked well; it was indeed just a single missing screw, which was no problem. I loaded it up with 12 WD Red drives (on a TQ backplane), 32GB of memory (single DIMM) and the D-1537 board. I also put dual 500W Supermicro power supplies in it instead of the old 800-watt ones it came with.

@_alex I measured power consumption of mine at the wall at ~107 watts idle, ~150 watts going full out writing to disk. Just to add another data point to the power consumption discussion.
 
  • Like
Reactions: rnavarro and _alex

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
@JDM thanks, that's interesting. The WD Reds must be really efficient; I mostly use RE4 (2003FYYZ) drives and can't imagine 12 of them accounting for only 50W at most.
Those 500W PSUs are definitely interesting for lower power in the 826, but quite hard to find. I wish SM had something like a 320W PSU that fits this chassis.

I recently got an X10SLM-F with an E3-1230 v3 and 32GB RAM for a good deal; it was originally meant for the 826 left in the lab, to turn it into a ZFS box. But as it came with a chassis, a Noctua cooler and some HDDs, I'm tempted to use it either as a Hackintosh, if that works, or as a Linux workstation + KVM host. Either way, I'll put this board in the 826 just to measure power compared to my E5 and the temporary X8DTH-iF currently in it.