TrueNAS new build hardware help - motherboard/cpu specifically

Jun 2, 2021
Hey, so I've also posted this thread over on the TrueNAS forums.

I'm planning out a new build. My previous one decided to stop booting when I moved. Turns out the motherboard crapped out on me.

Previous build was:


Although this build treated me well, I'm looking to upgrade to something beefier, and I'd like something capable of 256GB-512GB of RAM. Ideally single CPU, if possible. I'll be adding a 40Gb NIC (maybe 2) to this build down the road as well.
I am going to replace the WD Black drives; I know they're not particularly ideal, but it's what I had at the time. I'll reuse them until I get something better. My chassis also supports 16 3.5" drives, and I will be maxing that out.
I will also be adding NVMe drives to this chaos in the future (hence the 40Gb networking).

My use case is for my homelab. I had iSCSI set up, with round robin over the quad-port NIC, to use as an ESXi datastore. At the time I ran the previous build, I had one ESXi host.
I now have 4x ESXi hosts, and my overall hardware for compute/ram resources is 44c/88t and 208 GB of RAM.
These hosts will be upgraded to having a 4x10Gb NIC per host.
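For reference, the ESXi side of that was just the stock round-robin path selection policy, roughly like this (the naa. device ID is a placeholder, not an actual LUN):

Code:
# placeholder device ID -- substitute the iSCSI LUN's real naa. identifier
esxcli storage nmp device set --device naa.0000000000000000 --psp VMW_PSP_RR
# optional tweak: switch paths every IO instead of the default 1000
esxcli storage nmp psp roundrobin deviceconfig set --device naa.0000000000000000 --type iops --iops 1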

The sole purpose of this build is handling ESXi datastores, and doing so as fast as I can make it. I have a separate system (still running TrueNAS :) ) for my general NAS.


I test automation, and run various things, so I'm looking to go beefy if I can.

As far as cost goes, for CPU/MOBO/RAM, I'd like to stay around 1k (But I honestly don't know how realistic that is...). I'm open to starting with less RAM to fit that budget, and adding more later.
Am I looking for a board that doesn't exist? I'm hoping not to have to go to LRDIMMs, given their insane cost on eBay.
I am aware that this is... pretty overkill. I'm ok with that. My end goal is for storage not to be a bottleneck, or at least to reduce that bottleneck as much as possible.

To quickly recap:

Looking for suggestions on CPU/MOBO/RAM
Single cpu, 256GB-512GB RAM, 1k max(ish)

Networking and disk upgrades I have covered and planned out.



Thoughts and/or suggestions?
If I missed something, please let me know.

Thanks!
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
With that budget, pretty much looking at a 2011-3 system. No other way to get that much RAM (unless you go DDR3, but it's old). 256GB will set you back ~$600, which should leave enough for a motherboard (not the cheapest these days) and a Xeon E5 v4.
 
Jun 2, 2021
With that budget, pretty much looking at a 2011-3 system. No other way to get that much RAM (unless you go DDR3, but it's old). 256GB will set you back ~$600, which should leave enough for a motherboard (not the cheapest these days) and a Xeon E5 v4.
Hmm ok. On the RAM, I'm ok with starting at 64 or 128GB if it means spending more on CPU/MOBO to get there.
I forgot to mention (and will update the OP): my biggest technical concern is the number of PCIe lanes I'll get.

I mean, 2x40Gb NIC, NVMe, and a storage controller? That'll take quite a bit. Would be nice to afford Epyc and have all the lanes go right to the CPU but that's way more than I can spend.


Would it be worth saving up a little more and going for Socket 2066? That would, I believe, get me more PCIe lanes, similar RAM support, and quad channel RAM to boot.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
You'll have 40 lanes to work with (plus the PCH). Should be enough unless you plan on having a ton of NVMe drives? NIC + HBA is 16 lanes, so that's 6 NVMe drives, assuming 4 lanes each. A motherboard like a Supermicro X10SRL-F has a pretty good layout, but there are plenty of others to choose from.

As for 2066, probably not. It ended up being fairly uncommon, so there's limited availability on the secondary market. 2011-3 already has quad-channel memory, so no advantage there.
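Spelling out the rough lane math (assuming one x8 NIC and one x8 HBA to start):

Code:
dual-port 40GbE NIC:  x8
SAS HBA:              x8
used:                 16 lanes
left: 40 - 16 = 24 lanes -> 24 / 4 = 6 NVMe drives at x4 each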
 
Jun 2, 2021
You'll have 40 lanes to work with (plus the PCH). Should be enough unless you plan on having a ton of NVMe drives? NIC + HBA is 16 lanes, so that's 6 NVMe drives, assuming 4 lanes each. A motherboard like a Supermicro X10SRL-F has a pretty good layout, but there are plenty of others to choose from.

As for 2066, probably not. It ended up being fairly uncommon, so there's limited availability on the secondary market. 2011-3 already has quad-channel memory, so no advantage there.
Was not aware of quad-channel being there, that's nice. Full disclosure, I'm also unsure if an increase in memory channels actually makes a difference.

Lanes-wise, I suppose I misjudged how much a few devices needed, so yeah, after doing the math, 40 should be good.


When you say plenty of others to choose from, are you looking at Supermicro's website? Or did you mean other vendors in general?
Filtering to uniprocessor and X10 generation yields 1 motherboard for me on Supermicro's website.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
When you say plenty of others to choose from, are you looking at Supermicro's website? Or did you mean other vendors in general?
Filtering to uniprocessor and X10 generation yields 1 motherboard for me on Supermicro's website.
I mean 2011-3 motherboards in general. Supermicro has removed most of them from the product pages on their website as they are 2 generations old.
 

zer0sum

Well-Known Member
Mar 8, 2013
You'll have 40 lanes to work with (plus the PCH). Should be enough unless you plan on having a ton of NVMe drives? NIC + HBA is 16 lanes, so that's 6 NVMe drives, assuming 4 lanes each. A motherboard like a Supermicro X10SRL-F has a pretty good layout, but there are plenty of others to choose from.

As for 2066, probably not. It ended up being fairly uncommon, so there's limited availability on the secondary market. 2011-3 already has quad-channel memory, so no advantage there.
I'm with @BlueFox on this recommendation! The X10SRL-F is my favorite bargain 2011-3 board!! :)
You can get up to 22 cores and 1TB of ram and they can be found for $150-200 all the time.
The v3/v4 Xeon's are also cheap and solid options for ESXi or Proxmox

PCIe slot layout is pretty slick if you want to fill it up with NIC's, HBA's and bifurcated dual port nvme cards

PCI-E 3.0 x8
PCI-E 3.0 x8 (in x16 slot)
PCI-E 3.0 x8
PCI-E 3.0 x8 (in x16 slot)
PCI-E 3.0 x8
PCI-E 3.0 x4 (in x8 slot)
PCI-E 2.0 x4 (in x8 slot)
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
I'm with @BlueFox on this recommendation! The X10SRL-F is my favorite bargain 2011-3 board!! :)
You can get up to 22 cores and 1TB of ram and they can be found for $150-200 all the time.
The v3/v4 Xeon's are also cheap and solid options for ESXi or Proxmox
...
I'm with @zer0sum and @BlueFox for all the reasons they've stated. X10SRL-F. An E5-2680v4 is still < $120 USD, and populating 256GB can pretty easily be done using 2400T REG for $300-$360 USD (and less if you want to bargain hunt). If your CPU requirements are just NAS and light VM, then the 2620v4's can be found a lot cheaper!

While I'm slowly getting rid of some of the X10SRLs I have, I plan to keep a few of them chugging along because it is a very, very flexible platform, giving you access to ALL of the PCIe lanes using a good mix of physical slot widths.
 
Jun 2, 2021
Is the concern with dual-processor due to power draw, or case size, or something else? X10DRH-CT has plenty of PCIe and DIMMs, as well as onboard SAS3.
Mainly power draw, and I think for what I'm doing there's no need for dual cpu. No dedupe or anything CPU intense.
My case (Chenbro RM31616) will certainly support larger motherboard sizes (SSI-EEB I believe)

I mean 2011-3 motherboards in general. Supermicro has removed most of them from the product pages on their website as they are 2 generations old.
aahhh ok. yeah that makes it a little difficult to find. I'll keep looking around but the suggested board is pretty nice!
I was still hoping to get to 512 GB RAM without LRDIMMS but that might not be possible until a much newer generation of hardware, or dual cpu...

I'm with @BlueFox on this recommendation! The X10SRL-F is my favorite bargain 2011-3 board!! :)
You can get up to 22 cores and 1TB of ram and they can be found for $150-200 all the time.
The v3/v4 Xeon's are also cheap and solid options for ESXi or Proxmox

PCIe slot layout is pretty slick if you want to fill it up with NIC's, HBA's and bifurcated dual port nvme cards

PCI-E 3.0 x8
PCI-E 3.0 x8 (in x16 slot)
PCI-E 3.0 x8
PCI-E 3.0 x8 (in x16 slot)
PCI-E 3.0 x8
PCI-E 3.0 x4 (in x8 slot)
PCI-E 2.0 x4 (in x8 slot)
I'm seeing 512GB and higher as... very expensive lol.
The PCI-e layout is very nice. In a previous post I had clarified that I misjudged how many lanes some devices needed (like the 40Gb NICs... didn't realize those were x8)


The v3/v4 Xeon's are also cheap and solid options for ESXi or Proxmox

PCIe slot layout is pretty slick if you want to fill it up with NIC's, HBA's and bifurcated dual port nvme cards
Would be awesome, but this is only a storage host. I wish I had some newer hardware in the compute hosts (I max out at being able to run E5 v3's, and that's only 1 host... time to upgrade the others I think)

When you say bifurcated... I assume you mean having an onboard PCI-e switch on the add-in card? I checked over the site/manual for this board and saw no mention of bifurcation support. Would be sweet if it had it though!

I'm with @zer0sum and @BlueFox for all the reasons they've stated. X10SRL-F. An E5-2680v4 is still < $120 USD, and populating 256GB can pretty easily be done using 2400T REG for $300-$360 USD (and less if you want to bargain hunt). If your CPU requirements are just NAS and light VM, then the 2620v4's can be found a lot cheaper!

While I'm slowly getting rid of some of the X10SRLs I have, I plan to keep a few of them chugging along because it is a very, very flexible platform, giving you access to ALL of the PCIe lanes using a good mix of physical slot widths.
Good to know, thanks!
This is a storage only node, so no crazy CPU requirements.
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
When you say bifurcated... I assume you mean having an onboard PCI-e switch on the add-in card? I checked over the site/manual for this board and saw no mention of bifurcation support. Would be sweet if it had it though!
SM more often than not does not go back and update their motherboard manuals for features introduced post initial release.

the X10SRL-F has bifurcation support. I'm using it today. I have a pair of "noname" low cost Chinese dual u.2 adapters in my vm storage node. Each is in an x8 slot that is bifurcated to x4x4. Each card is connected to 1 optane 900p 280GB and 1 optane 905p 960GB drive.

I replaced the backplane with a hybrid u.2 and sas/sata backplane. With your chassis you could easily use m.2 bifurcated carrier boards rather than u.2 (or you could figure out how to mount some u.2 drives in your chassis - just make sure you have adequate cooling on the u.2's).

This is a storage only node, so no crazy CPU requirements.
E5-2620V4 is what I recommend.

Here's the current configuration of my vm storage node - it has 20 or so vm's stored on it (light usage).

Item        | Manufacturer  | Description                        | Qty
chassis     | supermicro    | SC216B                             | 1
chassis     | supermicro    | 2u/3u/4u rackmount rails           | 1
power       | supermicro    | 501p                               | 2
chassis     | supermicro    | Rear 2 Bay SFF Hot Swap            | 1
chassis     | supermicro    | BPN-SAS3-216-N4 Hybrid backplane   | 1
motherboard | supermicro    | X10SRL-F                           | 1
cpu         | intel         | E5-2620v4                          | 1
cooler      | supermicro    | active 2U heatsink                 | 1
memory      | micron        | 16GB PC4-2400T                     | 4
nic         | hpe/mellanox  | IB/ETH ConnectX-3 40Gbe QSFP+ dual | 1
HBA         | LSI           | LSI / Avago 9400-16i               | 1
HBA         | supermicro    | LSI 3008-E IT mode                 | 1
HBA         | cheap Chinese | pcie to dual u.2 adapter           | 2
cable       | various       | SFF-8643 to SFF-8643 right angle   | 4
cable       | various       | SFF-8643 to SFF-8643               | 5
cable       | supermicro    | 50cm SATA                          | 2
disk        | intel         | 120gb DC S3500                     | 2
disk        | hgst          | 1.6TB MM SSD                       | 13
disk        | intel         | optane 905p 960GB u.2              | 2
disk        | intel         | optane 900p 280GB u.2              | 2

PCIe slot # | phys | data    | item
7           | x8   | x8      | pcie to dual u.2 adapter
6           | x16  | x8      | 40Gb mellanox
5           | x8   | x8      | pcie to dual u.2 adapter
4           | x16  | x8      | Avago 9400-16i
3           | x8   | x4 (x8) | SM LSI 3008 IT Mode
2           | x8   | x4 (x0) |
1           | x8   | x4 2.0  |

Code:
root@freenas41[/]# dmesg | grep -i nvme
nvme0: <Generic NVMe Device> mem 0xfba10000-0xfba13fff irq 40 at device 0.0 on pci6
nvme1: <Generic NVMe Device> mem 0xfb910000-0xfb913fff irq 40 at device 0.0 on pci7
nvme2: <Generic NVMe Device> mem 0xfb810000-0xfb813fff irq 42 at device 0.0 on pci8
nvme3: <Generic NVMe Device> mem 0xfb710000-0xfb713fff irq 40 at device 0.0 on pci9
nvd0: <INTEL SSDPE21D280GA> NVMe namespace
nvd1: <INTEL SSDPE21D280GA> NVMe namespace
nvd2: <INTEL SSDPE21D960GA> NVMe namespace
nvd3: <INTEL SSDPE21D960GA> NVMe namespace
root@freenas41[/]#
optane 900p are mirrored slog for the HGST SSD pool
optane 905p are a mirrored pool.
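If you want to sanity-check that each drive actually trained at x4 after bifurcation, something like this from the TrueNAS shell will show it (nvme0 is just whichever controller you want to look at):

Code:
# negotiated link width is in the PCI-Express capability line (e.g. "link x4(x4)")
pciconf -lc nvme0
# quick list of NVMe controllers and namespaces
nvmecontrol devlist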

Power usage via BMC: [screenshot attached]
 
Jun 2, 2021
SM more often than not does not go back and update their motherboard manuals for features introduced post initial release.
Good to know, thanks!

I replaced the backplane with a hybrid u.2 and sas/sata backplane. With your chassis you could easily use m.2 bifurcated carrier boards rather than u.2 (or you could figure out how to mount some u.2 drives in your chassis - just make sure you have adequate cooling on the u.2's).
This I may consider, but likely won't do. If I drop a 40Gb NIC, I could slot another NVMe riser in there to get 4 drives. I assume we can only bifurcate an x8 slot into 2 x4?

E5-2620V4 is what I recommend.
I'll take a look. Any reason? 8c16t seems like a lot.
$120 for the E5-2620V4 is not bad.
 

zer0sum

Well-Known Member
Mar 8, 2013
This I may consider, but likely won't do. If I drop a 40Gb NIC, I could slot another NVMe riser in there to get 4 drives. I assume we can only bifurcate an x8 slot into 2 x4?

I'll take a look. Any reason? 8c16t seems like a lot.
$120 for the E5-2620V4 is not bad.
Yes. x8 can be x4x4 only.
But that lets you run a dual nvme card :)

I'm not sure where you're getting $120 for the 2620v4?
I think you can do a lot better than that as I can find 2698v3's on Ebay for ~$120 and that is a 16/32 cpu :D
 

zer0sum

Well-Known Member
Mar 8, 2013
I'm seeing 512GB and higher as... very expensive lol.
The PCI-e layout is very nice. In a previous post I had clarified that I misjudged how many lanes some devices needed (like the 40Gb NICs... didn't realize those were x8)
If you can find a way to get 512GB+ cheap we'd all like to know :D
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
I'll take a look. Any reason? 8c16t seems like a lot.
$120 for the E5-2620V4 is not bad.
why buy new? try $35-$50 USD used on the bay.

If you are concerned about "working unit" I might have a working pull in my bin but I'd need to check and would be in the same price range as the bay.

If you want lanes on Intel then it comes down to 2x CPUs. @Sean Ho had a good recommendation for slot types and lanes, but at the expense of power draw. This generation of Intel has 40 lanes. That's it.

You can also look at something like an X9DRD 2011 / E5 v2 combo, but you'd only pick up 8 extra lanes (qty 6 x8 slots), and I've only successfully had 4 of the slots simultaneously bifurcate. The board does have onboard SAS2 but only 8 SAS channels - you appear to be using 16 channels, so I'm guessing no expander.
 
Jun 2, 2021
Yes. x8 can be x4x4 only.
But that lets you run a dual nvme card :)

I'm not sure where you're getting $120 for the 2620v4?
I think you can do a lot better than that as I can find 2698v3's on Ebay for ~$120 and that is a 16/32 cpu :D
Ah I scrolled down more and saw them for like $50 lol

If you can find a way to get 512GB+ cheap we'd all like to know :D
I'll be sure to keep everyone posted if that happens.
And if that happens.. lottery tickets will also be bought.
 
Jun 2, 2021
why buy new? try $35-$50 USD used on the bay.

If you are concerned about "working unit" I might have a working pull in my bin but I'd need to check and would be in the same price range as the bay.

If you want lanes on Intel then it comes down to 2x CPUs. @Sean Ho had a good recommendation for slot types and lanes, but at the expense of power draw. This generation of Intel has 40 lanes. That's it.

You can also look at something like an X9DRD 2011 / E5 v2 combo, but you'd only pick up 8 extra lanes (qty 6 x8 slots), and I've only successfully had 4 of the slots simultaneously bifurcate. The board does have onboard SAS2 but only 8 SAS channels - you appear to be using 16 channels, so I'm guessing no expander.
I didn't scroll down far enough on ebay...
Not so concerned about working unit. Haven't had any issues yet with ebay.

I'm considering both boards in this thread actually. going with @Sean Ho 's suggestion gets me 512GB RAM and higher without going 3DS or LRDIMM. I could go for some lower power draw CPU's and probably be fine.

Honestly, I think I'd be fine with some quad or hex core CPUs; this server is only serving iSCSI out to a few VM hosts, and that can't take much CPU.

My backplane, I think, is an expander. I have 4 ports on the HBA that go to 4 ports on the backplane.
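I'll double-check from the shell, though; if there were an expander it should usually show up as its own enclosure (ses) device, something like:

Code:
# an expander normally presents an SES enclosure device alongside the disks
camcontrol devlist
# enclosure view, if one is present
sesutil map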
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
seanho.com
I don't know that the X10DRH board I mentioned plus 512GB DDR4 would fit within your budget, but you could start with a few 32GB RDIMMs and have room to grow.

If you're planning on serving multiple NVMe over 40GbE, unless you get into RoCE/iSER, don't discount the possibility of the CPU (or at least interrupts) becoming a bottleneck.
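A quick way to keep an eye on that once it's up is watching per-CPU time and the NIC interrupt rates from the shell, e.g.:

Code:
# per-CPU view; look for a single core pegged in interrupt/system time
top -P
# interrupt counts and rates per IRQ; the NIC queues show up here
vmstat -i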
 
Jun 2, 2021
I don't know that the X10DRH board I mentioned plus 512GB DDR4 would fit within your budget, but you could start with a few 32GB RDIMMs and have room to grow.
The plan currently, no matter which board/CPU, is to start with 2x 32GB RDIMMs, so that's fine.
The RAM was always going to be the most expensive piece of this plan, so I planned on starting with a small amount and buying more as I go.

If you're planning on serving multiple NVMe over 40GbE, unless you get into RoCE/iSER, don't discount the possibility of the CPU (or at least interrupts) becoming a bottleneck.
Hm. Well the possibility is there for sure. I had intended for the NVMe's to be mirrored SLOG/metadata (I'm unsure if either one of those would help my use case, but the intent is to test and see).
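For reference, the shape of what I'd be testing is roughly this (pool and device names made up):

Code:
# mirrored SLOG for sync writes; this can be removed later if it doesn't help
zpool add tank log mirror nvd0 nvd1
# mirrored special (metadata) vdev; unlike a SLOG this can't be removed if the pool has raidz vdevs
zpool add tank special mirror nvd2 nvd3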

iSER I'm not familiar with, and RoCE I at least know of vaguely. I'll have to look into them.
The NIC is a key player in supporting those two, correct?

RoCE is RDMA over the network, iirc?

Do you have any link to a good read to what I should know, if I go down that path?

Alternatively, if not, what's the band-aid on the CPU side? Higher clock speed?