TrueNAS new build hardware help - motherboard/CPU specifically

Jun 2, 2021
It should. I have an X10DRH-iT (no SAS), and BIOS 3.4a has IIO/IOU bifurcation settings. I haven't played around with it yet, so I'm not sure if all slots can simultaneously bifurcate to x4, but I'd assume most slots can.
Awesome, thank you!
I'm going to look at CPUs, but I think this is the board I'm going to get.

Going to try to find two CPUs that
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
@Sean Ho, do you know if bifurcation is supported on the X10DRH-CT? If so, that may very well be the board I order.

Basically ready to start moving forward on this!
My 2 cents: the X10DRH-CT is one specific board in the SM dual-CPU X10 ATX-based form factor.

The -CT has hardware RAID (LSI 3108) which can't be flashed to IT mode (AFAIK). That seems incongruous with your stated and discussed OS platforms. Since you are bringing your own SAS card, and I suspect a 10GbE card, maybe looking at some other boards that don't have unnecessary hardware (which will also add to the power load of your system) might be in order?

Some of the PCIe lanes would be allocated to the onboard RAID, and whether those are mixed in with lanes on the slots or NOT - IDK without looking in the manual.

@Sean Ho's specific board comes to mind since it is basically stripped of superfluous stuff.

Also the X10DRi boards.

SM's site is pretty good at filtering, so you can pare down the results and find what you need under Building Blocks -> Server Boards.
 
Jun 2, 2021
Hmm ok, then I shall look at the variants (although the X10DRH-iT looks good, going to look at the block diagram).

Edit: reading the manual for the X10DRH-iT, it looks like the x16 PCIe slot is in fact electrically x16 as well? If so, I'm definitely buying this and trying bifurcation from x16 to x4/x4/x4/x4. This would suit my NVMe plan well.
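
Rough back-of-the-envelope on why the four-way split works for my NVMe plan, assuming the slot really does bifurcate: PCIe 3.0 is roughly 1 GB/s per lane, so each x4 link is ~4 GB/s per M.2 drive - more than any single drive I'd put in there will sustain. With bifurcation I can use a cheap passive quad-M.2 carrier and give each drive its own dedicated x4 link, no PLX switch card needed.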

CPUs: I really feel like 8c/16t is just overkill; am I in the wrong mindset here? Seems like 6c/12t would be fine.
 
Last edited:

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
seanho.com
Yes, the x16 is electrically x16. I see the bifurcation option in BIOS setup, but again I haven't actually tried it yet, though I have an ASUS Hyper 4 M.2 card in another node.

CPUs (and RAM) are the easiest components to upgrade, so if you like you can get 6c v3 chips for now and upgrade later. Or even run single processor to start with, forgoing a couple of the PCIe slots.
 
Jun 2, 2021
Yes, the x16 is electrically x16. I see the bifurcation option in BIOS setup, but again I haven't actually tried it yet, though I have an ASUS Hyper 4 M.2 card in another node.
That's fine; I'm hopeful that it'll break out into 4x x4, though. Either way, I think the X10DRH-iT is the board I'm purchasing.

CPUs (and RAM) are the easiest components to upgrade, so if you like you can get 6c v3 chips for now and upgrade later. Or even run single processor to start with, forgoing a couple of the PCIe slots.
Fair enough. For RAM, I'm starting with 4x 32GB sticks, hopefully getting more in the next 3 months.
Turns out 64GB RDIMMs are very expensive, and 32GB RDIMMs are less than half the cost...

I'll be getting 2x CPUs so that I can test all the RAM slots and make sure they're working.
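
Quick math on why I'm not sweating the 32GB sticks: if I'm reading the board specs right and it has 8 DIMM slots per socket, that's 16 x 32GB = 512GB fully populated - far more than this build will ever need - and populating both banks needs both CPUs installed anyway, which is another reason to start with two.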
 
  • Like
Reactions: itronin
Jun 2, 2021
My 2 cents: the X10DRH-CT is one specific board in the SM dual-CPU X10 ATX-based form factor.

The -CT has hardware RAID (LSI 3108) which can't be flashed to IT mode (AFAIK). That seems incongruous with your stated and discussed OS platforms. Since you are bringing your own SAS card, and I suspect a 10GbE card, maybe looking at some other boards that don't have unnecessary hardware (which will also add to the power load of your system) might be in order?

Some of the PCIe lanes would be allocated to the onboard RAID, and whether those are mixed in with lanes on the slots or NOT - IDK without looking in the manual.

@Sean Ho's specific board comes to mind since it is basically stripped of superfluous stuff.

Also the X10DRi boards.

SM's site is pretty good at filtering, so you can pare down the results and find what you need under Building Blocks -> Server Boards.
Thank you for catching this; I didn't even think about that.
I'm going to go with the same board that Sean Ho has, X10DRH-iT. Seems like a good fit.
 
  • Like
Reactions: itronin
Jun 2, 2021
Alright, I'll have all the parts for this by this coming Thursday!

EDIT: I listed the wrong CPU. I got 2x Xeon E5-2620 v4, not the E5-2640 v4.
2x Xeon E5-2620 v4
2x Noctua NH-D9DX i4 3U
1x Supermicro X10DRH-iT
4x 32GB Samsung DDR4 RDIMM 2133P

I even came in under my budget and made out, in my opinion, pretty well.
Thank you to everyone for your help.

I'll post up some info after I get everything tested and running.
 
Last edited:
  • Like
Reactions: itronin
Jun 2, 2021
OK, so I got all the parts and realized I made an amazingly bad oversight.

This chassis originally came to me years ago with a dual-CPU motherboard in it.
However, it doesn't appear to have 2x 8-pin CPU power connectors. I have 1x 8-pin and 1x 4-pin.

Is something like this a bad idea? Or should I consider returning the board and getting a single-CPU board for now?

I don't know that I can get a new module that the power supplies slide into; I'm having trouble finding that info (the chassis is a Chenbro RM31616).
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
OK, so I got all the parts and realized I made an amazingly bad oversight.

This chassis originally came to me years ago with a dual-CPU motherboard in it.
However, it doesn't appear to have 2x 8-pin CPU power connectors. I have 1x 8-pin and 1x 4-pin.

Is something like this a bad idea? Or should I consider returning the board and getting a single-CPU board for now?

I don't know that I can get a new module that the power supplies slide into; I'm having trouble finding that info (the chassis is a Chenbro RM31616).
If you have the spare 4-pin Molex, I think you'll be fine. The connector is "old school" but also designed for pretty high loads (unlike splitting SATA connectors).
There are also dual 8-pin power splitters if you can't pull the 4-pin Molex over to where you need it.
Those are not exactly high-TDP CPUs, and IIRC you said you weren't likely to stress the CPU too much.

Personally, I try very hard not to daisy-chain splitters, so my rule of thumb is one splitter per original Molex drop off a cable. If a cable has three Molex drops on it and I'm not too power hungry, I'm okay putting a splitter off each Molex. Sometimes, though, in small systems you find yourself needing more than a few splitters.
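
Rough numbers - double-check your own terminals: a peripheral Molex drop has a single +12V pin, and the common terminals are rated somewhere around 10-11A, so call it ~120W continuous on that one line. An ~85-90W TDP Xeon will pull on the order of 100W at its EPS input under full load, and only the second CPU would be fed through the adapter, so there's headroom as long as the adapter is decent quality and fully seated.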
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
seanho.com
Adapter cable should be fine; just use a multimeter to verify the pinout of the PSU cable. If your PSU has 6+2-pin PCIe power or a spare Molex power, you could also use adapter cables on those (again, double-check with multimeter).
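
Rough procedure, assuming a standard EPS-style layout (four +12V pins opposite four grounds - server power distribution boards aren't always wired that way, hence the check): with the system powered, set the meter to DC volts, black probe on a known ground (black wire on a Molex drop, or bare chassis metal), red probe on each pin of the connector. You should see ~12V on every pin the adapter routes to +12V and ~0V on the ones it routes to ground; if anything is swapped relative to the adapter's wiring, don't plug it in.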
 
Jun 2, 2021
If you have the spare 4-pin Molex, I think you'll be fine. The connector is "old school" but also designed for pretty high loads (unlike splitting SATA connectors).
There are also dual 8-pin power splitters if you can't pull the 4-pin Molex over to where you need it.
Those are not exactly high-TDP CPUs, and IIRC you said you weren't likely to stress the CPU too much.

Personally, I try very hard not to daisy-chain splitters, so my rule of thumb is one splitter per original Molex drop off a cable. If a cable has three Molex drops on it and I'm not too power hungry, I'm okay putting a splitter off each Molex. Sometimes, though, in small systems you find yourself needing more than a few splitters.
Yeah, I have the spare Molex. I agree, no daisy-chaining splitters.
I should be able to get the Molex over to where I need it.

I'm going to look around online for power specs on these CPUs, not only for this but just because I'm curious.
I started labbing with two Xeon X5670s, and those were definitely hungry CPUs. I'm interested to see how efficient this CPU is comparatively.

Adapter cable should be fine; just use a multimeter to verify the pinout of the PSU cable. If your PSU has 6+2-pin PCIe power or a spare Molex power, you could also use adapter cables on those (again, double-check with multimeter).
Looks like it's finally time to learn how to use a multimeter, ha.
No 6+2 PCIe; this one has 4-pin CPU and Molex.
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
I'm going to look around online for power specs on these CPUs, not only for this but just because I'm curious.
I started labbing with two Xeon X5670s, and those were definitely hungry CPUs. I'm interested to see how efficient this CPU is comparatively.
There's max TDP (Intel ARK says 90 W), there's the performance curve, and there's idle. All good to know. Please do report back on that!

Looks like it's finally time to learn how to use a multimeter, ha.
This.

Using a multimeter is a handy thing to know / have. Cheap ones (< $15 USD) are good and provide basic functionality.

I still have an old Radio Shack needle multimeter somewhere!
 
Jun 2, 2021
There's max TDP (Intel ARK says 90 W), there's the performance curve, and there's idle. All good to know. Please do report back on that!



This.

Using a multimeter is a handy thing to know / have. Cheap ones (< $15 USD) are good and provide basic functionality.

I still have an old Radio Shack needle multimeter somewhere!
But TDP is a measure of heat generated, in watts, right?
I always thought using TDP as a comparison to power used was frowned upon.
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
seanho.com
TDP is a poor metric of idle or average power draw, but it can be a useful approximation for max draw. (Of course, transient draw under turbo can very briefly exceed TDP limits.)

[I should qualify this by noting that both Intel and AMD have played shady games with TDP ratings in the past.]
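
Rough rule of thumb, not gospel: on these Xeons, sustained package power under an all-core load tends to land near the TDP figure, so a 90W-TDP chip works out to roughly 90W / ~0.9 VRM efficiency ≈ 100W per socket at the 12V input, and a bit more again at the wall after PSU losses. Idle is a different story - typically a small fraction of that.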
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
Just edited my post... I noted the wrong CPU. It's an E5-2620 v4, not the 2640 v4.
edited
2620 v4 - you will be just fine splitting power from your existing 8-pin CPU power *or* from the Molex, assuming you have a server-grade PSU in the 800W range, *and* even if you load up your chassis with disks.

Total load on your PSU is gonna be the real test. All things being equal, I *think* you will be fine, but I really shouldn't state that as an absolute since there are other variables to consider.
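
If you want to ballpark it (my rough numbers, not measurements): 2 x ~90W for the CPUs flat out, 16 x ~5-10W for spun-up 3.5" drives (more like 25W each for a few seconds at spin-up), plus maybe 50-75W for board, RAM, HBA, NIC and fans. That lands somewhere around 350-450W worst case, which one healthy 750W module should shrug off - and you have three.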

@Sean Ho is absolutely correct: TDP is a poor metric for idle or average. Note my comment said MAX, and yeah, turbo will briefly push past that.

Absolute measurements of your specific configuration are the only way to know 100% for sure. If you are truly concerned, start with one CPU installed, one memory stick, and NO add-in cards - which is a good idea anyway. Make sure everything looks copacetic, then add your other CPU, the additional power to the other CPU power socket, and 1 memory stick for that CPU. Then add a card at a time. Then add your memory to each CPU.

I'm sure you are experienced at building systems, but I'll say this anyway (and no offense intended): double-check your standoffs to make sure you don't have one in there that should not be there.

Perhaps a better way to say this: IMO/IME you aren't doing anything crazy or anything that hasn't been done before by someone else.
 
Last edited:
Jun 2, 2021
TDP is a poor metric of idle or average power draw, but it can be a useful approximation for max draw. (Of course, transient draw under turbo can very briefly exceed TDP limits.)

[I should qualify this by noting that both Intel and AMD have played shady games with TDP ratings in the past.]
The not too distant past, at that.

edited
2620 v4 - you will be just fine splitting power from your existing 8-pin CPU power *or* from the Molex, assuming you have a server-grade PSU in the 800W range, *and* even if you load up your chassis with disks.

Total load on your PSU is gonna be the real test. All things being equal, I *think* you will be fine, but I really shouldn't state that as an absolute since there are other variables to consider.

@Sean Ho is absolutely correct: TDP is a poor metric for idle or average. Note my comment said MAX, and yeah, turbo will briefly push past that.

Absolute measurements of your specific configuration are the only way to know 100% for sure. If you are truly concerned, start with one CPU installed, one memory stick, and NO add-in cards - which is a good idea anyway. Make sure everything looks copacetic, then add your other CPU, the additional power to the other CPU power socket, and 1 memory stick for that CPU. Then add a card at a time. Then add your memory to each CPU.

I'm sure you are experienced at building systems, but I'll say this anyway (and no offense intended): double-check your standoffs to make sure you don't have one in there that should not be there.

Perhaps a better way to say this: IMO/IME you aren't doing anything crazy or anything that hasn't been done before by someone else.
I have experience building PCs, yeah. Haven't really done a full server build until now.
Oh, trust me, I'm ALWAYS triple-checking the standoff positions. Not a nightmare I want to deal with, having one in the wrong place.
No offense taken.

I have 3x... 750W, I believe, in this system.

My biggest concern, honestly, is the adaptor melting or causing a fire. I almost want to go and get a single-CPU board and lower my requirements just because of that.

Edit: or another chassis. Like a nice Supermicro. I could be down with that.
 
Last edited: