TrueNAS new build hardware help - motherboard/cpu specifically


itronin

Well-Known Member
Nov 24, 2018
1,234
794
113
Denver, Colorado
I have 3x... 750w I believe in this system.

My biggest concern, honestly, is the adaptor melting or causing a fire. I almost want to go and get a single CPU board and lower my requirements just because of that.
SATA splitters are known to do that. It *could* happen with a Molex, but those are a different and much older beast, used in the original IBM PC and before. I am actually using a SATA splitter in my recently posted build, but for 120GB boot drives; not much happens on them, so I'm not worried on my end since it's a really low draw on the boot disks.

I put the X10SRL in a previous post... I think it's a fantastic and flexible board. I use it in my VM storage server running TNC: a CSE-216 with hybrid backplane, an E5-2620 v4, 4x 16GB PC4-2400T, a Mellanox dual 40GbE NIC, a 9400-16i, a 9300-8i (only 4 ports used), 13x HUSMM161620x drives (mainline storage), 2x 280GB Optane 900P U.2 (SLOG), and 2x 960GB Optane 905P U.2 (fast storage).
It has a pretty light load: 12 VMs up right now.


Edit: or another chassis. Like a nice supermicro. I could be down with that.
Yeah, I am about a month away from putting another CSE-836 up for sale. I've convinced myself I have too many, so I'm paring down! (from 5! o_O )
 
Jun 2, 2021
48
7
8
SATA splitters are known to do that. It *could* happen with a Molex, but those are a different and much older beast, used in the original IBM PC and before. I am actually using a SATA splitter in my recently posted build, but for 120GB boot drives; not much happens on them, so I'm not worried on my end since it's a really low draw on the boot disks.
Yeah, wasn't that the molded vs. crimped connector issue? The molded ones would overheat and melt?

I put the X10SRL in a previous post... I think it's a fantastic and flexible board. I use it in my VM storage server running TNC: a CSE-216 with hybrid backplane, an E5-2620 v4, 4x 16GB PC4-2400T, a Mellanox dual 40GbE NIC, a 9400-16i, a 9300-8i (only 4 ports used), 13x HUSMM161620x drives (mainline storage), 2x 280GB Optane 900P U.2 (SLOG), and 2x 960GB Optane 905P U.2 (fast storage).
It has a pretty light load: 12 VMs up right now.

I'll go back and look at it. I just try to be careful about stuff that's going into my house.

I sincerely appreciate the knowledge and effort that you and @Sean Ho have provided, even now after my mistake.

Yeah, I am about a month away from putting another CSE-836 up for sale. I've convinced myself I have too many, so I'm paring down! (from 5! o_O )
how do you have a use for 5 of those? damn. I guess I know what I aspire to.

My old boss saw my server rack and made jokes about me running the local internet!
 

itronin

Well-Known Member
Nov 24, 2018
1,234
794
113
Denver, Colorado
how do you have a use for 5 of those? damn. I guess I know what I aspire to.

My old boss saw my server rack and made jokes about me running the local internet!
Just talking about the 836s.

Hmmm, because once upon a time those chassis (Compellent SC030s) cost ~$170 each (sometimes even less, like $100), so it was a very inexpensive chassis. Watch for deals on motherboards; somehow you end up building up a bunch. :p I certainly did not fill all 5 up with drives, just 4, since there was a point during my last storage migration when I had 2x servers with 16TB drives and 2x servers with 8TB drives all going. But that's finished. Down to 3 with drives, soon down to 2 with drives.

Then the rack gets full. Then you go: I need to get rid of some of them.

Sad thing is, I have a bunch of CSE-826s that were ~$80 each sitting empty... but I may have plans for those, so we'll see.

Sometimes part of the hobby is as much building, tinkering, and testing, i.e. playing around with this stuff, as putting it all to work.

All that said, I will keep three 836s: one is primary storage, one is onsite backup storage, and one is a general-purpose VM box. Plus the VM storage server, plus some other odds and ends I have. That works as long as I have a house to live in. If I move to an apartment, even temporarily, then it will all change, but I'm trying to get ready for that too with the Green Hornet build.
 

Sean Ho

seanho.com
Nov 19, 2019
768
352
63
Vancouver, BC
seanho.com
The 15-pin SATA power connector design is not inherently dangerous; the problem is molded connectors. Cheap cables, whether SATA, Molex, or even fans, sometimes embed the connections in a molded block of plastic, where you can't tell if the connection might be poor and hence high-resistance, or even have arcing to adjacent wires.

Many other SATA power cables or splitters use crimped ends, or IDC (insulation displacement connectors, two teeth pierce either side of the insulation and make contact with the conductor), and those are just fine.
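The danger of a high-resistance joint can be sketched with simple I²R dissipation. The resistance and current figures below are illustrative assumptions, not measurements of any particular connector:

```python
# Power dissipated inside a connector contact: P = I^2 * R.
# A good crimp is on the order of milliohms; a degraded or poorly molded
# joint can be far higher. Values below are illustrative assumptions.
def joint_watts(current_a, resistance_ohm):
    """Heat dissipated in the contact itself, in watts."""
    return current_a ** 2 * resistance_ohm

good_crimp = joint_watts(4.0, 0.005)  # 4 A through 5 mOhm  -> 0.08 W
bad_joint  = joint_watts(4.0, 0.25)   # 4 A through 250 mOhm -> 4.0 W
print(good_crimp, bad_joint)
```

A few watts concentrated inside a small block of plastic with no airflow is plenty to soften or melt it, which is why the same current that is harmless through a good crimp can be a fire risk through a bad molded joint.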
 
Jun 2, 2021
48
7
8
Yes, the x16 is electrically x16. I see the bifurcation option in bios setup, but again I haven't actually tried it yet, though I have an ASUS hyper 4 m.2 card in another node.

CPUs (and RAM) are the easiest components to upgrade, so if you like you can get 6c v3 chips for now and upgrade later. Or even run single processor to start with, forgoing a couple of the PCIe slots.
So I've gotten around to putting this together (got myself 256GB of RAM to go with it!), but I'm not seeing PCIe bifurcation options, and I'm on BIOS 3.4a. Where are you seeing them?

Edit: never mind, of course I find it right after I post..
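One quick sanity check once bifurcation is set: with an x16 slot split to x4x4x4x4 and a four-slot M.2 carrier card populated, each NVMe drive should enumerate as its own PCIe device in `lspci`. The sample output below is illustrative, not captured from this board:

```shell
# Sketch: count NVMe controllers visible to the OS after enabling
# x4x4x4x4 bifurcation. With bifurcation working, all four drives on a
# four-slot M.2 carrier appear individually. On a live system you would
# pipe real output: lspci | grep -c 'Non-Volatile memory controller'
lspci_out="01:00.0 Non-Volatile memory controller: (drive 1)
02:00.0 Non-Volatile memory controller: (drive 2)
03:00.0 Non-Volatile memory controller: (drive 3)
04:00.0 Non-Volatile memory controller: (drive 4)"

count=$(printf '%s\n' "$lspci_out" | grep -c 'Non-Volatile memory controller')
echo "NVMe controllers visible: $count"  # expect 4; 1 means bifurcation is off
```

If only the drive in the first M.2 slot shows up, the slot is still running as a single x16 link and the BIOS setting didn't take.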
 