need motherboard recommendation ...

BLinux

cat lover server enthusiast
Jul 7, 2016
artofserver.com
It's possible that your card has damaged the slot. I remember reading a similar thread a long time ago: some cards were proprietary to certain boards and required taping, and the board's slot ended up damaged.
No, all the PCI-E slots seem to be working now. Not sure why they started working, but they are. My problem was that sas2flsh in FreeDOS doesn't work with the X9 systems. I wish I had known that and avoided this entire path...

I've now ripped out the X9SCM-F + E3-1220L combo. It was just too much hassle for building a bulk firmware flashing machine. I have a partially defective X8DT6-F sitting on the shelf and decided to use that. I disabled the onboard SAS controller via a jumper setting (to avoid conflicts with the PCI-E HBA cards I want to flash) and have only a single CPU in it (E5620). I just ordered an L5630 for $6, which should shave off another 30W or so. It currently idles around 100W, so that should bring it down to roughly 70W. Not as good as the 35-40W of the X9SCM-F + E3-1220L combo, but the following benefits outweigh the power savings:

1. All the PCI-E slots work, regardless of the number of CPUs installed or how many PCI-E lanes the CPU can provide. I can now flash 5 cards at once, which is even better than the X9SCM-F. So: uncomplicated, working PCI-E slots, and more of them.
2. DOS programs like sas2flsh.exe actually run without issues, so I don't have to resort to the UEFI shell. This is a convenience for me since I already have batch files that do the firmware flashing in bulk; I would have to figure out how to write UEFI scripts if I had to use the UEFI shell.
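For anyone building something similar, here's a minimal sketch of what such a FreeDOS batch file can look like. The firmware/BIOS file names and the controller indices are placeholders, not the exact files from this thread; run `sas2flsh -listall` first to see how your own cards enumerate:

```bat
@echo off
REM List every LSI SAS2 controller sas2flsh can see, with its index.
sas2flsh -listall

REM Flash IT-mode firmware (-f) and boot ROM (-b) on five cards by index (-c).
REM 2118IT.BIN and MPTSAS2.ROM are placeholder file names for your card model.
FOR %%C IN (0 1 2 3 4) DO sas2flsh -o -c %%C -f 2118IT.BIN -b MPTSAS2.ROM
```

The `-o` flag puts sas2flsh in advanced mode; skip the `-b` boot ROM entirely if you don't need the cards to be bootable.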

So, for anyone who finds this thread and is looking to build a machine for bulk firmware flashing, I'd say go for an X8 platform. The number of available PCI-E slots is the critical factor here, and having that tied to CPU features just complicates things too much. It's definitely not as power efficient, but the overhead of running the machine is more than compensated for by the abundant number of working PCI-E slots with a single CPU.
 

nthu9280

Well-Known Member
Feb 3, 2016
San Antonio, TX
I've read in other threads here that the idle power consumption of the L56xx is not lower than that of the E56xx or, for that matter, the X56xx. I think the power guzzler is the chipset on the LGA1366 boards.

I have a space heater, aka LGA775 (Dell T3400), that I only turn on for these tasks, and mine is a pure homelab.

Edit - took out my comments on EFI after re-reading your last post. :) Guess reading is not my forte.

 

BLinux

cat lover server enthusiast
I've read in other threads here that the idle power consumption of the L56xx is not lower than that of the E56xx or, for that matter, the X56xx. I think the power guzzler is the chipset on the LGA1366 boards.
I've heard similar things too, but I want to try it for myself and verify. I've heard other claims that didn't turn out to be true, or where the difference wasn't as great as claimed, so I like to verify things myself. I'd think a TDP difference of 40W vs. 80W should mean something, though... we'll see.
I have a space heater, aka LGA775 (Dell T3400), that I only turn on for these tasks, and mine is a pure homelab.
Yeah, I used to use an old Xeon 54xx server to do firmware flashing, but this project is meant to do it in bulk, for several hours at a time, which is why I was looking for something more energy efficient. Alas, the complications of the PCI-E bus with the CPU and chipset just made the X9 options unfavorable; I need very little CPU but a lot of PCI-E lanes.

I just placed an order today for an X8DTH-iF based system with 7 PCI-E slots in an 836 chassis for $110 shipped. Between what I have now and this system when it arrives, I should be able to flash 12 cards per boot cycle. Ironically, the $110 for this entire X8 system with chassis is less than what I spent putting together the X9SCM-F + E3-1220L without a chassis!
 

BLinux

cat lover server enthusiast
I've read in other threads here that the idle power consumption of the L56xx is not lower than that of the E56xx or, for that matter, the X56xx. I think the power guzzler is the chipset on the LGA1366 boards.
Just wanted to follow up with a few findings on power consumption... the results are confusing...

So, after I had success with the defective X8DT6-F board as a firmware flashing machine with 5 fully working PCI-E slots, I happened to come across an eBay deal for a Dell/Compellent box: a Supermicro 836 chassis with an X8DTH-iF system in it for $110, which has 7 PCI-E slots. I also received the L5630 and had a little time to mess with this yesterday. The results are not what I expected, but in the end, I've got a machine that idles at 73W (without any PCI-E cards) and can flash firmware on 7 cards per boot cycle.

Idle power consumption results:
1) X8DT6-F + single E5620 (manually clocked at 1.6GHz) + onboard SAS disabled via jumper. Idle power = 99W

2) X8DT6-F + single L5630 (manually clocked at 1.6GHz) + onboard SAS disabled via jumper. Idle power = 99W
Comment: like I said, not what I expected. Swapping the CPU made zero difference, but I have a feeling there's something else going on here.

3) X8DTH-iF + single E5540. Idle power = 82W
Comment: this doesn't make any sense to me; an older Xeon E55xx vs. the E56xx above, dual IOH chips feeding all the PCI-E slots, and this thing consumed less power than the X8DT6-F???

4) X8DTH-iF + single L5630. Idle power = 73W
Comment: OK, so going from the E5540 to the L5630 made a 9W difference here, unlike on the X8DT6-F. It's hard to say where the difference came from: the move from a 45nm CPU to 32nm? From E55xx to L56xx? The lower clock speed of the L5630 vs. the E5540? I'm just glad I was able to reach the 73W range.

Also, I think there's something wrong with that X8DT6-F... and I don't mean by design; I think that specific board has something going on. I had problems with it previously with the DIMM slots on CPU2. I can't explain why it consumes more energy than the X8DTH-iF...

So, in the end, I more or less reached my goal of being in the 70W range, but it was not so much the CPU (which knocked off 9W) as the platform change from the X8DT6-F to the X8DTH-iF (which knocked off 17W).
 

Evan

Well-Known Member
Jan 6, 2016
Sorry to say, but the L CPUs normally draw exactly zero fewer watts at idle. I know it's 56xx vs. 55xx, but in my experience those two platforms actually idle very similarly (at least on dual-CPU HP DL380 servers).

They only limit power, and the subsequent heat produced, at load; and maybe the generation difference matters less at idle as well.
 

BLinux

cat lover server enthusiast
Sorry to say, but the L CPUs normally draw exactly zero fewer watts at idle. I know it's 56xx vs. 55xx, but in my experience those two platforms actually idle very similarly (at least on dual-CPU HP DL380 servers).

They only limit power, and the subsequent heat produced, at load; and maybe the generation difference matters less at idle as well.
I don't know about that... comparing results #3 and #4 above, going from an E55xx CPU to an L56xx CPU saved me 9W.

What bothers me more is: how am I getting a 17W reduction on the board with an extra IOH?
 

Evan

Well-Known Member
Jan 6, 2016
What are you using to measure? Are you sure it's always accurate?
Logic says the extra IOH should consume more, but... logic is not always logical.
 

sfbayzfs

Active Member
May 6, 2015
SF Bay area
I believe on the X8DT6-F disabling the onboard controller does not stop it from getting power, and that could explain the higher draw.

I use an X8DTE for my flashing - FreeDOS works great on it, 6 slots if both CPUs are populated, and it boots quickly without the IPMI (although you could jumper disable the IPMI on an -F board for the same result)

Overall it's a good solution, although I'm not sure why you are concerned about power draw unless you are flashing cards for several hours a day.
 

BLinux

cat lover server enthusiast
I believe on the X8DT6-F disabling the onboard controller does not stop it from getting power, and that could explain the higher draw.
I don't know for sure whether the jumper cuts power to the LSI SAS2008 chip, but I was curious, so I touched the heatsink on the SAS2008 with the disable jumper set and it was stone cold, versus getting warm with the enable jumper on.
 
Oct 3, 2017
BLinux, this is a very interesting thread you've started. I'm planning to use the X9SCM-F board myself to set up a VMware lab, but I don't have all the hardware together yet. I use this board with an Intel Xeon E3-1265L.
I've heard that this board somehow/sometimes acts strangely with PCIe 2.0/3.0 cards. I have not experienced this myself yet, but this thread sure confirms the reports.
I'm planning to use a PCIe x8 10Gb network card and a PCIe x4 (ASUS) M.2 card (so I can use M.2 sticks as well). I was unable to find the exact number of PCIe lanes supported by my processor; Ark tells me some PCIe configurations but not the exact lane count. The BIOS (v2.2) doesn't give me any more detail about the PCIe cards other than "the slot is used".
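For what it's worth, Ark usually lists the supported lane configurations (x8/x8, x8/x4/x4, etc.) rather than a single total. One way to see what a slot actually negotiated is to boot any Linux live USB and read `lspci -vv`: `LnkCap` is the card's maximum link and `LnkSta` is what the slot actually trained at. A small sketch (the `lnk_width` helper name is made up for this example):

```shell
# On a Linux live USB, this shows the capability vs. negotiated link per device:
#   lspci -vv | grep -E 'LnkCap:|LnkSta:'
# LnkCap = maximum the device supports; LnkSta = link actually trained.

# Tiny helper (hypothetical name) to pull the width out of such a line:
lnk_width() {
    grep -oE 'Width x[0-9]+' | head -n 1
}

# Example with a captured LnkSta line in the format lspci -vv prints:
printf 'LnkSta:\tSpeed 5GT/s, Width x4, TrErr- Train-\n' | lnk_width
# → Width x4
```

If `LnkSta` reports a narrower width than `LnkCap`, the slot (or the BIOS slot bifurcation) is the limiting factor rather than the card.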
 

sfbayzfs

Active Member
I forgot to comment on the original issue of only one slot working with an Ivy Bridge CPU in an X9SCM-F. I run several X9SCM-F boards with Ivy Bridge (v2) CPUs and 3-4 of the slots in use, and have not had an issue yet. I usually put my PCIe 3.0 HBAs in the first two x8 slots, and slower cards like 10G NICs and USB 3.0 that don't need full bandwidth in the third and fourth slots, but I can definitely run HBAs in both of the first two slots just fine.

As for the X8DT6-F high power draw, but cold heatsink with the HBA disabled via jumper, I will have to test some boards myself when I get a chance - I have X8DTE, X8DTE-F and X8DT6-F boards to check power draw on, as well as an X9 DP board.