X10QBI and v3/v4 CPUs (e.g. Supermicro SYS-4048B-TRFT)


nviets

New Member
May 23, 2023
Well, the first boot was exciting - a loud, persistent beep with a bunch of codes. Does anyone know where I can find the error code sheet for LED_PORT80? From the manual, I understand a loud beep is an overheat warning, but I found that odd since it begins the second I turn on the machine. LED_PORT80 cycles roughly through the following after I turn it on:

FF >> 06 >> 15 (30 seconds) >> 60 >> FF >> 15 >> 19 >> 60 >> 02 >> 58 >> A0 >> 46 >> 5A...

There were some other codes that cycled through for a split second, but I couldn't catch them.
Never mind, these are standard codes. Time to troubleshoot!
 

nviets

New Member
May 23, 2023
Well, here's my take on this build. I found a deal on a 4048B-TR4FT with the MEM2 risers from TheServerStore and loaded it with E7-8890 v4s. There's 256GB of DDR4 (32x8GB), and I rehomed an Nvidia 3090 from another machine. Sadly, the GPU has an enormous backplate that won't quite fit, so it's going to live in an external enclosure thanks to a few PCIe extension cables - not pretty, but it does the job.

I wanted to keep it quiet, but I really didn't want to go with water cooling since I plan to run it remotely much of the time. Dynatron makes the R27, which just barely fits in this machine. After a bit of cajoling, the final fit of those towers is satisfying! All of the stock fans are swapped with Noctuas, and I replaced the stock PSUs with PWS-1K28P-SQs. They are indeed silent - thanks @nickblack! The entire machine is dead quiet, and I was able to run a -j96 build without overheating.

I'll post a bit more once I get to the OS and other tuning. At present, here are some questions I have about the setup:
  • I tried configuring two SSDs in RAID0 in the BIOS, but I'm only getting 500 MB/s when I was expecting them to double up to 1 GB/s. Am I doing something silly? (See the quick checks after this list.)
  • At idle and low load the CPUs run at 3.4GHz as expected, but under full load they get cut back to 2.8GHz. The temps are only 70C, so I don't think it's thermal throttling. Is the lower core speed expected? Can I override it?
  • The PWS-1K28P-SQs each put out 1000W. I have them plugged into smart outlets, and each draws about 500W under load, for a total of 1000W that settles down to about 800W on a longer build. That shows the PSUs are balancing the power draw, which is great, but I was expecting it to consume even more. The PSUs and cores seem to be capable of more. Any ideas?
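For the first two bullets, these are the kinds of checks I'm running (the device names are assumptions - the SSDs may not be sda/sdb on your box, and the BIOS RAID volume often shows up as md126):

Code:
# raw sequential read from each SSD, then from the RAID volume itself
sudo hdparm -t /dev/sda
sudo hdparm -t /dev/sdb
sudo hdparm -t /dev/md126
# watch the actual core clocks while a -j96 build is running
watch -n1 'grep "cpu MHz" /proc/cpuinfo | sort -n | uniq -c'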
Thanks to everyone in this thread for all the tips! I was hunting for a quad-socket LGA2011 motherboard, and I had seen a few of these boards on eBay. This thread helped me make the call.
 

Attachments

SnowDigger

New Member
Dec 19, 2020
USA
Does anyone know what the correct CPU socket cover/cap is for the X10QBI? I've tried a bunch of 2011-0 to 2011-3 covers, but none seem to fit.
 

chrgrose

Active Member
Jul 18, 2018
nviets said: Well, here's my take on this build. ...
Nice quiet build! I really wish I could do the same with an R930. Did you have to do any soldering to install any of those fans? Also, what is your idle power draw?
 

farid

New Member
Mar 20, 2020
nviets said: Well, here's my take on this build. ...
Nice posting, a lot of info too. Nice to know there are other heatsinks that will fit. Are these Dynatrons much quieter than the original SM heatsinks? Did you replace the external fans (the ones at the back of the case, by the power supplies/PCIe) too?

I also want to know about the GPU. How did you connect the RTX 3090 to the PSU? Are the power cables existing cables? If so, where are they, and how do I get to them? Finally, how many 3090s were you able to fit in the case?
 

nviets

New Member
May 23, 2023
Hey @farid and @chrgrose, sorry for the late reply. No, I didn't need to do any soldering. The Noctua fans match up well in width, but they are thinner, so they sit loosely in the original fan mounts. The idle power draw is about 500W, at full CPU load it draws about 1000W, and it tops out at 1400W when I spin up the GPU for modeling or graphics on top of that. Conveniently, the max load is just under the limit of my breakers, so I'm able to run the machine on regular wall power. I plug the entire thing into a smart plug so I can monitor the power draw in real time and power on/off remotely.
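If your BMC exposes DCMI power readings (most Supermicro boards with PMBus PSUs do, though I haven't verified every QBI SKU), you can cross-check the smart plug straight from ipmitool:

Code:
# instantaneous and rolling-average system power as reported by the BMC
ipmitool dcmi power reading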

Yes, @farid, the Dynatrons are much beefier than the original SM heatsinks. They fill up the entire vertical space and allow for mounting 80mm fans. The rear-mount fans are also replaced with Noctuas. I used this configuration so I could swap the very noisy Dynatron fans for Noctua ones. One of my goals with the build was to avoid water cooling.

The GPU power cord was the hardest part of the build. The PSU has lots of spare 8-pin connectors, but they are positioned deep under the motherboard. I had to remove the entire riser, move a ton of cables out of the way, and then use chopsticks with my phone camera to nudge the cable extension into a port. You can buy something similar to this.

The consumer-grade RTX 3090 DOES NOT fit inside the machine due to its enormous backplate heatsink. I didn't want to hack at my GPU, so I just bought a PCIe extension cable and a cheap mining-rig mount to place on top of the server. The server has four full-size PCIe slots, which you could fill up if you use the extension cables. If you're looking for a quiet build, the external solution is nice because you can improve the airflow inside the server and let the GPUs cool off in their own enclosure.
 

nviets

New Member
May 23, 2023
Just realized I never posted a picture of the final form with the external GPU. Here it is :) I made a custom glass cover to pass the cables through without cutting the original case. The green lighting is stock, and I really enjoy the glow when it's on.

Btw, @farid and @chrgrose, this build puts out an enormous amount of heat. I live in a high-rise, and the server will easily raise the room temperature by 10-15°F. I have to open windows, set the AC to max, or just leave the room altogether when it's running. The heat output is something I did not anticipate when I planned the server.
 

Attachments

aij

Active Member
May 7, 2017
I finally got one of these X10QBI systems! :) I almost got one before the pandemic, and then prices went crazy, but better late than never.

I put a single DIMM in it to get it to boot, because I heard they are picky about memory. It did boot, but the fans seem to be running way too fast, and I'm not sure how to tell why.

If I'm reading this correctly, the midplane fans are running over 9,000 RPM and the 3 exhaust fans are over 10,000 RPM. It sure is blasting a lot of cold air out the back!

Is it because it didn't like the DIMM I used, or do I need to populate more memory boards since it has 4 CPUs? It has no disks and no PCIe cards other than the special I/O module.

Code:
# ipmitool sensor
CPU1 Temp        | 21.000     | degrees C  | ok    | 0.000     | 0.000     | 0.000     | 82.000    | 87.000    | 87.000   
CPU2 Temp        | 21.000     | degrees C  | ok    | 0.000     | 0.000     | 0.000     | 82.000    | 87.000    | 87.000   
CPU3 Temp        | 23.000     | degrees C  | ok    | 0.000     | 0.000     | 0.000     | 82.000    | 87.000    | 87.000   
CPU4 Temp        | 24.000     | degrees C  | ok    | 0.000     | 0.000     | 0.000     | 82.000    | 87.000    | 87.000   
System Temp      | 19.000     | degrees C  | ok    | -10.000   | -5.000    | 0.000     | 75.000    | 77.000    | 79.000   
Peripheral Temp  | 21.000     | degrees C  | ok    | -10.000   | -5.000    | 0.000     | 75.000    | 77.000    | 79.000   
PCH Temp         | 43.000     | degrees C  | ok    | 0.000     | 5.000     | 16.000    | 90.000    | 95.000    | 100.000   
MB_10G Temp      | 47.000     | degrees C  | ok    | -5.000    | 0.000     | 5.000     | 90.000    | 95.000    | 100.000   
P1M1 DIMMAB Tmp  | 21.000     | degrees C  | ok    | 1.000     | 2.000     | 4.000     | 80.000    | 85.000    | 90.000   
P1M1 DIMMCD Tmp  | na         |            | na    | na        | na        | na        | na        | na        | na       
P1M2 DIMMAB Tmp  | na         |            | na    | na        | na        | na        | na        | na        | na       
P1M2 DIMMCD Tmp  | na         |            | na    | na        | na        | na        | na        | na        | na       
P2M1 DIMMAB Tmp  | na         |            | na    | na        | na        | na        | na        | na        | na       
P2M1 DIMMCD Tmp  | na         |            | na    | na        | na        | na        | na        | na        | na       
P2M2 DIMMAB Tmp  | na         |            | na    | na        | na        | na        | na        | na        | na       
P2M2 DIMMCD Tmp  | na         |            | na    | na        | na        | na        | na        | na        | na       
P3M1 DIMMAB Tmp  | na         |            | na    | na        | na        | na        | na        | na        | na       
P3M1 DIMMCD Tmp  | na         |            | na    | na        | na        | na        | na        | na        | na       
P3M2 DIMMAB Tmp  | na         |            | na    | na        | na        | na        | na        | na        | na       
P3M2 DIMMCD Tmp  | na         |            | na    | na        | na        | na        | na        | na        | na       
P4M1 DIMMAB Tmp  | na         |            | na    | na        | na        | na        | na        | na        | na       
P4M1 DIMMCD Tmp  | na         |            | na    | na        | na        | na        | na        | na        | na       
P4M2 DIMMAB Tmp  | na         |            | na    | na        | na        | na        | na        | na        | na       
P4M2 DIMMCD Tmp  | na         |            | na    | na        | na        | na        | na        | na        | na       
FAN1             | na         |            | na    | na        | na        | na        | na        | na        | na       
FAN2             | 9100.000   | RPM        | ok    | 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000
FAN3             | 9100.000   | RPM        | ok    | 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000
FAN4             | 9000.000   | RPM        | ok    | 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000
FAN5             | 9100.000   | RPM        | ok    | 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000
FAN6             | na         |            | na    | na        | na        | na        | na        | na        | na       
FAN7             | 10800.000  | RPM        | ok    | 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000
FAN8             | 10300.000  | RPM        | ok    | 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000
FAN9             | 10700.000  | RPM        | ok    | 300.000   | 500.000   | 700.000   | 25300.000 | 25400.000 | 25500.000
FAN10            | na         |            | na    | na        | na        | na        | na        | na        | na       
Vcpu1            | 1.776      | Volts      | ok    | 0.544     | 0.576     | 0.608     | 1.952     | 2.000     | 2.096     
Vcpu2            | 1.792      | Volts      | ok    | 0.544     | 0.576     | 0.608     | 1.952     | 2.000     | 2.096     
Vcpu3            | 1.792      | Volts      | ok    | 0.544     | 0.576     | 0.608     | 1.952     | 2.000     | 2.096     
Vcpu4            | 1.792      | Volts      | ok    | 0.544     | 0.576     | 0.608     | 1.952     | 2.000     | 2.096     
VMSE_CPU12       | 1.350      | Volts      | ok    | 1.174     | 1.190     | 1.238     | 1.494     | 1.526     | 1.542     
VMSE_CPU34       | 1.350      | Volts      | ok    | 1.174     | 1.190     | 1.238     | 1.494     | 1.526     | 1.542     
1.5VSSB          | 1.500      | Volts      | ok    | 1.324     | 1.340     | 1.388     | 1.644     | 1.660     | 1.692     
VTT              | 0.936      | Volts      | ok    | 0.824     | 0.840     | 0.888     | 1.144     | 1.160     | 1.192     
3.3V             | 3.347      | Volts      | ok    | 2.771     | 2.819     | 2.963     | 3.539     | 3.635     | 3.683     
3.3VSB           | 3.299      | Volts      | ok    | 2.782     | 2.829     | 2.970     | 3.534     | 3.628     | 3.675     
12V              | 12.000     | Volts      | ok    | 10.198    | 10.304    | 10.728    | 12.954    | 13.272    | 13.378   
VBAT             | 3.018      | Volts      | ok    | 2.634     | 2.730     | 2.826     | 3.834     | 3.930     | 4.026     
Chassis Intru    | 0x1        | discrete   | 0x0100| na        | na        | na        | na        | na        | na       
PS1 Status       | 0x1        | discrete   | 0x0100| na        | na        | na        | na        | na        | na       
PS2 Status       | 0x1        | discrete   | 0x0100| na        | na        | na        | na        | na        | na       
PS3 Status       | 0x1        | discrete   | 0x0100| na        | na        | na        | na        | na        | na       
PS4 Status       | 0x1        | discrete   | 0x0100| na        | na        | na        | na        | na        | na
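Next I'll check the BMC fan mode with the usual Supermicro X10 raw commands (assuming the QBI BMC uses the same 0x30 0x45 interface as the other X10 boards - worth double-checking):

Code:
# get the current fan mode: 0=standard, 1=full, 2=optimal, 4=heavy IO
ipmitool raw 0x30 0x45 0x00
# set it to optimal as a test
ipmitool raw 0x30 0x45 0x01 0x02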
 

zorg33

New Member
Oct 19, 2022
Hi,
I ran into a weird problem recently. I've been running X10QBI servers for some time now, but haven't seen anything like this before.
I had Samsung/Hynix 10600R DDR3 in my servers, which was working fine.
Then I got access to some Micron 14900R DDR3 16GB 2Rx4 modules. They pass the memory test fine, but the server always freezes during OS startup. The symptoms are exactly like HW error interrupts: the Win10 loading circle stutters more and more until it freezes.
In the BIOS there is no issue at all. RAM configuration does not have any effect; I tried every setting possible.
What could this be? And if Micron is incompatible with this board, then how does it pass the memory test?
I have modules only in the blue slots, but they only run at 1333MHz, while they should run at 1600. I tried 2 and 4 modules per board as well.
CPUs are E7-8880 v4 QS.

Update: Win10 boots up fine with only 1 module per CPU.
Currently trying 1:1 mode... still freezes with 2 modules per CPU (channels A & C).
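While it freezes, I'm also watching the BMC event log in case ECC errors are being recorded (plain ipmitool, nothing exotic):

Code:
# dump the system event log and filter for memory/ECC entries
ipmitool sel elist
ipmitool sel elist | grep -iE 'ecc|memory'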
 

NablaSquaredG

Bringing 100G switches to homelabs
Aug 17, 2020
zorg33 said: I have modules only in the blue slots, but they only run at 1333MHz, while they should run at 1600. ...
RDIMMs cannot ever run faster than 1333 on the Rev 1.01 DDR3 memboards, as it's a 3 SPC (slots per channel) setup. Only LRDIMMs can run at 1600.
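You can confirm what speed the DIMMs actually negotiated from the OS side, e.g.:

Code:
# rated vs. configured speed per DIMM (DDR3 platforms report "Configured Clock Speed")
sudo dmidecode -t memory | grep -E 'Speed|Part Number'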
 

farid

New Member
Mar 20, 2020
nviets said: Hey @farid and @chrgrose, sorry for the late reply. ...
Thank you. I haven't had time to look at the GPU cables yet. I plan on looking at them next week; your tips help.
 

zorg

New Member
Oct 27, 2023
farid said: Thank you. I haven't had time to look at the GPU cables yet. ...
I used GPUs in these servers with extra PSUs that have a lot of GPU power connectors.
Another mod I used to do is soldering GPU power cables onto the power distribution PCB directly.
 

farid

New Member
Mar 20, 2020
zorg said: I used GPUs in these servers with extra PSUs that have a lot of GPU power connectors. ...
How many GPUs can I connect with only the on-board PSUs/PDB? I can't find any info on these after a couple of hours of searching. Can you point me to docs if you have any?
 

zorg

New Member
Oct 27, 2023
farid said: How many GPUs can I connect with only the on-board PSUs/PDB? ...
As many as you can shove in. I had 6 GPUs in there, but it depends on the actual GPU design.
What docs? This is totally out of spec :)
You have to add the GPU TDPs together and see if the stock PSUs can handle it.
In redundancy mode it is not capable of more than 1 or 2 GPUs, because the CPUs+RAM themselves consume around 1000-1200W. So if you have a 1600W PSU, there is only headroom for 1 or 2 GPUs.
If you give up redundancy, then 2x1600W = 3200W, so you have ~2000W for GPUs.
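To make the arithmetic concrete: six 300W-class cards come to about 1800W, which just squeezes into that ~2000W non-redundant budget - consistent with the 6 GPUs I had in there.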
 

willo

New Member
Apr 26, 2024
I joined the X10QBI club earlier this year. Figured I'd share my results.
  • I rescued 1.5TB of RAM from e-waste.
  • I bought a server from theserverstore as well. I chose to order directly since it included a RAID controller. The first one arrived with some faceplate damage, so they sent me a second one as a free replacement, with all the parts but the RAID controller. So... spares, yay.
  • I bought a four-pack of E7-8880 v4 CPUs.
  • I slotted some old-but-new SSDs into the drive bays using some 2.5-to-3.5 adapter trays.
At this point the machine started to piss me off. It was a trial, but I finally figured it out. For a while I thought I might have gotten bad CPUs.
  • The box just would not POST, no matter what I did with memory and CPU combinations.
  • The BMC/IPMI card still had the previous owner's authentication creds.
  • It turns out that the only way to reset the IPMI password is to boot an OS and run ipmitool!!!! (See the commands after this list.)
  • I finally bought a v3 CPU, which allowed me to POST the box, boot Linux, and set the password.
  • THEN I was able to upgrade the BIOS via IPMI, and then I was able to POST with the four v4 CPUs. (I got really good at swapping CPUs while I figured this out.)
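For anyone else stuck on the password step, the in-OS reset is just stock ipmitool (user ID 2 is typically ADMIN on Supermicro boards - confirm with the list command first):

Code:
# list users on LAN channel 1 to find the admin account's ID
ipmitool user list 1
# set a new password for user ID 2
ipmitool user set password 2 MyNewPassword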
Phew... Now onto drive issues:
It turns out that the RAID card is still supported, but it's also old enough that it's not in the kernel by default these days. After some pondering, I remembered that I had a spare 2TB NVMe drive that I'd bought for a NUC but wasn't using, so I shopped around, found a $10 PCIe-to-M.2 NVMe adapter on Amazon, and ordered it.
The NVMe drive worked out great (Western Digital Black) AND it provides fantastic IO performance. From there I was able to install Proxmox VE. I was also able to use the RAID SSD array, but I don't use it for the boot/system drive.

This config with the NVMe is fantastic. I have another box of RAM in my bin, and I'm tempted to build another monster so I can have twinsies. U.2 or U.3 would be faster of course, but for budget builds? M.2 is pretty good. I have not tested the multi-NVMe cards, which require PCIe bifurcation. I'm not even sure if the X10QBI can do it. (It probably can.)
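If you want to sanity-check the IO on yours, something like this works (a sketch - assumes the drive is /dev/nvme0n1, and it's a read-only test against the raw device):

Code:
# 30-second 4k random-read test against the raw NVMe device (read-only)
sudo fio --name=randread --filename=/dev/nvme0n1 --rw=randread \
  --bs=4k --iodepth=32 --numjobs=4 --runtime=30 --time_based \
  --direct=1 --ioengine=libaio --group_reporting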
 

willo

New Member
Apr 26, 2024
Performance-wise, I ran PassMark and submitted my results. There are a few in the system now.

Certainly not top of the chart, but four of these things in one box still destroys something like an i9-7900X and even beats the performance of a dual Silver 4514Y. I find that to be pretty nice for the low price - especially if you're able to salvage the RAM!
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
Yep, single-thread on those old Broadwell-EX chips isn't great, but there are just so many threads to work with. How's the power consumption?

A more reasonable HBA like 9300 or 9211 is under $20 nowadays.