X10QBi and v3/v4 CPUs (e.g. Supermicro SYS-4048B-TRFT)


Iaroslav

Member
Aug 23, 2017
Hi there! I'm taking the plunge with this system too, aiming to get a lot of single-node CPU resources for cheap.
I've already ordered a SYS-8048B-TRFT with, I think, MEM1 rev 1.01 boards, and found a fairly priced lot of 32 tested Hynix 16GB HMT42GR7BFR4A-PB modules.
Now I'm a bit confused about the CPUs. As far as I can see, this setup will run with eBay's popular and dirt-cheap options (PassMark scores from cpubenchmark.net):
[Quad CPU] Intel Xeon E7-4890 v2 @ 2.80GHz - 34,472
[Quad CPU] Intel Xeon E7-8895 v2 @ 2.80GHz - 36,831
What is the real difference here?

Now, as I mentioned before, the common MEM1 rev 1.01 board does not officially support v3/v4 CPUs, but I have personally found that they work. Some support matrix documents I received indicate that for the JC1 chip Intel performed only limited electrical and operational tests and did not validate it with the full matrix of supported memory configurations. So YMMV.
I only discovered the apparent v3/v4 compatibility myself because I obtained several systems, some of which had the newer memory boards and were officially supported, so I had v4 CPUs for those and tried them in the unsupported systems.
After all that, I found myself looking for some even faster E7-8890 v3s, still cheap enough and slightly faster in total GHz.
Could you please recommend the best CPU for this system?
 

Micha

New Member
Apr 3, 2020
Hi, I have to ask for advice again. I have now tried two different types of 1.35V RDIMM-1600 16GB modules: 4x Samsung M393B2G70BH0-YK0 and 1x Hynix HMT42GR7AFR4A-PB. The Samsung is not exactly on Supermicro's list (albeit ...QH0-YK0 is), but the Hynix module IS explicitly listed. I'm still running the board with a single Xeon E7-4880 v2 (30x 2.5 GHz), and I tried several of the eight MEM1 rev 1.01 boards I have here in slot P1M1.

For the Samsung memory, I tried 1, 2, or 4 DIMMs in A1, A1/B1, and A1/B1/C1/D1 (all blue sockets). The Hynix memory went into A1 only (I only have a single module of those). As soon as the board gets power, the ON LED on the MEM1 board lights up, but none of the socket LEDs ever do. And when starting up, the POST code goes from FF directly to 15, where it hangs for at least 3 minutes (and likely forever).

I'm beginning to wonder whether something else is fundamentally wrong, or whether I am doing something wrong. Next I'll probably swap the board and the CPU, for both of which I have spares, and maybe try multiple CPUs (two?). Unfortunately I only have the E7-4880 v2 at hand. If anyone here can tell from experience why my test setup might be failing due to some general mistake I'm not aware of, please let me know.

Am I right that, already in standby, the board should indicate recognized DIMM modules by lighting the LEDs for the corresponding sockets, or do I need to power the system on first? And if I understood agentpatience right, all slots of unused MEM1 boards (i.e. everything besides P1M1) can remain empty - that doesn't influence the DIMM init process, right?
 

agentpatience

New Member
Mar 3, 2020
Micha - on a working system the DIMM LEDs don't light up; I don't know why. If you get to error 15, then at least the system is trying to POST and that CPU is likely OK. Populate 2 DIMMs in the first two blue slots and place the memory board in P1M1. Does it POST? Do you know what your BIOS version is? What do you see on the screen when error 15 happens?
 

agentpatience

New Member
Mar 3, 2020
Hi there! I'm taking the plunge with this system too, aiming to get a lot of single-node CPU resources for cheap.
I've already ordered a SYS-8048B-TRFT with, I think, MEM1 rev 1.01 boards, and found a fairly priced lot of 32 tested Hynix 16GB HMT42GR7BFR4A-PB modules.
Now I'm a bit confused about the CPUs. As far as I can see, this setup will run with eBay's popular and dirt-cheap options (PassMark scores from cpubenchmark.net):
[Quad CPU] Intel Xeon E7-4890 v2 @ 2.80GHz - 34,472
[Quad CPU] Intel Xeon E7-8895 v2 @ 2.80GHz - 36,831
What is the real difference here?
The 8895 v2 has higher turbo bins; that's why it scores higher than the 4890 v2.
 

Micha

New Member
Apr 3, 2020
Hi AP, thanks for the info about the LEDs on the MEM board; it's reassuring that they don't light up on other systems either. I'm not sure whether I expressed myself badly last time - I can boot now. And so far all the DDR3 modules I have around are working (see my last post for models), at least with 1, 2, or 4 DIMMs on one MEM board.

However, I'm still puzzled as to why I have to install all 4 of my E7-4880 v2 CPUs for the board to come up. As I said, I confirmed this with two independent X10 boards, two different sets of 4x E7-4880 v2, plenty of different MEM boards (as a side note: has anyone else seen oddities like bad contact when repeatedly inserting a MEM board into the P1M1 slot?) and three different brands of DIMM modules (cf. my last post): with only one or two CPUs the POST hangs at 15 (directly after starting at FF and a short 09, I think). The screen shows the Supermicro ASCII logo with 15 in the lower right corner, which is also shown on the board's LED display. It stays there for at least 5 minutes, then I usually switch off. The BIOS version is 3.2a (08/08/2019).

From other posts, and also from the board's manual ("The X10QBi baseboard supports up to four processors.", p. 2-13), I had assumed that you can run one OR two OR four CPUs. Could this be due to some attribute specific to my CPUs?
 

cibiman

New Member
Apr 22, 2020
Hi, sorry for my bad English. I got an X10QBI barebone system with MEM1 rev 1.01 boards.

Mine only works with 2, 3, or 4 CPUs. To run with 2 CPUs you need to put them in sockets 1 and 3.
Can anyone explain how to run E7 v3 and v4 CPUs with the rev 1.01 memory boards?
Cheers!
 

Persepolis

New Member
Apr 29, 2020
I run a small media production company and have a wild idea: I'm thinking of buying one of these systems to build a 4-way video editing workstation. Do you think it would be easy to add PCIe display cards to make that work? I don't have much knowledge of server systems, but I have heard of people using server hardware to build workstations. I just want to make sure a PCIe GPU would work with it. Can anyone share some insight?
 

Raziel

New Member
Apr 19, 2020
I run a small media production company and have a wild idea: I'm thinking of buying one of these systems to build a 4-way video editing workstation. Do you think it would be easy to add PCIe display cards to make that work? I don't have much knowledge of server systems, but I have heard of people using server hardware to build workstations. I just want to make sure a PCIe GPU would work with it. Can anyone share some insight?
Yes, it should work fine. It might be a little difficult to connect power to the GPU, as you'll probably have to disassemble the entire system to connect the GPU power cable. You will need a CBL-PWEX-0581 cable (or equivalent - it looks very much like a standard ATX connector on the PSU end) for each GPU, and with all 8 memory boards installed you can likely only fit two long dual-slot GPUs.
 

jpk

Member
Nov 6, 2015
I've been struggling not to buy one of these for a few weeks now.
My main concerns have actually been power consumption (and the resulting heat).

I have a 4U dual-socket E5-2680 v2 server right now, and it draws a bit over 600W when mildly CPU-busy, with 24x 3.5" SAS drives + GPU, 40Gb NIC, etc. (I know it uses a bit more if I really peg it, but around 600W is where it often sits). I know @agentpatience said theirs was over 1200W, but I'm curious whether that was fully or only partially utilized. Has anyone else recorded how much power theirs uses, and with which CPUs etc.?

Also, what is the noise level like? I assume the fans all ramp down if the machine is being properly cooled.

Any other good reasons I shouldn't buy one? :-D

Thanks in advance!
 

Persepolis

New Member
Apr 29, 2020
Yes, it should work fine. It might be a little difficult to connect power to the GPU, as you'll probably have to disassemble the entire system to connect the GPU power cable. You will need a CBL-PWEX-0581 cable (or equivalent - it looks very much like a standard ATX connector on the PSU end) for each GPU, and with all 8 memory boards installed you can likely only fit two long dual-slot GPUs.
I think I will probably place an order for one of those second-hand systems to build my dream workstation. There are a few questions I want to be sure about first. I wonder if you know:
1) If I set it up as a workstation, do I need to configure the BIOS differently? I plan to use Windows 10 Pro for Workstations. I've checked the user manual, and the BIOS is quite different from an ordinary PC board's.
2) If I want to install a sound card, any suggestions for Windows 10?
3) Do the 4 PSUs in the system each need their own mains circuit? If I need 4 circuits, that could be an issue. Or can I use a power bar for them? As I am not using it as a server, I guess I don't really need 4 circuits. Just want to confirm.
4) If I want to use an NVMe SSD as the system drive, what would be the appropriate solution?
5) Will the passive CPU coolers pose any problem if I don't maintain a constant temperature like in a server room?

Thank you very much!
 

Raziel

New Member
Apr 19, 2020
I've been struggling not to buy one of these for a few weeks now.
My main concerns have actually been power consumption (and the resulting heat).

I have a 4U dual-socket E5-2680 v2 server right now, and it draws a bit over 600W when mildly CPU-busy, with 24x 3.5" SAS drives + GPU, 40Gb NIC, etc. (I know it uses a bit more if I really peg it, but around 600W is where it often sits). I know @agentpatience said theirs was over 1200W, but I'm curious whether that was fully or only partially utilized. Has anyone else recorded how much power theirs uses, and with which CPUs etc.?

Also, what is the noise level like? I assume the fans all ramp down if the machine is being properly cooled.

Any other good reasons I shouldn't buy one? :-D

Thanks in advance!
Mine has 4x 4880 v2 and 32x 8GB DDR3 DIMMs, and while idling in the BIOS it draws 390W according to the IPMI. I'll check the draw under different loads a little later and post the numbers here.
Noise-wise, it's decently loud even at idle. I can easily hear the hum in the background through my closed headphones. In my case it only sits in my office while I'm setting it up, but if you want to work in the same room as this machine it'll probably be a bit annoying unless you can isolate it somehow.
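For anyone wanting to log the draw over time rather than eyeballing the IPMI page, here's a rough sketch of how one could poll the BMC from another machine - this assumes ipmitool is installed and the BMC supports the standard DCMI power reading command; the hostname and credentials are placeholders, and the exact output wording may differ by firmware.

```python
import re
import subprocess
import time

# Poll the BMC's DCMI power reading every few seconds and print a timestamped value.
# Placeholders: replace bmc-hostname / ADMIN / password with your own BMC details.
CMD = [
    "ipmitool", "-I", "lanplus",
    "-H", "bmc-hostname", "-U", "ADMIN", "-P", "password",
    "dcmi", "power", "reading",
]

while True:
    out = subprocess.run(CMD, capture_output=True, text=True).stdout
    # Typical ipmitool output contains a line like:
    #   Instantaneous power reading:    390 Watts
    m = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
    if m:
        print(f"{time.strftime('%H:%M:%S')}  {m.group(1)} W")
    time.sleep(5)
```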

I think I will probably place an order for one of those second-hand systems to build my dream workstation. There are a few questions I want to be sure about first. I wonder if you know:
1) If I set it up as a workstation, do I need to configure the BIOS differently? I plan to use Windows 10 Pro for Workstations. I've checked the user manual, and the BIOS is quite different from an ordinary PC board's.
2) If I want to install a sound card, any suggestions for Windows 10?
3) Do the 4 PSUs in the system each need their own mains circuit? If I need 4 circuits, that could be an issue. Or can I use a power bar for them? As I am not using it as a server, I guess I don't really need 4 circuits. Just want to confirm.
4) If I want to use an NVMe SSD as the system drive, what would be the appropriate solution?
5) Will the passive CPU coolers pose any problem if I don't maintain a constant temperature like in a server room?

Thank you very much!
1. I don't think you need to change any specific settings to use it as a workstation. You could probably make some of the power settings even more aggressive than the defaults, but they're quite performance-oriented out of the box already.
2. I'd just recommend getting an external USB audio interface or DAC, plus a decent USB3 PCIe card for the server - you'll probably need more than the two integrated USB ports anyway for a workstation.
3. You can plug all four into the same mains circuit using a power bar, but you have to make sure the breaker on that circuit can handle up to 3300W - even though it's unlikely you'll ever draw that much. That's about 15 amps in Europe and 30 in the US (rough numbers below).
4. I haven't tested booting from NVMe drives yet. I might try that later, but it'll take a while since I need to get one first. It is, however, currently booting off a non-NVMe (SATA) M.2 drive on an M.2 adapter card connected through the integrated SATA controller - not optimal, really, since SATA will bottleneck almost any SSD nowadays. I'll definitely have to try to get it to boot off PCIe at least.
5. Not really. If they get too hot they'll throttle like any other CPU, and the passive heatsinks are only passive in the sense that they don't have a fan mounted directly on them. The chassis fans are so powerful, and the airflow path is a straight line, that a lot of air passes through them - probably as much as if there were a fan right on the heatsinks.
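Regarding point 3, here's a quick sanity check of the arithmetic, using nominal mains voltages only - your local voltage and breaker ratings may differ:

```python
# Current at the wall for a 3300 W worst-case load at nominal EU/US mains voltages.
def amps(watts: float, volts: float) -> float:
    return watts / volts

for volts in (230.0, 120.0):
    print(f"3300 W at {volts:.0f} V -> {amps(3300, volts):.1f} A")

# 3300 W at 230 V -> 14.3 A  (fits a common 16 A circuit)
# 3300 W at 120 V -> 27.5 A  (needs roughly a 30 A circuit)
```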
 

jpk

Member
Nov 6, 2015
Awesome! Thank you so much for that! Mine will be going into a dedicated room, but if it were really loud, it might still come through.

One more question - how complicated would it be to get power (e.g. for GPUs) into the back of the chassis?
(I assume there isn't room to get standard 6/8-pin plugs in from the side, so I'll need cards that have the power plug at the end.)

Thanks again!
 

Raziel

New Member
Apr 19, 2020
You don't need to bring power for 6/8-pin PCIe connectors in from outside the chassis; the PSUs (or rather, whatever converts their output to ATX, since the motherboard does use ATX power connectors) have connectors for it. Look for CBL-PWEX-0581. However, none of the manuals detail how many of these can be plugged in, or where. I can only assume the connection points are somewhere under the motherboard, where the other power cables come from - so you'd have to take the whole thing apart.
Alternatively, you can probably route cables through a removed PCIe slot cover quite easily if you'd rather use a separate PSU. There's no way to do it through the side of the case unless you modify it.
Edit: Actually, after perusing the closest chassis manual I could find some more, it seems the power distributor that you'd have to plug the cable into is under the middle fans, so it shouldn't require taking out the motherboard. And I don't think the motherboard tray is removable anyway...
 

Persepolis

New Member
Apr 29, 2020
Mine has 4x 4880 v2 and 32x 8GB DDR3 DIMMs, and while idling in the BIOS it draws 390W according to the IPMI. I'll check the draw under different loads a little later and post the numbers here.
Noise-wise, it's decently loud even at idle. I can easily hear the hum in the background through my closed headphones. In my case it only sits in my office while I'm setting it up, but if you want to work in the same room as this machine it'll probably be a bit annoying unless you can isolate it somehow.

1. I don't think you need to change any specific settings to use it as a workstation. You could probably make some of the power settings even more aggressive than the defaults, but they're quite performance-oriented out of the box already.
2. I'd just recommend getting an external USB audio interface or DAC, plus a decent USB3 PCIe card for the server - you'll probably need more than the two integrated USB ports anyway for a workstation.
3. You can plug all four into the same mains circuit using a power bar, but you have to make sure the breaker on that circuit can handle up to 3300W - even though it's unlikely you'll ever draw that much. That's about 15 amps in Europe and 30 in the US.
4. I haven't tested booting from NVMe drives yet. I might try that later, but it'll take a while since I need to get one first. It is, however, currently booting off a non-NVMe (SATA) M.2 drive on an M.2 adapter card connected through the integrated SATA controller - not optimal, really, since SATA will bottleneck almost any SSD nowadays. I'll definitely have to try to get it to boot off PCIe at least.
5. Not really. If they get too hot they'll throttle like any other CPU, and the passive heatsinks are only passive in the sense that they don't have a fan mounted directly on them. The chassis fans are so powerful, and the airflow path is a straight line, that a lot of air passes through them - probably as much as if there were a fan right on the heatsinks.
Thank you so much for that. Let me order one, and I'll probably have more questions to ask here.
 

Persepolis

New Member
Apr 29, 2020
Mine has 4x 4880 v2 and 32x 8GB DDR3 DIMMs, and while idling in the BIOS it draws 390W according to the IPMI. I'll check the draw under different loads a little later and post the numbers here.
Noise-wise, it's decently loud even at idle. I can easily hear the hum in the background through my closed headphones. In my case it only sits in my office while I'm setting it up, but if you want to work in the same room as this machine it'll probably be a bit annoying unless you can isolate it somehow.

1. I don't think you need to change any specific settings to use it as a workstation. You could probably make some of the power settings even more aggressive than the defaults, but they're quite performance-oriented out of the box already.
2. I'd just recommend getting an external USB audio interface or DAC, plus a decent USB3 PCIe card for the server - you'll probably need more than the two integrated USB ports anyway for a workstation.
3. You can plug all four into the same mains circuit using a power bar, but you have to make sure the breaker on that circuit can handle up to 3300W - even though it's unlikely you'll ever draw that much. That's about 15 amps in Europe and 30 in the US.
4. I haven't tested booting from NVMe drives yet. I might try that later, but it'll take a while since I need to get one first. It is, however, currently booting off a non-NVMe (SATA) M.2 drive on an M.2 adapter card connected through the integrated SATA controller - not optimal, really, since SATA will bottleneck almost any SSD nowadays. I'll definitely have to try to get it to boot off PCIe at least.
5. Not really. If they get too hot they'll throttle like any other CPU, and the passive heatsinks are only passive in the sense that they don't have a fan mounted directly on them. The chassis fans are so powerful, and the airflow path is a straight line, that a lot of air passes through them - probably as much as if there were a fan right on the heatsinks.
My system just arrived; I'm still waiting for the CPUs. After an initial look inside, I have a few questions:

1) What is the most performance-oriented way to populate the memory modules? It seems that 8GB modules are the most reasonably priced at the moment. I'm not sure whether I should go for 256GB or 320GB. Will it hurt performance if I go for 320GB?
2) What is the best way to mount an SSD as the boot drive? I don't see any drive bays apart from the 24 front bays for the RAID array.
3) For the GPU, do I pull one of the 8-pin plugs from the main board and feed power through an adapter? I don't see any spare 8-pin plug, so I'm not sure how it works.
4) What kind of data cable is needed to connect a RAID adapter to the drive backplane? Again, I am not a server guru, so I need to be sure.

Thanks in advance!
 

Kneelbeforezod

Active Member
Sep 4, 2015
High-core-count v3/v4 E7 CPUs (as well as v2, for that matter) are often available at relatively low prices. A big reason the prices are low is that compatible boards are scarce: these are quad-socket parts that were only found in relatively exotic, high-end systems.

One of the most widely available boards is the Supermicro X10QBi. The X10QBi is an unusual board: it has four socket 2011-1 (E7-compatible) CPU sockets and 96 DIMM slots. The 96 DIMM slots are achieved by placing the RAM on daughterboards, part number X10QBI-MEM. The daughterboard-based RAM creates a lot of confusion because there are multiple versions of the memory boards: MEM1 rev 1.01, MEM1 rev 2.0, and MEM2 rev 1.01.
.......
That was a great read! Well done!
 

Persepolis

New Member
Apr 29, 2020
You don't need to bring power for 6/8-pin PCIe connectors in from outside the chassis; the PSUs (or rather, whatever converts their output to ATX, since the motherboard does use ATX power connectors) have connectors for it. Look for CBL-PWEX-0581. However, none of the manuals detail how many of these can be plugged in, or where. I can only assume the connection points are somewhere under the motherboard, where the other power cables come from - so you'd have to take the whole thing apart.
Alternatively, you can probably route cables through a removed PCIe slot cover quite easily if you'd rather use a separate PSU. There's no way to do it through the side of the case unless you modify it.
Edit: Actually, after perusing the closest chassis manual I could find some more, it seems the power distributor that you'd have to plug the cable into is under the middle fans, so it shouldn't require taking out the motherboard. And I don't think the motherboard tray is removable anyway...
Thank you for this piece of great information. It really saves a lot of research time for a server novice.