HPE ProLiant MicroServer Gen10 Plus Ultimate Customization Guide



dandickson

New Member
Oct 2, 2017
As for the quad-port NIC, that's actually useful in the context of the MSG10+ since HPE didn't skimp on the NIC used - the onboard i350 can actually do SR-IOV/VT-d on-card, and I am pretty sure SR-IOV will work right off the bat if you have the E-2224. It's then possible to run several VMs (one being something like, say, pfSense), allocate PCIe VFs via SR-IOV to each of the VMs, and have them do networking at nearly line speed, since the hypervisor no longer needs to shuffle packets between virtual NICs. Even for a gigabit NIC that's super useful.
@WANg I'm still early in the troubleshooting phase, but Windows Server 2019 Hyper-V Core with the 4 x i350 ports and 2 x X710-DA2, plus 5 vSwitches with SR-IOV enabled, is causing random Code 12 errors in Device Manager ("this device cannot find enough free resources"), which I suspect is caused by running out of resources due to SR-IOV.

This is with no VMs deployed yet, just the base system.
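For anyone who wants to reproduce or sanity-check this, here is a minimal PowerShell sketch of the kind of setup I mean (the switch, adapter, and VM names are placeholders, not my actual config); the IovSupport/IovSupportReasons properties are the most useful part for seeing why SR-IOV refuses to come up:

Code:
# Create a vSwitch with SR-IOV enabled - IOV can only be turned on at creation time
New-VMSwitch -Name "SRIOV-i350-P1" -NetAdapterName "Embedded LOM 1 Port 1" -EnableIov $true

# Check whether the host and the switch actually support IOV, and why not if they don't
Get-VMHost | Select-Object IovSupport, IovSupportReasons
Get-VMSwitch "SRIOV-i350-P1" | Select-Object Name, IovEnabled, IovSupport, IovSupportReasons

# Once a VM exists, a VF is requested per virtual NIC via the IOV weight
Set-VMNetworkAdapter -VMName "pfsense" -IovWeight 100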
 

SNBG

New Member
Jul 19, 2020
@WANg I'm still early in the troubleshooting phase, but Windows Server 2019 Hyper-V Core with the 4 x i350 ports and 2 x X710-DA2, plus 5 vSwitches with SR-IOV enabled, is causing random Code 12 errors in Device Manager ("this device cannot find enough free resources"), which I suspect is caused by running out of resources due to SR-IOV.

This is with no VMs deployed yet, just the base system.
I found that there is no SR-IOV option in the BIOS on the MicroServer Gen10 Plus, which may be causing the error code.

Here is how to enable or disable SR-IOV on the Gen10 Plus.

I also contacted official HPE support, who did not understand what I was asking about the BIOS options at all and replied with totally unrelated things.
 

dandickson

New Member
Oct 2, 2017
@SNBG @WANg

I'm still troubleshooting this with HPE. I ordered a batch of P21930-B21 (MCX4121A-XCHT) cards and am still seeing the same issues. HPE doesn't seem to know what the issue is either and is still working on it internally.

Even out of the box with just the i350 ports, SR-IOV doesn't work; I had to deploy the Intel ProSet drivers because the HPE-bundled drivers did not correctly expose the feature.
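If anyone wants to check the same thing on their box, this is roughly how I verify whether the installed driver exposes SR-IOV at all (PowerShell; the port name is a placeholder):

Code:
# SriovSupport shows whether the driver/firmware combination exposes SR-IOV
Get-NetAdapterSriov | Format-List Name, InterfaceDescription, SriovSupport, NumVFs

# Enable SR-IOV on a specific port once a driver that exposes it is installed
Enable-NetAdapterSriov -Name "Embedded LOM 1 Port 1"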
 

rb2k

New Member
Dec 11, 2016
I got a Xeon Gen10 Plus and added an HP Smart Array P421 to have some native RAID for VM storage on ESXi 7.
So far I'm pretty happy with it aside from one thing: thermals for the BMC.

Oddly, all other areas are reasonably cool, but the BMC (above the HP RAID PCIe card) is HOT, which in turn seems to raise the case fan speed to "too loud for inside the house".

I initially tried to just replace the case fan with a Noctua, but HP seems to use a proprietary connector, and I couldn't find any adapters that I wouldn't have to solder myself.

Any recommendations on how to improve this situation?




 

manxam

Active Member
Jul 25, 2015
Does anyone happen to know if these support Intel VROC? Thinking that 2x M.2 drives on a PCIe card in a RAID 1 (like a Dell "BOSS" card) would be an excellent use of the PCIe slot, but I don't know whether this system supports it.
 

rb2k

New Member
Dec 11, 2016
Does anyone happen to know if these support Intel VROC? Thinking that 2x M.2 drives on a PCIe card in a RAID 1 (like a Dell "BOSS" card) would be an excellent use of the PCIe slot, but I don't know whether this system supports it.
I think that's only offered with the new Intel Xeon Scalable processors, so probably not?

I also looked into the BOSS card, but it seemed that there's very limited hardware support for it outside Dell. (I could be wrong.)
 

manxam

Active Member
Jul 25, 2015
@rb2k : I thought as much, but Dell states VROC support on their T140 line with the same processor (E-2224).
There are other cards (e.g. Asus) that offer RAID support and multiple M.2 slots but require VROC support.
I don't want to go out and buy a bunch of hardware only to find out that it doesn't work -- and it likely won't.
 

dandickson

New Member
Oct 2, 2017
I figured y'all would appreciate the update I received today from HP regarding SR-IOV:

For the onboard i350 NIC, the HPE BIOS and software plan of record (POR) does NOT support the SR-IOV feature. Therefore, the HPE BIOS and driver disable this feature by default, and that is why no SR-IOV option/selection appears in RBSU.

In addition, we do not allow using another driver to enable this feature, since we did not validate it during the development phase, so this is not a product issue. If you enable this feature via the Intel ProSet driver, HPE cannot guarantee that it will work normally.

For the Mellanox NIC card, SR-IOV is not POR for the CFL-S CPU, so that is why the Mellanox NIC card did not work.
 

Juny1h

New Member
Sep 6, 2020
I figured y'all would appreciate the update I received today from HP regarding SR-IOV:
Very sad news, I was planning to buy an MSG10+ for SR-IOV.
Now I don't have another option in the same form factor and price range. :(

Thank you dandickson for digging into this.
 

pyro_

Active Member
Oct 4, 2013
Would anyone be able to confirm the max length of PCIe card that would fit in this? I might have found an interesting option, provided there is room for it: Synology has a PCIe 3.0 x8 card with 10GbE and dual M.2 22110 slots.

 

Juny1h

New Member
Sep 6, 2020
Would anyone be able to confirm the max length of PCIe card that would fit in this? I might have found an interesting option, provided there is room for it: Synology has a PCIe 3.0 x8 card with 10GbE and dual M.2 22110 slots.

I can confirm it doesn't fit (too long), but it is possible if you cut the PCB of the E10M20.
Check: https://www.bilibili.com/video/BV1kz4y1D73y
 

Think

Member
Jul 5, 2017
Not sure if this is the best place to ask, but I will give it a shot:

Following the great review by @Patrick , I bought a Gen10+ server to play around with. I am trying to add 64GB of RAM. As per the review, I had hopes that these Micron DIMMs (supporting DDR4-2666, CL19, ECC, dual-rank) would work. However, when I put the DIMMs into the machine, it says

Code:
229 - Unsupported DIMM Configuration Detected
for both DIMMs. Any chance I can make this work with some configuration change? Or do I need to try different DIMMs?

Thanks!

Edit: OK, this seems to be an RDIMM vs. UDIMM problem; I will now try with UDIMMs.
 

Your name or

Active Member
Feb 18, 2020
Hey guys, I'm thinking about adding a server alongside my current HP one.
Maybe it's better to get an HPE ML30 Gen10 instead of a MicroServer? The HPE ML30 Gen10 offers 4x PCIe slots plus 2 optical drive bays that can be repurposed for 3.5" drives. So I would say the ML30 is better? Any recommendations?
 

Think

Member
Jul 5, 2017
I guess it's a question of what you need or want: If you want/need more space for HDDs/PCIe and the bigger size and higher price are of no concern, the ML30 is probably better for you. For me, the ML30 would be too big, and I don't need the extra space it offers over the Microserver.
 

DeltaQ

New Member
Sep 29, 2020
Hello! Thank you very much for the very good article!

I just bought a “QNAP QM2-2P10G1TA” with a “Samsung 970 EVO Plus 2TB” NVMe SSD, as the article presented it as a working way to add 10Gbit + NVMe to the MicroServer Gen10+ (even if it will not reach full speed).

If I try to boot the HPE MicroServer Gen10+ with the QNAP adapter WITH the 970 EVO installed (it doesn't matter whether it is in Slot 1 or Slot 2), I only see an error message in red letters: “RIP address out of range”.

I googled this and found some articles saying that I have to disable “UEFI Optimized Boot”. I tried this, but it doesn't work.

If I boot with only the “QNAP QM2-2P10G1TA” and no NVMe SSD, everything is okay: the MicroServer Gen10+ boots normally and the AQC107 NIC on the QNAP adapter is usable. So this is good. :)

Does anyone know what's wrong with my setup? I hope it's only a wrong BIOS/UEFI setting, but I can't find it.

- I just updated the BIOS of the HPE MicroServer Gen10+ to "2.18_06-24-2020(14 Aug 2020)"
- I updated iLO to 2.31
- I have reset the BIOS settings to "Default"
- I tried changing UEFI to "Legacy BIOS boot", but the error appears before the boot screen

I also have a simple PCIe M.2 NVMe SSD adapter from "Icy Box" (IB-PCI214M2-HSL):
- it works very well and without problems with the same "Samsung 970 EVO Plus 2TB"
So I assume there is no fault in the SSD itself.

I watched the UEFI boot process via the iLO serial port and can see that the SSD and also the 10Gbit AQC107 NIC are recognized (a rough sketch of how I open that serial console follows after the log):
Code:
....
Starting handle 72355D98
Starting Slot 1 Port 2 : Aquantia AQtion 10Gbit Network Adapter (HTTP(S) IPv4)
Starting handle 72355318
Starting handle 72354318
Starting handle 72354698
Starting handle 72354598
Starting handle 72353E98
Starting handle 72353218
Starting handle 72352018
Starting handle 72352C18
Starting handle 72352A18
Starting handle 72352798
Starting Slot 1 Port 2 : Aquantia AQtion 10Gbit Network Adapter (PXE IPv4)
Starting handle 72351C18
Starting handle 72351818
Starting handle 72350998
Starting handle 7234FE18
Starting handle 7234EC18
Starting handle 7234EE18
Starting Slot 1 NVMe Drive 1 : NVM Express Controller - S***********-Samsung SSD 970 EVO Plus 2TB-58382500
Starting Slot 1 NVMe Drive 1 : NVM Express Controller - S***********-Samsung SSD 970 EVO Plus 2TB-58382500
Starting iLO Virtual USB 1 : iLO Virtual Keyboard
Starting iLO Virtual USB 1 : iLO Virtual Keyboard
Starting Internal USB 1 : USB SanDisk 3.2Gen1
...
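(For completeness, this is roughly how I reach the iLO serial console; the iLO address and account below are placeholders, and BIOS serial console redirection needs to be enabled for the POST output to show up:)

Code:
# SSH to the iLO itself (not to the host OS); address and user are placeholders
ssh Administrator@<ilo-address>
# at the iLO CLI prompt, start the Virtual Serial Port session
vsp
# press ESC ( to leave the VSP session and return to the iLO CLI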
I attach the error screen and the messages to give a better understanding of the problem:
Code:
X64 Exception Type 0x02 - NMI Detected. Please check the IML for more details.
Software NMI

RCX=00000000708C4118 DX=00000000454EC648 R8=00000000793C7060 R9=00000000453C5790
RSP=00000000454EC680 BP=00000000454EC719 AX=00000000453C42E0 BX=00000000708C4118
R10=0000000000000180 11=0000000000000002 12=0000000000000000 13=0000000000000004
R14=8000000000000012 15=000000007298E028 SI=000000007249C010 DI=8000000000000006
CR2=0000000000000000 CR3=00000000454ED000 CR0=80000013 CR4=00000668 CR8=00000000
CS=00000038 DS=00000030 SS=00000030 ES=00000030 RFLAGS=00000246
MSR: 0x1D9 = 00004801, 0x345=000033C5, 0x1C9=00000008

LBRs From              To                From              To
01h  00000000453B2398->0000000075993052  00000000453B2368->00000000453B2398
03h  00000000453B2332->00000000453B2364  00000000453B22F8->00000000453B2309
05h  0000000077F718A1->00000000453B22E8  0000000077F718A7->0000000077F71887
07h  00000000453B23A5->0000000077F718A4  00000000453B2368->00000000453B2398
09h  00000000453B23A5->0000000077F718A4  00000000453B2368->00000000453B2398
0Bh  00000000453B2332->00000000453B2364  00000000453B22F8->00000000453B2309
0Dh  0000000077F718A1->00000000453B22E8  0000000077F718A7->0000000077F71887
0Fh  00000000453B23A5->0000000077F718A4  000000007599305D->00000000789B8660

CALL ImageBase        ImageName+Offset
00h  00000000453AD000 DxeCore+005398h
01h  0000000077F6F000 NvmExpressDxe+0028A4h
02h  0000000064992000 HpSmbiosType242HddInventory+00355Bh
RIP address out of range

If this isn't the right place for this issue, please advise me where to post it elsewhere in the forum. :)

Best regards!




EDIT:
- I just tried downgrading the BIOS to the original version "2.00_12-06-2019(19 Mar 2020)" =>
the same error message appears at boot :(

The iLO log gives good feedback early in the boot process, but later the boot ends with the red error I posted earlier, "RIP address out of range":

Code:
....
Starting PciRoot(0x0)/Pci(0x0,0x0)
Starting Embedded : PCIe Controller
Starting Slot 1 Port 1 : PCIe Controller
Starting Slot 1 Port 1 : PCIe Controller
Starting Slot 1 Port 1 : PCIe Controller
Starting Slot 1 Port 1 : Aquantia AQtion 10Gbit Network Adapter
Starting handle 73B6FE18
Starting Slot 1 Port 1 : PCIe Controller
Starting Slot 1 Port 1 : NVM Express Controller - S***********-Samsung SSD 970 EVO Plus 2TB-58382500
Starting Embedded : PCIe Controller
Starting Embedded : eXtensible Host Controller (USB 3.0)
Starting PciRoot(0x0)/Pci(0x14,0x2)
....
It is not clear to me where the error is or what to do to make the “QNAP QM2-2P10G1TA” usable in the HPE MicroServer Gen10+. :(

I will now test the QNAP adapter in my good old MicroServer Gen8...
 

DeltaQ

New Member
Sep 29, 2020
So some good news:
- The same “QNAP QM2-2P10G1TA” runs out of the box with a new Samsung EVO Plus 1TB in my good old MicroServer Gen8!!
- The NVMe is usable out of the box under ESXi 7.0b
- After installing the “VMware ESXi 6.7 atlantic 1.0.2.0 Driver for AQtion Ethernet Controllers (AQC100/107/108/109/111/112)”, the NIC negotiates 10Gbit with the Zyxel XS1930-10 without problems :) (install commands sketched below)
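For reference, installing that driver was the usual offline-bundle routine over SSH on the ESXi host; a rough sketch (the datastore path and bundle filename are placeholders for wherever you upload it):

Code:
# on the ESXi host with SSH enabled; the path/filename below are placeholders
esxcli software vib install -d /vmfs/volumes/datastore1/aqtion-1.0.2.0-offline_bundle.zip
# reboot so the new atlantic driver module is loaded
reboot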


As I have a second QNAP adapter, I have a few options left.
One idea is that the problems may occur because the 2TB NVMe SSD was used and not clean. But I will figure that out on another day.

So for now my Gen8 MicroServer is faster on the network than my new Gen10+ MicroServer. :D

Any ideas regarding the problem with the QNAP Adapter and the Microserver Gen10+ are still very welcome!! :)
 