HPE ProLiant MicroServer Gen10 Plus Ultimate Customization Guide

Patrick

Administrator
Staff member
I know a lot of folks have been waiting for this one.

It took a crazy amount of time and effort.
 

BLinux

cat lover server enthusiast
@Patrick Thank you! Just saw the video... awesome stuff. Just curious, is the power plug a standard HP laptop one? If so, I wonder if you could swap the power brick for one of the high-capacity ones? I know I've seen some 250W ones from them before when hunting for power bricks for the HP thin clients...
 

WANg

Well-Known Member
BLinux said: Just curious, is the power plug a standard HP laptop one? If so, I wonder if you could swap the power brick for one of the high-capacity ones?
Yeah, but those 250W PSUs were for powering HP laptop docking stations - most likely from the ZBook/Elite lines. I am not sure it's as easy as swapping one in for extra power headroom on the MSG10P.
 

Patrick

Administrator
Staff member
Adding extra power raises two concerns:
  1. Downstream power handling. This system was designed for 180W max, and realistically runs well below that. Pushing much more through the circuit means the VRMs and other components have to do more work. It may work short term, but it is not something I am comfortable recommending.
  2. Cooling. The Xeon E-2288G, one SATA SSD, and two DIMM configuration was just about at the practical limit. Adding four hard drives on top is likely going to require a re-worked cooling solution.
Honestly, by the time you swap to a larger power brick and re-work the cooling, you are much better off with an ML30 Gen10.
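
For anyone who wants to sanity-check that power math, here is a rough budget sketch. Every component figure below is an illustrative assumption, not an HPE spec:

```python
# Back-of-envelope power budget for a maxed-out MicroServer Gen10 Plus.
# All wattages are assumptions for illustration, not HPE specifications.

BRICK_W = 180  # external power brick rating

load_w = {
    "Xeon E-2288G (95W TDP, sustained)": 95,
    "2x DIMM": 6,
    "4x 3.5in HDD, active": 32,   # roughly 8W each under load
    "board / fans / NICs overhead": 25,
}

total = sum(load_w.values())
print(f"Estimated steady-state load: {total}W of a {BRICK_W}W brick "
      f"({total / BRICK_W:.0%})")
# Spin-up of four 3.5in drives can transiently pull far more than the
# steady-state figure, which is why headroom matters more than the sum.
```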
 

101

Member
Thank you for this - I read the article and am watching the video. I got mine last week, and it has been a nice distraction. Any chance you could post some of the part numbers/SKUs of the hardware you tested (i.e. RAM/NICs)? Thanks.
 

Patrick

Administrator
Staff member
@101 I put a lot of it in the main site article and will continue to add more there.
 

chilipepperz

Active Member
Amazing video. You should've just said: if you want higher power and more expansion, get an ML30 Gen10.
 

WANg

Well-Known Member
Does anyone else have the impression that they should've made the 10 Plus the Opteron machine and the regular 10 the Xeon-E machine? Actually, scratch that.

This performs in the same ballpark as a smaller-form-factor quad-bay Coffee Lake vPro desktop (like an EliteDesk 800 G4 tower), but minus the 2 extra DIMM slots and internal M.2 bays.
 

PigLover

Moderator
After all the reviews and discussion of customization, etc., I end up feeling that my first reaction was correct:

HPE's biggest miss on this product was the omission of on-board M.2 support. If that were available, the list of use cases for this box would grow dramatically. Without it, you end up scratching and clawing to overcome the gap in ways that are just not satisfying (e.g., the old 10GbE/M.2 combo board).

In the end it's disappointing. I doubt I'll do any project with it, mainly because every time you look at the box you'll be left remembering what it could have been.
 

Patrick

Administrator
Staff member
I have to say @PigLover, I would have been happier even if the internal USB Type-A port were USB 3.0. Then there would be plenty of options. M.2 would have been great as well, and the PCH has PCIe lanes for it.

This feedback was part of a long discussion with the HPE team several weeks/months ago.
 

WANg

Well-Known Member
Patrick said: I would have been happier even if the internal USB Type-A port were USB 3.0... M.2 would have been great as well.
That was pretty much a dealbreaker for me as well. Why, HPE? Why would you not include an internal USB 3.0 port or other internal storage options? Every FreeNAS release after 11.1U7 will not run off a USB 2.0 drive (code-size growth == middlewared timeout on boot == unstable system), and no admin wants the boot media hanging outside the chassis. Here I am trying to figure out whether I need to replace my MSG7 to upgrade to 11.2/3/TrueNAS (probably by repurposing the optical drive bay SATA for the new boot media), and here is HPE laying yet another egg when they could've gotten my money.

Oh well, maybe they'll revise the full size MSG10 chassis to use a single-die Epyc Embedded or something...
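
If anyone wants to check whether their boot stick is the bottleneck before blaming FreeNAS, here is a minimal sequential-read sketch (the /dev/da0 device node is an assumption - substitute your own, and run as root):

```python
# Rough sequential-read benchmark for a candidate boot device.
# USB 2.0 tops out around 35-40 MB/s in practice, which is where
# middlewared's boot-time timeouts start to bite.
import time

DEV = "/dev/da0"        # assumption: the USB boot stick on FreeBSD/FreeNAS
CHUNK = 1024 * 1024     # 1 MiB reads
TOTAL = 256 * CHUNK     # sample 256 MiB

read = 0
start = time.monotonic()
with open(DEV, "rb", buffering=0) as f:
    while read < TOTAL:
        data = f.read(CHUNK)
        if not data:        # device smaller than the sample size
            break
        read += len(data)
elapsed = time.monotonic() - start

mib = read / (1024 ** 2)
print(f"{mib:.0f} MiB in {elapsed:.1f}s -> {mib / elapsed:.1f} MiB/s")
```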
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
@WANg that would be an ask/comment to put in today's article/video if you wanted to see it.
 

WANg

Well-Known Member
Patrick said: that would be an ask/comment to put in today's article/video if you wanted to see it.
Regarding the Epyc Embedded 3000 series? Already did. Even referenced it in my LinkedIn postings. Yeesh, just how much kickback is Intel offering the vendors to keep the Epycs out?
 

NicApicella

New Member
I've owned a Microserver since its very first version came out.

I can totally relate to the missing M.2 or internal USB 3.0. Though there's one thing that I miss even more sorely: video output.

Hear me out: I know we're talking about servers.
But the CPU makers have understood that there is a need for graphics, so even the Xeons have an iGPU. In fact, looking at the E-2 lineup, more than half have one! But what's the use if HPE didn't add the traces and a connector on the back? Wasted potential!

Look at the "normal" Gen10 – yes, it was aimed at digital signage, but how refreshing to have the *option* to drive a display. Heck, even two 4K displays at the same time! You don't _have to_ if you don't want to; you can just as easily run a headless server. But it's there in case you need it. Just as I don't believe every single person buying a Gen10+ will use it for virtualization: how much effort (and cost) is lost on having four RJ45 ports? The people who really use those probably aren't happy either – they'd prefer faster ports…

I believe that the Microserver was and is aimed at the SOHO enthusiast crowd, as a low-cost entry system flexible enough to be used in multiple scenarios. Once you figure out you need a more specialized server, you'll know what exactly you'll need; for the time being, the Gen10+ is your general-purpose server (without having to invest too much).
As such, a decent boot-drive option is as bad an omission as a video connector. I can understand that a PCIe connector had to be cut (probably due to size constraints), and that's OK if everything's on board (and PCIe bifurcation is available, which it is). Wasted potential. And still: it's the best option I can find on the market if I want a decent-value, high-quality product that I can just buy, throw a bunch of drives in, and have it run.

The perfect machine just doesn't exist. Truth is, I'm still running my N36L because of that. I actually need to be able to connect a display (and have it show more than just a terminal). After finding reason after reason for skipping the Gen8 and the Gen10, I finally got myself a Gen10+. It's not perfect, but it's still very, very good. And it's the best option on the market right now.
 

WANg

Well-Known Member
NicApicella said: I can totally relate to the missing M.2 or internal USB 3.0. Though there's one thing that I miss even more sorely: video output. [...] The perfect machine just doesn't exist.
Well, yes and no.

For the digital signage role you could've easily just used a thin client - the HP t620 Plus can drive 2 DisplayPort outputs by default, while the t730 can drive 4 at a time. Either device is less than 200 USD on eBay, and I doubt that you'll need a quad-bay SATA setup to drive something like that.

I went with a t730 to upgrade my N40L (interconnected to the t730 using Mellanox CX3 40GbE). This yields something similar to a Qnap TVS-473e (a 900 USD 4-bay unit with a similar AMD APU and capabilities)... if you factor in the price of a used t730, an old N40L, and the 40GbE cards + Twinax cable, it's about 450 USD. Add in about 3 years of power consumption from the N40L+t730 combo (more than the Qnap) and it's about even money versus the TVS-473e. The newest t740 would also make for a decent drop-in upgrade, but mine has not been delivered yet.
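
For the curious, the "about even money" claim works out roughly like this - the hardware prices are from the post above, while the electricity rate and extra draw are my assumptions:

```python
# Back-of-envelope 3-year cost comparison: used t730 + N40L + 40GbE
# versus a QNAP TVS-473e. Hardware figures are from the post above;
# the power numbers are assumptions for illustration only.
HOURS_3Y = 3 * 365 * 24

diy_hw_usd = 450        # used t730 + old N40L + CX3 cards + Twinax
qnap_hw_usd = 900       # TVS-473e street price

extra_draw_w = 70       # assumed extra draw of the two-box combo
rate_usd_kwh = 0.25     # assumed NYC-ish electricity rate

extra_power_usd = extra_draw_w / 1000 * HOURS_3Y * rate_usd_kwh
print(f"DIY combo over 3 years: ~${diy_hw_usd + extra_power_usd:.0f} "
      f"vs QNAP ~${qnap_hw_usd}")
```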

What is useful in the SoHo server context, however, is something like Intel Quick Sync/AMD UVD in the processor, which can help with transcoding tasks common in home media center and/or professional video situations (like DVR playback/search for security camera streams) - it's present on the Pentium G5420 SKU and the E-2246G, but not the E-2224. You should not need video outputs on the back to take advantage of it, as sketched below.
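
A minimal sketch of what that looks like in practice, assuming an ffmpeg build with Quick Sync (QSV) support and a SKU that actually has the iGPU; the file names are placeholders:

```python
# Offload an H.264 transcode to Intel Quick Sync via ffmpeg's QSV path.
# Requires an ffmpeg build with QSV enabled and a CPU with an iGPU
# (e.g. E-2246G or G5420, not the E-2224). File names are hypothetical.
import subprocess

subprocess.run([
    "ffmpeg",
    "-hwaccel", "qsv",        # decode on the iGPU where possible
    "-i", "camera_dvr.mp4",   # placeholder input (e.g. a DVR clip)
    "-c:v", "h264_qsv",       # encode on the Quick Sync H.264 engine
    "-preset", "fast",
    "transcoded.mp4",
], check=True)
```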

As for the quad-port NIC, that's actually useful in the context of the MSG10+ since Intel didn't skimp on the NIC used - the i350 can actually do SR-IOV/VT-d on-card, and I am pretty sure SR-IOV will work right off the bat if you have the E-2224. It's then possible to run several VMs (one being, say, pfSense), allocate PCIe VFs via SR-IOV to each of the VMs, and then have them do networking at nearly line speed, since the hypervisor will not need to flip packets between virtual NICs. Even for a gigabit NIC that's super useful - see the sketch below.
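
For reference, carving out VFs on a Linux hypervisor host is a couple of sysfs writes; "eno1" is an assumed name for one of the i350 ports, and this needs root:

```python
# Sketch: expose SR-IOV virtual functions on an SR-IOV capable NIC so
# they can be PCI-passthrough'd to VMs (e.g. one VF to a pfSense guest),
# letting guests hit near line rate without a software vswitch hop.
from pathlib import Path

IFACE = "eno1"  # assumption: one of the i350 ports; check `ip link`
dev = Path(f"/sys/class/net/{IFACE}/device")

total = int((dev / "sriov_totalvfs").read_text())
print(f"{IFACE} supports up to {total} VFs")

(dev / "sriov_numvfs").write_text("0\n")  # reset any existing VFs first
(dev / "sriov_numvfs").write_text("4\n")  # then carve out four VFs
```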

My beef with HPE is that they are crippling an otherwise decent design to force market segmentation. For example, FreeNAS will not function with a USB 2.0 drive as its boot media since version 11.2 - yet HPE decided not to give the MSG10+ an internal USB 3.0 bay or equivalent internal storage option (M.2, even M.2 SATA, would've been nice here, or paired MicroSD supporting something like A2/V30 speeds). Instead of leaving the server out on a shelf at a branch office, I would have to lock it in a ventilated closet to keep, say, some sales guy from "borrowing" the USB boot drive that is out in the open (instead of having it secured within the server itself)... which limits its deployability. (Well, the same sales guy could also yank out the power brick if he lost his laptop charger and needed some juice before a sales meeting - really, HPE, it needs a locking mechanism.)

Then there are the 2 RAM slots (4 would've been super useful), and the chassis shrink without much benefit in return (besides it being smaller, which messes with internal airflow, i.e. no room for a larger, slower fan). At least the old G7/G8/G10s have an optical drive bay on top which can host a boot drive. Sure, it's cheaper than its peers, but its flexibility is so limited compared to, say, a Supermicro SYS5029D-TN4T (1200 USD but a 3+ year old design) or a Qnap TV-677 (much newer, and around 1500 USD) that I don't think it's that great of a value. As @Patrick says, at some point you are better off with a full-size ProLiant tower server instead.
 

randman

Member
Anyone have part numbers for 32GB ECC memory (I'd like 2 for a total of 64GB)? From what I've found online, this seems to be the part number for Crucial (1 x 32GB): CT32G4RFD4266. However, I looked for this part number on Crucial's website and could not find it, though I can see it in online stores. I also looked on Micron's website and couldn't find it (it seems that Micron took over Crucial's memory line).

Can someone confirm whether Crucial CT32G4RFD4266 is the correct memory, or recommend what memory to get, with part numbers?

EDIT: Also, is it okay to just put 1 x 32GB in one slot and leave the 2nd slot unused (for now, until I need an additional 32GB in the future)?
 

doudoufr

New Member
Hi, does it support Core i5? I saw in the article and blog post that you tested several processors, but no i5 - why? The i5-9500 is under 65W, right? Is it not good? Thanks.
 

MiniKnight

Well-Known Member
doudoufr said: Does it support Core i5? I saw that you tested several processors, but no i5 - why?
That's normal. Core i5, i7, and i9 aren't supported on server chipsets; only Core i3 is. It isn't just this HPE - it's been that way on server chipsets for many years.