Project TinyMiniMicro: Reviving Small Corporate Desktops


BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,059
1,478
113
Per Intel's ARK, 9th-gen CPUs top out at 60 Hz at 4K.
 

tinfoil3d

QSFP28
May 11, 2020
873
400
63
Japan
Just lower the refresh rate if you're not gaming or watching videos. You can go above the spec'd resolution if you lower the refresh rate; that's sufficient for text work.
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
You mean, will the monitor work with the machine? At the lower refresh rates, sure, it will work. Will the machine provide 120 Hz output? Not any I am aware of.
 

chaoscontrol

Member
Aug 15, 2019
43
11
8
I've been reading a few pages, but there is a lot of info. Can I use non-T CPUs in these machines? I have a few ProDesk G3s I want to upgrade, but the 7700T is hard to find. Wondering if they will take a normal 7700 and throttle a bit for temps.
 

Marsh

Moderator
May 12, 2013
2,642
1,496
113
The HP mini has a 65 W version that would support a 65 W CPU, but the fan would run fast and loud.

STH has a review of the 65 W mini.
 

chaoscontrol

Member
Aug 15, 2019
43
11
8
Noise and temps aren't an issue for me, as they are in a rack in a cooled environment. I just don't want to buy a normal 7700 and find that the minis are whitelist-locked to certain CPUs.
 

Wasmachineman_NL

Wittgenstein the Supercomputer FTW!
Aug 7, 2019
1,871
617
113
I've been reading a few pages, but there is a lot of info. Can I use non-T CPUs in these machines? I have a few ProDesk G3s I want to upgrade, but the 7700T is hard to find. Wondering if they will take a normal 7700 and throttle a bit for temps.
You could always use ThrottleStop to limit TDP and undervolt, which I did with my 3040.
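If the box ends up on Linux rather than Windows, a rough equivalent of the TDP cap is the kernel's intel-rapl powercap interface. A minimal sketch (not ThrottleStop itself; the sysfs path and the 35 W figure are assumptions to verify on your own system, and it needs root):

```python
# Sketch: cap the long-term package power limit (PL1) via Linux's intel-rapl
# powercap interface, roughly emulating a "T" SKU's 35 W TDP. Run as root.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")   # CPU package 0
LIMIT_W = 35                                      # example value, not a recommendation

def set_long_term_limit(watts: int) -> None:
    # constraint_0 is normally the long-term (PL1) constraint, expressed in microwatts
    (RAPL / "constraint_0_power_limit_uw").write_text(str(watts * 1_000_000))

if __name__ == "__main__":
    current = int((RAPL / "constraint_0_power_limit_uw").read_text())
    print(f"current PL1: {current / 1_000_000:.0f} W")
    set_long_term_limit(LIMIT_W)
    print(f"new PL1: {LIMIT_W} W")
```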
 
  • Like
Reactions: chaoscontrol
Whiskytango

Aug 17, 2021
35
7
8
Came here to ask a few Lenovo tiny questions:

Go easy on me; the last time I messed with machines this small was Teradici PCoIP stuff 10+ years ago. I see that a bunch of people have put Xeon E3 v3 CPUs in the older M72/M93 Tiny machines. Looking at the M720q/M920q/M920x machines, I believe there are Xeon CPUs with the same socket, a same/compatible chipset, similar TDP, and the same architecture (Coffee Lake). Am I missing something? I can't even find people asking on Google whether Xeons are compatible with the M720q and M920q/x machines. (If I'm missing something obvious, go easy on me.)

What is the actual seat-of-the-pants difference between running ESXi on an Intel Core CPU vs. a Xeon? All of my hardware has always been on the HCL. If I can put a compatible Xeon into a TMM machine with a compatible chipset on a compatible mobo...

Which models have dual M.2 slots?

I could swear I saw someone online selling a PCIe right-angle bracket that split up the PCIe lanes (bifurcation). Does that exist, or was I imagining it? It would be nice to have a Xeon machine with 2x M.2 slots as well as the ability to break out an x16 PCIe slot for storage. Or a Xeon CPU with 2x M.2 on the motherboard, an open PCIe slot, and a ribbon-cable SATA interface that gives you more options. Six Xeon cores and 64 GB of DDR4 would be nice in a cluster of 3-4 machines like that.

@snoturtle started a list about PCIe interfaces:

Lenovo
m720q tiny
m920q tiny
P320 tiny
P330 tiny

**M920x Tiny: has two M.2 interfaces and a PCIe slot; however, the PCIe GPU's cooling is integrated into the CPU heatsink, which complicates things. If you want to put something else into the M920x PCIe slot, you need to swap the CPU heatsink for the non-GPU model.

Are there other small-ish machines that I am missing? I like the idea of a Lenovo Tiny with a 6-core Xeon, 64 GB of RAM, a PCIe slot, 2x M.2 slots, plus a SATA ribbon port. That's three drives (boot ESXi off USB), and if you get creative you can put a GPU, more storage, or a 10GbE NIC in the PCIe slot. Why can't I find others who have done this? Aren't they all Coffee Lake, with the same chipsets, same sockets, similar TDP...?


... trying to run opnsense or pfsense ...
I just bought a Lenovo M720q Tiny that came with an i3-8100T, 2x 8 GB of RAM, and a 256 GB Samsung M.2 SSD. It's still under warranty. I wanted the i3 for its single-thread clock speed. I put a Supermicro dual-SFP+ 10GbE NIC in it. It will run as a router-on-a-stick; the install went fine and I am adding VLANs to the new build now. I don't know about packet-filtering and routing speeds yet as I am still setting it up. We only have a 300/300 internet connection and will have a layer 3 10G switch. As long as I can route and filter ~1.25-2.5 Gbit/s through that hardware, and maybe ~100 Mbps outbound, I'll be fine/happy.

Price: I paid about $250 for the M720q (eBay), about $55 for the Supermicro NIC (eBay), then I bought a 3D-printed PCIe slot bracket from someone on Reddit for the NIC, and the proprietary Lenovo right-angle PCIe adapter was about $25 shipped. So less than $350 all-in for hardware that can run TNSR if needed.
 
  • Like
Reactions: memilanuk

zer0sum

Well-Known Member
Mar 8, 2013
849
473
63
Not sure on the Xeon or 2x M.2 situation, but I have a few insights on using an M920q with a dual-port Mellanox ConnectX-3 card.

I was running ESXi with virtual firewalls and passing the network card through.
It's pretty good overall and can provide symmetric 1 Gbps easily enough with OPNsense.

The problem I've had is that if a VM uses a lot of CPU, the cooling struggles and gets really noisy! :(
 

WANg

Well-Known Member
Jun 10, 2018
1,302
967
113
46
New York, NY
Came here to ask a few Lenovo tiny questions:

Go easy on me; the last time I messed with machines this small was Teradici PCoIP stuff 10+ years ago. I see that a bunch of people have put Xeon E3 v3 CPUs in the older M72/M93 Tiny machines. Looking at the M720q/M920q/M920x machines, I believe there are Xeon CPUs with the same socket, a same/compatible chipset, similar TDP, and the same architecture (Coffee Lake). Am I missing something? I can't even find people asking on Google whether Xeons are compatible with the M720q and M920q/x machines. (If I'm missing something obvious, go easy on me.)

What is the actual seat-of-the-pants difference between running ESXi on an Intel Core CPU vs. a Xeon? All of my hardware has always been on the HCL. If I can put a compatible Xeon into a TMM machine with a compatible chipset on a compatible mobo...

Which models have dual M.2 slots?

I could swear I saw someone online selling a PCIe right-angle bracket that split up the PCIe lanes (bifurcation). Does that exist, or was I imagining it? It would be nice to have a Xeon machine with 2x M.2 slots as well as the ability to break out an x16 PCIe slot for storage. Or a Xeon CPU with 2x M.2 on the motherboard, an open PCIe slot, and a ribbon-cable SATA interface that gives you more options. Six Xeon cores and 64 GB of DDR4 would be nice in a cluster of 3-4 machines like that.

@snoturtle started a list about PCIe interfaces:

Lenovo
m720q tiny
m920q tiny
P320 tiny
P330 tiny

**M920x Tiny: has two M.2 interfaces and a PCIe slot; however, the PCIe GPU's cooling is integrated into the CPU heatsink, which complicates things. If you want to put something else into the M920x PCIe slot, you need to swap the CPU heatsink for the non-GPU model.

Are there other small-ish machines that I am missing? I like the idea of a Lenovo Tiny with a 6-core Xeon, 64 GB of RAM, a PCIe slot, 2x M.2 slots, plus a SATA ribbon port. That's three drives (boot ESXi off USB), and if you get creative you can put a GPU, more storage, or a 10GbE NIC in the PCIe slot. Why can't I find others who have done this? Aren't they all Coffee Lake, with the same chipsets, same sockets, similar TDP...?

I just bought a Lenovo M720q Tiny that came with an i3-8100T, 2x 8 GB of RAM, and a 256 GB Samsung M.2 SSD. It's still under warranty. I wanted the i3 for its single-thread clock speed. I put a Supermicro dual-SFP+ 10GbE NIC in it. It will run as a router-on-a-stick; the install went fine and I am adding VLANs to the new build now. I don't know about packet-filtering and routing speeds yet as I am still setting it up. We only have a 300/300 internet connection and will have a layer 3 10G switch. As long as I can route and filter ~1.25-2.5 Gbit/s through that hardware, and maybe ~100 Mbps outbound, I'll be fine/happy.

Price: I paid about $250 for the M720q (eBay), about $55 for the Supermicro NIC (eBay), then I bought a 3D-printed PCIe slot bracket from someone on Reddit for the NIC, and the proprietary Lenovo right-angle PCIe adapter was about $25 shipped. So less than $350 all-in for hardware that can run TNSR if needed.
a) The Xeon-Ws are not available on the Lenovo Tiny machines. Why? The chipsets Lenovo ships in those machines (B- or Q-series) are not compatible. Lenovo only offers Xeon-Ws on the SFF machines and above; they have been doing that since the P320 and are still doing so with the new P350.

b) The Xeons will draw significantly more power than their Core-series cousins. On those TMM machines you are looking at 65 W TDP chips, while the Xeons are usually at 90 W (unless they are gimped to run in laptops). Speed-wise? Yeah, there's definitely a noticeable difference, but unless you are willing to drop non-depreciated money on a current-generation HP Z2 Mini G5, it's not really around in the TMM realm. I don't even think @Patrick included the Z2 Minis in his TMM roundup, because they are rather expensive. You might also be better served with an HPE MicroServer Gen10 Plus.

c) Ehh, unless you want to gun for the HP Z2 Minis (which offer MXM graphics and don't do external PCIe), the Lenovos are the only game in town. Dell doesn't offer a Precision Micro variant - I think the smallest machine they have with a Xeon-W is an SFF model (just like Lenovo).
Also, don't forget that Lenovo GPUs for the P-series are essentially proprietary due to their custom cooling solutions, and on those machines it's a single-slot, half-height, standard-width PCIe card, bus-powered up to ~50 W. I think the NVIDIA P600/AMD WX 3100 is the most you can gun for. Also, I thought the P3x0 Tiny chassis only allows a PCIe GPU or a SATA drive off the ribbon, but not both at the same time?

d) Horsepower around or better than a Core i3-8100T, dual SODIMM slots for up to 64 GB of RAM, dual M.2 (one NVMe and one SATA), and a PCIe 3.0 x8 slot for around 350 dollars? Ehh, have you seen the writeups here about the HP t740 thin clients? Technically it's a GMM (GhettoMiniMicro) machine, but it's a fairly competent one that has sold for less than 300 USD on evilBay in the past 6 weeks.
 
Last edited:

Parallax

Active Member
Nov 8, 2020
417
207
43
London, UK
It's not clear what Whiskytango is trying to achieve. The Tinys are perfect as a small lab and the like, but they're not really designed to give you all the RAM, storage, ESXi compatibility, etc. that a Xeon box would.

In terms of raw performance there is very little difference between Xeons and Core i-series CPUs for most types of workload. I don't think there's a 35 W TDP Xeon that will go into a Lenovo Tiny, at least the x20q series. I have successfully put an i9-9900T (8C/16T) into an M920q where I needed a high core count; under load I do get fan noise, but most of the time it's silent. It has more or less the same Passmark score as an E-2246G but two extra cores. An i7-8700T will get you ~75% of the benefit (about an E-2126G's performance) for a worthwhile saving in price, or if you want more cores but fewer threads, an i7-9700T would be best.

If you're really looking for a compact Xeon box then I think your main option will be the Supermicro SuperServer E200s, which will also give you 10GbE natively in the box.

I am not clear why you would buy an M720q to put a 10GbE card in it, particularly if you only have a 300/300 Mbps internet connection. I put a 4x 1GbE card in my M720q (also an i3) and it is fine running OPNsense up to the maximum speed of my internet connection (about 850 Mbps) - my ISP modem connects straight into it, and then it has different ports to associate with internal VLANs (not necessary, but I had the ports). If you turn everything on (Suricata and Sensei), the i3 will struggle a bit at full speed. I am doing an A/B comparison between OPNsense and Sophos XG, and my initial impression is that Sophos is a little hungrier on CPU, but it does more as well. I have not tuned it yet, so I might be able to get it back to parity. The Sophos XG Home licence only allows up to 4 cores and 6 GB of RAM, so there are limits on what you can push through it.

I now have 6 Tinys and 1 HP Microserver Gen10+; this is more than enough for everything I need to run at home to support my job in network security with a lab, run a K8s cluster, run all the home services, etc.
 
Whiskytango

Aug 17, 2021
35
7
8
@WANg and @Parallax,

Specifically, what I am trying to achieve is consolidating seven 2U dual-X5680 machines down into something that resembles a TMM footprint.

The person who approves the spending sent me this photo saying she likes it:

I have never run ESXi on unsupported hardware, so I don't know what happens: is everything fine, or do you give up features, performance, etc.?
Another easy question is:
Is there a reasonably priced mini-ITX board with reasonably priced CPU options, 2x M.2 slots, and ideally on-board SFP+ and SATA DOM? I don't mind building 3-4 small machines.

I'm having a hard time figuring out what would be comparable to a dual-X5680 server with 96 GB of DDR3. My gut says that a 6c/12t i7-8700 with 64 GB of DDR4 would come really close. But then what about PXE boot (or vSphere Auto Deploy), a PCIe slot, and 2x M.2 slots? I'm pretty sure that a TMM Dell/HP/Lenovo/Fujitsu or even an Intel NUC doesn't come with integrated SFP+.

I'm not trying to be cheap, I just don't want to spend money for the sake of spending money. If I buy/build something mini-ITX or micro-ATX, I'd pick something with a CPU socket, memory capacity, and storage space such that I'd need 10GbE networking. For a little TMM machine the gigabit NIC is fine, since everything is so small that you don't have the CPU or space to need anything beyond gigabit networking (yet).

I'm just trying to shove a 2U dual-X5680 server with 96 GB of DDR3 into a TMM form factor. I think the sweet spot for a TMM machine is probably 6c/12t. Once you get up into 8c/16t, 64 GB of DDR4 starts getting small, and 4c/8t isn't quite enough CPU to max out 64 GB of RAM.

@WANg, the HP Gen10+ machines aren't going to work. If I go that big I want to put drives in it, and you start to get limited. That's why I bought two Fractal Node 804 chassis. I also like the HP Z2 Minis, but their price point puts them up in the range where Supermicro pre-built servers start, or where I could build something with a more expensive board such as the SuperO X11SDV-8C or -16C. I also really like the Dell Precision 3431 SFF machine with an E-2236, but again, at a ~$2k price point it gets expensive. Buying anything on eBay gets expensive when you aren't STH with a bench full of DDR4 and CPUs just "laying around". When you buy on eBay, 8 GB almost always means 2x 4 GB sticks or 4x 2 GB sticks; you almost never get a single 8 GB stick, so you end up buying RAM separately in the sizes and speeds you actually want.

@Parallax, I get it. I've never run ESXi on unsupported hardware. That's one of my biggest concerns, and why I was asking about supported chipsets and whether any of these TMM machines could be made to fit within the HCL.

Regarding the SuperServer E200 boxes, I haven't looked into the motherboards other than seeing (what seems to be everyone on the planet) running the X10/X11SDV Xeon-D boards. Besides the supported/unsupported hardware question, the next thing I don't know is: what is a modern replacement for a dual-socket X5680 machine with 96 GB of DDR3? The Lenovo M720q I bought for pfSense duty with the i3-8500T was faster than I expected. Regarding your comment "I'm not clear why you would buy an M720q to put a 10GbE card in it": it's a dual-SFP+ card for router-on-a-stick. It's pfSense hardware, and pfSense is going to have to do some routing until I figure out a proper L3 switch. My all-in cost for the M720q with 16 GB of DDR4, the i3-8500T, a 256 GB M.2, a Supermicro dual-SFP+ card, the PCIe bracket, and the 3D-printed external PCIe bracket was just under $325. Can't really beat that.

I have 8x 2U racked machines with dual X5680 CPUs. I used to do a lot with FPGA stuff, vGPU (GRID K1/K2), and Phi cards. The work side of things will be one machine, and I'll buy cloud compute time if needed. I bought two Fractal 804 boxes (micro-ATX): one will be a 10x 3.5" TrueNAS/Unraid build, and the other will get a CUDA/tensor card that I'll use for work. Outside of work, I don't need much for home/homelab: the usual stuff, about 30 VMs total. vSphere, AD server, UniFi controller, web server, email server, Windows machines we RDP into, she's teaching herself Python so she has a couple of VMs... Home Assistant. Then a bunch of other random stuff I rarely use, like a Windows machine with BMW factory software on it so I can tune/flash my car, update maps/GPS, etc. Typical home/homelab stuff that could probably run on 3-4, maybe 4-5, TMM boxes (I assume).

The only real caveat with PXE boot is that I need one machine with a dedicated boot device, so that if the power goes out I can restart that one machine and delay-start the rest.
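For the delay-start part, something like this would do it once the seed machine is back up (just a sketch: it assumes Wake-on-LAN is enabled on the other nodes, and the MAC addresses and the 120-second spacing are placeholders):

```python
# Sketch: wake the remaining nodes one at a time with Wake-on-LAN magic packets,
# giving each one time to PXE boot before the next starts.
import socket
import time

NODES = ["aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02", "aa:bb:cc:dd:ee:03"]  # placeholders
DELAY_S = 120

def wake(mac: str) -> None:
    # Magic packet: 6 bytes of 0xFF followed by the target MAC repeated 16 times
    payload = bytes.fromhex("ff" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, ("255.255.255.255", 9))

if __name__ == "__main__":
    for mac in NODES:
        wake(mac)
        time.sleep(DELAY_S)
```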

Does that make more sense? It's been a bit overwhelming going through each server one at a time, figuring out what is on it, consolidating the VMs, deciding whether to keep or ditch the data on the drives, etc. A Lenovo M920x or P330 with 2x M.2 is a great idea (but expensive, and work to make it happen). Swap the combined CPU/GPU cooler for the CPU-only one, pull the GPU, PXE boot it, and you have a machine with 2x M.2 and a 2.5" drive bay. With 64 GB of RAM and a 6/8-core CPU it would be great.

I'm starting to think I'll keep my work-related machine as a bare-metal Linux box (one of the Fractal 804s), which makes it easier not to need/want so much horsepower from a TMM-sized ESXi host.

Thanks.
 

WANg

Well-Known Member
Jun 10, 2018
1,302
967
113
46
New York, NY
@WANg and @Parallax,

Specifically, what I am trying to achieve is consolidating seven 2U dual-X5680 machines down into something that resembles a TMM footprint.

The person who approves the spending sent me this photo saying she likes it:

I have never run ESXi on unsupported hardware, so I don't know what happens: is everything fine, or do you give up features, performance, etc.?
Another easy question is:
Is there a reasonably priced mini-ITX board with reasonably priced CPU options, 2x M.2 slots, and ideally on-board SFP+ and SATA DOM? I don't mind building 3-4 small machines.

I'm having a hard time figuring out what would be comparable to a dual-X5680 server with 96 GB of DDR3. My gut says that a 6c/12t i7-8700 with 64 GB of DDR4 would come really close. But then what about PXE boot (or vSphere Auto Deploy), a PCIe slot, and 2x M.2 slots? I'm pretty sure that a TMM Dell/HP/Lenovo/Fujitsu or even an Intel NUC doesn't come with integrated SFP+.

I'm not trying to be cheap, I just don't want to spend money for the sake of spending money. If I buy/build something mini-ITX or micro-ATX, I'd pick something with a CPU socket, memory capacity, and storage space such that I'd need 10GbE networking. For a little TMM machine the gigabit NIC is fine, since everything is so small that you don't have the CPU or space to need anything beyond gigabit networking (yet).

I'm just trying to shove a 2U dual-X5680 server with 96 GB of DDR3 into a TMM form factor. I think the sweet spot for a TMM machine is probably 6c/12t. Once you get up into 8c/16t, 64 GB of DDR4 starts getting small, and 4c/8t isn't quite enough CPU to max out 64 GB of RAM.

@WANg, the HP Gen10+ machines aren't going to work. If I go that big I want to put drives in it, and you start to get limited. That's why I bought two Fractal Node 804 chassis. I also like the HP Z2 Minis, but their price point puts them up in the range where Supermicro pre-built servers start, or where I could build something with a more expensive board such as the SuperO X11SDV-8C or -16C. I also really like the Dell Precision 3431 SFF machine with an E-2236, but again, at a ~$2k price point it gets expensive. Buying anything on eBay gets expensive when you aren't STH with a bench full of DDR4 and CPUs just "laying around". When you buy on eBay, 8 GB almost always means 2x 4 GB sticks or 4x 2 GB sticks; you almost never get a single 8 GB stick, so you end up buying RAM separately in the sizes and speeds you actually want.

@Parallax, I get it. I've never run ESXi on unsupported hardware. That's one of my biggest concerns, and why I was asking about supported chipsets and whether any of these TMM machines could be made to fit within the HCL.

Regarding the SuperServer E200 boxes, I haven't looked into the motherboards other than seeing (what seems to be everyone on the planet) running the X10/X11SDV Xeon-D boards. Besides the supported/unsupported hardware question, the next thing I don't know is: what is a modern replacement for a dual-socket X5680 machine with 96 GB of DDR3? The Lenovo M720q I bought for pfSense duty with the i3-8500T was faster than I expected. Regarding your comment "I'm not clear why you would buy an M720q to put a 10GbE card in it": it's a dual-SFP+ card for router-on-a-stick. It's pfSense hardware, and pfSense is going to have to do some routing until I figure out a proper L3 switch. My all-in cost for the M720q with 16 GB of DDR4, the i3-8500T, a 256 GB M.2, a Supermicro dual-SFP+ card, the PCIe bracket, and the 3D-printed external PCIe bracket was just under $325. Can't really beat that.

I have 8x 2U racked machines with dual X5680 CPUs. I used to do a lot with FPGA stuff, vGPU (GRID K1/K2), and Phi cards. The work side of things will be one machine, and I'll buy cloud compute time if needed. I bought two Fractal 804 boxes (micro-ATX): one will be a 10x 3.5" TrueNAS/Unraid build, and the other will get a CUDA/tensor card that I'll use for work. Outside of work, I don't need much for home/homelab: the usual stuff, about 30 VMs total. vSphere, AD server, UniFi controller, web server, email server, Windows machines we RDP into, she's teaching herself Python so she has a couple of VMs... Home Assistant. Then a bunch of other random stuff I rarely use, like a Windows machine with BMW factory software on it so I can tune/flash my car, update maps/GPS, etc. Typical home/homelab stuff that could probably run on 3-4, maybe 4-5, TMM boxes (I assume).

The only real caveat with PXE boot is that I need one machine with a dedicated boot device, so that if the power goes out I can restart that one machine and delay-start the rest.

Does that make more sense? It's been a bit overwhelming going through each server one at a time, figuring out what is on it, consolidating the VMs, deciding whether to keep or ditch the data on the drives, etc. A Lenovo M920x or P330 with 2x M.2 is a great idea (but expensive, and work to make it happen). Swap the combined CPU/GPU cooler for the CPU-only one, pull the GPU, PXE boot it, and you have a machine with 2x M.2 and a 2.5" drive bay. With 64 GB of RAM and a 6/8-core CPU it would be great.

I'm starting to think I'll keep my work-related machine as a bare-metal Linux box (one of the Fractal 804s), which makes it easier not to need/want so much horsepower from a TMM-sized ESXi host.

Thanks.
It sounds like you are trying to shoehorn something that should be running on full-sized servers into a bunch of mini-desktops (which is essentially what the TMM nodes are - "suit NUCs" that come off corporate leases depreciated but still pack enough bang for the buck). The question is what you are trying to optimize for - space usage? Power constraints? Noise? Longevity? Are you looking to downsize/consolidate?

a) Most of those TMM machines have Intel I21x or Realtek embedded NICs - not the expensive SR-IOV-capable stuff, mind you. The "cheap" option, but considering that they are glorified corporate NUCs, yeah. They can all do PXE booting. The Realteks are not supported by VMware ESXi 7 out of the box.

b) Yeah, I am not all that gung-ho about putting a 10GbE card in an M720q/M920q Tiny or a P-series Tiny, at least not without considering the thermals.

PCIe device heat build-up is a challenge on TMM machines, since airflow is pulled through the shroud around the CPU heatsink and then back out again, and rarely across the actual motherboard surface. Lenovo had to custom-design a heatsink/pipe/shield for that NVIDIA P600 so it dumps heat into that airflow channel, and that card generates around 30 watts of heat when pushed. Your average secondary-market, dual-port PCIe 3.0 x8 10GbE card with SFP+ cages (Intel X710, Solarflare 7, Chelsio T4) generates 13-15 watts of heat with optical transceivers in the cages, and unless there is plenty of natural convective (or even forced) airflow, all the card will do is sit there and stew...until it overheats and shuts off (or worse - in designs like the HP t730, the PCIe slot flips the card so the heatsink on the 10GbE card sits on top of the heat/EM shield for the RAM and cooks the SODIMMs inside).

Granted, some put an i350 quad-port GigE (~7 W), or a Solarflare 5 (a PCIe 2.0 x8 device with dual SFP+ cages, ~9 W thermal), in their TMMs, or sidestep the issue with a cooler-running 40GbE card (one reason why I abandoned my Solarflare 7122s and went with Mellanox ConnectX-3 VPI - ~8 W of heat production versus nearly 18), but there's no guessing whether a thermal hotspot like that will crash your machine...or not. You can Dremel out holes (if there aren't any already) and slip a large, slow, USB-powered fan on top.

c) Well, based on the original posting it was hard to read into your storage needs, and on those TMM nodes the storage is typically trimmed back. The Lenovo P-series Tinys are a bit different from the rest in that they come with dual NVMe slots (note: not every SKU has that, so when in doubt, ask) - but most either have only a single NVMe slot, or that plus a SATA device (populated either as an actual SATA port for a 2.5" SSD or as an additional M.2 slot). Even for mini-ITX, that specific ask (dual M.2 NVMe + PCIe x16) isn't all that typical. Mini-ITX is a standard created by VIA back in '02 for their C3/C7-based HTPCs, and it ballooned into a general smaller-form-factor standard that doesn't work terribly well for expansion-minded machines. That's also reflective of the baked-in limitations of desktop APUs/CPUs and the tricky thermals/cooling airflow inside the case. Mini-ITX usually implies less I/O, and that's a fact of life.

The HP MSG10+ is decent if all you need is something to throw a desktop CPU (or a Xeon E-2xxx series chip) into, add a pair of ECC UDIMMs and 4 big slow spinners, park a 10/40/100GbE card (if needed) in the PCIe slot, and expect it to sip power quietly in the corner all day serving files. It's not a bad choice if you want quiet, a smaller power bill, and something Intel eighth/ninth gen that can flip up to an 80 W TDP, but if you need more disk, well, that's not going to fly. I wanted one...up until the point when I realized that it came with only 4 bays and no internal USB3/SD slot(s)/NVMe. It's a bit of a step back from the MSG7 or the G8. There are prosumer/almost-server mini-ITX boards from the likes of DFI, Supermicro, or Tyan, but those are a bit more expensive. If you want something smaller and quieter for a storage node, yeah, it's either build-it-yourself with a good board plus a chassis like the Fractals...or look at something pre-made like a QNAP TVS-873e (or its Synology cousin) and price/spec to match.

d) Is there any setup similar to dual X5680s (Westmere-EP) with 96 GB of server-grade DDR3?
In terms of pure horsepower? Probably something like an i5-10600 or one of the Ryzen 4000-series APUs. Those are current and in somewhat high demand, so they may not have come down in price yet.
The issue here is that most Intel consumer CPUs and AMD APUs are limited to 64 GB of RAM, usually 2 sticks of 32 GB DDR4-2666 or something in that ballpark (in the TMMs they are normally laptop SODIMMs). That runs roughly 225-275 USD per set (Q3 '21 pricing in the northeastern US)...and if you are using embedded Ryzen APUs (like me), the nodes themselves are only 200-400 USD each, so you are looking at 800-850 (including S&H&T) to max them out. The TMMs generally do not have extra memory channels, and server-side RAM features like buffering and error correction don't exist. Whatever you run on them, consider them disposable/non-essential if they crash once a year or so (not that apps or the OS ever last that long in terms of runtime - updates and reboots are a recurring thing). Think about how many multiples of $800 you are willing to buy in order to cluster them up. Maybe you don't want that.

If you need more than 64 GB per node, or if you want something more akin to what you already have...except less power-hungry and possibly more future-resistant, well, you might want to consider something like an AMD EPYC-powered Supermicro E301 (or whatever motherboard they use in that box - the M11SDV series). It runs cooler than the old Intel silicon, it's current, and it's supported for the next 10 or so years. Something like an EPYC Embedded 3251 can replace the dual X5680s on a per-machine basis while using a quarter of the power, even at idle. If you need more consolidation, a single EPYC 7000-series machine can probably replace multiple machines at a time.
 
Last edited:
  • Like
Reactions: Aluminat

EngChiSTH

Active Member
Jun 27, 2018
108
45
28
Chicago
Finished (or at least paused) my TMM project of building a small virtualization cluster on cheap commodity hardware.

Requirements were:
- budget within $1.5k for all three nodes combined ($500 per node target)
- sufficient compute capacity to start, with the ability to expand the cluster if/as needed
- low power consumption
- failover ability for key VMs/containers (for maintenance), and backup/restore ability for nodes and their VMs

Approach taken:
- multiple weak(er) nodes vs. a single server. Less concerned about redundant disks; I have spare hardware and backups, and will swap/restore any individual node that needs it.
- mix of local and shared storage
- Proxmox as the virtualization technology (Microsoft is essentially sunsetting Hyper-V Server as a separate Windows Server build going forward); great support and knowledge base, and affordable (free if you want it that way). Special thanks to Terry Wallace for his patience as I was figuring this out.


Hardware
3x HP S01-pf1013w desktop PCs purchased from Walmart for $74 each, new (Slickdeals clearance, with Walmart rotating Comet Lake hardware out). Detailed link here: HPS01-PF1013W
3x i7-10700 CPUs from Micro Center, $220 each new, purchased over time.
32 GB RAM (G.Skill Aegis, $99 at Newegg)
Crucial MX500 1TB SATA SSDs (~$85 each) for OS storage and ZFS storage
9.5mm universal caddy enclosure ($9 on Amazon) to hold the second SATA SSD

Cost per node: 74 + 220 + 99 + 85×2 + 9 = $572. I actually had DDR4 RAM lying around so it was cheaper, but counting honestly that is what it would have cost me.
Hardware of each TMM: 8C/16T, 32 GB RAM, 2 TB SSD storage.


On each node:
- unplugged the HDD, swapped the Celeron G5900 for the i7-10700, connected the two SSDs, and installed the 32 GB of RAM. Very simple.
- installed Proxmox on the first SSD and connected it to the network
- through the GUI, configured ZFS local storage and then grouped it into clustered storage (a rough CLI sketch of these steps is below). Craft Computing has a video on this.
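The rough CLI equivalent of those storage/cluster steps, for anyone who prefers it over the GUI (a sketch only: zpool/pvesm/pvecm are the standard ZFS and Proxmox VE tools, but the pool name, storage ID, cluster name, and device path here are placeholders to adapt):

```python
# Sketch: same ZFS + cluster setup as the GUI steps above, driven from Python
# via the standard Proxmox VE / ZFS command-line tools. Adapt names and devices.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # local ZFS pool on the second SSD (the one in the 9.5 mm caddy)
    run(["zpool", "create", "tank", "/dev/sdb"])
    # register the pool as a Proxmox storage backend
    run(["pvesm", "add", "zfspool", "local-zfs2", "--pool", "tank"])
    # first node creates the cluster; the other nodes join it instead
    run(["pvecm", "create", "tmm-cluster"])        # on node 1 only
    # run(["pvecm", "add", "<ip-of-node-1>"])      # on nodes 2 and 3
```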

Other tasks:
- configured an NFS share on the QNAP NAS, exposed it to the cluster, and stored ISOs on it
- began converting or rebuilding VMs

Left to do:
- add 10G cards to the nodes (watching prices for the CX311 on eBay)
- likely upgrade switching for more SFP+ ports (from a Brocade 6450 to a 7250); I'm running out of these fast, as both my NAS and my primary desktop connect to the switch over fiber, along with the existing domain controller.
 

WANg

Well-Known Member
Jun 10, 2018
1,302
967
113
46
New York, NY
Finished (or at least paused) my TMM project of building a small virtualization cluster on cheap commodity hardware.

Requirements were:
- budget within $1.5k for all three nodes combined ($500 per node target)
- sufficient compute capacity to start, with the ability to expand the cluster if/as needed
- low power consumption
- failover ability for key VMs/containers (for maintenance), and backup/restore ability for nodes and their VMs

Approach taken:
- multiple weak(er) nodes vs. a single server. Less concerned about redundant disks; I have spare hardware and backups, and will swap/restore any individual node that needs it.
- mix of local and shared storage
- Proxmox as the virtualization technology (Microsoft is essentially sunsetting Hyper-V Server as a separate Windows Server build going forward); great support and knowledge base, and affordable (free if you want it that way). Special thanks to Terry Wallace for his patience as I was figuring this out.

Hardware
3x HP S01-pf1013w desktop PCs purchased from Walmart for $74 each, new (Slickdeals clearance, with Walmart rotating Comet Lake hardware out). Detailed link here: HPS01-PF1013W
3x i7-10700 CPUs from Micro Center, $220 each new, purchased over time.
32 GB RAM (G.Skill Aegis, $99 at Newegg)
Crucial MX500 1TB SATA SSDs (~$85 each) for OS storage and ZFS storage
9.5mm universal caddy enclosure ($9 on Amazon) to hold the second SATA SSD

Cost per node: 74 + 220 + 99 + 85×2 + 9 = $572. I actually had DDR4 RAM lying around so it was cheaper, but counting honestly that is what it would have cost me.
Hardware of each TMM: 8C/16T, 32 GB RAM, 2 TB SSD storage.

On each node:
- unplugged the HDD, swapped the Celeron G5900 for the i7-10700, connected the two SSDs, and installed the 32 GB of RAM. Very simple.
- installed Proxmox on the first SSD and connected it to the network
- through the GUI, configured ZFS local storage and then grouped it into clustered storage. Craft Computing has a video on this.

Other tasks:
- configured an NFS share on the QNAP NAS, exposed it to the cluster, and stored ISOs on it
- began converting or rebuilding VMs

Left to do:
- add 10G cards to the nodes (watching prices for the CX311 on eBay)
- likely upgrade switching for more SFP+ ports (from a Brocade 6450 to a 7250); I'm running out of these fast, as both my NAS and my primary desktop connect to the switch over fiber, along with the existing domain controller.
Good deal, but those S01s (just like their HP 290 cousins) are SFF (small form factor) desktops meant for home consumers, not TMM machines - HP's own specs state that they are 7.75 liters in volume. Most TMM machines are less than 2 liters in size (the Z2 Mini G4 that @Patrick just reviewed was considered a bit chunky at 2.7 liters)...not really applicable for this thread, since it's not a Tiny Mini Micro machine.
 
Last edited:
  • Like
Reactions: paf

paf

New Member
Sep 21, 2020
24
5
3
Portugal
Good deal, but those S01s (just like their HP 290 cousins) are SFF (small form factor) desktops meant for home consumers, not TMM machines - HP's own specs state that they are 7.75 liters in volume. Most TMM machines are less than 2 liters in size (the Z2 Mini G4 that @Patrick just reviewed was considered a bit chunky at 2.7 liters)...not really applicable for this thread.
A machine similar in size (7.8 liters) is the Dell Precision T3420. It can take two 2.5" HDDs, one NVMe SSD, and Xeon processors with ECC RAM.
https://i.dell.com/sites/csdocument...ecision-Tower-3000-Series-3420-Spec-Sheet.pdf