> Per Intel's ARK, 9th gen CPUs top out at 60Hz at 4K.

Yeah, I know. But I suppose (I hope) the reality may be a little better.
> Just lower the refresh rate if you're not gaming/watching vids. You can go above the specs if you lower the refresh. Sufficient for text work.

When you first see the mouse cursor at 120Hz, you don't want to go back.
> When you first see the mouse cursor at 120Hz, you don't want to go back.

Luckily I haven't yet, but I know how under-30Hz looks compared to 60.
> You mean, will the monitor work with the machine?

No, I know it will work at 60Hz at least.
> I've been reading a few pages, but there is a lot of info. Can I use non-T CPUs in these machines? I have a few ProDesk G3s I want to upgrade, but the 7700T is hard to find. Wondering if they will take a normal 7700 and throttle a bit for temps.

You could always use ThrottleStop to limit TDP and undervolt, which I did with my 3040.
a) The Xeon-Ws are not available on the Lenovo Tiny machines. Why? The chipsets Lenovo shipped with those machines (B- or Q-series) are not compatible. Lenovo only offers Xeon-Ws on the SFF machines and above; they have been doing that since the P320 and are still doing so with the new P350.

Came here to ask a few Lenovo tiny questions:
Go easy on me - the last time I messed with machines this small was Teradici PCoIP stuff 10+ years ago. I see that a bunch of people have put E3 Xeon v3 CPUs in the older M72/93 Tiny machines. Looking at the M720q/M920q/M920x machines, I believe there are Xeon CPUs that match: same socket, same/compatible chipset, similar TDP, and the same architecture (Coffee Lake). Am I missing something? I can't even find people asking on Google whether Xeons are compatible with the M720q and M920q/x machines.
What is the actual seat-of-the-pants difference between running ESXi on an Intel Core CPU vs. a Xeon? All of my hardware has always been on the HCL. If I can put a compatible Xeon into a TMM machine with a compatible chipset on a compatible mobo...
Which models have dual M.2 slots?
I could swear I saw someone online selling a PCIe right-angle bracket that split up the PCIe lanes (bifurcation). Does that exist, or was I imagining it? It would be nice to have a Xeon machine with 2x M.2 slots as well as the ability to break out an x16 PCIe slot for storage. Or a Xeon CPU with 2x M.2 on the mobo, an open PCIe slot, and a ribbon-cable SATA interface for more options. Six Xeon cores and 64GB DDR4 would be nice in a cluster of 3-4 machines like that.
@snoturtle started a list of machines with PCIe interfaces:
Lenovo
M720q tiny
M920q tiny
P320 tiny
P330 tiny
M920x tiny: has two M.2 interfaces and a PCIe slot; however, the PCIe GPU's cooling is integrated into the CPU heatsink, which complicates things. If you want to put something else into the M920x PCIe slot, you need to swap the CPU heatsink for the non-GPU model.
Are there other smallish machines that I am missing? I like the idea of a Lenovo Tiny with a 6-core Xeon, 64GB RAM, a PCIe slot, and 2x M.2 slots plus a SATA ribbon port. That's three drives (boot ESXi off USB), and if you get creative you can put a GPU, more storage, or a 10GbE NIC in the PCIe slot. Why can't I find others who have done this? Aren't they all Coffee Lake, with the same chipsets, same sockets, similar TDP...?
I just bought a Lenovo M720q tiny that came with an i3-8100T, 2x8GB of RAM, and a 256GB Samsung M.2 SSD. It's still under warranty. I wanted the i3 for its single-thread clock speed. I put a Supermicro dual-SFP+ 10GbE NIC in it. It will run as a router-on-a-stick, trying to run OPNsense or pfSense; the install went fine and I am adding VLANs to the new build now. I don't know about packet-filtering and routing speeds yet, as I am still setting it up. We only have a 300/300 internet connection and will have a layer 3 10G switch. As long as I can route and filter ~1.25-2.5Gbit through that hardware, and maybe 100-ish Mbps outbound, I'll be fine/happy (rough math below).
Price: I paid about $250 for the M720q (eBay), about $55 for the Supermicro NIC (eBay), then I bought a 3D-printed PCIe slot bracket for the NIC from someone on Reddit, and the right-angle 90-degree proprietary Lenovo PCIe adapter was about $25 shipped. So less than $350 all-in for hardware that can run TNSR if needed.
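Here's my back-of-the-envelope on that routing target (a sketch only; the frame sizes and the 1.25-2.5Gbit goal are my own assumptions, and real pfSense/OPNsense throughput depends heavily on rule count and offloads):

```python
# Back-of-the-envelope: packets per second needed to route a given
# bandwidth at a given Ethernet frame size (illustrative only).

WIRE_OVERHEAD = 20  # preamble (8 bytes) + inter-frame gap (12 bytes)

def pps_required(bandwidth_gbps: float, frame_bytes: int) -> float:
    """Packets/sec needed to sustain bandwidth_gbps at frame_bytes frames."""
    bits_per_frame = (frame_bytes + WIRE_OVERHEAD) * 8
    return bandwidth_gbps * 1e9 / bits_per_frame

for gbps in (1.25, 2.5):
    for size in (64, 512, 1500):  # worst-case, mixed, and full-size frames
        print(f"{gbps:>4} Gbit/s at {size:>4}B frames ~ {pps_required(gbps, size):>12,.0f} pps")
```

At 1500-byte frames even 2.5Gbit is only ~200kpps, which this class of hardware should handle comfortably; it's the small-packet end that separates routers.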
It sounds like you are trying to shoehorn something that should be running on full-sized servers into a bunch of mini-desktops (which is essentially what the TMM nodes are: "suit NUCs" that are depreciated off corporate leases but pack enough bang for the buck). The question is what you are trying to optimize for - space usage? Power constraints? Noise? Longevity? Are you looking to downsize/consolidate?

@WANg and @Parallax,
Specifically, what I am trying to achieve is consolidating seven 2U dual-X5680 machines down into something that resembles a TMM footprint.
The person who approves the spending sent me a photo saying she likes it.
I have never run ESXi on unsupported hardware, so I don't know what happens: is everything fine, or do you give up features, performance, etc.?
Another easy question is:
Is there a reasonably priced mini-ITX board with reasonably priced CPU options and 2x M.2 slots, ideally with on-board SFP+ and SATA DOM? I don't mind building 3-4 small machines.
I'm having a hard time figuring out what would be comparable to a dual-X5680 server with 96GB DDR3. My gut says that a 6c/12t i7-8700 with 64GB DDR4 would come really close. But then what about PXE boot (or vSphere Auto Deploy), a PCIe slot, and 2x M.2 slots? I'm pretty sure that a TMM Dell/HP/Lenovo/Fujitsu or even an Intel NUC doesn't come with integrated SFP+.
Not trying to be cheap, I just don't want to spend money for the sake of spending money. If I buy/build something mini-ITX or micro-ATX, then I'd pick a CPU socket, memory capacity, and storage space such that I'd need 10GbE networking. For a little TMM machine the gigabit NIC is fine, since everything is so small that you don't have the CPU or space to need anything over gigabit networking (yet).
I'm just trying to shove a 2U dual-X5680 server with 96GB DDR3 into a TMM form factor. I think the sweet spot for a TMM machine is probably 6c/12t: once you get up to 8c/16t, 64GB DDR4 starts getting small, and 4c/8t isn't quite enough CPU to max out 64GB RAM (toy numbers below).
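To sanity-check that sweet-spot claim, a toy sizing calculation (every per-VM figure here is an assumption I made up for illustration, not a measurement):

```python
# Toy consolidation estimate: which resource runs out first on a node.
# All per-VM figures below are assumptions for illustration only.

vms, avg_vcpu, avg_ram_gb = 30, 2, 3  # assumed fleet: 30 VMs at 2 vCPU / 3GB each
vcpu_overcommit = 4                   # a common homelab vCPU:thread ratio
node_ram_gb = 64

for threads, label in ((8, "4c/8t"), (12, "6c/12t")):
    nodes_by_cpu = vms * avg_vcpu / (threads * vcpu_overcommit)
    nodes_by_ram = vms * avg_ram_gb / node_ram_gb
    binding = "CPU" if nodes_by_cpu > nodes_by_ram else "RAM"
    print(f"{label} + 64GB: need {max(nodes_by_cpu, nodes_by_ram):.2f} nodes ({binding}-bound)")
```

With those made-up averages, a 4c/8t node runs out of CPU before its 64GB fills, while at 6c/12t the RAM becomes the limit - the balance point described above.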
@WANg, the HP Gen10+ machines aren't going to work. If I go that big I want to put drives in it, and you start to get limited; that's why I bought two Fractal Node 804 chassis. I also like the HP Z2 Minis, but their price point puts them in the same range as prebuilt Supermicro servers, or as building something with a more expensive motherboard such as the Supermicro X11SDV-8C or -16C. I also really like the Dell Precision 3431 SFF machine with an E-2236, but again, at a $2k price point they get expensive. Buying anything on eBay gets expensive when you aren't STH with a bench full of DDR4 and CPUs just "laying around". When you buy on eBay, 8GB almost always means 2x 4GB sticks or 4x 2GB sticks; you almost never get a single 8GB stick, so you end up having to buy RAM separately in the sizes and speeds you actually want.
@Parallax, I get it. I've never run ESXi on non-supported hardware. That's one of my biggest concerns, and why I was asking about supported chipsets and whether any of these TMM machines could be made to fit within the HCL.
Regarding the SuperServer E200 boxes, I haven't looked into the motherboards other than seeing (what seems to be everyone on the planet) running the X10/X11SDV Xeon-D boards. Beyond supported vs. unsupported hardware, the next thing I don't know is: what is a modern replacement for a dual-socket X5680 machine with 96GB DDR3? The Lenovo M720q I bought as pfSense hardware with the i3-8500T was faster than I expected. Your comment "I'm not clear about buying M720q and putting 10GbE card in it": it's a dual-SFP+ card for router-on-a-stick. It's pfSense hardware; pfSense is going to have to do some routing until I figure out a proper L3 switch. My all-in cost for the M720q with 16GB DDR4, i3-8500T, 256GB M.2, a Supermicro dual-SFP+ card, the PCIe bracket, and the 3D-printed external PCIe bracket was just under $325. Can't really beat that.
I have 8x 2U racked machines with dual X5680 CPUs. I used to do a lot with FPGA stuff, vGPU (GRID K1/K2), and Phi cards. The work side of things will be one machine, and I'll buy cloud compute time if needed. I bought two Fractal 804 boxes (micro-ATX): one will be a 10x 3.5" TrueNAS/Unraid box, and the other will get a CUDA/tensor card that I'll use for work. Outside of work, I don't need much for home/homelab: the usual stuff, about 30 VMs total. vSphere, an AD server, a UniFi controller, a web server, an email server, Windows machines we RDP into, she's teaching herself Python so she has a couple of VMs... Home Assistant. Then a bunch of other random stuff I rarely use, like a Windows machine with BMW factory software on it so I can tune/flash my car, update maps/GPS, etc. Typical home/homelab stuff that could probably run on 3-4, maybe 4-5, TMM machines (I assume).
The only real caveat with PXE boot is that I need one machine with a dedicated boot device, so if the power goes out I can restart that machine first and delay-start the rest (sketch below).
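Something like this could gate the delayed start (a hypothetical sketch; the host, port, and what you do afterwards are placeholders, not anything from my actual setup):

```python
#!/usr/bin/env python3
# Hypothetical sketch: block until the machine with the dedicated boot
# device answers on a TCP port, then it's safe to power up the PXE nodes.
import socket
import time

BOOT_HOST, BOOT_PORT = "192.168.1.10", 443  # placeholder address/port

def wait_for_host(host: str, port: int, retry_s: float = 10.0) -> None:
    """Poll until a TCP connection to host:port succeeds."""
    while True:
        try:
            with socket.create_connection((host, port), timeout=3):
                return
        except OSError:
            time.sleep(retry_s)  # not up yet; try again shortly

wait_for_host(BOOT_HOST, BOOT_PORT)
print("Boot host is up; safe to start the PXE-booted nodes.")
```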
Does that make more sense? It's been a bit overwhelming going through each server one at a time, figuring out what is on it, consolidating the VMs, deciding whether to keep or ditch the data on the drives, etc. A Lenovo M920x or P330 with 2x M.2 is a great idea (but expensive, and work to make it happen). Swap out the combined CPU/GPU cooler for the CPU-only one, pull the GPU, and PXE-boot it; then you have a machine with 2x M.2 and a 2.5" drive bay. With 64GB RAM and a 6/8-core CPU, it would be great.
I'm starting to think I'll keep my work-related machine as a Linux box on bare metal (one of the Fractal 804s), which makes it easier not to need/want so much horsepower from a TMM-sized ESXi host.
Thanks.
Good deal, but those S01s (just like their HP 290 cousins) are SFF (small form factor) desktops meant for home consumers, not TMM machines - HP's own specs state that they are 7.75 liters in volume. Most TMM machines are less than 2 liters (the Z2 Mini G4 that @Patrick just reviewed was considered a bit chunky at 2.7 liters)... not really applicable for this thread, since it's not a Tiny Mini Micro machine.

Finished (or at least paused) my TMM project of building a small cluster for virtualization running cheap commodity hardware.
Requirements
- budget within $1.5k for all three nodes combined ($500 per node target)
- sufficient compute capacity to start, with the ability to expand the cluster if/as needed
- lower power consumption
- failover ability for key VMs/containers (for maintenance), and backup/restore ability for nodes and their VMs
Approach taken
- multiple weak(er) nodes vs. a single server. Less concerned about redundant disks: I have spare hardware and backups, and will swap/restore any individual node that needs it.
- mix of local and shared storage
- Proxmox as the virtualization technology (Hyper-V is essentially being sunset by Microsoft as a separate Windows Server build); great support and knowledge base, and affordable (free if you want it that way). Special thanks to Terry Wallace for his patience as I was figuring this out.
Hardware
- 3x HP S01-pf1013w desktop PCs purchased from Walmart for $74 each, new (Slickdeals clearance as Walmart rotated Comet Lake hardware out). Detailed link here: HPS01-PF1013W
- 3x i7-10700 CPUs from Micro Center, $220 each new, purchased over time
- 32GB RAM per node (G.Skill Aegis, $99 at Newegg)
- Crucial MX500 1TB SATA SSDs (~$85 each) for OS storage and ZFS storage
- 9.5mm universal caddy enclosure ($9 on Amazon) to hold the second SATA SSD

Cost per node: 74 + 220 + 99 + 85*2 + 9 = $572. I actually had DDR4 RAM lying around so it was cheaper; however, counting honestly, that is what it would have cost me.

Hardware of each node: 8C/16T, 32GB RAM, 2TB of SSD storage.
On each node
- unplugged the HDD, swapped the Celeron G5900 for the i7-10700, connected the two SSDs, and installed the 32GB of RAM. Very simple.
- installed Proxmox on the first SSD and connected it to the network
- through the GUI, configured ZFS local storage and then grouped it into clustered storage; Craft Computing has a video on this (a rough API equivalent is sketched below)
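For anyone who'd rather script that step than click through the GUI, a rough equivalent over the API might look like this (a sketch using the third-party proxmoxer client; the host, credentials, storage ID, and pool name are all placeholders, not my actual config):

```python
# Sketch: register a ZFS pool as Proxmox storage via the REST API,
# roughly what Datacenter -> Storage -> Add -> ZFS does in the GUI.
# Requires: pip install proxmoxer requests. All values are placeholders.
from proxmoxer import ProxmoxAPI

pve = ProxmoxAPI("pve-node1.example", user="root@pam",
                 password="secret", verify_ssl=False)

pve.storage.create(
    storage="tank-zfs",        # storage ID, visible cluster-wide
    type="zfspool",            # ZFS-pool-backed storage
    pool="tank",               # the pool created on each node
    content="images,rootdir",  # VM disks and container roots
)
```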
Other tasks
- configured an NFS share on the QNAP NAS, exposed it to the cluster, and stored ISOs on it
- began converting VMs or rebuilding them
Left to do
- add 10G cards to the nodes (watching prices for the CX311 on eBay)
- likely upgrade switching for more SFP+ ports (from a Brocade 6450 to a 7250); I'm running out of them fast, as both my NAS and my primary desktop connect to the switch over fiber, along with the existing domain controller
> Good deal, but those S01s (just like their HP 290 cousins) are SFF (small form factor) desktops meant for home consumers, not TMM machines... not really applicable for this thread.

A machine similar in size (7.8 liters) is the Dell T3420. It can take two 2.5" HDDs, one NVMe SSD, and Xeon processors with ECC RAM.