- Jan 23, 2021
Patrick just reviewed the M80q Tiny Gen 3 and it is missing the PCIe slot: https://www.servethehome.com/lenovo-thinkcentre-m80q-tiny-gen-3-review-a-big-lesson-learned-intel/.
> Confirming that you're running this in a Lenovo Tiny, and it recognizes all the HDs as separate drives? Probably goes without saying, but I wanted to be sure your experience wasn't on a different machine before I pull the trigger on one of these cards.

I've been running that 4x M.2 + SATA SSD setup with TrueNAS without any issues. 60-61°C under normal use, but that's jammed right up at the top of a media cabinet with less than a cm of clearance between the Tinys and the shelf above, sitting on top of a TP-Link 10GbE switch, a KVM switch, and a Ruckus ICX7150. I'm going to relocate the Tinys one weekend to a shelf with more breathing room.

I just bought two P360s; each came with a T400 GPU and the riser (FRU 5C50W00910). Afraid that a PCIe 2.0 10GbE card would not be recognized, I bought a Mellanox MCX512A-ACAT. It is recognized correctly, but the temperature worries me: it sits between 70 and 80°C at idle. For those who have an AOC-STGN-I2S card, what temperatures do you see?
Hi guys, good evening!
First of all, I'm really pleased to be part of this forum. It's kinda hard to find a forum focused on mini PCs, clusters, etc. where I live. So, nice to meet you all!
So, I'm starting to plan my own mini cluster project, and at first I bought two NEC Mate Tinys (the Asian version of the ThinkCentre) - specifically the M710q with the Intel B250 chipset. The model I bought comes with an Intel Xeon E-2176M (6C/12T), but it doesn't have a PCIe slot, and because of that I ended up changing some steps of my project (I don't know yet if I'm going to sell these mini PCs). I've been building two M90q Gen 1 machines since last week.
I would like to know whether the M90q supports Intel Xeon CPUs, or whether I'd need to modify the BIOS to accept the CPU. For comparison, I've included the Xeon W-1290P vs Core i9-10900 spec sheet below to make it clear whether this is possible.
I know the Xeon has a higher TDP than the Core i9, but it comes with Intel vPro capability, uses the same LGA socket, and overall looks pretty similar to the i9. The one thing that gives me pause is the graphics Device ID. Do you guys think the B520 chipset allows this Xeon? Because on AliExpress I can buy the Xeon for less than the i9 (both the 10900 and the 10900K).
| Specification | Intel Xeon W-1290P | Intel Core i9-10900 |
|---|---|---|
| **Essentials** | | |
| Architecture codename | Comet Lake | Comet Lake |
| Launch date | 1 Apr 2020 | Q2'20 |
| Launch price (MSRP) | $539 | $439 - $449 |
| Place in performance rating | 272 | 384 |
| Processor number | W-1290P | i9-10900 |
| Series | Intel Xeon W Processor | 10th Generation Intel Core i9 Processors |
| Status | Launched | Launched |
| Vertical segment | Workstation | Desktop |
| **Performance** | | |
| 64 bit support | | |
| Base frequency | 3.70 GHz | 2.80 GHz |
| Bus speed | 8 GT/s | 8 GT/s |
| L1 cache | 1280 KB | 1280 KB |
| L2 cache | 2.5 MB | 2.5 MB |
| L3 cache | 20 MB | 20 MB |
| Manufacturing process technology | 14 nm | 14 nm |
| Maximum core temperature | 100°C | 100°C |
| Maximum frequency | 5.30 GHz | 5.20 GHz |
| Number of cores | 10 | 10 |
| Number of threads | 20 | 20 |
| **Memory** | | |
| Max memory channels | 2 | 2 |
| Maximum memory bandwidth | 45.8 GB/s | 45.8 GB/s |
| Maximum memory size | 128 GB | 128 GB |
| Supported memory types | DDR4-2933 | DDR4-2933 |
| **Graphics** | | |
| Device ID | 0x9BC6 | 0x9BC5 |
| Graphics base frequency | 350 MHz | 350 MHz |
| Graphics max dynamic frequency | 1.20 GHz | 1.20 GHz |
| Intel® Clear Video HD technology | | |
| Intel® Clear Video technology | | |
| Intel® InTru™ 3D technology | | |
| Intel® Quick Sync Video | | |
| Max video memory | 64 GB | 64 GB |
| Processor graphics | Intel UHD Graphics P630 | Intel UHD Graphics 630 |
| Number of displays supported | 3 | 3 |
| 4K resolution support | | |
| Max resolution over DisplayPort | 4096x2304@60Hz | 4096x2304@60Hz |
| Max resolution over eDP | 4096x2304@60Hz | 4096x2304@60Hz |
| Max resolution over HDMI 1.4 | 4096x2160@30Hz | 4096x2160@30Hz |
| DirectX | 12 | 12 |
| OpenGL | 4.5 | 4.5 |
| **Compatibility** | | |
| Configurable TDP-down | 95 Watt | |
| Configurable TDP-down frequency | 3.30 GHz | |
| Max number of CPUs in a configuration | 1 | 1 |
| Package size | 37.5mm x 37.5mm | 37.5mm x 37.5mm |
| Sockets supported | FCLGA1200 | FCLGA1200 |
| Thermal Design Power (TDP) | 125 Watt | 65 Watt |
| Thermal solution | PCG 2015D | PCG 2015C |
| **Peripherals** | | |
| Max number of PCIe lanes | 16 | 16 |
| PCI Express revision | 3.0 | 3.0 |
| PCIe configurations | Up to 1x16, 2x8, 1x8+2x4 | Up to 1x16, 2x8, 1x8+2x4 |
| Scalability | 1S Only | 1S Only |
| **Security & Reliability** | | |
| Execute Disable Bit (EDB) | | |
| Intel® Identity Protection technology | | |
| Intel® OS Guard | | |
| Intel® Secure Key technology | | |
| Intel® Software Guard Extensions (Intel® SGX) | | |
| Intel® Trusted Execution technology (TXT) | | |
| Secure Boot | | |
| **Advanced Technologies** | | |
| Enhanced Intel SpeedStep® technology | | |
| Idle states | | |
| Instruction set extensions | Intel SSE4.1, Intel SSE4.2, Intel AVX2 | Intel SSE4.1, Intel SSE4.2, Intel AVX2 |
| Intel 64 | | |
| Intel® AES New Instructions | | |
| Intel® Hyper-Threading technology | | |
| Intel® Optane™ Memory supported | | |
| Intel® Thermal Velocity Boost | | |
| Intel® Turbo Boost technology | | |
| Thermal monitoring | | |
| Intel® Stable Image Platform Program (SIPP) | | |
| Intel® vPro™ Platform Eligibility | | |
| **Virtualization** | | |
| Intel® Virtualization Technology (VT-x) | | |
| Intel® Virtualization Technology for Directed I/O (VT-d) | | |
| Intel® VT-x with Extended Page Tables (EPT) | | |
> There is no BIOS mod after the M710 and M910, as Lenovo implemented BIOS Guard in the BIOS. We cannot modify the microcode even with a hardware programmer. I don't know of a way to disable BIOS Guard yet - if anyone knows, please let me know.

Thanks for sharing this info, dude. I didn't know that Lenovo had blocked any microcode modification via BIOS Guard.
I saw a vendor in China selling the M710q who could mod the VRM, add a second NVMe slot, and even add the extra PCIe slot, with a modded BIOS supporting all of it. The modding fee was around 25 USD on top of the device cost. Or buy an M910x directly (recommended).
> Do you know if the Xeon W-1290 would be compatible with the M90q? It would be very interesting to know, because it would open the door to a potential cluster.
>
> PS: My two NEC M710q units have arrived and they look so cool! But I've already put them up for sale.

I'm not aware of anyone who has tried Xeons. The boxes can get hot, and the premium of adding a Xeon over, say, an i5 doesn't make sense. For 65W CPUs, strictly speaking, you should use one of the models designed for them, e.g. the M90q versions, since they have uprated heatsinks etc. You can do the heatsink replacement yourself - I made some notes in the original post - but it can become expensive quickly.
> I'm not aware of anyone who has tried Xeons. The boxes can get hot, and the premium of adding a Xeon over, say, an i5 doesn't make sense...

Yes, the W-1290P is 95W, and because of that I'm looking at other SKUs like the W-1290 or W-1270. The Core i7-10700 is on my list, as well as the i9-10900.
> Yes, the W-1290P is 95W, and because of that I'm looking at other SKUs like the W-1290 or W-1270. The Core i7-10700 is on my list, as well as the i9-10900.

It depends on your use case, but I've had no issues building clusters of various types (Proxmox, k8s, XCP-ng, vSphere, LXD, etc.) on i5-based Tinys. Since power here in the UK is expensive, I'm working towards getting the highest possible density of use out of my servers so I can keep most of them off until I need them - to turn up a k8s cluster, say.
> Since power here in the UK is expensive, I'm working towards getting the highest possible density of use out of my servers so I can keep most of them off until I need them.

I totally agree with you, Parallax. I searched a lot before making this decision because, as you said, CPU prices are very high, and because of that the workloads in my future cluster may not be so aggressive in terms of CPU utilisation.
I was finding that spinning up VMs forces you to allocate a lot of cores, drives, and RAM, but actual utilisation of each VM is quite low, so you end up buying more resources than you actually need and the intensity of resource utilisation is poor.
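To put rough numbers on that over-allocation effect, here's a small illustrative sketch. The VM names and figures below are made up for the example, not measurements from anyone's lab - the point is just that summed allocations far exceed summed usage:

```python
# Illustrative only: hypothetical per-VM allocations vs. observed usage.
vms = [
    # (name, vCPUs allocated, avg vCPUs busy, GiB RAM allocated, GiB RAM used)
    ("router",   2, 0.1,  4, 1.0),
    ("nas",      4, 0.5,  8, 3.0),
    ("k8s-node", 4, 0.4, 16, 4.0),
    ("dev-box",  4, 0.2,  8, 2.0),
]

cpu_alloc = sum(v[1] for v in vms)   # what you must buy
cpu_used  = sum(v[2] for v in vms)   # what you actually use
ram_alloc = sum(v[3] for v in vms)
ram_used  = sum(v[4] for v in vms)

print(f"CPU: {cpu_used}/{cpu_alloc} vCPUs busy "
      f"({100 * cpu_used / cpu_alloc:.0f}% of allocation)")
print(f"RAM: {ram_used}/{ram_alloc} GiB used "
      f"({100 * ram_used / ram_alloc:.0f}% of allocation)")
```

With these example numbers the host has to provision 14 vCPUs and 36 GiB to satisfy the allocations, while average usage sits around a tenth of that - which is the "poor intensity of utilisation" described above.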
One answer is Docker, but not everything runs efficiently in a Docker container, and containers are supposed to be ephemeral rather than long-running workloads. It's also quite messy from a security perspective: I ended up running a container registry to find and fix vulnerabilities, and then you're effectively regression-testing the patching of pre-packaged apps, which is not what I want to spend my homelab time doing. Every container uses a different base image and relies on different versions of its dependencies.
So, on to Linux containers (LXCs). You get most of the benefits of VMs with less overhead and without the need to pre-allocate CPU, RAM, or drive space. My long-running workloads are fairly idle most of the time with spikes of activity, and that's a good model for an LXC. Sharing the kernel becomes increasingly efficient versus VMs the more workloads you run.
So with that in mind I've tried various hypervisors, and for now I'm on LXD. I can focus on one Linux distro for the majority of my workloads, and I get most of the destroy-and-replace benefit of a Docker container but with centralised vulnerability management. If I need another distro for a project, I can spin one up in a couple of seconds - much faster than Ansible or turning up a VM. Moving off Docker also brought a considerable disk performance benefit (I run ZFS). I have more than 20 LXCs running on one server; average CPU utilisation is around 15% and RAM is 19%, which is nothing like the ideal level of ~80%, but I have a lot of other projects I want to add, so it will improve. I get a granular view of CPU, memory, and disk per project or workload.
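The "no pre-allocation" point above is what LXD's resource limits give you: caps are ceilings rather than reservations. A minimal profile sketch - the profile name, description, and figures here are just examples, not a recommended configuration:

```yaml
# Example LXD profile: limits are ceilings, not reservations, so idle
# containers don't tie up cores or RAM the way a VM allocation does.
name: homelab-default
description: Soft resource caps for long-running LXCs
config:
  limits.cpu: "2"              # at most 2 cores' worth of CPU time
  limits.memory: 2GiB          # memory ceiling
  limits.memory.enforce: soft  # allow bursting when the host has free RAM
devices: {}
```

A profile like this can be created with `lxc profile create homelab-default` and then edited via `lxc profile edit homelab-default`, after which containers launched with it share the host's spare capacity instead of carving it up in advance.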
This is a long way of asking: do you really need that much CPU? I ran an i9-9900T for a while in an M920q because I needed cores to allocate to VMs, but it was overkill almost all the time. The CPU was expensive to buy, expensive to run, and resale value was poor afterwards. CPUs today offer an enormous amount of compute - I'm on non-T i5-11500s now and it takes a hell of a lot to max one out, and even then it's usually only for a few seconds. Would a midrange consumer CPU not suit? Otherwise it seems to me you're trying to turn a Tiny into a shrunken rackmount, and that isn't the benefit of the Tiny format.
> I was just curious: has anyone using a 4x 2.5GbE network card found a baffle that works? I purchased one from Superbuy for my 920q but unfortunately it doesn't fit.

What card are you using?
> Semi-short-time lurker, short-time member here:
>
> I recently got an M90q off eBay with the intention of turning it into an ESXi box with pfSense and a few other things. The OP was quite comprehensive in making sure I got the right model, and I bought the PCIe slot and riser separately. However, there are a couple of references mentioning that the maximum length of a PCIe card should be "150mm or less". I can report that I was able to fit a Draytek Vigor 132f card, which is 155mm long, and there is still room left over near the front. The total length I measured is actually 166mm from back to front:
>
> View attachment 28878

I did measure some 18 months ago on an earlier Tiny, so it's possible they've rearranged the front antenna assembly to give a little more room. We've had people with issues with cards smaller than yours, and some who had to take off (and replace afterwards) the front fascia to fit cards in, so I did try to be a little conservative (but not 16mm conservative!).
> What card are you using?

Not the poster, but I do link an example 4x 2.5GbE card in the first post.