
Lenovo Thinkcentre/ThinkStation Tiny (Project TinyMiniMicro) Reference Thread


ajb

New Member
Oct 26, 2024
1
0
1
I posted this over on Reddit, but then it occurred to me that this would be a much more relevant place. Wondering if anyone else with a Lenovo P360 Tiny might be able to help?

I bought two Lenovo P360 Tinys to upgrade my home lab. Both have an Intel I219-LM (17) NIC, which I'm playing around with while I wait for a couple of X550-T2 cards to arrive. I've installed ESXi 8.0U3.

When upgrading packages in VMs running both Fedora 40 and AlmaLinux 9, I noticed the upgrades were timing out due to low download speeds of under 1 MB/s. I ran some speed tests on the command line, which confirmed that the download speed was repeatedly 0.X Mbit/s, while the upload speed was around 30 Mbit/s.

I did the same tests on my non-VM workstations (same switch upstream) and got around 70-80 Mbit/s.

I ran the esxtop network page while testing and noticed I was getting around 10-15% dropped receive packets on the VMs. I only ever tested one at a time so it wasn't an overall volume/load issue.

I ran the same speed tests on the ESXi host itself and noticed that the host itself was suffering from the same terrible download speeds.

I grabbed a USB-C network adapter I had lying around, moved the ESXi host management interface onto the USB NIC, and ran the same speed test. I instantly got around 9 MB/s on download, which seems much more in line with my 70 Mbit connection. Reconfigured back to the built-in I219-LM, and the test downloads went back to the slow speeds again.

I have flashed both P360s to the latest BIOS. They both exhibit identical behavior, and from my troubleshooting all I can think is that it's an I219-LM driver/NIC issue. Has anyone experienced this, or does anyone have any other troubleshooting steps they would recommend? I have disabled vPro AMT on the interface and it made no difference either. The NIC is on the HW compatibility list for ESXi and seems to be using the correct driver. Any help would be appreciated!
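In case it's useful for anyone comparing notes, these are roughly the host-side counters and settings I've been eyeballing (a rough sketch only - vmnic0 is the NIC name on my hosts, and some sub-commands may vary between ESXi builds):

# receive/transmit packet, error and drop counters for the onboard NIC
esxcli network nic stats get -n vmnic0
# current RX/TX ring sizes (undersized receive rings can show up as drops under load)
esxcli network nic ring current get -n vmnic0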

I grabbed the custom Lenovo ESXi installer as recommended by someone on Reddit, but unfortunately this made no difference, and it used the same ne1000 driver for the I219-LM. I've heard of people previously having success downgrading to the e1000e driver, but this seems to be no longer possible.
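For reference, the way people used to try the e1000e fallback on older ESXi releases was to disable the native ne1000 module and reboot (hedged sketch only - as far as I can tell the legacy e1000e driver was removed along with vmklinux in ESXi 7, so on 8.0U3 I'd expect this to just leave the NIC unclaimed):

# disable the native ne1000 module (takes effect after a reboot)
esxcli system module set --module=ne1000 --enabled=false
# after the reboot, check module state and what (if anything) claims the NIC
esxcli system module list | grep ne1000
esxcli network nic list
# re-enable if the NIC ends up with no driver at all
esxcli system module set --module=ne1000 --enabled=true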

I installed Proxmox and tested, and immediately got around 70 Mbit/s; it identifies the NIC driver as e1000e. Not sure if this is isolated to my two machines, or if other people are having this problem or have found a fix on ESXi?

esxcli network nic list

Name PCI Device Driver Admin Status Link Status Speed Duplex MAC Address MTU Description
------ ------------ ------ ------------ ----------- ----- ------ ----------------- ---- -----------
vmnic0 0000:00:1f.6 ne1000 Up Up 1000 Full e8:80:88:d7:ef:bb 1500 Intel Corporation Ethernet Connection (17) I219-LM


esxcli network nic get -n vmnic0
Advertised Auto Negotiation: true
Advertised Link Modes: Auto, 10BaseT/Half, 100BaseT/Half, 10BaseT/Full, 100BaseT/Full, 1000BaseT/Full
Auto Negotiation: false
Backing DPUId: N/A
Cable Type: Twisted Pair
Current Message Level: -1
Driver Info:
Bus Info: 0000:00:1f:6
Driver: ne1000
Firmware Version: 2.3-4
Version: 0.9.2
Link Detected: true
Link Status: Up
Name: vmnic0
PHYAddress: 0
Pause Autonegotiate: false
Pause RX: false
Pause TX: false
Supported Ports: TP
Supports Auto Negotiation: true
Supports Pause: false
Supports Wakeon: true
Transceiver:
Virtual Address: 00:50:56:5c:28:df
Wakeon: MagicPacket(tm)

lspci -v | grep -A1 -i ethernet
0000:00:1f.6 Network controller Ethernet controller: Intel Corporation Ethernet Connection (17) I219-LM Class 0200: 8086:1a1c


Meanwhile on Proxmox...

lspci -v | grep -A1 -i ethernet

00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (17) I219-LM (rev 11)
DeviceName: Onboard - Ethernet
Subsystem: Lenovo Ethernet Connection (17) I219-LM
Flags: bus master, fast devsel, latency 0, IRQ 124, IOMMU group 10

00:1f.6 0200: 8086:1a1c (rev 11)
DeviceName: Onboard - Ethernet
Subsystem: 17aa:330e
Kernel driver in use: e1000e
Kernel modules: e1000e
 

realg

New Member
Oct 27, 2024
4
0
1
Hi all, apologies if this has been answered already in this thread; I did try my best to search.

Does the M90q Gen 3 support PCIe bifurcation? I'd like to run two M.2 NVMe drives on the PCIe riser, and my understanding is that the cheap PCIe-to-M.2 adapters require bifurcation support.

Alternatively, there seem to be more expensive adapters (like the StarTech PEX8M2E2) that claim not to need bifurcation support. Can anyone confirm if this would work in an M90q?
 

senso

New Member
Jul 17, 2022
28
18
3
I'm thinking about expanding my janky home server, and I would like to know whether a Fujitsu LSI SAS3008 (9300-8i) card works in an M720q. It's basically an LSI 9300 card, so I expect it to work fine-ish; the seller pre-flashes it to IT mode, so I expect it to be plug and play in OpenMediaVault.
Basically, I would like to know if anyone has had any issues running an LSI SAS HBA card in these systems.
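If it does arrive, I guess I can at least sanity-check that it's detected and that the mpt3sas driver picks it up before pointing OMV at it (a rough sketch, assuming a Debian-based OMV install):

# confirm the HBA shows up on the PCIe bus
lspci -nn | grep -i -E 'lsi|broadcom|sas3008'
# confirm the mpt3sas driver bound to it and enumerated the attached drives
dmesg | grep -i mpt3sas
lsblk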

Best regards.
 

WarlockSyno

Member
Jul 8, 2023
82
93
18
I was thinking of possibly getting an M920Q / P330 and adding a GPU to it.

However I've read here - "using 9th gen Intel and below you have to have a T model CPU to get access to the pcie slot"

Does anyone know if that is correct?

Edit - that is correct, it's mentioned in the review here @ 12:20:
More than likely he's referring to the overall power limit of the unit.

Are you wanting to put a non-T one in? There are people who have run an i9 T model in theirs + a GPU:
https://www.reddit.com/r/sffpc/comments/1c835rj
 

senso

New Member
Jul 17, 2022
28
18
3
I was thinking of possibly getting an M920Q / P330 and adding a GPU to it.

However I've read here - "using 9th gen Intel and below you have to have a T model CPU to get access to the pcie slot"

Does anyone know if that is correct?

Edit - that is correct, it's mentioned in the review here @ 12:20:
I'm running an i7-9700 (non-T) on my M720q; all the CPUs have the same PCIe capabilities, the only limiting factor is if your model has the slot for the PCIe riser or not, as far as I know.
 

Prophes0r

Active Member
Sep 23, 2023
101
101
43
East Coast, USA
...only limiting factor is if your model has the slot for the PCIe riser or not, as far as I know.
And even that is only technically a limit...

The CPU/Chipset is wired for the slot.
There are traces on all the motherboards for the slot.
The BIOS doesn't intentionally disable those PCIe lanes.

If you add the passive components to a board and put a slot there...

I'm working on this for my stack of M710Qs.
Hopefully I'll be able to release an open-source board for cheap that will solder right into the thru-holes of a board without a slot.
(doing the passives will still be beyond many people though...)
No proprietary slot + riser jank...
 

Parallax

Active Member
Nov 8, 2020
469
233
43
London, UK
Hi! Is there any way to place an external SSD using the SATA port? E.g. a longer ZIF connector?
Have not seen this done. No particular reason it shouldn't work, unless cable length becomes a problem, but I'm not sure why you'd want to - the USB ports are a much easier way; just attach an enclosure and power it (if needed).
 

Parallax

Active Member
Nov 8, 2020
469
233
43
London, UK
Does the M90q Gen 3 support PCIe bifurcation? I'd like to run two M.2 NVMe drives on the PCIe riser, and my understanding is that the cheap PCIe-to-M.2 adapters require bifurcation support.
Natively, no, bifurcation is not supported.

Alternatively, there seem to be more expensive adapters (like the StarTech PEX8M2E2) that claim not to need bifurcation support. Can anyone confirm if this would work in an M90q?
Should do, since after the riser is installed it's just a standard PCIe port. I expect you will have heat issues to contend with though if you put a couple of NVMe drives in.

I think if you need a lot of storage you would be better off just using larger M.2 drives - 8TB models are now available and yes, I know they're expensive - or a cheaper way may be to use the ZIF SATA port for a shucked 2.5" drive to boot off (leaving you enough space for a PCIe card) and then use the two M.2 slots on the underside. This is what I do, leaving space for a dual 10GbE card. This also runs fairly hot though.
 

Parallax

Active Member
Nov 8, 2020
469
233
43
London, UK
However I've read here - "using 9th gen Intel and below you have to have a T model CPU to get access to the pcie slot"

Does anyone know if that is correct?

Edit - that is correct, it's mentioned in the review here @ 12:20:
Not correct; T and non-T CPUs are no different in terms of "getting access" to the slot.

Perhaps the confusion is that smaller power supplies (e.g. 90W) will not be happy if you put in a 65W TDP CPU anyway, let alone if you then add a 50+W GPU. And I don't know if earlier models like the M920q are able to recognise the higher-power (170W, 230W, etc.) units as supporting greater power draws.
 

realg

New Member
Oct 27, 2024
4
0
1
Yeah, thanks @Parallax, that's sound advice.

I'm debating the shucked SSD approach vs using the Wi-Fi M.2 slot with a cheap adapter to stick a boot drive there instead. Does anyone know if that slot on a P360 is PCIe x1 or x2? Either way, surely it'd be faster than a SATA SSD, and probably for equivalent cost? Just that I'd have to suffer through slow reboots waiting for PXE to time out as a trade-off, is that right?
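If I do go the Wi-Fi-slot route, I suppose I could confirm the negotiated width from a Linux live USB with something like this (sketch only - the 00:14.3 address is just a placeholder, I'd look the real one up with plain lspci first):

# find the PCI address of whatever is sitting in the Wi-Fi slot
lspci
# then compare the advertised vs negotiated link width and speed (placeholder address)
sudo lspci -vv -s 00:14.3 | grep -E 'LnkCap|LnkSta'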
 

Parallax

Active Member
Nov 8, 2020
469
233
43
London, UK
Definitely faster than a SATA SSD. The slot should be x1, but it's at least PCIe 3.0, I would assume?

With a 10GbE network you could also perhaps PXE boot off your NAS at a decent speed?
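Back-of-the-envelope, assuming the slot really is Gen 3 and x1:

PCIe 3.0 x1: 8 GT/s with 128b/130b encoding ≈ 985 MB/s per lane
SATA III: 6 Gb/s with 8b/10b encoding = 600 MB/s ceiling (more like ~550 MB/s in practice)

So even a single Gen 3 lane is roughly double what a SATA SSD can manage.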
 

Prophes0r

Active Member
Sep 23, 2023
101
101
43
East Coast, USA
...I'm debating the shucked SSD approach vs using the Wi-Fi M.2 slot with a cheap adapter to stick a boot drive there instead...
In my limited experience, the x1 slot provides very little power - well below what you might find elsewhere.
It does meet the minimum power requirements of the standard.

I have a low-power (3W max) SSD that seems to work with my adapter.
My 6W and 10W SSDs do not.
My one-lane SATA adapter also works fine, so I guess it draws little enough power.

My A+E-key to M-key adapter has unpopulated pads for external 3.3V power, and there are certainly places I could grab that from on the board if I cared, but as it stands it isn't really a useful NVMe boot option for me.
 

realg

New Member
Oct 27, 2024
4
0
1
With a 10GbE network you could also perhaps PXE boot off your NAS at a decent speed?
Believe it or not, this M90q might be my NAS.

I need to replace my aging HP MicroServer Gen8, and I'm optimizing for footprint/noise more than anything else (small London flat). Ideally I want to downsize my whole homelab/NAS into a single 1U shelf... hence me trying to figure out how to shove as much NVMe storage as possible into this tiny box.

Weirdly, 4TB SATA SSDs are basically the same cost as NVMe in the UK, so I don't have a strong incentive to run an external SSD enclosure if I can get away without one. Yes, 4TB NVMe/SATA SSDs are twice the cost of HDDs, but I'm willing to eat that cost for the noise/size trade-off.
 

crazyASD

New Member
May 16, 2024
1
0
1
Has anyone tried to install a QTB0 Comet Lake P0 (Core i9-10900T) 35W processor in a Lenovo ThinkCentre M70q/M80q Tiny?
 

Uptheiron

New Member
Mar 12, 2023
3
1
3
Hi Folks.

Certified TMM nut here... I have a pile of 18 Lenovos ranging from the M700 to the M710q to the M720q to the P330. I will have to do a write-up of my home lab at some point.

A couple of random questions, as I've been pulling my hair out today... I've had a 4-node Nutanix CE cluster running on my M700s just fine (albeit a tad slow), so I have been attempting to build a new cluster on four M720qs (with i5-9400Ts, if it's relevant), but I've hit a problem or two...

The first node built just fine. The other three fail because the CVM won't boot, as the video card isn't passing through (or so it seems based on the error). I see two differences between the working node and the failing nodes; wondering if anyone has anything else to add and whether they can help...

The differences I see are:

The BIOS is older on the failing nodes; however, I can't update it. The update process starts, but then the PC fails to shut down - the BIOS upgrade triggers the shutdown but fails to power-cycle the device. It just hangs with power on, no video output, and no response from the keyboard (the Caps Lock light doesn't come on). Removing the power and rebooting recovers the device, but the BIOS doesn't get upgraded.

The second difference is that the CPU microcode is older on the non-working machines. Not sure if that's relevant.

Ring any bells for anyone?
 

DrMrFancyPants

New Member
Jul 27, 2024
5
0
1
I've played with a few of these - the M920q, M920x, and M90p v3. None of them have ReBAR, and nothing I tried to mod the BIOS worked.
Does anybody know if any of these machines come with ReBAR (Resizable BAR) enabled?
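For anyone wanting to check their own unit, this is roughly how I verify it from Linux (the 01:00.0 address is just an example - substitute your GPU's address from plain lspci):

# find the GPU's PCI address
lspci | grep -i vga
# look for the Resizable BAR capability and the currently assigned BAR size
sudo lspci -vvv -s 01:00.0 | grep -i -A4 'resizable bar'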
 

evil_santa

Member
Apr 16, 2023
71
31
18
I've played with a few of these - the M920q, M920x, and M90p v3. None of them have ReBAR, and nothing I tried to mod the BIOS worked.
Does anybody know if any of these machines come with ReBAR (Resizable BAR) enabled?
Were there any problems modding the BIOS? How did you do it?