HP t740 Thin Client as an HP Microserver Gen7 Upgrade (finally!)


WANg

Well-Known Member
Jun 10, 2018
1,302
967
113
46
New York, NY
I tested an Intel X710-T4 10GbE Ethernet card.

It gets extremely hot after a few minutes of use; it is not usable without an active cooling solution.

I also checked an Intel X540-T2 - it does not run as hot as the X710, so it seems like a better fit.


See my photo above regarding the use of a PCIe x16 extender plus a USB-powered cooling fan for testing purposes. I try not to put anything in a machine unless it's proven not to build up heat without active cooling (something I learned working with other thin clients in the past).
 

WANg

Well-Known Member
Jun 10, 2018
1,302
967
113
46
New York, NY
It would be good to get some info from HP about the heat tolerance of the t740.
You won't get any (or at least, it won't be as direct as the power figures for the t620 Plus/t730). HP seems to have de-emphasized the idea of running 6-8 screens on a single thin client with the t740 and chose instead to talk about installing an AT2914 gigabit fiber NIC, and those don't put out much heat. From what I remember, the E9173 discrete card option from AMD uses up to 35 watts, but it comes with a slimline fan.

The issue isn't really the heat, as long as it goes somewhere else - the Solarflare Flareon works just fine as long as you put a fan near it, or if there is nothing else nearby for it to cook (hence the PCIe extender). The issue is that unless natural convection can carry the heat somewhere else, it'll just stay around, and thin client designs generally have only a single blower for the SoC heatsink. That's why you would need to figure out the vertical clearance between the heatsink on the card and the RAM shield underneath, and hack up a fan or blower to install inside the enclosure.

The thin client was never really designed for someone to put a high-powered network card in it and run it 24/7 like a server, and those network cards were not designed to be installed in a chassis without constant airflow, like a thin client.
 

PD_ZFS-User

Member
Jul 13, 2018
37
11
8
Update:

The machine miraculously showed up at my doorstep early this afternoon. Turns out that the machine was shipped from Nassau County, which is about 40 miles away from my home in NYC - the estimate from Unionized Package Smashers (UPS) was a bit pessimistic. So yes, the Great Horned Owl (the code name for the Ryzen embedded V1000) has landed.

Here are some early observations and photos:

- The power supply lead has been switched from the HP 7.4mm (black ring tip) to the 4.5mm (blue ring tip), so if you are coming from the t620/t620 Plus/t630/t730 machines and your t740 does not ship with a power brick, you'll want to pick up an adapter for it. The adapters are all generic (HP does not seem to make a 7.4mm-to-4.5mm adapter, just the other way around).

Interesting side note: HP and Dell seem to use the same dimensions and general polarity for their chargers (7.4mm for the old stuff, 4.5mm for the newer stuff, center-positive polarity); however, their voltages are off by about 0.5V in general - they are theoretically interchangeable, but I am not going to go around plugging my HP EliteBooks into Dell PA12 power bricks just to see what happens. If you want a more future-friendly way to deal with multiple brick types, there are USB-PD-to-barrel adapters in various tip sizes for sale on eBay, just for operational flexibility.

Here's the power brick (HP model 710473-001), which uses the HP 4.5mm x 3.0mm center-positive tip. Below is a comparison between the old and new power leads.





Some seller silliness is evident here - the device is advertised as "refurbished but in perfect cosmetic condition"...except they bundled the wrong stand for it. The stand they shipped was for the t630, which is not the same. The newer model uses a t640 stand, which sits a bit taller and should be octagonal in shape. Not really a big deal - either they can send me one, or I'll buy one later. Still, I paid only 400 USD (including taxes and shipping). The cheapest I've seen it go for new is between 650 and 750 USD, so that's some savings for you right there, and in my opinion, compared to the 200-300 USD pricing on eBay for the t730, this is definitely worth the money at that price point. Of course, we do have to keep in mind that the t740 will be considered "current" for the next 4-5 years (much like the t730 when it came out in 2015), so don't expect a replacement any time soon.


The machine serial number is on a latch on the bottom (which is also where you mount the VESA100 stand, or where it rests if you prefer the device to sit horizontally). I plan on having this one sit horizontally once I get the right stand.



One of my pet peeves is that due to the rounder contour of the t740, the power lead is at a 10 degree angle from the vertical, which looks really, really odd.



Opening it isn't difficult, as the instructions are printed on the inside edge of the top cover (yes, they actually made the bottom stationary and the top removable...which is the opposite of the t730).



So what does it look like inside?



The t620/t630 are DDR3/DDR3L units, while the t640/t740 take DDR4 notebook SODIMMs. Looks like the machine ships with two 4GB DIMMs as starters, and those will have to be replaced. I have questions about whether the RAM limit is 32GB (reported for the Great Horned Owl platform) or 64GB (common to Raven Ridge machines). The seller omitted the RAM shield, which I am not all that happy about (it's a passive heatsink/EM shield, and I don't see an FRU part listing, so I can't order a third-party replacement).
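If you want to see what the firmware itself claims before buying DIMMs, the SMBIOS tables can be read from any Linux live environment - a quick sketch below. Keep in mind the advertised maximum is sometimes conservative, so treat it as a hint rather than a hard limit.

Code:
# DMI type 16 (Physical Memory Array) lists the advertised RAM ceiling
dmidecode -t 16 | grep -i "Maximum Capacity"
# DMI type 17 (Memory Device) shows what is populated in each SODIMM slot
dmidecode -t 17 | grep -E "Size|Speed|Locator"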

I am expecting significantly better performance (compared to the GX-415GA/GX-420CA on the t620 or the RX-427BB on the t730), as rough estimates (based on Passmark) put this machine roughly equal to a Ryzen 5 2400GE. We'll have to see about that.

The boot media is on the NVMe/eMMC port. You'd figure HP would have just bought some cheap M.2 SATA SSDs. But nope...



This boot media unit (the Mothim NVMe eMMC) looks custom made. Here it is next to my SATA SSD (Intel 600p?). Remember, Key B+M is SATA while Key M is NVMe - the slot won't tell you, so you'll still need to pay attention!



For those who are wondering about the t740's abilities, here are the earliest lspci -vv, dmidecode and dmesg dumps. Note that these were taken from within PartedMagic on the initial v1.04 BIOS.

Quick summary for those who are not about to dig through the logs -
Is the hardware SR-IOV capable? Yes (but with a caveat to be covered later).
Can it boot NVMe? Yes.
What does power consumption look like when you spin VMs up? Still needs to be tested.
What about noise? Rough estimates based on spinning the machine up to 80°C via stress-ng say that the t740 is about 20% noisier than the t730. I am not sure whether that is due to the missing stand, the missing DIMM cover or something else. More testing is needed (the rough approach is sketched below).
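Roughly the kind of thing used to spin it up, assuming stress-ng and lm-sensors are installed in the live environment (the exact flags and the k10temp sensor labels here are assumptions, not a record of the actual run):

Code:
# Load every CPU thread for ten minutes (--cpu 0 = autodetect thread count)
stress-ng --cpu 0 --timeout 600s &
# Watch the Zen die temperature (reported by the k10temp driver) while the load runs
watch -n 1 sensors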

Here's an lstopo graph I made of the machine after updating the BIOS to the latest version, upgrading Proxmox to 6.1 (latest) and getting some stuff...enabled. The Vega 8 used in the thin client is allocated 1GB by default, hence the RAM count of 6866MB.

@arglebargle, remember the fun of trying to pass 7 Solarflare VFs into the t730...? Here's the t740 passing 254 VFs (that's the max of 127 VFs per port, across 2 ports).
Note: I have NO idea whether this is working or not. I'll need to fire up a VM to see how this is being consumed.
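For reference, the usual way to spawn that many VFs under Proxmox/Linux is the sysfs knob on each physical function; a minimal sketch, with the interface names as placeholders (check yours with ip link):

Code:
# See how many VFs each PF advertises, then ask the driver to create them (per port)
cat /sys/class/net/enp1s0f0/device/sriov_totalvfs
echo 127 > /sys/class/net/enp1s0f0/device/sriov_numvfs
echo 127 > /sys/class/net/enp1s0f1/device/sriov_numvfs
# Confirm the virtual functions showed up on the PCIe bus
lspci | grep -ci "Virtual Function"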



...more to come later.
WANg,

Thanks for investigating and documenting the t740 so well.

I received my HP t740 from CDW Outlet (HP product number 7NN06AT#ABA) and it was also missing the RAM shield. I wonder if HP has stopped installing them during manufacturing. There is a note on the service diagram on the inside of the case lid saying 'Only for shielding can remove' regarding the RAM shield. I did find the part number in the disassembly guide, 6053B1718101, but I've had no luck searching for it. Let me know if you find a source for this part or can confirm that it is not needed/included in HP's current builds.

A second part that was either missing or not included with my t740 was the VESA BRKT (bracket), part 6053B1655301, which the disassembly guide shows as stored under the VESA cover. Did your t740 include the VESA bracket? I'd also like to know if you discover a source for this part. So far I've had no luck searching HP's website or the internet in general.

Again, thanks for sharing all your research.

PD
 

WANg

Well-Known Member
Jun 10, 2018
1,302
967
113
46
New York, NY
WANg,

Thanks for investigating and documenting the t740 so well.

I received my HP t740 from CDW Outlet (HP product number 7NN06AT#ABA) and it was also missing the RAM shield. I wonder if HP has stopped installing them during manufacturing. There is a note on the service diagram on the inside of the case lid saying 'Only for shielding can remove' regarding the RAM shield. I did find the part number in the disassembly guide, 6053B1718101, but I've had no luck searching for it. Let me know if you find a source for this part or can confirm that it is not needed/included in HP's current builds.

A second part that was either missing or not included with my t740 was the VESA BRKT (bracket), part 6053B1655301, which the disassembly guide shows as stored under the VESA cover. Did your t740 include the VESA bracket? I'd also like to know if you discover a source for this part. So far I've had no luck searching HP's website or the internet in general.

Again, thanks for sharing all your research.

PD
Oh, you looked at the Pichu disassembly guide, huh.

Several items -

a) The part numbers quoted in the document are factory numbers - not really used by HP internally or at retail. Knowing them...is not that useful.

b) My 7NN06AT didn't come with the RAM shield - I talked to my vendor (a local eBay seller) and he was gracious enough to send me one. I have no idea where he sourced it from (but judging from the sticky tack on top of the RAM shield...it's probably from a t640). AFAIK, only certain SKUs get them (probably the ones with the E9173 PCIe graphics card). I didn't notice much difference in terms of cooling or performance. For an idea of what it looks like, see the PCIe extender photo above - it's actually right under the PCIe extender cable.

c) No, it doesn't come with a VESA bracket (that's only offered with certain SKUs) - it was supposed to come with the octagonal stand, keyboard, mouse and power supply.

 

PD_ZFS-User

Member
Jul 13, 2018
37
11
8
Oh, you looked at the Pichu disassembly guide, huh.

Several items -

a) The part numbers quoted in the document are factory numbers - not really used by HP internally or at retail. Knowing them...is not that useful.

b) My 7NN06AT didn't come with the RAM shield - I talked to my vendor (a local eBay seller) and he was gracious enough to send me one. I have no idea where he sourced it from (but judging from the sticky tack on top of the RAM shield...it's probably from a t640). AFAIK, only certain SKUs get them (probably the ones with the E9173 PCIe graphics card). I didn't notice much difference in terms of cooling or performance. For an idea of what it looks like, see the PCIe extender photo above - it's actually right under the PCIe extender cable.

c) No, it doesn't come with a VESA bracket (that's only offered with certain SKUs) - it was supposed to come with the octagonal stand, keyboard, mouse and power supply.

Thanks for answering my questions. Good to know the part #s in the disassembly guide are mostly useless. I did get the keyboard, mouse and stand with my 7NN06AT from CDW Outlet, so it looks like I got everything intended. I'll figure out a different bracket if I ever use the VESA mounting points.

PD
 

WANg

Well-Known Member
Jun 10, 2018
1,302
967
113
46
New York, NY
Okay, more fun and exciting information for the t740 -

So I ended up buying an X520-DA2 instead of the i350-T4 (since I use 10/40 SFP+/QSFP networking in my home), and as such, I got some type of SRIOV working:

Screen Shot 2020-05-21 at 4.38.07 PM.png

Unfortunately, I am not sure if the SRIOV stuff is actually sane, since I noticed that the virtual function looks funny, namely:

Screen Shot 2020-05-21 at 4.38.35 PM.png

The subdevice/subfunction are all reported as 0xffff.

Checking the logs:

Code:
[root@dash:~] grep 2020-05-21 /var/log/vmkernel.log | grep 0000:01
2020-05-21T19:05:01.580Z cpu6:65926)PCI: 1254: 0000:01:00.1 named 'vmnic2' (was '')
2020-05-21T19:05:01.581Z cpu6:65926)PCI: 1254: 0000:01:00.0 named 'vmnic0' (was '')
2020-05-21T19:05:03.166Z cpu4:65926)PCI: 1499: 0000:01:00.0: intPIN A intLine 11 irq 11 vector 0x32
2020-05-21T19:05:03.166Z cpu4:65926)VMK_PCI: 765: device 0000:01:00.0 allocated 1 INTx interrupt
2020-05-21T19:05:03.171Z cpu4:65926)PCI: 1499: 0000:01:00.1: intPIN B intLine 10 irq 10 vector 0x33
2020-05-21T19:05:03.171Z cpu4:65926)VMK_PCI: 765: device 0000:01:00.1 allocated 1 INTx interrupt
2020-05-21T19:05:05.447Z cpu0:65984)<6>ixgbe: 0000:01:00.0: ixgbe_check_options: CNA enabled, 1 queues
2020-05-21T19:05:05.447Z cpu0:65984)<6>ixgbe: 0000:01:00.0: ixgbe_check_options: Packet split is not supported.
2020-05-21T19:05:05.472Z cpu0:65984)VMK_PCI: 765: device 0000:01:00.0 allocated 11 MSIX interrupts
2020-05-21T19:05:05.472Z cpu0:65984)MSIX enabled for dev 0000:01:00.0
2020-05-21T19:05:05.472Z cpu0:65984)<3>ixgbe: 0000:01:00.0: ixgbe_alloc_queues: using rx_count = 456
2020-05-21T19:05:05.473Z cpu0:65984)<6>ixgbe: 0000:01:00.0: ixgbe_cna_enable: CNA pseudo device registered 0000:01:00.0
2020-05-21T19:05:05.473Z cpu0:65984)<6>ixgbe: 0000:01:00.0: ixgbe_probe: Registering for VMware NetQueue Ops
2020-05-21T19:05:05.473Z cpu0:65984)<6>ixgbe 0000:01:00.0: 0000:01:00.0: MAC: 2, PHY: 1, PBA No: 400900-000
2020-05-21T19:05:05.473Z cpu0:65984)<6>ixgbe 0000:01:00.0: 0000:01:00.0: Enabled Features: RxQ: 9 TxQ: 9 FCoE DCB
2020-05-21T19:05:05.473Z cpu0:65984)<6>ixgbe 0000:01:00.0: 0000:01:00.0: Intel(R) 10 Gigabit Network Connection
2020-05-21T19:05:05.473Z cpu0:65984)PCI: Registering network device 0000:01:00.0
2020-05-21T19:05:05.473Z cpu0:65984)VMK_PCI: 393: Device 0000:01:00.0 name: vmnic0
2020-05-21T19:05:05.473Z cpu2:65984)PCI: driver ixgbe claimed device 0000:01:00.0
2020-05-21T19:05:06.543Z cpu2:65984)<6>ixgbe: 0000:01:00.1: ixgbe_check_options: CNA enabled, 1 queues
2020-05-21T19:05:06.543Z cpu2:65984)<6>ixgbe: 0000:01:00.1: ixgbe_check_options: Packet split is not supported.
2020-05-21T19:05:06.567Z cpu2:65984)VMK_PCI: 765: device 0000:01:00.1 allocated 11 MSIX interrupts
2020-05-21T19:05:06.567Z cpu2:65984)MSIX enabled for dev 0000:01:00.1
2020-05-21T19:05:06.568Z cpu2:65984)<3>ixgbe: 0000:01:00.1: ixgbe_alloc_queues: using rx_count = 456
2020-05-21T19:05:06.569Z cpu2:65984)<6>ixgbe: 0000:01:00.1: ixgbe_cna_enable: CNA pseudo device registered 0000:01:00.1
2020-05-21T19:05:06.569Z cpu2:65984)<6>ixgbe: 0000:01:00.1: ixgbe_probe: Registering for VMware NetQueue Ops
2020-05-21T19:05:06.569Z cpu2:65984)<6>ixgbe 0000:01:00.1: 0000:01:00.1: MAC: 2, PHY: 1, PBA No: 400900-000
2020-05-21T19:05:06.569Z cpu2:65984)<6>ixgbe 0000:01:00.1: 0000:01:00.1: Enabled Features: RxQ: 9 TxQ: 9 FCoE DCB
2020-05-21T19:05:06.569Z cpu2:65984)<6>ixgbe 0000:01:00.1: 0000:01:00.1: Intel(R) 10 Gigabit Network Connection
2020-05-21T19:05:06.569Z cpu2:65984)PCI: Registering network device 0000:01:00.1
2020-05-21T19:05:06.569Z cpu2:65984)VMK_PCI: 393: Device 0000:01:00.1 name: vmnic2
2020-05-21T19:05:06.569Z cpu2:65984)PCI: driver ixgbe claimed device 0000:01:00.1
2020-05-21T19:09:59.570Z cpu7:65926)PCI: 1254: 0000:01:00.1 named 'vmnic2' (was '')
2020-05-21T19:09:59.571Z cpu7:65926)PCI: 1254: 0000:01:00.0 named 'vmnic0' (was '')
2020-05-21T19:10:00.959Z cpu6:65926)PCI: 1499: 0000:01:00.0: intPIN A intLine 11 irq 11 vector 0x32
2020-05-21T19:10:00.959Z cpu6:65926)VMK_PCI: 765: device 0000:01:00.0 allocated 1 INTx interrupt
2020-05-21T19:10:00.967Z cpu2:65926)PCI: 1499: 0000:01:00.1: intPIN B intLine 10 irq 10 vector 0x33
2020-05-21T19:10:00.967Z cpu2:65926)VMK_PCI: 765: device 0000:01:00.1 allocated 1 INTx interrupt
2020-05-21T19:10:03.211Z cpu4:65984)<6>ixgbe: 0000:01:00.0: ixgbe_check_options: CNA enabled, 1 queues
2020-05-21T19:10:03.211Z cpu4:65984)<6>ixgbe: 0000:01:00.0: ixgbe_check_options: CNA turned off when SR-IOV is enabled.
2020-05-21T19:10:03.211Z cpu4:65984)<6>ixgbe: 0000:01:00.0: ixgbe_check_options: VMDQ reduced to 2 when SR-IOV is enabled.
2020-05-21T19:10:03.211Z cpu4:65984)<6>ixgbe: 0000:01:00.0: ixgbe_check_options: Packet split is not supported.
2020-05-21T19:10:03.211Z cpu4:65984)LinPCI: vmklnx_enable_vfs:1371: enabling 3 VFs on PCI device 0000:01:00.0
2020-05-21T19:10:04.214Z cpu4:65984)WARNING: PCI: 348: Device 0000:01:10.0 is a non-compliant VF
2020-05-21T19:10:04.214Z cpu4:65984)IOMMU: 1559: Fail to enable vmkDomain on device 0000:01:10.0
2020-05-21T19:10:04.214Z cpu4:65984)WARNING: PCI: 348: Device 0000:01:10.2 is a non-compliant VF
2020-05-21T19:10:04.214Z cpu4:65984)IOMMU: 1559: Fail to enable vmkDomain on device 0000:01:10.2
2020-05-21T19:10:04.214Z cpu4:65984)WARNING: PCI: 348: Device 0000:01:10.4 is a non-compliant VF
2020-05-21T19:10:04.214Z cpu4:65984)IOMMU: 1559: Fail to enable vmkDomain on device 0000:01:10.4
2020-05-21T19:10:04.214Z cpu4:65984)LinPCI: vmklnx_enable_vfs:1385: 3 VFs enabled on PCI device 0000:01:00.0
2020-05-21T19:10:04.214Z cpu4:65984)<6>ixgbe 0000:01:00.0: (unregistered net_device): FCoE offload feature is not available. Disabling FCoE offload feature
2020-05-21T19:10:04.239Z cpu4:65984)VMK_PCI: 765: device 0000:01:00.0 allocated 3 MSIX interrupts
2020-05-21T19:10:04.239Z cpu4:65984)MSIX enabled for dev 0000:01:00.0
2020-05-21T19:10:04.239Z cpu4:65984)<3>ixgbe: 0000:01:00.0: ixgbe_alloc_queues: using rx_count = 512
2020-05-21T19:10:04.240Z cpu4:65984)<6>ixgbe: 0000:01:00.0: ixgbe_probe: Registering for VMware NetQueue Ops
2020-05-21T19:10:04.240Z cpu4:65984)<6>ixgbe 0000:01:00.0: 0000:01:00.0: MAC: 2, PHY: 1, PBA No: 400900-000
2020-05-21T19:10:04.240Z cpu4:65984)<6>ixgbe 0000:01:00.0: 0000:01:00.0: Enabled Features: RxQ: 2 TxQ: 2
2020-05-21T19:10:04.240Z cpu4:65984)<6>ixgbe: 0000:01:00.0: ixgbe_probe: IOV is enabled with 3 VFs
2020-05-21T19:10:04.240Z cpu4:65984)<6>ixgbe 0000:01:00.0: 0000:01:00.0: IOV: VF 0 is enabled mac 0C:C4:7A:xx:xx:xx
2020-05-21T19:10:04.240Z cpu4:65984)<6>ixgbe 0000:01:00.0: 0000:01:00.0: IOV: VF 1 is enabled mac 0C:C4:7A:xx:xx:xx
2020-05-21T19:10:04.240Z cpu4:65984)<6>ixgbe 0000:01:00.0: 0000:01:00.0: IOV: VF 2 is enabled mac 0C:C4:7A:xx:xx:xx
2020-05-21T19:10:04.240Z cpu4:65984)<6>ixgbe 0000:01:00.0: 0000:01:00.0: Intel(R) 10 Gigabit Network Connection
2020-05-21T19:10:04.240Z cpu4:65984)PCI: Registering network device 0000:01:00.0
2020-05-21T19:10:04.240Z cpu4:65984)VMK_PCI: 393: Device 0000:01:00.0 name: vmnic0
2020-05-21T19:10:04.240Z cpu4:65984)PCI: driver ixgbe claimed device 0000:01:00.0
2020-05-21T19:10:05.310Z cpu4:65984)<6>ixgbe: 0000:01:00.1: ixgbe_check_options: CNA enabled, 1 queues
2020-05-21T19:10:05.310Z cpu4:65984)<6>ixgbe: 0000:01:00.1: ixgbe_check_options: CNA turned off when SR-IOV is enabled.
2020-05-21T19:10:05.310Z cpu4:65984)<6>ixgbe: 0000:01:00.1: ixgbe_check_options: VMDQ reduced to 2 when SR-IOV is enabled.
2020-05-21T19:10:05.310Z cpu4:65984)<6>ixgbe: 0000:01:00.1: ixgbe_check_options: Packet split is not supported.
2020-05-21T19:10:05.310Z cpu4:65984)LinPCI: vmklnx_enable_vfs:1371: enabling 3 VFs on PCI device 0000:01:00.1
2020-05-21T19:10:06.312Z cpu4:65984)WARNING: PCI: 348: Device 0000:01:10.1 is a non-compliant VF
2020-05-21T19:10:06.312Z cpu4:65984)IOMMU: 1559: Fail to enable vmkDomain on device 0000:01:10.1
2020-05-21T19:10:06.312Z cpu4:65984)WARNING: PCI: 348: Device 0000:01:10.3 is a non-compliant VF
2020-05-21T19:10:06.312Z cpu4:65984)IOMMU: 1559: Fail to enable vmkDomain on device 0000:01:10.3
2020-05-21T19:10:06.312Z cpu4:65984)WARNING: PCI: 348: Device 0000:01:10.5 is a non-compliant VF
2020-05-21T19:10:06.312Z cpu4:65984)IOMMU: 1559: Fail to enable vmkDomain on device 0000:01:10.5
2020-05-21T19:10:06.312Z cpu4:65984)LinPCI: vmklnx_enable_vfs:1385: 3 VFs enabled on PCI device 0000:01:00.1
2020-05-21T19:10:06.312Z cpu4:65984)<6>ixgbe 0000:01:00.1: (unregistered net_device): FCoE offload feature is not available. Disabling FCoE offload feature
2020-05-21T19:10:06.337Z cpu4:65984)VMK_PCI: 765: device 0000:01:00.1 allocated 3 MSIX interrupts
2020-05-21T19:10:06.337Z cpu4:65984)MSIX enabled for dev 0000:01:00.1
2020-05-21T19:10:06.337Z cpu4:65984)<3>ixgbe: 0000:01:00.1: ixgbe_alloc_queues: using rx_count = 512
2020-05-21T19:10:06.338Z cpu4:65984)<6>ixgbe: 0000:01:00.1: ixgbe_probe: Registering for VMware NetQueue Ops
2020-05-21T19:10:06.339Z cpu4:65984)<6>ixgbe 0000:01:00.1: 0000:01:00.1: MAC: 2, PHY: 1, PBA No: 400900-000
2020-05-21T19:10:06.339Z cpu4:65984)<6>ixgbe 0000:01:00.1: 0000:01:00.1: Enabled Features: RxQ: 2 TxQ: 2
2020-05-21T19:10:06.339Z cpu4:65984)<6>ixgbe: 0000:01:00.1: ixgbe_probe: IOV is enabled with 3 VFs
2020-05-21T19:10:06.339Z cpu4:65984)<6>ixgbe 0000:01:00.1: 0000:01:00.1: IOV: VF 0 is enabled mac 0C:C4:7A:xx:xx:xx
2020-05-21T19:10:06.339Z cpu4:65984)<6>ixgbe 0000:01:00.1: 0000:01:00.1: IOV: VF 1 is enabled mac 0C:C4:7A:xx:xx:xx
2020-05-21T19:10:06.339Z cpu4:65984)<6>ixgbe 0000:01:00.1: 0000:01:00.1: IOV: VF 2 is enabled mac 0C:C4:7A:xx:xx:xx
2020-05-21T19:10:06.339Z cpu4:65984)<6>ixgbe 0000:01:00.1: 0000:01:00.1: Intel(R) 10 Gigabit Network Connection
2020-05-21T19:10:06.339Z cpu4:65984)PCI: Registering network device 0000:01:00.1
2020-05-21T19:10:06.339Z cpu4:65984)VMK_PCI: 393: Device 0000:01:00.1 name: vmnic2
2020-05-21T19:10:06.339Z cpu2:65984)PCI: driver ixgbe claimed device 0000:01:00.1
2020-05-21T20:00:49.692Z cpu4:68759)VMKPCIPassthru: 5237: Device: 0000:01:10.0 is not enabled for passthrough
2020-05-21T20:00:52.104Z cpu5:68767)VMKPCIPassthru: 5237: Device: 0000:01:10.0 is not enabled for passthrough
2020-05-21T20:01:24.501Z cpu0:68977)VMKPCIPassthru: 5237: Device: 0000:01:10.0 is not enabled for passthrough
2020-05-21T20:01:57.391Z cpu2:69123)VMKPCIPassthru: 5237: Device: 0000:01:10.0 is not enabled for passthrough
2020-05-21T20:03:14.426Z cpu2:69437)VMKPCIPassthru: 5237: Device: 0000:01:10.1 is not enabled for passthrough
2020-05-21T20:04:28.447Z cpu1:69593)VMKPCIPassthru: 5237: Device: 0000:01:10.0 is not enabled for passthrough
2020-05-21T20:52:31.375Z cpu2:70117)VMKPCIPassthru: 5237: Device: 0000:01:10.0 is not enabled for passthrough
2020-05-21T20:55:17.408Z cpu6:70293)VMKPCIPassthru: 5237: Device: 0000:01:10.0 is not enabled for passthrough



There were no issues assigning the VFs out....

Screen Shot 2020-05-21 at 3.58.54 PM.png

But starting / consuming the VF shows issues, as the passthrough is not seen correctly...

Screen Shot 2020-05-21 at 4.52.24 PM.png

The logs seem to confirm this belief:

Code:
[root@dash:~] grep 2020-05-21 /var/log/vmkwarning.log | grep 0000:01
2020-05-21T19:10:04.214Z cpu4:65984)WARNING: PCI: 348: Device 0000:01:10.0 is a non-compliant VF
2020-05-21T19:10:04.214Z cpu4:65984)WARNING: PCI: 348: Device 0000:01:10.2 is a non-compliant VF
2020-05-21T19:10:04.214Z cpu4:65984)WARNING: PCI: 348: Device 0000:01:10.4 is a non-compliant VF
2020-05-21T19:10:06.312Z cpu4:65984)WARNING: PCI: 348: Device 0000:01:10.1 is a non-compliant VF
2020-05-21T19:10:06.312Z cpu4:65984)WARNING: PCI: 348: Device 0000:01:10.3 is a non-compliant VF
2020-05-21T19:10:06.312Z cpu4:65984)WARNING: PCI: 348: Device 0000:01:10.5 is a non-compliant VF

Pretty sure this is not a valid SRIOV passthrough device. So yeah, I don't think I can recommend the use of ESXi for the t740 series in general. The 6.x series requires slipstreaming of VMKLinux drivers for the built-in NICs, and I am not sure what's going on with the SRIOV implementation.
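For context, on the 6.x VMKLinux ixgbe driver the VFs are normally created via a module parameter rather than from the host UI; a hedged sketch of that route, with the per-port VF counts purely as an example (it says nothing about whether the resulting VFs end up compliant):

Code:
# Ask the legacy ixgbe driver for 3 VFs on each of the two X520 ports, then reboot
esxcfg-module -s "max_vfs=3,3" ixgbe
# After the reboot, list the SR-IOV capable NICs and the VFs as ESXi sees them
esxcli network sriovnic list
lspci | grep -i "Virtual Function"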
 

PD_ZFS-User

Member
Jul 13, 2018
37
11
8
Okay, more fun and exciting information for the t740 -

So I ended up buying an X520-DA2 instead of the i350-T4 (since I use 10/40 SFP+/QSFP networking in my home), and as such, I got some type of SRIOV working.....

Pretty sure this is not a valid SRIOV passthrough device. So yeah, I don't think I can recommend the use of ESXi for the t740 series in general. The 6.x series requires slipstreaming of VMKLinux drivers for the built-in NICs, and I am not sure what's going on with the SRIOV implementation.
@WANg,

Thanks again for researching the capabilities of the t740. Sharing my meager research.

I've been able to get ESXi 6.7 Update 3 (Build 16075168) with the Realtek r8168 driver installed to a USB flash drive as the boot drive, and it will boot without a keyboard, mouse or display connected. Link for the r8168 ESXi driver: List of currently available ESXi packages - V-Front VIBSDepot Wiki
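For anyone who would rather add the driver to an already-installed host instead of slipstreaming the ISO, the community VIB can also be installed from the ESXi shell; a rough sketch (the VIB filename below is just an example - use whatever you actually download from the depot above):

Code:
# Allow community-supported packages, then install the r8168 VIB and reboot
esxcli software acceptance set --level=CommunitySupported
esxcli software vib install -v /tmp/net55-r8168-8.045a-napi.x86_64.vib
reboot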

I've installed a Supermicro AOC-SGP-i4, which uses the Intel i350 chipset and is reported as an i350. My goal is to pass through the entire quad-port NIC to an OPNsense VM firewall and also run low-resource media and storage VMs, using the t740 as a single always-on, low-power home server.

Getting SR-IOV working would be nice but I don't think it is supported in ESXi for this NIC. It shows the capability in the ESXi gui, but it fails when I attempt to enable it. I found additional info here:

SR-IOV Support

Supported NICs
All NICs must have drivers and firmware that support SR-IOV. Some NICs might require SR-IOV to be enabled on the firmware. The following NICs are supported for virtual machines configured with SR-IOV:
  • Products based on the Intel 82599ES 10 Gigabit Ethernet Controller Family (Niantic)
  • Products based on the Intel Ethernet Controller X540 Family (Twinville)
  • Products based on the Intel Ethernet Controller X710 Family (Fortville)
  • Products based on the Intel Ethernet Controller XL710 Family (Fortville)
  • Emulex OneConnect (BE3)

Are you planning on using Proxmox VE instead of ESXi due to the SR-IOV limitations?
 

WANg

Well-Known Member
Jun 10, 2018
1,302
967
113
46
New York, NY
@WANg,

Thanks again for researching the capabilities of the t740. Sharing my meager research.

I've been able to get ESXi 6.7 Update 3 (Build 16075168) with the Realtek r8168 driver installed to a USB flash drive as the boot drive, and it will boot without a keyboard, mouse or display connected. Link for the r8168 ESXi driver: List of currently available ESXi packages - V-Front VIBSDepot Wiki

I've installed a Supermicro AOC-SGP-i4, which uses the Intel i350 chipset and is reported as an i350. My goal is to pass through the entire quad-port NIC to an OPNsense VM firewall and also run low-resource media and storage VMs, using the t740 as a single always-on, low-power home server.

Getting SR-IOV working would be nice but I don't think it is supported in ESXi for this NIC. It shows the capability in the ESXi gui, but it fails when I attempt to enable it. I found additional info here:

SR-IOV Support

Supported NICs
All NICs must have drivers and firmware that support SR-IOV. Some NICs might require SR-IOV to be enabled on the firmware. The following NICs are supported for virtual machines configured with SR-IOV:
  • Products based on the Intel 82599ES 10 Gigabit Ethernet Controller Family (Niantic)
  • Products based on the Intel Ethernet Controller X540 Family (Twinville)
  • Products based on the Intel Ethernet Controller X710 Family (Fortville)
  • Products based on the Intel Ethernet Controller XL710 Family (Fortville)
  • Emulex OneConnect (BE3)

Are you planning on using Proxmox VE instead of ESXi due to the SR-IOV limitations?
Well, I am as much a Proxmox guy as I am an ESXi guy, but in terms of software I like Proxmox better (with some qualifications) - mostly the better driver support, its Debian roots, and the fact that I've been hacking on it since version 2.0 back in 2011. From an end-user-experience perspective I also don't see much advantage to running ESXi/vSphere, since it favors large servers (with IPMI) rather than run-of-the-mill machines (not much power management support or SMBus/I2C sensor reading). My take is that unless you plan to do some self-learning, it's not worth having it as your underlying hypervisor, and considering how many things barely work (vSphere 6) or don't work (vSphere 7), it's just not worth running it on a t740.
 

PD_ZFS-User

Member
Jul 13, 2018
37
11
8
Well, I am as much a Proxmox guy as I am an ESXi guy, but in terms of software I like Proxmox better (with some qualifications) - mostly the better driver support, its Debian roots, and the fact that I've been hacking on it since version 2.0 back in 2011. From an end-user-experience perspective I also don't see much advantage to running ESXi/vSphere, since it favors large servers (with IPMI) rather than run-of-the-mill machines (not much power management support or SMBus/I2C sensor reading). My take is that unless you plan to do some self-learning, it's not worth having it as your underlying hypervisor, and considering how many things barely work (vSphere 6) or don't work (vSphere 7), it's just not worth running it on a t740.
@WANg,

My thinking so far is that using VSphere/ESXi 6.7 allows for an install to a USB flash drive, thus keeping the internal m.2 SATA and m.2 NVMe slots available for use as a datastore and/or passthrough to a VM. I have tested the ability to connect an SSD (installed in an external USB 3.0 enclosure) to one of the rear USB-A 3.1 Gen 1 ports and I have successfully passed it through VSphere and mounted it in a VM.

So far in my reading re: Proxmox VE, I haven't come across any recommended solution to run it from a USB flash drive and the base install would use at least one of the internal slots on the t740.

Will most of your storage be hosted on other device(s)? Currently I'm limited to 1G speeds on my network and I'm hoping to get down to one switch, one wi-fi ap, and the t740 as the only devices that I run constantly.

Thanks again for sharing your efforts with the t740 and other low power devices.

Cheers,
PD
 

WANg

Well-Known Member
Jun 10, 2018
1,302
967
113
46
New York, NY
@WANg,

My thinking so far is that using VSphere/ESXi 6.7 allows for an install to a USB flash drive, thus keeping the internal m.2 SATA and m.2 NVMe slots available for use as a datastore and/or passthrough to a VM. I have tested the ability to connect an SSD (installed in an external USB 3.0 enclosure) to one of the rear USB-A 3.1 Gen 1 ports and I have successfully passed it through VSphere and mounted it in a VM.

So far in my reading re: Proxmox VE, I haven't come across any recommended solution to run it from a USB flash drive and the base install would use at least one of the internal slots on the t740.

Will most of your storage be hosted on other device(s)? Currently I'm limited to 1G speeds on my network and I'm hoping to get down to one switch, one wi-fi ap, and the t740 as the only devices that I run constantly.

Thanks again for sharing your efforts with the t740 and other low power devices.

Cheers,
PD
Well...the t740 presents a bit of a conundrum for me. My current t730 works just fine as an ESXi 6.5 box, but sometimes you don't want "just fine", plus the plan has always been to migrate off ESXi eventually (I don't care much for ESXi/vSphere 7). The major letdowns for me regarding the t740 are the inability to use a cheap NIC in the M.2 Key-E port, and the Realtek embedded NIC, which means that unless I spring for a USB3 NIC, it'll always be restricted to ESXi 6.5U3 on VMKLinux drivers. That's not wise for a machine meant to last another 4 years.

As for the storage...well, the t730 is connected to my iSCSI box (the Microserver G7) via a 40GbE Mellanox ConnectX-3, so the t740 will likely also be connected via Mellanox to the same iSCSI extent...it's just that I'll need to migrate the VM inventory from VMX/VMDK to its KVM equivalents (the VMDKs are fine, but the VMX files will need to be translated).
 

WANg

Well-Known Member
Jun 10, 2018
1,302
967
113
46
New York, NY
Okay, so some good news. Thanks to a 4th of July sale, I was able to pick up some extra parts for the t740. So here’s an idea of what I did.

G.Skills Ripjaws DDR4.png

A pair of 32GB G.Skill Ripjaws DDR4-2666 SODIMMs, for about 240 USD plus taxes and shipping. So...did it work?

Bootup Message.png

Note that suspiciously high RAM count. 64GB minus a little for the Vega 8 iGPU?
Did Windows 10 IoT boot up?

Windows 10 Info.png

Yep. What does HP's hardware utility see?

HP Info.png

64GB it is. Is CPU-Z able to read the SPD info off the SODIMMs?

RAM Info.png

Game On.png


Did some...stress testing on the machine for 30 or so minutes. Everything seems fine. Now it's just a matter of migrating the iSCSI filestore from VMware ESXi to Proxmox 6.
 

WANg

Well-Known Member
Jun 10, 2018
1,302
967
113
46
New York, NY
Okay, so an update on this machine as the t730 replacement - it really has become the t730 replacement, as it has been running as my home hypervisor since February 2021 or so. All I had to do was swap SSDs with the t730 and make sure the Mellanox works, and that's pretty much it for now. I did replace the pair of 8GB DDR3 DIMMs in the HP Microserver Gen7/N40L with DDR3 ECC RAM, not for error-correction reasons, but because the G.Skill FuryX modules don't report capacity correctly after a reboot (sometimes they report 8GB total, other times 16GB). Replacing them with Kingston KVR16E11/8I DIMMs made that issue go away. I also added a 620W APC UPS to protect against power-related issues.

Note that the long-term plan is to retire the t730 and roll the infrastructure over from VMware to Proxmox, but implementing this will take some time. Namely, I'll need to boot this into Debian/Proxmox, mount the iSCSI volume, convert the VMDKs to equivalent qcow2 files, rewrite the VMX files into Proxmox-kosher configs, import them into Proxmox and then make sure that the migration is successful.
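The conversion step itself is mostly mechanical; here's a minimal sketch of the per-VM loop, assuming the iSCSI extent is mounted on the Proxmox host and using made-up paths, VM IDs and storage names (the VMX-to-Proxmox config translation still has to be done by hand):

Code:
# Convert the ESXi disk image to qcow2 (qemu-img reads VMDK natively)
qemu-img convert -p -f vmdk -O qcow2 /mnt/iscsi/myvm/myvm.vmdk /mnt/iscsi/migrated/myvm.qcow2
# Create an empty Proxmox VM, import the converted disk and attach it as the boot disk
qm create 101 --name myvm --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0
qm importdisk 101 /mnt/iscsi/migrated/myvm.qcow2 local-lvm
qm set 101 --scsi0 local-lvm:vm-101-disk-0 --boot c --bootdisk scsi0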
 

unmesh

Active Member
Apr 17, 2017
200
55
28
65
@WANg

Please keep us posted on your infrastructure migration journey. I too would like to consider moving from a single mini tower running ESXi to a TMM running Proxmox, maybe even a small cluster of them for HA.

I already have ProxMox running with some "toy" VMs.
 

fossxplorer

Active Member
Mar 17, 2016
554
97
28
Oslo, Norway
Has anyone experienced issues with the fan? I got 2 of these, and on one of them the fan wouldn't adapt to the temperature and the unit was heating up!
I powered it off and am using the other one without issues. But the fan is noisy when the VMs are loaded, even just during patching!
 

spiralbrain

New Member
Jan 4, 2022
3
4
3
Has anyone experienced issues with the fan? I got 2 of these, and on one of them the fan wouldn't adapt to the temperature and the unit was heating up!
I powered it off and am using the other one without issues. But the fan is noisy when the VMs are loaded, even just during patching!
I've spotted the strange fan issue too. At startup the fan is sometimes stuck at a 13% duty cycle; it will not spin faster with CPU load. Earlier the PC would freeze after overheating. Since the new BIOS update the fan suddenly goes to maximum, and then I restart the system. It's a BUG.
 


fossxplorer

Active Member
Mar 17, 2016
554
97
28
Oslo, Norway
Yeah, there are 2 things I noticed:
- the full-speed fan spin at startup that will not slow down even after POST and loading the OS; it usually requires a reboot
- the fan is very active during even the slightest thing that loads the system, like running patches (dnf update) inside a VM. This makes the fan rev for a short while, which is of course audible when you have the thin client close to you.

On the second point, I am starting to suspect the way Zen/Ryzen core dies report temperature. I have read a bit around that topic, and what I understand is that Zen dies don't report a smooth curve. This is my first Zen/Ryzen, and I have nothing else to compare it with.
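One way to check whether the revving just tracks a spiky temperature curve is to log what the die reports while the fan misbehaves; a small sketch from the host shell, assuming the k10temp driver is what exposes the Zen sensor on this box:

Code:
# Load the Zen temperature driver and sample the reported die temperature once a second
modprobe k10temp
while true; do
    printf '%s ' "$(date +%T)"
    sensors | grep -iE 'tctl|tdie'
    sleep 1
done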

I assume you have the same BIOS as me:

Code:
BIOS Information
    Vendor: AMI
    Version: M42 v01.10
    Release Date: 11/11/2020
There is now a newer BIOS, v01.11 from November, but it says nothing about a fan fix...
 

zer0sum

Well-Known Member
Mar 8, 2013
849
473
63
If you guys are running Proxmox, it might be worth trying out the newer 5.15 kernel, as it has AMD-specific enhancements.
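For anyone who wants to try it: on the 7.1 series the 5.15 kernel is an opt-in package (package name current as of this writing - check the Proxmox wiki if it has moved):

Code:
# Install the opt-in 5.15 kernel on Proxmox VE 7.x and reboot into it
apt update
apt install pve-kernel-5.15
reboot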

 

spiralbrain

New Member
Jan 4, 2022
3
4
3
Yeah, there are 2 things I noticed:
- the full-speed fan spin at startup that will not slow down even after POST and loading the OS; it usually requires a reboot
- the fan is very active during even the slightest thing that loads the system, like running patches (dnf update) inside a VM. This makes the fan rev for a short while, which is of course audible when you have the thin client close to you.

On the second point, I am starting to suspect the way Zen/Ryzen core dies report temperature. I have read a bit around that topic, and what I understand is that Zen dies don't report a smooth curve. This is my first Zen/Ryzen, and I have nothing else to compare it with.

I assume you have the same BIOS as me:

Code:
BIOS Information
    Vendor: AMI
    Version: M42 v01.10
    Release Date: 11/11/2020
There is now a newer BIOS, v01.11 from November, but it says nothing about a fan fix...
Yes, I have the same BIOS as you. This morning I got a 'Fan not detected' message; on reboot it was gone. Could be a fan issue, or a simple delay or decoupling capacitor somewhere. I have managed to trace the fan supplier - it's made by Delta Electronics - and I have also started a thread on HP's website. I am very sure they are aware. Going to get to the bottom of this. This is my first AMD device in my 25 years of computing, and it isn't going to die.

You may also post your observations here: HP T740 fan issue
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
Has anyone tried an X710-T2L/X710-T4L yet? The power consumption and heat should be lower than on the non-L versions.

Apparently 5 Gbps fiber is available in my area as of today. My current pfSense box is limited to 1 Gbps as it's running on a 9-year-old Jetway embedded board. I've been itching to upgrade it for the last few years, but procrastinated since, tbh, it's overkill for 1 Gbps.

I'm mostly concerned about heat build-up. Normally I'd go mini-ITX, but there aren't many great ITX chassis. Actually, the new ASRock DeskMeet series seems almost perfect as it has a PCIe slot, but I'm not sure why they decided to go with a full ATX PSU. Probably cost considerations.
 