HP t730 Thin Client as an HP Microserver Gen7 Upgrade


WANg

Well-Known Member
Yeah, it works just fine on my SFN5122s - all I had to do was load the drivers and install the Solarflare utilities (sfutils) on Debian (converted via alien) to define and configure VFs (VF creation is done through the utility only, not via boot-time options). I just never had a chance to allocate the VFs to VMs. Note that if you bought used cards, you might already be dealing with cards that have VFs allocated and that configuration baked into the adapter. You'll need to grab sfutils and reconfigure accordingly.
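For anyone following along, the Debian side looks roughly like this (a sketch only - the exact sfboot option names vary between sfutils releases, so treat the sriov/vf-count parameters below as assumptions and check sfboot --help on your version):
Code:
# convert the Solarflare RPM utilities to a .deb and install
alien --to-deb sfutils-*.rpm
dpkg -i sfutils_*.deb

# dump the current adapter configuration, including SR-IOV state
sfboot

# enable SR-IOV and request VFs per port (option names assumed - verify with sfboot --help)
sfboot sriov=enabled vf-count=4

# reboot (or reload the sfc driver) before the VFs show up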
 

arglebargle

Hello World
Huh, I wonder what's going on then. I've followed the same process and I'm not successfully allocating VFs - I'm just getting some hard-to-Google errors in the kernel log. Thanks for the info, I'll see what I can do.
 

WANg

Well-Known Member
Okay, it's been two months since I got everything up and running, so I think it's a good time to give a post-action report on how things are humming along with the t730/N40L pairing. Honestly, I think the t730 is doing fine; it's the N40L that needs some work.

a) The 8GB DDR3 RAM modules in the N40L are getting to be a problem - the machine misidentifies the total amount of RAM on about 75% of boots. I am not sure whether I want to swap them out for ECC modules or update the BIOS (the HPE split made BIOS updates a subscription thing, which is not worth it). I also kind of want a RAC/LOM card for the N40L after a weird reboot/failure-to-init issue a few days back.

b) The 4x4TB zpool is getting to be an issue. Well, not the zpool itself, but the way it is being utilized. I might have significantly over-provisioned the iSCSI extent on the zpool - right now, off a 4x4TB RAIDZ1 pool, I have about 12TB and change. Out of that, 8TB is allocated to an iSCSI extent carrying about 100GB of files on the iSCSI-mounted ESXi datastore. The same zpool is also used as an SMB share hosting about 1TB of media files. I honestly don't entirely trust RAIDZ1, since it's single-parity like RAID5, and on 4TB mechanical drives the resilver time after a drive failure will be significant - to the point that another drive might fail before the rebuild completes. I might have to gradually convert it to something else - maybe a pair of mirrored 500GB SATA SSDs for the datastores and a pair of mirrored 4TB drives for the media. I am honestly still on the fence on this one. I don't really need that many IOPS, and buying a pair of SSDs just for virtualization in a small homelab seems rather unnecessary. However, I might reconfigure it as RAIDZ2 for slightly better survivability. (I've sketched the candidate layouts below, after this list.)

c) Protocol issues - when I started out, I implemented multipath iSCSI to connect the t730 to the N40L - then I realized that iSCSI is likely sub-optimal. I will want to implement multipath NFSv4 or iSER (iSCSI Extensions for RDMA)...which leads us to....

d) ...Maybe FreeNAS is not the best idea.
Don't get me wrong - I respect FreeNAS, but I don't love it as much as I wish I did. At my previous gig I administered an Infortrend EonStor ES16 NAS connected via 10GbE iSCSI to a NexentaStor server. It had its issues, but I suspect that was down to my ex-boss (the CIO) not wanting to add more RAM or a SLOG device to make things work better. I also don't recall FreeNAS supporting iSER or multipath NFSv4, even in the latest releases. Right now I am leaning towards napp-it on OmniOS instead.
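To make the options in b) concrete, the candidate layouts would look roughly like this (device names are placeholders, and since RAIDZ1 can't be converted in place, the pool has to be destroyed and rebuilt after migrating the data off):
Code:
# pair of mirrored SSDs for the ESXi datastore extents
zpool create datastore mirror /dev/ada4 /dev/ada5

# pair of mirrored 4TB drives for the media share
zpool create media mirror /dev/ada0 /dev/ada1

# or keep all four 4TB drives and rebuild as RAIDZ2 for double parity
zpool create tank raidz2 /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3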

On the t730 end, I want to see if I can get the Solarflare card configured in Linux and have it expose multiple VFs on the 10GbE side. What I would like to see is the ability to hand VFs to guest VMs in ESXi, but I'll need a 16-port Mikrotik switch/router first.

So the upcoming plan is to add NFS sharing to the FreeNAS box and do some compare-and-contrast. I also have a pair of 40GbE Mellanox cards, along with orders for a pair of 40GbE QSFP DACs, which can be used to increase raw bandwidth from iSCSI initiator to target, and then possibly for iSER. That being said, if I am picking and placing bits off a couple of 7200rpm HDDs, I doubt it'll show any substantial difference.
 

arglebargle

Hello World
I've also moved away from FreeNAS on my home machines. The DKMS packages for ZFS on Linux are reliable enough now that I just rolled the NAS and other services into all-in-one Linux hosts for better consolidation and use of resources.

If you aren't tied to *BSD by FreeNAS anymore, I can tell you that VF passthrough with the mlx4 driver on Linux works swimmingly. I had consistent performance and functionality with a ConnectX-2 passing VFs into Linux VMs. The FreeBSD mlx4 driver's VF support is coming along - it's much improved for FreeBSD 12 - but it doesn't fully work yet, and you'll lose access to the VF until the host reboots after shutting down the guest.
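If it helps, handing a VF to a guest under Proxmox is about this much work (a sketch - the interface name, VF index, PCI address and VM ID are examples from my setup, not anything universal):
Code:
# pin a MAC on the VF before the guest sees it
ip link set enp1s0 vf 0 mac 52:54:00:aa:bb:01

# find the VF's PCI address
lspci -d 15b3: | grep -i "virtual function"

# pass it through to VM 101
qm set 101 -hostpci0 01:00.1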

I'm assuming a CX-3 would work just as well here (these are unofficial features on a CX-2, but fully supported on the CX-3). I haven't tested IB or VPI VFs, and I've read there are restrictions on what kind of VF can be used on each port, but that's something you might want to look into for NFS or iSCSI over RDMA. FWIW, I think the CX-3s run a little cooler than equivalent CX-2s in my machines too.

Edit: I've made some progress getting Solarflare VFs working on my t730 & SFN5122F this morning. Anything over max_vfs=3 causes SR-IOV driver initialization to fail, but any combination of 3 or fewer per port works fine. Can you confirm this when you start working with Linux? I'd like to know if I'm making a mistake somewhere - 3 VFs per port isn't all that useful, to be honest.

IIRC there was some kind of limit, like 8 VFs per PCIe device, when the hypervisor BIOS lacks some supporting features for SR-IOV - I wonder if that's what I'm bumping into? I'm a bit confused, to be honest; I think I was able to create significantly more VFs with one of my Mellanox cards. I'll have to double-check whether any of the devices created used a device index greater than nn:nn.7.
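For reference, I'm just setting the sfc module parameter in modprobe.d - nothing exotic (assuming the stock in-tree sfc driver; adjust if you're on the out-of-tree build):
Code:
# /etc/modprobe.d/sfc.conf - anything above 3 here fails SR-IOV init on my box
options sfc max_vfs=3

# then regenerate the initramfs and reboot (or reload the module)
update-initramfs -u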
 

WANg

Well-Known Member
Edit: I've made some progress getting Solarflare VFs working on my t730 & SFN5122F this morning. Anything over max_vfs=3 causes SR-IOV driver initialization to fail, but any combination of 3 or fewer per port works fine. Can you confirm this when you start working with Linux? I'd like to know if I'm making a mistake somewhere - 3 VFs per port isn't all that useful, to be honest.

IIRC there was some kind of limit, like 8 VFs per PCIe device, when the hypervisor BIOS lacks some supporting features for SR-IOV - I wonder if that's what I'm bumping into? I'm a bit confused, to be honest; I think I was able to create significantly more VFs with one of my Mellanox cards. I'll have to double-check whether any of the devices created used a device index greater than nn:nn.7.
 

arglebargle

Hello World
I just took another look at the Mellanox machine, I'm initializing the driver with:
Code:
options mlx4_core port_type_array=2,2 num_vfs=6,1,0 probe_vf=6,1,0 log_num_mgm_entry_size=-1
in /etc/modprobe.d/mlx4_core.conf

I'm hitting the same PCIe addressing limit as with the Solarflare cards - creating VFs above xx:xx.7 isn't possible. Mellanox initializes VFs for the ports sequentially, though, so it's possible to create up to 7 on either of them. Solarflare creates VFs in alternating fashion: port 1 gets IDs .2, .4, .6, etc. and port 2 gets .3, .5, .7, etc., so you're capped at 3 possible VFs per port on those cards.

Edit: You don't need to include probe_vf unless you want the VFs accessible as NICs from the hypervisor too; they're created just fine with just num_vfs. Also, the last value in the tuple creates dual-port VFs, which could save you several addresses if you have machines that need access to both ports.
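A quick way to sanity-check what actually got created (vendor ID 15b3 is Mellanox; the interface name is whatever your PF enumerates as):
Code:
# list the virtual functions the card exposed
lspci -d 15b3: | grep -i "virtual function"

# the PF's view: VF count and per-VF MACs
cat /sys/class/net/enp1s0/device/sriov_numvfs
ip link show enp1s0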
 

WANg

Well-Known Member
I just took another look at the Mellanox machine, I'm initializing the driver with:
Code:
options mlx4_core port_type_array=2,2 num_vfs=6,1,0 probe_vf=6,1,0 log_num_mgm_entry_size=-1
in /etc/modprobe.d/mlx4_core.conf

I'm hitting the same PCIe addressing limit as with the Solarflare cards - creating VFs above xx:xx.7 isn't possible. Mellanox initializes VFs for the ports sequentially, though, so it's possible to create up to 7 on either of them. Solarflare creates VFs in alternating fashion: port 1 gets IDs .2, .4, .6, etc. and port 2 gets .3, .5, .7, etc., so you're capped at 3 possible VFs per port on those cards.
Eh, I think it's BIOS-related, in that the PCIe BAR size allocation is restricted to less than 1GB, and you cannot go above what the BIOS is willing to grant you for the VFs - of course, I am not sure whether this is specific to the t730, the RX-427BB, or AMD Steamroller-based machines in general. I kinda-sorta want to test the theory by sourcing a used HP mt42/43 mobile thin client (with a similar APU) and seeing whether the same restrictions exist...except that the HP docking stations don't expose PCIe slots (old-school laptop docks often had a slot that let you use external GPUs), so I cannot test whether the VF allocation limit is baked into the hardware or is just a BIOS thing.
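One way to eyeball the BAR theory without new hardware is the card's SR-IOV capability block - the VF BAR sizes times the VF count is roughly the MMIO window the BIOS has to carve out (the PCI address below is just an example):
Code:
# dump the NIC's SR-IOV capability: look at "Total VFs", "VF offset/stride"
# and the VF "Region" sizes
lspci -s 01:00.0 -vvv | grep -A10 "SR-IOV"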

I personally got VF allocations in the mid-twenties on SuperMicro Xeon E5 v2 SuperBlades with the SFN5122s, and considering that I was only testing 4 VFs per port in the t730 on Proxmox (I just needed to know whether it works or not), I didn't realize it was an issue until you brought it up.

*Sigh*. So close, man. So close.
 

arglebargle

Hello World
So I dug into this a bit more tonight while working with a QLogic NetXtreme II card that I picked up cheaply on eBay. This one (bnx2x driver) completely refuses to enable SR-IOV without ARI forwarding on the upstream PCIe bridge or switch, which this board doesn't support.

That's it, BTW: the reason we're only able to allocate up to 8 device addresses per card is that the function number - the last chunk of the PCI address - is a 3-bit value. Without ARI forwarding on the upstream port there's no way to allocate more than 8 addresses per device; with ARI forwarding you get something like 254 virtual functions per device.

That's not that bad, though - I don't know that I have any real reason to need more than a couple of passthrough VFs at any given time. They can be recycled too; it shouldn't be too hard to write a script to pick a VF from the pool that isn't in use, give it an appropriate MAC address, and assign it to whichever VM you need to spin up.
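Something along these lines, as a rough sketch - the PF name and the hostpci slot are placeholders, and "not in use" here just means the VF still has an all-zeros MAC, which a real version would track more carefully:
Code:
#!/bin/sh
# usage: ./grab-vf.sh <vmid> - give the first free VF a MAC and hand it to a Proxmox VM
PF=enp1s0
VMID=$1

for VF in /sys/class/net/$PF/device/virtfn*; do
    IDX=${VF##*virtfn}                              # VF index, e.g. 0
    ADDR=$(basename "$(readlink "$VF")")            # PCI address, e.g. 0000:01:00.2
    MAC=$(ip link show "$PF" | awk -v i="$IDX" '$1=="vf" && $2==i {print $4}' | tr -d ',')
    if [ "$MAC" = "00:00:00:00:00:00" ]; then       # treat a zero MAC as "unused"
        ip link set "$PF" vf "$IDX" mac "52:54:00:00:00:0$IDX"
        qm set "$VMID" -hostpci0 "${ADDR#0000:}"
        break
    fi
done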

Here are the extended attributes for the PCI bridge that the x8 (x16?) slot connects to - note ARIFwd-:
Code:
root@ted:~# lspci -s 02.1 -vvv

00:02.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 1425 (prog-if 00 [Normal decode])
   Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
   Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
   Latency: 0, Cache Line Size: 64 bytes
   Interrupt: pin A routed to IRQ 25
   Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
   I/O behind bridge: 0000f000-00000fff
   Memory behind bridge: fea00000-feafffff
   Prefetchable memory behind bridge: 00000000e0800000-00000000e2ffffff
   Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
   BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
       PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
   Capabilities: [50] Power Management version 3
       Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
       Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
   Capabilities: [58] Express (v2) Root Port (Slot+), MSI 00
       DevCap:    MaxPayload 512 bytes, PhantFunc 0
           ExtTag+ RBE+
       DevCtl:    Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
           RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
           MaxPayload 512 bytes, MaxReadReq 512 bytes
       DevSta:    CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
       LnkCap:    Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <512ns, L1 <64us
           ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
       LnkCtl:    ASPM L0s L1 Enabled; RCB 64 bytes Disabled- CommClk+
           ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
       LnkSta:    Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
       SltCap:    AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
           Slot #0, PowerLimit 0.000W; Interlock- NoCompl+
       SltCtl:    Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg-
           Control: AttnInd Unknown, PwrInd Unknown, Power- Interlock-
       SltSta:    Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ Interlock-
           Changed: MRL- PresDet- LinkState-
       RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible+
       RootCap: CRSVisible+
       RootSta: PME ReqID 0000, PMEStatus- PMEPending-
       DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not Supported ARIFwd-
       DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis-, LTR-, OBFF Disabled ARIFwd-
       LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
            Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
            Compliance De-emphasis: -6dB
       LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
            EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
   Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit+
       Address: 00000000fee00000  Data: 0000
   Capabilities: [b0] Subsystem: Hewlett-Packard Company Device 8103
   Capabilities: [b8] HyperTransport: MSI Mapping Enable+ Fixed+
   Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
   Capabilities: [270 v1] #19
   Kernel driver in use: pcieport
   Kernel modules: shpchp
I popped one of my Mellanox CX-3 cards back in to play around. If you use dual-port VFs you can have up to 7 with access to both ports on the card - they support single VFs that expose both ports. I'm going to see about contacting the FreeBSD driver maintainers; if I can help them get VF support working in FreeBSD 12, that'll open pfSense up for virtual appliance use when 2.5 drops. It might be viable to backport some of the driver code to FreeBSD 11 too, who knows.

I haven't had a chance to really work with the Chelsio cards yet - one of them bricked itself on first boot when the driver automatically updated its firmware. There's one sticking point with these, though: Chelsio T4 adapters consume all 8 device addresses right off the bat, so there aren't any left for VF creation. I'm still sorting out what is bound to each address (there are 5 device IDs assigned to Ethernet functions by default) and haven't been able to test passthrough to FreeBSD on this one yet.

Also, I've been meaning to read up on DPDK for a while now; that may be another lower-overhead way to share 10/40GbE ports with a variety of VMs.
 

WANg

Well-Known Member
So I dug into this a bit more tonight while working with a QLogic NetXtreme II card that I picked up cheaply on eBay. This one (bnx2x driver) completely refuses to enable SR-IOV without ARI forwarding on the upstream PCIe bridge or switch, which this board doesn't support.

That's it, BTW: the reason we're only able to allocate up to 8 device addresses per card is that the function number - the last chunk of the PCI address - is a 3-bit value. Without ARI forwarding on the upstream port there's no way to allocate more than 8 addresses per device; with ARI forwarding you get something like 254 virtual functions per device.

That's not that bad, though - I don't know that I have any real reason to need more than a couple of passthrough VFs at any given time. They can be recycled too; it shouldn't be too hard to write a script to pick a VF from the pool that isn't in use, give it an appropriate MAC address, and assign it to whichever VM you need to spin up.
See, this is what I am not 100% sure about. I was digging through the BKDG (BIOS and Kernel Developer's Guide) for the AMD Kaveri family 30h-3Fh, trying to see whether ACS and ARI are supported - and my intuition is that they are (since this is handled by the APU's PCIe root complex). I wonder if it's possible to change something in the BIOS (outside of the config screens) to enable it. I'll try poking around this weekend and see what I can do.
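My first poke will probably be from a running system with setpci before going down any BIOS-mod rabbit hole (a sketch - the offsets are the PCIe spec's DevCap2/DevCtl2 registers, bit 5 in each; if the root port doesn't implement ARI forwarding, the enable bit simply won't stick):
Code:
# Device Capabilities 2 of the root port from the dump above (00:02.1);
# bit 5 = ARI Forwarding Supported
setpci -s 00:02.1 CAP_EXP+0x24.L

# Device Control 2; bit 5 = ARI Forwarding Enable
setpci -s 00:02.1 CAP_EXP+0x28.W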
 

arglebargle

Hello World
See, this is what I am not 100% sure about. I was digging through the BKDG (BIOS and Kernel Developer's Guide) for the AMD Kaveri family 30h-3Fh, trying to see whether ACS and ARI are supported - and my intuition is that they are (since this is handled by the APU's PCIe root complex). I wonder if it's possible to change something in the BIOS (outside of the config screens) to enable it. I'll try poking around this weekend and see what I can do.
Here, this might be worth looking into: Modding the Asus Prime X370 Pro BIOS

Here's someone using a Ryzen chip having issues with his board not advertising ARI support:
SR-IOV not functioning on Asus X370-PRO board (w/ Ryzen) : VFIO

He got it working with a modded BIOS, so there may be some hope there.
 

WANg

Well-Known Member
Here, this might be worth looking into: Modding the Asus Prime X370 Pro BIOS

Here's someone using a Ryzen chip having issues with his board not advertising ARI support:
SR-IOV not functioning on Asus X370-PRO board (w/ Ryzen) : VFIO

He got it working with a modded BIOS, so there may be some hope there.
Eh, I spent half a day digging through the BIOS options on similar products (the DFI DT122-BE, for instance) trying to figure out whether the ARI options are present on some other machines...and I don't see them. I don't want to ping someone on a BIOS-modding forum for help before I can confirm that the RX-427BB still has the extended PCIe functions (like ARI and ACS) in silicon.
That being said, the DFI DT122-BE is a cheaper version of the t730 for you penny-pinching bastards out there, listed at $175 or best offer - assuming you don't need the quad-display outputs or the smaller footprint of this box, and want that second Ethernet port (which is an Intel i217, BTW). It's really hilarious that the bare motherboard is on eBay for $750 when the whole box can be had for much less.
@Patrick - maybe you could recommend the DFI DT122-BE as a reasonable alternative to the t730 for pfSense duty when you get around to part 2 of that thin client repurposing write-up.

Anyway, yeah, I didn't have much bandwidth to mess with the t730 over the weekend - Proxmox 5 is working just fine (I even declawed that stupid no-subscription popup)...and then I realized the vendor left the Mellanox ConnectX-2 VPI cards in IB mode instead of Ethernet mode (and mstconfig doesn't support anything older than ConnectX-3), so I can't exactly get the PCIe passthrough tests going this session. Oh well, I guess there's always next week.
 

arglebargle

Hello World
Anyway, yeah, I didn't have much bandwidth to mess with the t730 over the weekend - Proxmox 5 is working just fine (I even declawed that stupid no-subscription popup)...and then I realized the vendor left the Mellanox ConnectX-2 VPI cards in IB mode instead of Ethernet mode (and mstconfig doesn't support anything older than ConnectX-3), so I can't exactly get the PCIe passthrough tests going this session. Oh well, I guess there's always next week.
I can help with that - you can specify the port type for the VPI cards on the module load command line or with modprobe.d:

Code:
root@bill:~# cat /etc/modprobe.d/mlx4_core.conf
options mlx4_core port_type_array=2,2 num_vfs=4,0,1 log_num_mgm_entry_size=-1
That should give you both ports in Ethernet mode, 4 VFs for port 1, and one VF with access to both ports. modinfo mlx4_core gives a full list of the parameters the driver supports.
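Applying the change is just a driver reload (or rebuild the initramfs and reboot if mlx4 loads early) - roughly:
Code:
# unload and reload so the new options take effect (skip mlx4_ib if it isn't loaded)
modprobe -r mlx4_en mlx4_ib mlx4_core
modprobe mlx4_core

# confirm both ports came up as Ethernet and the VFs exist
ip link
lspci -d 15b3: | grep -ci "virtual function"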

I had to jump through some hoops to get OFED installed on Proxmox because of dependencies - I think I had to edit the install script to skip removing some things that core PVE packages depend on. The in-box kernel driver seems to work pretty well, though.

That being said, the DFI DT122-BE is a cheaper version of the t730 for you penny-pinching bastards out there, listed at $175 or best offer - assuming you don't need the quad-display outputs or the smaller footprint of this box, and want that second Ethernet port (which is an Intel i217, BTW). It's really hilarious that the bare motherboard is on eBay for $750 when the whole box can be had for much less.
@Patrick - maybe you could recommend the DFI DT122-BE as a reasonable alternative to the t730 for pfSense duty when you get around to part 2 of that thin client repurposing write-up.
Damn, look at that again. It does have quad DisplayPort on the back shield, 3 or 4 SATA ports, and what looks like a single PCIe slot on a riser just above the CPU. No RAM or drive, but if they'll OBO down a bit that's a pretty good deal.

Here's a similar board minus the case and with an external power supply:
DFI ITOX BE171-77EN-427B 770-BE1711-100G Mini-ITX Motherboard | AMD RX-427BB | eBay
 

SwanRonson

Member
No. What does it have to do with the CPU voltage? This is an embedded systems board.

If voltage could be dropped, it would be perfect for my use case. Unfortunately, I think I'll have to go with one of the other NUC-style boards.
 

WANg

Well-Known Member
If voltage could be dropped, it would be perfect for my use case. Unfortunately, I think I'll have to go with one of the other NUC-style boards.
Well, the net benefits of the t730 are:

a) It's pretty small (a 4-liter chassis, versus the 6 liters of the HP EliteDesk USFF models and the roughly 13 liters of a typical SFF)
b) It's very quiet (one chassis fan and a single heatsink)
c) It has quad DisplayPort outputs (which means that even after you upgrade to a newer device, it's still a decent machine with AMD Eyefinity support for multi-screen gaming or as an HTPC) - remind me to do a video on its gaming capabilities for an STH "After-Hours" thread.
d) There is native support for a Broadcom BCM5709 fiber GigE card that hangs off the M.2 slot and uses an SC connector - and that one can be had very inexpensively ($13).
e) It has a PCIe x16 slot (x8 electrically), so you can put a quad-port GigE or dual-port 10GbE card inside - a few of us even have Mellanox 40GbE/InfiniBand cards set up in it. I have Solarflares in mine, @arglebargle has Mellanox in his, and I think @BLinux uses his with an Intel I350-T4 quad-port card. Most NUC chassis don't have PCIe x8/x16 slots - you'll be lucky to get a breakout for PCIe x1 (probably some Frankensteinian setup, and at most it'll be driving a Realtek or Intel i217 card).
f) It's fairly capable (picture a Haswell NUC, or a PlayStation 4 with more CPU cores but half the GPU cores)

Frankly, undervolting is a bit of a dead horse to beat here, since the box is already fairly efficient for what it is: an AMD Kaveri APU, not a Pentium D-class power hog, and it can live in low-draw applications - but it's certainly no Broadwell-U or -Y. You might be able to tweak it with the usual AMD Zen or Carrizo MSR adjustment tools, but that's unknown territory here.
I don't mess with mine since it's used as an ESXi hypervisor, and I don't think @arglebargle messed with it in Proxmox or bhyve when he was doing SR-IOV testing.
@BLinux mentioned that he measured the t730 at 22 watts initially, but once the displays are disconnected and the embedded Radeon R7 GPU powers down, it drops back to about 11-13 watts. @arglebargle also mentioned that you can push the system fan to a higher RPM setting to wick away excess heat from an add-in card (which usually consumes 9 to 17 watts, depending on the card). Not my preferred method - the Gigabyte i7-5775R NUC I have spins its fan up often enough to be extremely annoying as-is. If you are looking for something that can take a PCIe card on a smaller power budget, you'll probably want to look at the HP t620 Plus thin client - a slower but more power-sipping APU (the GX-420CA, loading at about 10W max), native AES-NI support, and probably even cheaper than the t730. @Patrick did a video on it - maybe he'll do something cool with Part 2?
 

arglebargle

Hello World
You're really not going to get the power draw of the T730 much lower than just installing cpufrequtils (to enable CPU frequency scaling) and unplugging all of the displays. There may be something clever you can do to underclock the processor in software (or just lock it at a lower maximum frequency), but if you're that desperate to drop the power draw you should probably be looking at the T620 Plus, or even a <10W ARM board, instead.
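On Debian/Proxmox that's just the following (a sketch - which governors are available depends on the cpufreq driver the kernel picks for the APU):
Code:
apt-get install cpufrequtils

# let the clocks drop at idle; set GOVERNOR=ondemand in /etc/default/cpufrequtils to persist it
cpufreq-set -g ondemand

# check what the cores are actually running at
cpufreq-info | grep "current CPU frequency"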

I'd only go with the T730 if you have a use case for the extra processing power in mind - my pair is set up for virtualized pfSense in HA, with the spare cores/cycles available to handle multiple VPN streams.

I don't think you're going to do much better than a T620 Plus with a NUC-style system; even the 35W TDP boxes are probably going to draw more juice unless you're talking about something like a J1900 Celeron (or a similarly very-low-power CPU).
 

WANg

Well-Known Member
You're really not going to get the power draw of the T730 much lower than just installing cpufrequtils (to enable CPU frequency scaling) and unplugging all of the displays. There may be something clever you can do to underclock the processor in software (or just lock it at a lower maximum frequency), but if you're that desperate to drop the power draw you should probably be looking at the T620 Plus, or even a <10W ARM board, instead.
*Eeeh*.
I doubt that you can hang a decent 10/40GbE NIC with SR-IOV off an ARM board - either you pay through the nose for a Cavium ThunderX dev board, hack a MacchiatoBin (and end up as a Marvell guinea pig), or pretend those Pi clones (OrangePi or whatever) have the networking/IO horsepower to be useful. They usually don't.

If you truly care about performance per watt, you can just wait for the embedded Ryzens to hit the market in force. The RX-427BB in the t730 is a four-year-old AMD Kaveri/Steamroller design, and those were never really known for being friendly on power draw on a performance-per-watt basis - not compared to, say, the Jaguar cores in the t620 Plus (the Atoms of AMD-land). It's good value for the small outlay on those t730s, but once the t740s come out they'll be outclassed.
 

arglebargle

Hello World
*Eeeh*.
I doubt that you can hang a decent 10/40GbE NIC with SR-IOV off an ARM board - either you pay through the nose for a Cavium ThunderX dev board, hack a MacchiatoBin (and end up as a Marvell guinea pig), or pretend those Pi clones (OrangePi or whatever) have the networking/IO horsepower to be useful. They usually don't.

If you truly care about performance per watt, you can just wait for the embedded Ryzens to hit the market in force. The RX-427BB in the t730 is a four-year-old AMD Kaveri/Steamroller design, and those were never really known for being friendly on power draw on a performance-per-watt basis - not compared to, say, the Jaguar cores in the t620 Plus (the Atoms of AMD-land). It's good value for the small outlay on those t730s, but once the t740s come out they'll be outclassed.
Oh, that recommendation wasn't for my use case. I doubt you're going to find SR-IOV on a low-power ARM board any time soon.

The RockPro64 is the most interesting board in the PCIe-capable ARM SBC space right now; I have one on my desk that I've used with an Intel I350-T4. You won't be doing much with it beyond basic routing, but the price and power draw are pretty nice.

ROCKPro64 – PINE64

And yeah, Kaveri/Steamroller isn't terribly power-friendly; the power draw on my RX-427BB systems isn't much better than that of my (significantly more capable) E3 v3 Q87 SFF machine.
 

SwanRonson

Member

Thanks for all the info, gentlemen! Much appreciated.

I was leaning towards the t730 due to the active cooling and extra room for future expansion, but I believe I'll go with the t620.