HP t730 as an HP Microserver Gen7 replacement

Discussion in 'DIY Server and Workstation Builds' started by WANg, Jun 10, 2018.

  1. WANg

    WANg Member

    Joined:
    Jun 10, 2018
    Messages:
    70
    Likes Received:
    30
    Build’s Name: The stealth environment

    Operating System/ Storage Platform:

    Thin client: VMWare ESXi 6.5
    N40L: iXSystems FreeNAS 11.1

    CPU:
    Thin client: AMD RX427BB (Bald Eagle platform, Steamroller quad cores)
    N40L: AMD Turion II Neo N40L (K10/Geneva, dual core)

    Motherboard:
    n/a (both are HP Proprietary)

    Chassis:
    Thin client: HP t730 Thin client
    N40L: HP Microserver Gen 7/N40L

    Drives:
    Thin client: 32GB Sandisk SSD, M.2 Key-BM
    N40L: 1x Toshiba 16GB consumer-class thumb drive + 4x HGST Deskstar NAS 4TB drives

    RAM:

    Thin Client: 2x8GB DDR3L 204 Pin Laptop DIMM (G.Skill Ripjaw F3-1600C9S)
    N40L: 2x8 GB DDR3 240 Pin desktop DIMM (G.Skill Ares F3-1333C9D-16GAO)

    Add-in Cards:
    Thin Client: SolarFlare SFN5122F
    N40L: Solarflare Flareon SFN7322F

    Power Supply:

    n/a (Both are stock HP, although the thin client uses a 90W HP Notebook barrel type power brick)

    Other Bits:

    A pair of generic 1m Twinax cables

    Usage Profile: Small, quiet, low profile, secure server/VM hosting environment for the home office

    Other information…

    Okay, so here's the situation - I live in a small-ish NYC apartment, and I also run IT for a mid-size company. I do quite a bit of self-study on the side and need to reproduce issues on a constant basis. I also have a crapload of personal digital artifacts (photos of past vacations, archives of old software) that I need to stash away in a reasonably safe and redundant environment. The missus and I treat our home as our oasis of relaxation, so the last thing I want is a massive rackmount environment with loads of fans making an appreciable dent in the electric bill. So back in 2011 I picked up a Microserver Gen 7, which became my VMware ESXi 5 environment with a bunch of decommissioned desktop drives ranging from 250GB to 2TB. However, the machine does not do any type of RAID, and the drives were tossed in as a bunch of JBOD volumes serving as multiple datastores. Of course, it's 2018, and a netbook processor with a CPUMark score of 930 is not going to cut it (even though all I really run are a CentOS kickstart environment, an rsync setup for locally caching the CentOS repositories, a DNS server (djbdns/dnscache), a Debian FAI environment, a triplet of Juniper JunOS Olive VMs, and a Windows XP VM).

    So, what to do? Well, I recently picked up an HP t730 thin client on eBay for $200. It's a fairly high end device that can drive 4 LCD panels simultaneously via DisplayPort - quad core AMD, Radeon graphics, 8GB RAM and a 32GB SSD DOM, and it supports both Windows 10 IoT (licensed) and ThinPro, HP's simplified version of Ubuntu Linux (both come with some nice disk image management utilities). HP RGS/RDP/NoMachine NX/ThinLinc aside, it's also more powerful than you'd assume. I also have some leftover RAM from my gig, and 4 HGST 4TB NAS drives that I picked up from Newegg during Black Friday last year.

    Now, in terms of what was / needs to be done:

    a) Shut down and migrate the VMs off the HP Microserver G7 onto another machine via VMWare Converter.

    b) Shut down the Microserver Gen 7 and swap the VMware ESXi OS (mounted on an 8GB thumb drive) for something that can manage 4 drives - like iXSystems FreeNAS, which is FreeBSD based. Bonus - it can be installed on a thumb drive, supports ZFS (so raidz pools are totally feasible) and can serve the raidz extents over iSCSI. Then swap the old drives out for the 4TB drives.
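    FreeNAS drives all of this from its web GUI, but under the hood the pool and extent setup boils down to something like the following sketch (the pool name, the ada0-ada3 device names and the zvol size are placeholders of mine, not what FreeNAS actually generates):

```shell
# Create a raidz1 pool across the four 4TB drives (single parity, ~12TB usable)
zpool create tank raidz1 ada0 ada1 ada2 ada3

# Carve out a zvol (block device) to serve as the iSCSI extent
zfs create -V 8T -o volblocksize=64K tank/esxi-extent

# Verify pool health and layout
zpool status tank
```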

    c) Setup something that can serve the bits out via an iSCSI network, maybe a quadport 1GbE NIC using teaming, or perhaps 10GbE. I just so happen to have a pair of SolarFlare Flareon SFN7322F 10GbE PTP cards, and a bunch of SFN5122F 10GbE cards.

    d) Set up a VMware ESXi 6.5 ISO that has the Realtek R8169 and Solarflare drivers pre-baked, then image the ISO onto an installer thumb drive. Install the OS onto the 32GB SSD in the t730, install 16GB of RAM in the thin client, and add a 10GbE card. Fire up ESXi on the thin client and test/make sure everything is working to this point.

    e) Wire the 2 machines up using a pair of SFP+ Twinax cables, configure the FreeNAS box to do raidz1 on the 4 drives, configure the 10GbE ports into an iSCSI network, then configure an iSCSI target with an extent mounted on the volume. On the thin client side, configure the software iSCSI initiator with multipathing, and mount the extent.
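    On the ESXi side, the initiator half of step (e) can be done from the ESXi shell; a hedged sketch (the vmhba33/vmk1 names and the target address are placeholders for whatever your system enumerates):

```shell
# Enable the software iSCSI initiator
esxcli iscsi software set --enabled=true

# List iSCSI adapters to find the software HBA name (e.g. vmhba33)
esxcli iscsi adapter list

# Bind a VMkernel port on the 10GbE network to the HBA for multipathing
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1

# Point the initiator at the FreeNAS target and rescan for the extent
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.10.1:3260
esxcli storage core adapter rescan --adapter=vmhba33
```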

    f) Once ESXi boots up and mounts the iSCSI drive, migrate the VMs back in. Test and verify accordingly.

    g) On the same big raidz1 pool, set up an SMB share that can be used to consolidate and retire the myriad network drives currently scattered around the house.

    h) Figure out what to do about offline backups. Something LTO-4 or LTO-5 based, preferably SATA or eSATA capable (it needs to hang off the Microserver G7, and preferably be driven from the FreeNAS GUI).
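    The FreeNAS GUI has no tape support that I know of, so if (h) ends up shell-driven, it would look roughly like this on FreeBSD (a sketch - `sa0` is the usual first SCSI tape device, and the dataset/snapshot names are placeholders):

```shell
# Check the tape drive is visible (FreeBSD names SCSI tape drives sa0, sa1, ...)
mt -f /dev/sa0 status

# Snapshot the share and stream it to tape
zfs snapshot tank/share@weekly
zfs send tank/share@weekly | dd of=/dev/sa0 bs=1M

# Restore path, for when it's eventually needed:
#   dd if=/dev/sa0 bs=1M | zfs receive tank/restored
```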

    This is what the current setup looks like in the house. I can post some photos of the work done.

    So, why use a t730 instead of, say, a NUC, or a Microserver Gen 8 or 10? And why de-couple the storage from the processing?

    - It's cheap. Did I mention that you can pick one up on eBay for about $200, and that there are quite a few available? Most thin clients have practically no resale value on the secondary market, as they are a bit of a niche device. If I choose to upgrade the ESXi environment later on, well, I still have a powerful quad screen thin client, which can be repurposed for, gosh, anything. Emulate a PlayStation 4 with it one day? (No, just kidding. Not enough GPU horsepower, but definitely okay for PPSSPP.) Add a pair of Mellanox ConnectX-4s, Chelsio Terminator T420s or SolarFlare SFN5122Fs (remember to specify the low profile bracket), and it'll probably be an extra $60 or so. Add a pair of generic SFP+ DAC cables and you'll be done at around $300. The t730 usually ships to you ready-to-go with 8GB RAM and a small SSD. If you buy a NUC you usually have to buy some extras - even an older bundle with a Haswell or Broadwell i5/i7 is still going to be $400 at the very least. Besides, how many $200, quad DisplayPort capable NUCs with dual 10GbE networking support do you know of?

    - The thin client is absolutely silent, and has no problems with the Solarflare SFN5122F inside (the Flareon would lock up the t730 after 10 minutes of use regardless of which OS was running...my guess is that it is just too power hungry or runs too hot). Both the Flareon and the 5122F have no issues inside the N40L Microserver, and performance seems decent enough. Power consumption is quite low, around 30W in most cases.

    - The performance is actually pretty good. CPUMark for the RX427BB is 4227, which is not too far from the 4420 quoted for the Xeon E3-1220L v2 found in some HP Microserver Gen 8s. It's definitely faster than the G1610T/G2020 in the base model Gen 8s, or, oddly enough, the Opteron X3216 in the recently released Gen 10 Microservers. For the amount HPE charges for a Gen 10, you are better off buying a t730 and keeping the Gen 7 running as a dedicated NAS box. As for competing against NUCs, it's not bad for the price point either - it's roughly the same performance as an i5-6260U (4366 CPUMark), but unlike the NUC, you can get 10GbE networking running on it.

    - Why yes, it only has a 16GB practical RAM ceiling (32GB might work, but good luck getting your hands on some - a 2x16GB DDR3L 204 pin DIMM kit goes for about $300 retail). The same can be said about most Intel NUCs used in home lab environments. A bigger issue, though, is the lack of support for a virtualization-friendly IOMMU, or at least the ability to turn it on for vMotion.

    - Once you have the HP Microserver Gen 7 configured as a 10GbE iSCSI box, well, you can reuse it for the next upgrade. If I ever get around to that $2000 SuperMicro Xeon D-1541 box (5028D-TN4T), I can still use the Gen 7 as an attached NAS. I hate the idea of sending nominally useful hardware to the landfill, and this little black box has been with me since 2011 - no e-waste is allowed here. My only misgiving is that the N40L is still a 4 bay device. In theory I could use a custom caddy in the optical bay to add a few 2.5" 7mm SATA SSDs, but the resulting credit card statement would earn me the evil eye from the missus (AKA she who must be obeyed).



     
    #1
    Last edited: Aug 11, 2018
    Patrick likes this.
  2. WANg

    WANg Member

    Joined:
    Jun 10, 2018
    Messages:
    70
    Likes Received:
    30
    Okay, so I have been asked about photos of the t730 - so here are a few.

    First, here's my "server rack" - the missus insisted on something a little less...obvious, so I went with an IKEA Galant office cabinet. The t730 can be seen above the HP Microserver Gen 7 (which serves as its raidz1 iSCSI storage box).
    The t730 can be mounted horizontally on its existing stand to leave an air gap for ventilation. The 90W right angle power brick is next to it, along with a Powerline AV1200 adapter to connect to the rest of the house. There are currently 2 employees handling physical security on premises. If you look all the way in the lower left hand corner, you can spot the outline of my Cobalt Qube 3, the granddaddy of all home server appliances. I really should do something with it, it's a lovely little chassis.

    [​IMG]

    Here is a close-up of how it's rigged up at ground level - see how the stand raises the thin client by several inches to allow some breathing room.
    [​IMG]

    So here's how it looks standing up - note that I removed the plastic knock-out for the PCIe x8 port at the bottom.
    [​IMG]

    Very easy chassis to work with. All you need to do is unlock the green latch and pull on the black release tab, and the entire thing slides right out (this is actually the BOTTOM of the chassis - the black rubber feet on the stand attest to that). The cover plate for accessing the green latch also comes off easily enough. Once you get the cover plate off, you can knock out the plastic blankers to populate whatever you need - in my case you can see the SFP+ cages of a dual port 10GbE card (minus its low profile bracket) poking out. Very nice design - note the 4 DisplayPort outputs built into the chassis, along with dual serial ports, which is good for driving most things.
    In case you are wondering what the plastic blanker above the audio jack is for, that's for an M.2 PCIe x1 fiber adapter (AT27M2 or AT29M2, available in both SC and LC). The slot can also be populated with a run-of-the-mill Intel M.2 wireless card. I might take advantage of the optional fiber functionality to run some multimode around my apartment later...that Powerline AV1200 setup hasn't been very reliable.
    [​IMG]

    So what's inside? Well, here's what it looks like, partially obstructed by the back cover plate. A single large diameter fan (rather quiet), an M.2 Key-E slot (you can see the mounting post where the big white circle happens to be), the M.2 Key-BM slot for the SATA DOM (below the fan, standard SATA), and under the cage are the 2 8GB G.Skill DDR3L modules I pulled out of an old notebook PC of mine.
    [​IMG]

    Here's an even better photo of the layout. The front is to your left, and the exhaust fan points to the top. The M.2 SATA slot can accept several form factors, up to and including 2280.
    The Sandisk U110 shipped with the unit is...not that great, but that was expected - those SSDs aren't expected to do more than hold an OS image for bootups, and expectations for I/O performance are rather low. After all, the SoC was designed for arcade/gaming machines with an emphasis on "good enough" visuals, not something that would require much storage write-back, so I doubt that NVMe support will ever be available for this particular thin client.
    [​IMG]

    Here's what the PCIe x8 slot looks like (right under my finger). Whoops, what's that nail clipper doing here?
    [​IMG]

    Here's the SolarFlare SFN5122F going into the system - note that I do not have an LP bracket for it.
    [​IMG]
    ...and what it looks like when installed.

    [​IMG]

    ...and here's how it looks after installation. The panel has been remounted in place, as has the back port access cover. Note the horizontal mounting bracket sticking out of the right (bottom) side.

    [​IMG]

    And finally, what the entire thing looks like lit up with 10Gbit connectivity. Huh, I guess I did not use that 90 degree power brick after all.

    [​IMG]

    If you need a better idea on component assembly/disassembly, HP provides a break-down guide for recyclers:

    http://h22235.www2.hp.com/hpinfo/gl...ountry/disassembly_deskto_201511423012668.pdf
     
    #2
    Last edited: Aug 11, 2018
    arglebargle, Tha_14 and Marsh like this.
  3. EffrafaxOfWug

    EffrafaxOfWug Radioactive Member

    Joined:
    Feb 12, 2015
    Messages:
    626
    Likes Received:
    215
    I know it's hard to get good staff these days, but one is clearly dozing on the job and - even more dangerous - the second is attempting to scale a company structure using no officially recognised safety equipment. This is a SEVERE health and safety violation and could leave you liable to be sued for improper working practices until your staff have been correctly trained and provided with the proper equipment.

    Also, fur coats are not proper attire for a work environment and are likely to lead to animosity.
     
    #3
    leebo_28, Tha_14 and WANg like this.
  4. WANg

    WANg Member

    Joined:
    Jun 10, 2018
    Messages:
    70
    Likes Received:
    30
    The employees have been repeatedly warned, and have had their performance bonuses (treats) rescinded. However, since management (the wife) brought them in as well-being consultants, they have been given blanket exemptions from standard operating procedure. I believe that their current workstations (in my bed, next to my napping wife) are more appropriate for them than physical security.
     
    #4
    leebo_28 and Tha_14 like this.
  5. BLinux

    BLinux Well-Known Member

    Joined:
    Jul 7, 2016
    Messages:
    1,606
    Likes Received:
    411
    @WANg what are the expansion capabilities internally for the T730? From your photos, I see: 1x USB-A, 1x M.2, 1x mini-PCI?
     
    #5
    Patrick likes this.
  6. WANg

    WANg Member

    Joined:
    Jun 10, 2018
    Messages:
    70
    Likes Received:
    30
    According to the hardware reference:
    2 DDR3L RAM slots, 2 M.2 sockets (storage uses Key BM, WiFi/fiber NIC uses Key E), 1 PCIe x16 slot (really PCIe x8 on a 90 degree riser) and a single USB-A 3.0 port.
     
    #6
    Last edited: Aug 11, 2018
    BLinux likes this.
  7. BLinux

    BLinux Well-Known Member

    Joined:
    Jul 7, 2016
    Messages:
    1,606
    Likes Received:
    411
    ah... so what i thought was a mini-PCI slot is actually M.2 for the wireless card.

    do you know if that PCI-E slot is 2.0 or 3.0?
     
    #7
  8. WANg

    WANg Member

    Joined:
    Jun 10, 2018
    Messages:
    70
    Likes Received:
    30
    It should be PCIe 3.0 - if you look at the block diagram for the RX427BB, it shows the option to bring out a single PCIe x16 3.0 link, or dual PCIe x4 links. The official QuickSpecs also mention that the optional FirePro W2100 card is PCIe 3.0. I don't recall any settings in the BIOS to change PCIe version support, and I don't think smbiosDump in the ESXi shell will show PCIe link speeds. I guess the only way to be sure is to boot a Debian Live image the next time I have a downtime window and query it via lspci.
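    For reference, lspci reports link speed in gigatransfers per second, which maps directly onto PCIe generations; a small sketch of the lookup (the `01:00.0` address is just an example slot):

```shell
#!/bin/sh
# Map an lspci LnkCap/LnkSta speed figure to its PCIe generation
pcie_gen() {
    case "$1" in
        2.5GT/s) echo "PCIe 1.x" ;;
        5GT/s)   echo "PCIe 2.0" ;;
        8GT/s)   echo "PCIe 3.0" ;;
        16GT/s)  echo "PCIe 4.0" ;;
        *)       echo "unknown" ;;
    esac
}

# On the live system (root needed for the full capability dump):
#   lspci -vvv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
pcie_gen "8GT/s"   # prints: PCIe 3.0
```

Note that LnkCap is what the slot/card can do, while LnkSta is the speed the link actually trained at - worth checking both.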
     
    #8
  9. arglebargle

    arglebargle Active Member

    Joined:
    Jul 15, 2018
    Messages:
    102
    Likes Received:
    40
    Quick question for you: I'm having trouble confirming the drive type for the Key BM M.2 slot - it's SATA 6Gb/s, right?
     
    #9
  10. WANg

    WANg Member

    Joined:
    Jun 10, 2018
    Messages:
    70
    Likes Received:
    30
    Yeah, it's the 6Gb/sec SATA III standard. The original M.2 SATA SSD in those thin clients is a Sandisk U110 M.2 2242 drive or its functional equivalent. Crucial MX500 M.2 2280 drives will work just fine in them.
     
    #10
  11. arglebargle

    arglebargle Active Member

    Joined:
    Jul 15, 2018
    Messages:
    102
    Likes Received:
    40
    Awesome, thanks! I figured I'd probably want more than 16GB of storage if I'm going to take VM snapshots, heh.
     
    #11
  12. WANg

    WANg Member

    Joined:
    Jun 10, 2018
    Messages:
    70
    Likes Received:
    30
    You know the golden rule when it comes to SSDs, right? When in doubt, over-provision.
     
    #12
  13. arglebargle

    arglebargle Active Member

    Joined:
    Jul 15, 2018
    Messages:
    102
    Likes Received:
    40
    Yeah. I picked up a 180GB Intel Pro drive to drop in. I expect I'll only be using 20GB so I think I'll be ok.
     
    #13
  14. WANg

    WANg Member

    Joined:
    Jun 10, 2018
    Messages:
    70
    Likes Received:
    30
    Okay, so it's a lazy rainy Saturday evening here in NYC, and as such, I had a few minutes of downtime with the t730. So what did I do? Shut down ESXi to poke around more on the Linux side of things - I want to see if PCIe passthrough and SR-IOV (PCIe virtual functions) are functional. Considering the rather spartan BIOS environment - can this thin client be turned into a reasonably powerful little SR-IOV server?

    So, first question to be answered - accommodations for M.2 SATA SSDs - can you fit a 2230/2242 or a 2280 in the t730? Well, here's the slot:
    [​IMG]

    There's your answer right there - it can do 2230, 2242 or 2280. And it's definitely SATA, as the slot is keyed for Key B (the SanDisk U110 is Key B+M). I should also mention that the M.2 SSD on my t730 is secured in place by a Torx T8 screw, so make sure you have the correct tool on hand.

    Now, to work on the t730 I had to bring it down from the rack and set it up. That's a GeChic On-Lap 1302 portable USB powered monitor with a DisplayPort-to-HDMI adapter, a USB keyboard+mouse set, and a PQI AirPen Express. Think of the latter as a USB powered wireless network adapter that NATs traffic to a static IP on the Ethernet side. Pretty useful device if you don't want to run Ethernet to a test environment. The boot media is a Debian Stretch live image on a USB 3.0 thumb drive (not seen here).
    [​IMG]

    Okay, first question to ask -

    a) Can the t730 do AMD-Vi/IO Virtualization?

    From dmesg:
    [ 1.065263] AMD-Vi: IOMMU performance counters supported
    [ 1.065271] AMD-Vi: Applying ATS write check workaround for IOMMU at 0000:00:00.2
    [ 1.065425] pci 0000:00:00.2: can't derive routing for PCI INT A
    [ 1.065429] pci 0000:00:00.2: PCI INT A: not connected
    [ 1.065604] iommu: Adding device 0000:00:01.0 to group 0
    [ 1.065628] iommu: Using direct mapping for device 0000:00:01.0
    [ 1.065651] iommu: Adding device 0000:00:01.1 to group 0
    [ 1.065700] iommu: Adding device 0000:00:02.0 to group 1
    [ 1.065721] iommu: Adding device 0000:00:02.1 to group 1
    [ 1.065792] iommu: Adding device 0000:00:03.0 to group 2
    [ 1.065813] iommu: Adding device 0000:00:03.2 to group 2

    ...

    [ 1.067232] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40
    [ 1.067236] AMD-Vi: Extended features (0x800090a52):
    [ 1.067240] PPR GT IA PC
    [ 1.067245] AMD-Vi: Interrupt remapping enabled
    [ 1.067508] AMD-Vi: Lazy IO/TLB flushing enabled


    So, that's a yes - which version of the BIOS, though?

    Let's check dmidecode (dmidecode -t bios):

    BIOS Information
    Vendor: AMI
    Version: L43 v01.10
    Release Date: 12/20/2017


    b) What speed does the PCI Express x16 slot operate under?

    Well, that's what lspci -vvv is for:

    LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <512ns, L1 <8us

    8 gigatransfers per second is only possible under PCIe 3.0, so that's a PCIe 3.0 slot. That being said, I would be careful about the slot's maximum power delivery - the SolarFlare Flareon SFN7322F did not play nice with it.

    I subsequently found out that my SolarFlare SFN5122F operates at PCIe 2.0 speeds only:

    01:00.0 Ethernet controller: Solarflare Communications SFC9020 [Solarstorm]
    Subsystem: Solarflare Communications SFN5122F-R6 SFP+ Server Adapter

    ...
    LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s L1, Exit Latency L0s <512ns, L1 <64us

    Whoops. Not that I am complaining - it works just fine as an iSCSI adapter on my ESXi environment.

    c) How do the IOMMU groupings present themselves for PCI passthrough - say, if you want to build a Plex box in Proxmox with a Radeon Vega, or if you want to do SR-IOV on a Mellanox card?

    Well, that one is a little more difficult, but there are pages dedicated to identifying VT-d/AMD-Vi features - like this write-up on the ArchLinux Wiki. AFAIK it works, but in order for my SolarFlare cards to do VFs, they'll need to be configured using Solarflare's utility, and a kernel parameter must be passed at boot time...which I didn't do on a live image. This is reflected in a short snippet of dmesg:

    [ 1.327814] sfc 0000:01:00.0 (unnamed net_device) (uninitialized): no SR-IOV VFs probed
    [ 1.328412] sfc 0000:01:00.0 (unnamed net_device) (uninitialized): no PTP support


    If I remember my SolarFlare Onload documentation, PTP support is either not present on the old 5-series cards, or it's not enabled unless you license the feature. VF on the other hand should be possible.
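    For cards that expose SR-IOV the standard Linux way, VFs get enabled through sysfs; a hedged sketch (the interface name and VF count are placeholders, and Solarflare cards additionally want their firmware-side setup done via the vendor utility first):

```shell
#!/bin/sh
# Enable N virtual functions on an SR-IOV capable NIC via sysfs
enable_vfs() {
    dev="$1"; count="$2"
    path="/sys/class/net/$dev/device/sriov_numvfs"
    if [ ! -w "$path" ]; then
        echo "no writable sriov_numvfs for $dev - no SR-IOV support?" >&2
        return 1
    fi
    # Drivers require dropping to 0 before changing a non-zero VF count
    echo 0 > "$path"
    echo "$count" > "$path"
}

# Usage (on the live system):
#   enable_vfs enp1s0f0 4
#   lspci | grep -i "virtual function"
```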

    There is a handy script on that Wiki that can list IOMMU groupings, and here's a snippet of the output from the script:

    IOMMU Group 0 00:01.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Kaveri [Radeon R7 Graphics] [1002:131c]
    IOMMU Group 0 00:01.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Kaveri HDMI/DP Audio Controller [1002:1308]
    IOMMU Group 10 00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 15h (Models 30h-3fh) Processor Function 0 [1022:141a]
    IOMMU Group 10 00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 15h (Models 30h-3fh) Processor Function 1 [1022:141b]
    IOMMU Group 10 00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 15h (Models 30h-3fh) Processor Function 2 [1022:141c]
    IOMMU Group 10 00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 15h (Models 30h-3fh) Processor Function 3 [1022:141d]
    IOMMU Group 10 00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 15h (Models 30h-3fh) Processor Function 4 [1022:141e]
    IOMMU Group 10 00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 15h (Models 30h-3fh) Processor Function 5 [1022:141f]
    IOMMU Group 1 00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Device [1022:1424]
    IOMMU Group 1 00:02.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:1425]
    IOMMU Group 1 01:00.0 Ethernet controller [0200]: Solarflare Communications SFC9020 [Solarstorm] [1924:0803]
    IOMMU Group 1 01:00.1 Ethernet controller [0200]: Solarflare Communications SFC9020 [Solarstorm] [1924:0803]
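    For reference, the Wiki's script amounts to walking sysfs; a sketch of the same idea (the base path is parameterized here only so it can be pointed at a test directory, and it falls back to the raw PCI address when lspci can't describe a device):

```shell
#!/bin/bash
# List each IOMMU group and the PCI devices it contains.
# Sysfs layout assumed: <base>/<group>/devices/<pci-bdf>
list_iommu_groups() {
    local base="${1:-/sys/kernel/iommu_groups}"
    local group dev bdf desc
    shopt -s nullglob
    for group in "$base"/*; do
        for dev in "$group"/devices/*; do
            bdf="${dev##*/}"
            # lspci -nns gives the readable description; fall back to the raw BDF
            desc=$(lspci -nns "$bdf" 2>/dev/null)
            [ -n "$desc" ] || desc="$bdf"
            echo "IOMMU Group ${group##*/} $desc"
        done
    done
}

list_iommu_groups
```

Devices in the same group must be passed through to a guest together, which is why the Solarflare's two ports sharing group 1 with the PCI bridge matters.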


    So yes, it looks like PCI VFs and passthrough are possible on the t730. I'll need to pick up a 512GB Micron M.2 SSD, install Proxmox and see how far I can go allocating VFs to various guest VM instances. I would like to see if it's possible to have multiple VMs firing off 10GbE packets at near-line speed.

    Oh yeah, I should mention that my ESXi install is based on a BIOS boot, not an EFI boot (I modified the install media to use unsigned r8169 drivers, and if it boots EFI it'll trigger the purple screen of death). Not sure whether a clean EFI boot might allow the IOMMU on the t730 to work in ESXi 6.5. That might involve using the AT29M2 fiber NIC instead of the built-in r8169.

    Anyway, I uploaded redacted versions of the dmesg/dmidecode/lstopo/lspci output and the iommu-group script plus its output, in case you would like to take a look.
     
    #14
    Last edited: Aug 18, 2018 at 8:40 PM
    arglebargle, leebo_28 and cesmith9999 like this.
  15. arglebargle

    arglebargle Active Member

    Joined:
    Jul 15, 2018
    Messages:
    102
    Likes Received:
    40
    That's great news re: SR-IOV! My t730 should be arriving on Monday, I'm really looking forward to working with it now.

    Do you think there's room in the case to jam a 2.5" sata drive in there with an m.2 to sata adapter? That opens up more possibilities for cheap used drives than just m.2 sata.
     
    #15
  16. WANg

    WANg Member

    Joined:
    Jun 10, 2018
    Messages:
    70
    Likes Received:
    30
    What, an M.2 NGFF to SATA adapter like this? I wouldn't recommend it. If you are planning to install a PCIe card in there, it's going to take up quite a bit of room at the bottom. Let's say you put the M.2 to SATA adapter there, and assume the power feed pin-out fits (those generic adapters put their power leads on the left or right, often in different spots on different production runs, and I don't see any spare power connectors inside that small chassis) - the 2.5" SATA drive will likely end up on top of the cooling fan. You'll probably need to secure it with cable ties. Not great for air circulation, and definitely not recommended for mechanical drives, but your typical 7mm SATA SSD might work (not 100% sure on clearance - mine is already back on the rack, so I can't even test fit it for you).

    [​IMG]
     
    #16
    Last edited: Aug 19, 2018 at 8:53 AM
  17. arglebargle

    arglebargle Active Member

    Joined:
    Jul 15, 2018
    Messages:
    102
    Likes Received:
    40
    Yeah, a cheap m.2 to sata adapter board like that. Alright, it looks like that could work out, I'll play around when my machine arrives.

    (I'm swimming in Samsung 850 Pros, I was thinking of maybe picking up another t730 and running Gluster across the two of them.)

    Edit: if you look at those boards again, that's actually a power supply feed on the adapter - it's for supplying slot power to the SATA drive you connect to the M.2 board.
     
    #17
    Last edited: Aug 19, 2018 at 10:45 AM