Triton (SmartOS) / Plex / Low Power Media Server Build


Matthew Stott

New Member
Dec 28, 2016
Hardware:
- SUPERMICRO MBD-A1SRi-2758F-O Mini ITX Server Motherboard CPU included (Atom C2758 SoC FCBGA 20W 8-Core)
- 32GB RAM ECC SO-DIMM
- LSI Internal SAS SATA 9211-8i 6Gbps 8 Ports HBA PCI-E RAID Controller Card (IT Firmware flashed to JBOD)
- StarUSA TC-1U30FX8 20+4Pin 300W Single 1U Flex ATX 80 Plus High Efficiency Power Supply
- Coboc SFF8087-MM-0.5M 0.5 meter 30AWG Internal Mini SAS 36-Pin SFF-8087 Male to Mini SAS 36-Pin SFF-8087 Male Cable with Nylon Braiding
- NORCO ITX-S8 Black Mini-ITX Form Computer Storage Case (8-bay hot swap)
- QTY 8 - HGST Deskstar NAS 3.5" 4TB 7200 RPM 64MB Cache SATA 6.0Gb/s High-Performance Hard Drive H3IKNAS40003272SNHGST
- QTY 2 - Noctua NF-A8 PWM quiet fans - Replacing the stock Norco fans, which were not PWM and rather loud for home use.

Project goals: low power usage, quiet operation, 5 Gigabit Ethernet ports, large storage, reliability, Plex Media Server, Time Machine backups, acting as a pfSense firewall, and room for additional virtual machines.

Surprised by the power provided by the Atom C2758 SoC 8-core CPU. It's a lot beefier than anticipated and can handle quite a lot of load. This box sits idle most of the time, but it handles heavier tasks now and then with ease. The latest version of Plex in an LX branded zone on SmartOS handles transcoding without trouble. The branded zone can see and use all 8 cores. You can throttle it if you wish, but it's not necessary in a simple home system.

Very happy with the Norco 8-bay case. I especially like the backplane and SFF-8087 support: only two SAS cables from the backplane to the LSI HBA.

Note that illumos/SmartOS does not support USB 3.0, so it goes unused. I PXE boot SmartOS regardless.

Installed SmartOS and prepped the drives. Went with basic RAIDZ2, since critical data will be replicated to a second ZFS NAS as a backup via ZFS snapshot send/receive; otherwise I might have gone with four mirrored pairs instead. I have tried ZFS ZIL and L2ARC on SSD in the past, but I just don't have the traffic to justify it, so it appears to make little to no difference. If I had hundreds of people hitting the server on a regular basis it would make sense, but for home use it's just not practical. There is a bay on the side of the Norco case for an SSD. You could run OmniOS and/or napp-it instead of SmartOS with ease and still use an LX branded zone for Plex.
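For reference, the pool layout described above looks roughly like the following. This is only a sketch: on SmartOS the installer normally builds the "zones" pool for you, and the device names, dataset names, snapshot names, and the backup hostname here are all placeholders.

```shell
# RAIDZ2 across the eight 4TB drives (device names are examples only;
# SmartOS setup would normally create the "zones" pool itself)
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                         c1t4d0 c1t5d0 c1t6d0 c1t7d0
zfs create tank/media

# Snapshot and replicate critical data to the second ZFS NAS
# (incremental send between two snapshots; names are placeholders)
zfs snapshot tank/media@snap2
zfs send -i tank/media@snap1 tank/media@snap2 | \
    ssh backup-nas zfs receive -F backup/media
```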

Created an Ubuntu 16.04 LX branded zone and allocated 4GB RAM, though Plex only needs one or two GB even with multiple transcodes. The zone runs Plex and assorted other software (HandBrake CLI, etc.). Allocated 30GB for the zone but added a direct /media mount from the global zone, which is also shared via NFS from the global zone for easy access to the media files from external systems.
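A zone like this is created with a vmadm payload along these lines. This is a hypothetical sketch, not my exact manifest: the image UUID, alias, NIC tag, and dataset paths are placeholders you would substitute for your own setup.

```shell
# Hypothetical vmadm payload for the Ubuntu 16.04 LX zone
cat > plex.json <<'EOF'
{
  "brand": "lx",
  "image_uuid": "<ubuntu-16.04-lx-image-uuid>",
  "alias": "plex",
  "kernel_version": "4.3.0",
  "max_physical_memory": 4096,
  "quota": 30,
  "nics": [{"nic_tag": "admin", "ips": ["dhcp"]}],
  "filesystems": [
    {"type": "lofs", "source": "/zones/media", "target": "/media"}
  ]
}
EOF
vmadm create -f plex.json
```

The lofs filesystem entry is what gives the zone its direct /media mount from the global zone; the same dataset can then be NFS-shared from the global zone for external clients.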

Set up Time Capsule emulation for Apple Mac backups.
Created a native minimal 64-bit zone, then:

pkgin in netatalk-3.1.7
mkdir -p /var/db/netatalk/CNID

Using the following /opt/local/etc/netatalk/afp.conf:

[Global]
log file = /var/log/netatalk.log
server name = capsule
uam list = uams_guest.so,uams_dhx.so,uams_dhx2.so
mimic model = TimeCapsule6,116
guest account = capsule

[capsule]
path = /storage/capsule
time machine = yes

And the following smf manifest:

<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/netatalk' type='service' version='0'>
<create_default_instance enabled='false'/>
<single_instance/>
<dependency name='network' grouping='require_all' restart_on='error' type='service'>
<service_fmri value='svc:/milestone/network:default'/>
</dependency>
<dependency name='filesystem' grouping='require_all' restart_on='error' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='mdns' grouping='optional_all' restart_on='error' type='service'>
<service_fmri value='svc:/network/dns/multicast'/>
</dependency>
<method_context/>
<exec_method name='start' type='method' exec='/opt/local/libexec/netatalk/netatalk' timeout_seconds='60'/>
<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'/>
<property_group name='startd' type='framework'>
<propval name='duration' type='astring' value='contract'/>
<propval name='ignore_error' type='astring' value='core,signal'/>
</property_group>
<property_group name='application' type='application'>
<propval name='config_file' type='astring' value='/opt/local/etc/netatalk/afp.conf'/>
</property_group>
<stability value='Evolving'/>
<template>
<common_name>
<loctext xml:lang='C'>Netatalk AFP Server</loctext>
</common_name>
</template>
</service>
</service_bundle>
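The manifest above is imported and enabled with the usual SMF commands; the manifest filename here is an assumption.

```shell
# Import the manifest and start netatalk under SMF
svccfg import /opt/local/lib/svc/manifest/netatalk.xml
svcadm enable svc:/network/netatalk:default

# Verify the service came up
svcs -l network/netatalk
```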

Created a pfSense KVM zone and configured it for internal and external use. Straight from the cable modem into the server and out to a Gigabit switch. Converted Apple AirPort Extreme routers into access points, with pfSense as the router. All traffic, internal to SmartOS as well as external, flows through pfSense.
- Frankly, I will likely build another box just for pfSense, as rebooting the SmartOS box stops all Internet routing, which can be annoying. This motherboard is way overkill for pfSense; I'd likely go with something like the PC Engines apu2b2.
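For completeness, a pfSense KVM zone on SmartOS takes a vmadm payload shaped roughly like this. Again a sketch only: the RAM, vCPU count, disk size, NIC tags, and addresses are assumptions, and you would still attach the pfSense installer ISO and do the install through VNC.

```shell
# Rough shape of a vmadm KVM payload for a pfSense router zone
cat > pfsense.json <<'EOF'
{
  "brand": "kvm",
  "alias": "pfsense",
  "ram": 1024,
  "vcpus": 2,
  "disks": [
    {"boot": true, "model": "virtio", "size": 10240}
  ],
  "nics": [
    {"nic_tag": "external", "model": "virtio", "ips": ["dhcp"]},
    {"nic_tag": "admin", "model": "virtio",
     "ip": "192.168.1.1", "netmask": "255.255.255.0"}
  ]
}
EOF
vmadm create -f pfsense.json
```

The two NICs map to the WAN (cable modem) and LAN (internal switch) sides described above.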

In a few years it may be practical to do a similar build with all SSDs (over-provisioned, RAID-ready SSDs only, or a server-suitable NVMe PCIe card) on ZFS. It would be completely silent with even lower power usage. But for now, with 24TB usable from eight 4TB drives running at SATA 6Gb/s, you can't beat the price.
 

Matthew Stott

New Member
Dec 28, 2016
Can handle 5 simultaneous streams while transcoding, possibly more. Haven't had a problem at all. The Xeon E3 quad core starts up a stream faster, but with the latest Plex there are no real problems on the C2758. If you need more than a handful of transcoding streams, consider a Pentium D SoC or a Xeon.

Plex has come a long way software-wise very recently. They rewrote their streaming and transcoding engine, and it's a lot more efficient.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
Can handle 5 simultaneous streams while transcoding, possibly more. Haven't had a problem at all.
Just to clarify, are you saying you've tested 5 simultaneous transcodes going at once, or just 5 direct streams/plays going while a single transcode is going on? I'm a little skeptical that it can handle 5 simultaneous transcodes with no issues unless the bitrates you are transcoding from/to are very close.
 

Matthew Stott

New Member
Dec 28, 2016
You are correct: 5 simultaneous streams, with up to two full transcode streams included. Most of my clients do not require transcoding, as they are direct play. I've not run into problems with friends streaming and multiple LAN clients streaming different content at the same time. If I have some time this weekend, I'll see if I can set up a test scenario and capture some actual hard statistics.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
You are correct: 5 simultaneous streams, with up to two full transcode streams included. Most of my clients do not require transcoding, as they are direct play. I've not run into problems with friends streaming and multiple LAN clients streaming different content at the same time. If I have some time this weekend, I'll see if I can set up a test scenario and capture some actual hard statistics.
That's what I would expect. Direct Streams/Direct Plays have little to no impact on your CPU, so they don't really speak to the CPU's performance regardless of how many there are. The C2758 is a very solid low-power option for those who don't need much transcoding or plan to mitigate any current transcoding needs by re-encoding their files into Direct Play formats.

I had considered that CPU when building my vSAN cluster since I wanted low power, but decided to go with Xeon D-1537s. Still very low power (35W each) but with more headroom, which I'm already making use of.