Multipurpose HomeServer


Achilles

New Member
Nov 6, 2017
Hi everyone,

I've been following STH for a long time - the time has come to register and make my first post!

Build’s Name: S.M.A.S.H. (Small Multipurpose AMD Server Hardware) :)
Operating System/ Storage Platform: CentOS, FreeNAS as a VM
CPU: AMD Ryzen 5 1400
Motherboard: Asus Prime B350M-A
Chassis: Fractal Design Core 1500 Mini Tower
Drives: Toshiba 128GB SSD, 4 x Seagate IronWolf ST4000 (4 TB)
RAM: 16 GB G.Skill Ripjaws V (upgrade to 32 GB if necessary)
Add-in Cards: Some Intel Quad Port NIC
Power Supply: Corsair SF450
Other Bits: Will be connected to existing Network Equipment (Cisco 867VAE and Cisco SG350-10)


Usage Profile:
  • MS VMs (AD, etc.)
  • FreeNAS
  • Remote Access
  • VPN
  • pfSense
  • ...
Other information…
  • The data is not critical (ZFS without ECC Memory)
  • Has to be silent!
  • The plan is to install CentOS on the SSD and run FreeNAS as a VM handling the Seagates with direct hardware access.
  • Dunno about the case fans; they will probably be replaced by some Noctuas.
  • I'll try to keep the CPU fanless by using the Thermalright HR-02, or go with the Le Grand Macho RT if it gets too hot.
  • Most of the stuff was bought used; I could change some of it if needed.
Some questions now
  • Which Intel NIC?
  • Raid Add-in Card or not?
  • Is anyone running a FreeNAS VM with CentOS as the host? Good or bad idea?
 

Rand__

Well-Known Member
Mar 6, 2014
1. Why do you need a Quad Port NIC in the first place?
2. Why do you need a RAID card in the first place? For FreeNAS this is not going to work, and I only see one OS disk.
3. No, never heard of that. Sounds like a bad idea :p Not impossible, but why reinvent the wheel?

Basically you want 3 systems:
FreeNAS
AD - what for?
pfSense (Remote Access, VPN)

FreeNAS can virtualize with bhyve nowadays, so you could host everything there.
MS can host VMs (Hyper-V) too, so you could host everything there as well.

Just seems... far-fetched... to add yet another OS on top if you don't need it... or maybe you have not explained your reasoning well enough for me to understand it ;)
 

Cheddoleum

Member
Feb 19, 2014
Achilles said:
  • Which Intel NIC?
If you want to support PCIe passthrough to VMs -- and you do, because vhost-net, for all that it's faster than pure emulation, still eats up cycles and memory bandwidth -- your safest bet is to go with i340 or i350 cards made by Intel (i.e. not Intel PHYs on a 3rd-party board). The essential thing is that each NIC port is in a separate IOMMU group, so you can pass them through individually as needed. With some cheaper/older/3rd-party cards like the incredibly common HP NC364T, the ports get lumped together into a single IOMMU group, so all you can do is pass the entire card to a single VM, which is awfully inflexible. (That particular card also gets hot enough to fry bacon and is PCIe 1.0, so it kind of wastes other resources as well. Ask me how I know.)
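
In case it helps, here's a minimal sketch (assuming a Linux host with the IOMMU enabled, e.g. amd_iommu=on or intel_iommu=on on the kernel command line) that walks /sys/kernel/iommu_groups and prints which group every PCI device landed in, so you can see whether the NIC ports are isolated or lumped together like on the NC364T:

Code:
#!/usr/bin/env python3
# Sketch only: print each PCI device and the IOMMU group it belongs to.
# Devices that share a group can only be passed through together.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
if not groups.is_dir() or not any(groups.iterdir()):
    raise SystemExit("No IOMMU groups found - is the IOMMU enabled in BIOS and on the kernel command line?")

for group in sorted(groups.iterdir(), key=lambda g: int(g.name)):
    for dev in sorted((group / "devices").iterdir()):
        # dev.name is a PCI address like 0000:05:00.0 - compare with lspci output
        print(f"IOMMU group {group.name}: {dev.name}")

If each port of the quad-port card shows up in its own group, per-port passthrough should be fine.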

Edit: Addendum: I don't know how well your choice of platform supports SR-IOV in other respects, so that's worth researching if you haven't already. If you're really serious about multipurpose, springing for a mobo/chipset specifically intended for server use can head off some problems; and you get to tap into a lot of brains about how well it works for your sorts of use cases before you pull the trigger. With a desktop/gaming mobo you may be on your own. Also, support is mayfly-like.
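
For the SR-IOV part, a quick way to research it on the host (again just a sketch, assuming a Linux box) is to look for the sriov_totalvfs attribute in sysfs - only SR-IOV-capable devices expose it:

Code:
#!/usr/bin/env python3
# Sketch only: list PCI devices that advertise SR-IOV virtual functions.
# No output means nothing on the box exposes SR-IOV (or the driver hides it).
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    totalvfs = dev / "sriov_totalvfs"
    if totalvfs.exists():
        print(f"{dev.name}: up to {totalvfs.read_text().strip()} virtual functions")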
 

vl1969

Active Member
Feb 5, 2014
I would say check out Proxmox.

As of version 4.3 it has native support for ZFS - you can even install and boot from ZFS, including ZFS RAID-1 (mirrored drives).
WebUI management of many aspects of virtualization and other system configuration.
ZFS and some special setups need the command line, but that is very limited.

Full support for Linux and Windows VMs with nice WebUI management.
It supports a FreeNAS VM and ZFS volume passthrough, but you can also check out the built-in TurnKey File Server LXC template container instead of FreeNAS; it provides a web interface for storage and user access management.
Built-in Samba and ownCloud (I think) sharing.

As has been pointed out, VPN and remote access can and should be provided by pfSense.
I personally do not like using the primary server to connect directly to the internet, but this is where your additional NICs come in.

A basic network setup would be (that is how I would set it up given the hardware and your wants):

Reserve the motherboard's onboard LAN port for server management. That would be your primary local IP for server access and the Proxmox interface.

Take 2 ports from the add-on quad-port NIC and reserve them for pfSense: one port for WAN, connecting to your incoming ISP internet modem etc., and the other for LAN, connecting to the internal local switch.
The other 2 ports can be bonded/teamed and used for the internal network for all other needs.

If you do not want to mess around with hardware passthrough,
on Proxmox create 3 virtual bridges:
vmbr0, vmbr1 and vmbr2.

Attach vmbr1 to the WAN port and the WAN port only. DO NOT USE it for anything else.
Attach vmbr2 to the LAN port.
Attach vmbr0 to your management port, and also do not use it for anything else.

Use vmbr2 for all VMs, i.e. use vmbr2 when creating NIC interfaces in the VMs.
The one exception would be the pfSense VM, as you will use both vmbr1 and vmbr2 in there.
Make sure you tell pfSense that WAN is the interface on vmbr1. The way I did it in some of my test setups was to connect only one port to the network at a time and check which one gets a DHCP IP, since your WAN port should get an automatic IP from the ISP unless you have a static addressing scheme. (See the sketch at the end of this post.)

Then do whatever you want after that.

The only point of contention would be that your gateway is a VM (pfSense) and needs to start before any other VMs on reboots.
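
To make that concrete, here's a minimal sketch of what the three bridges could look like in /etc/network/interfaces on the Proxmox host. The interface names (enp3s0 for the onboard port, enp5s0f0/enp5s0f1 for two ports of the quad-port card) and the management address are placeholders - yours will differ:

Code:
# Sketch only - interface names and addresses are placeholders, adjust to your hardware.
auto lo
iface lo inet loopback

# vmbr0: management bridge on the onboard port (Proxmox WebUI / SSH only)
auto vmbr0
iface vmbr0 inet static
	address 192.168.1.10
	netmask 255.255.255.0
	gateway 192.168.1.1
	bridge_ports enp3s0
	bridge_stp off
	bridge_fd 0

# vmbr1: WAN bridge - pfSense WAN only, no IP on the host
auto vmbr1
iface vmbr1 inet manual
	bridge_ports enp5s0f0
	bridge_stp off
	bridge_fd 0

# vmbr2: LAN bridge - pfSense LAN side plus every other VM
auto vmbr2
iface vmbr2 inet manual
	bridge_ports enp5s0f1
	bridge_stp off
	bridge_fd 0

For the start-order point, Proxmox can give the pfSense VM an explicit start order and delay (something along the lines of qm set <vmid> --startup order=1,up=30, with <vmid> being the pfSense VM's ID), so the gateway is up before the other guests come up.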
 

Achilles

New Member
Nov 6, 2017
Cheddoleum said:
Edit: Addendum: I don't know how well your choice of platform supports SR-IOV in other respects, so that's worth researching if you haven't already. If you're really serious about multipurpose, springing for a mobo/chipset specifically intended for server use can head off some problems; and you get to tap into a lot of brains about how well it works for your sorts of use cases before you pull the trigger. With a desktop/gaming mobo you may be on your own. Also, support is mayfly-like.
The main reason for the non-server hardware is that I already had most of it. The HW alternative would be a used HP MicroServer Gen8 with an upgraded CPU and maxed-out RAM, or the new Gen10 x3241.
Rand__ said:
Basically you want 3 systems:
FreeNAS
AD - what for?
pfSense (Remote Access, VPN)
Like I said, among the things I'd like to test are some involving AD, Exchange, etc. with a few virtualized Win10 clients.
Rand__ said:
FreeNAS can virtualize with bhyve nowadays, so you could host everything there.
MS can host VMs (Hyper-V) too, so you could host everything there as well.
No FreeNAS experience - I haven't tried running VMs on FreeNAS, tbh. I won't be trying anything demanding; maybe I'll just give it a try.
Regarding Hyper-V, I'm no big fan of it...
vl1969 said:
I would say check out Proxmox.
That's more or less the setup I had in mind. Prolly should do some reading about Proxmox first.
 

Rand__

Well-Known Member
Mar 6, 2014
For your purpose, the free ESXi might also be usable, of course (if it runs - but with Intel NICs it should).
 

986box

Active Member
Oct 14, 2017
Achilles said:
Hi everyone,

Some questions now
  • Which Intel NIC?
  • Raid Add-in Card or not?
  • Is anyone running a FreeNAS VM with CentOS as the host? Good or bad idea?
I built an all-in-one box a few years back using an Intel DQ77MK m/b. It comes with dual LAN; one of the ports is the mgmt port, and pfSense uses the other for WAN. I picked up a dual-port gigabit PCIe card for LAN.
The box has ESXi installed, with an M1015 flashed to IT mode and passed through to FreeNAS.

pfSense boots up first, followed by FreeNAS. The remaining VMs follow after.
Depending on the size of the NAS, you may need more than 16 GB for the box.

I have 3 x 2 TB drives, which require 12-13 GB of RAM for max performance. I get around 100-110 MB/s on both NFS and CIFS shares.
 