New Server/Home lab plans - please critique


TheHobbyist

New Member
May 11, 2016
Sorry for this wall of text. I've been lurking here and /homelab for a while, planning and learning. Thanks in advance for any advice. I really do appreciate it.

The time has come for me to replace my WHS 2011 box with something new. I'm not in a huge rush, but I've been doing some thinking and planning about what a new server should do. By way of background, I am not an IT guy by trade, only by hobby.

Right now, here's what I'm thinking: first, a NAS. I haven't decided on which one, but Rockstor and OpenMediaVault are the front-runners right now. I'm only so-so on Linux (Debian derivatives) and I'm not sure I really want to try learning FreeBSD (for FreeNAS), too.

In addition to the NAS, I want to run a Unifi controller, Plex (maybe two simultaneous transcodes), FreeRADIUS, DNS/PiHole, Untangle, maybe a Minecraft server occasionally, and a very basic, mostly private website (mainly so that my parents stop asking me to send pictures of the kids). I think I might like to play with ELK, too, but that's less important. Right now, the Unifi controller and FreeRADIUS are running on a 1st gen Raspberry Pi. My intention is to virtualize all of that (XenServer or ESXi). I've got some (not necessarily a ton of) experience with Linux, FreeRADIUS, and the Unifi controller (ok, that's pretty simple), and I'm pretty good with Windows (not Server), which may or may not help me at all here. I would like a platform where I can play around, too. I've no desire to become a professional sysadmin, but I find this sort of thing fun and it keeps me out of trouble.
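(For sanity-checking the host size, here's the sort of back-of-the-envelope tally I've been doing, in Python. Every per-VM allocation below is a guess on my part, not a measurement.)

# Back-of-the-envelope VM sizing for a D-1518 (4 cores / 8 threads).
# All per-VM allocations are guesses, not measurements.
vms = {
    "NAS (Rockstor or OMV)":  (2, 8),   # (vCPUs, RAM in GB)
    "Untangle":               (2, 4),
    "Plex (2 transcodes)":    (2, 4),
    "Unifi + FreeRADIUS":     (1, 2),
    "DNS/PiHole":             (1, 1),
    "Family website":         (1, 1),
    "Minecraft (occasional)": (2, 4),
}

total_vcpus = sum(c for c, _ in vms.values())
total_ram_gb = sum(r for _, r in vms.values())
print(f"vCPUs allocated: {total_vcpus} vs. 8 hardware threads")
print(f"RAM allocated:   {total_ram_gb} GB")
# Overcommitting vCPUs is fine for light loads; RAM is the harder ceiling.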

So, the current plan involves a D-1518, a 6- or 8-disk array (HBA passthrough), and at least 4 GbE ports (two dedicated to Untangle, two for the rest). I don't have 10GbE and don't plan to upgrade to it. I might buy a Ubiquiti switch, which has SFP, but that's not going to be an immediate thing, so I'm not counting SFP ports.

Don't care at all about noise. Care moderately about power use. My one real limitation is space, especially depth. I'd like to try to stay with short-depth stuff (rack mount is good, though). Going full-depth rack mount isn't impossible, just really inconvenient.

Nothing is going to be really high use, so I'm assuming that a D-1518 is sufficient; if you disagree, please let me know. In Supermicro-land, I can choose a D-1518 with either 6x GbE/4x SATA (X10SDV-TP8F) plus an HBA, or 2x GbE/16x SAS (X10SDV-4C-7TP4F) plus a networking card. I'd do either in a compact 1U chassis. Is there likely to be any practical difference between these approaches? Either way, I'd be able to pass one HDD controller through to the NAS and two Ethernet ports to Untangle. The two motherboards look more-or-less identical otherwise - am I missing anything important there?

NAS drives (all SATA) would live externally. Maybe an SA120 (larger than I really need) or a DIY DAS solution. Having a dedicated NAS would be ideal, I know, but that's not going to fly at the moment. Maybe when my wife forgets how much all of this will cost.

I could likely fit everything in a large tower case, but I sort of want to build a rack mount system. Not sure why, but I don't think it'll be hugely more expensive, so why not. Also, at some point I'm going to have to upgrade to a 24-port switch, and those are rack-mountable, too.

I'm sure I haven't thought everything through as well as I should have, so any advice would be very welcome at this early stage of the game. What have I totally missed or seriously screwed up, etc.?

Like I said, thanks for anything you'd care to share.
 

GaveUp

New Member
Apr 11, 2016
You're about where I was a year ago. I don't know how much this will help out but here's my current setup and some of the things I've learned along the way.

12U Rack
1x 48-port managed switch
3x C2750 Atoms in 1Us, ESX cluster
1x X10SDV-4C-7TP4F in an RPC-4224 chassis for NAS, 11x 4TB in RAID-Z3

I'm in the process of virtualizing all of my servers (a D525 box and a 1st gen Pi left). I just replaced an X8ST3-F/L5640 MB/CPU in the NAS with the D-1518. The NAS has ESX on it, with the onboard SAS and a PCIe LSI 2008 HBA passed through to nas4free. The boot drive & nas4free are stored on a random old 64GB SSD.

My plan is to add two more D-1508 or D-1518 1Us to build a cluster along with the NAS and retire the Atom-based cluster (if I could run the D-1518 and C2750s together in a cluster, I wouldn't retire them). I also plan to go 10Gb at some point.

Things I've learned along the way: racking things is a bit more expensive, but worth it in my opinion. You end up with a cleaner setup. The 1518 is plenty of horsepower for a NAS, at least in a home environment.

I had looked at putting the drives in an external JBOD, but decided against it for a couple of reasons. One, I already had the RPC-4224, and two, it was another point of failure. I can't remember where I read it (may have been on these forums), but a good rule with external JBODs is to put no more than one drive of an array in any single JBOD. It makes sense, since it prevents a loss of the array if one JBOD goes down. The downside is that this gets really problematic with arrays with lots of drives (like I have).

Also, plan for the future. I built the C2750 cluster (32G RAM each) thinking that would be plenty. I still have plenty of RAM available in the cluster, but the lack of CPU is annoying in some instances (I use one VM for dev work).
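As an aside, if you're sizing a pool, usable RAID-Z capacity is quick to estimate. A minimal Python sketch (it ignores ZFS metadata and slop-space overhead, so treat the result as an upper bound):

# Usable capacity estimate for a single RAID-Z vdev.
# parity = 1/2/3 for RAID-Z1/Z2/Z3. Ignores ZFS overhead (upper bound).
def raidz_usable_tb(drives: int, drive_tb: float, parity: int) -> float:
    return (drives - parity) * drive_tb

# My pool: 11x 4TB in RAID-Z3 -> 8 data drives.
print(raidz_usable_tb(11, 4.0, 3))                 # 32.0 decimal TB
print(raidz_usable_tb(11, 4.0, 3) * 1e12 / 2**40)  # ~29.1 TiB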

On the ESX side, why the cluster? Because I wanted vSAN. I debated just doing iSCSI on nas4free, but that created a single point of failure and meant that when rebooting/upgrading the NAS, all the VMs had to come down. With the 3-node setup I don't have this issue. That said, I'm debating the iSCSI route again, since I could then get away with just one more D-1518 and use one of the Atoms to gain redundancy (it's just a cheaper route).
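For anyone weighing the same trade-off: the 3-node minimum isn't arbitrary. With vSAN's default RAID-1 (mirroring) storage policy, tolerating F host failures takes 2F+1 hosts, with the extra one acting as witness. The rule in one line (standard vSAN math, nothing site-specific):

# Minimum vSAN hosts for the default RAID-1 storage policy:
# one extra data copy per failure tolerated, plus a witness.
def vsan_min_hosts(failures_to_tolerate: int) -> int:
    return 2 * failures_to_tolerate + 1

print(vsan_min_hosts(1))  # 3 -> why the cluster is 3 nodes
print(vsan_min_hosts(2))  # 5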

Power-wise, the D-1518 sips. With 11x 4TB I'm at about 120-130W idle. My previous setup was nearly double that.
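If you want to put that in dollars, it's simple arithmetic; sketch below, with a placeholder electricity rate ($0.12/kWh) that you should swap for your own:

# Annual idle running cost. The tariff is a placeholder; use your own.
idle_watts = 125      # midpoint of the 120-130W I see at idle
usd_per_kwh = 0.12    # assumption; varies a lot by region
kwh_per_year = idle_watts / 1000 * 24 * 365
print(f"{kwh_per_year:.0f} kWh/yr -> ${kwh_per_year * usd_per_kwh:.0f}/yr")
# -> 1095 kWh/yr -> $131/yr at that rate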

NIC-wise, I wouldn't be too concerned about just 2x NICs in a home environment. Traffic tends to be low. The only spot where I want/like having an extra NIC is for fault tolerance of the VMs. Once I go 10G, I wouldn't even be concerned about having that on its own NIC.
 

TheHobbyist

New Member
May 11, 2016
GaveUp,
Thank you. I've got to admit, once the discussion gets to vSAN, my brain starts to overheat. :/ Hearing how you've put your system together definitely helps. Judging by your setup, I'm on the right track, at least, which is good enough for the moment. I was thinking about the C2750 for a bit, but decided that the extra RAM and processing oomph of the Xeon D would give me more overhead, at least partially to compensate for imperfect judgement on my part.

A cluster would be neat, but I'm not there yet, either in skill or desired financial outlay. Maybe one day.

One question about Ethernet ports, though. My understanding was that a system like Untangle requires a pair of dedicated ports to sit between, in my case, the ONT and the router. Those can't be shared with VMs that exist on the LAN, can they? Bandwidth isn't an issue, I agree. Also, other than the NAS, nothing I have planned uses much bandwidth at all, so trying to share doesn't bother me.
 

pc-tecky

Active Member
May 1, 2013
Yes: a firewall (Untangle) needs at minimum 1x WAN connection (external "wild" network zone) and 1x LAN connection (internal "trusted" network zone), physical or virtual. The DMZ sits between firewalls, or just outside of the firewall to the internal "trusted" network. (Internet/WAN <--> {firewall2, optional} <--> DMZ (branched) <--> [firewall1] <--> LAN/Intranet)

I have/had (in transition/re-assessment of hardware roles) ESXi running on an X7DVL-E with 2x 1GbE ports (LAN1 was shared with IPMI functions, unknown at the time, and definitely not something I wanted out on the WAN side of things), plus 2x PCI-X quad-port 1GbE cards connected to a virtual switch (to the other VMs) to make up the LAN. LAN port initialization and labeling seemed inconsistent (they seemed to jump around on reboots and power outages).

The motherboard BIOS/(U)EFI should have LAN1, LAN2, etc. labels for each physical network adapter present. How you ultimately decide to use the NICs (LAN ports) is how they should be labeled [as WANs and LANs]. I would identify whether IPMI is shared with NIC/LAN1 and avoid its use as Untangle's WAN port, for many [security & connectivity] reasons. Check and re-check that the ports are consistently named, then proceed to the next step.

Hardware passthrough (redirecting a NIC straight to a VM) helps eliminate the need to create virtual switches as the go-betweens (or cables) for LAN and WAN connections (I think I had 3 virtual switches: 2x WAN and 1x LAN, with the DMZ not used). In theory, the WAN virtual switches could have been tapped for analysis by tools like Wireshark. My logic initially was to access my VMs via thin clients (I have two: an AMD Sempron-based HP/Compaq T5730 and an Intel Atom-based HP/Compaq T5740) or a computer/laptop.
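If you want to automate that "check and re-check" step, one approach is to pin each interface name to its MAC address, since the MAC stays put even when the names shuffle around. A rough Python sketch for a Linux guest or host (it reads /sys/class/net, so it won't run on ESXi itself; the baseline file path is just a placeholder):

# Detect NIC name shuffling by pinning names to MAC addresses.
# Save a baseline once, then re-run after each reboot and compare.
import json, os

BASELINE = os.path.expanduser("~/.nic_baseline.json")  # placeholder path

def current_nics() -> dict:
    """Map interface name -> MAC for every non-loopback NIC."""
    nics = {}
    for name in os.listdir("/sys/class/net"):
        if name == "lo":
            continue
        with open(f"/sys/class/net/{name}/address") as f:
            nics[name] = f.read().strip()  # MAC is stable; the name may not be
    return nics

nics = current_nics()
if not os.path.exists(BASELINE):
    with open(BASELINE, "w") as f:
        json.dump(nics, f)
    print("Baseline saved:", nics)
else:
    with open(BASELINE) as f:
        baseline = json.load(f)
    for name, mac in nics.items():
        if name in baseline and baseline[name] != mac:
            print(f"{name} moved: was {baseline[name]}, now {mac}")

If nothing prints on the second run, the names stayed consistent across the reboot.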