Future Home Server Build


Roman2179

Member
Sep 23, 2013
To preface this post:
  1. This post is going to be very long.
  2. I have been lurking on here for quite a while and it has been extremely helpful; this is an amazing community.

Now that I have stopped lurking and created an account, I can finally start asking questions and, if possible, maybe even answer some.

I am going to be undertaking a pretty large project, for me anyways, at my parents’ house. The project will involve running structured wiring, setting up a couple of cameras, setting up servers for various things, and maybe automating a few things like irrigation. My plan is to run at least two network drops to every room, four in the living room and office area, plus a cable TV drop in every room (probably two in the living room area) and a phone drop to every room. I don't really need the phone, but it could be nice to have just in case.

The first part of this project is going to be structured wiring. I plan on getting a Leviton 48 inch panel and placing it close to the circuit breaker box in the basement. I will be adding a server rack in the area as well. So these are all of the drops that I am planning to have around the house:

Total Drops:
  • Network (non-PoE): 24
  • Network cameras (PoE): 8
  • Network APs (PoE): 2-3
  • Cable: 9
  • Phone: 7
  • Fiber: 2

I would like to do fiber drops to two areas, one being my bedroom and the other being the office. No one else in the house would really take advantage of fiber, so there’s no need to have it anywhere else. The drop to my bedroom is questionable depending on how big of a pain it would be and whether the cost would be worth it; there’s a good chance I would forego that drop and maybe do a second one to the office instead.

In general, I would rather have too many drops than too few. I prefer to use a wired connection whenever possible for things like DVD players, Xbox, Apple TV, etc.

The next part of the project would be to set up the server rack. I plan on having a total of three servers to begin with: a pfSense box, a NAS server, and a VMware server. Ideally I would like to have 10GbE for the NAS and VMware servers, but as of now I have no idea what switch I am going to use. I will definitely need advice on the switch.

I currently have a server that hosts ESXi and doubles as a file server. Specs are as follows:
  • Supermicro X9SCM-F
  • Xeon E3-1230 V2
  • 32 GB RAM
  • IBM M1015 flashed to IT
  • 6x2TB WD Red

The M1015 is being passed through to a FreeNAS VM. So far it’s been working pretty well.

I will probably convert my current server to be strictly a file server and then get two more servers for the router and VM host.

Router:
  • Supermicro X9SCM-F or X9SCL-F (whichever is cheaper at the time)
  • Celeron G1610 or G2030
  • 2x4GB RAM
  • Install pfSense on USB
  • NORCO RPC-230 2U

NAS Server:
  • Supermicro X9SCM-F
  • Xeon E3-1230 V2
  • 32GB RAM
  • 3x M1015 (need two more)
  • Intel X520-DA2
  • Not sure on the hard drives
  • NORCO RPC-4224 4U

VM Host:
  • Another Xeon E3 machine
  • OR
  • Something with two L5639s
  • SSDs on a RAID card for the VM storage
  • Intel X520
  • NORCO 2U or 3U of some sort

So now I finally have some questions:
  1. Why are the Leviton Cat6 patch panels so expensive? They are $160 for a 24-port, which seems a bit excessive, but they do fit nicely into the structured wiring panel. What are some other ones I could get that would also fit? I probably need 48 ports total in there.
  2. Should I get shielded cat6 or will regular be fine?
  3. Switch. No idea where to even begin on this. At first I settled on the HP 1810-48, but then I decided that I wanted 10GbE instead of trunking a bunch of 1GbE connections together. Any ideas on switches that have 48 Ethernet ports and 4-6 10GbE SFP+ ports? Also, PoE would be fantastic so that I don’t have to get a separate PoE switch to run the cameras and APs. I’m assuming I’m asking for a lot here and this will not be cheap.
  4. Should I run the management and other random things that will not take much bandwidth on a separate switch, such as an HP 1810-24?
  5. Is running three M1015s a bad idea? Should I get an expander instead?
  6. Finally, what should I do for the VM host? Another single E3 machine or get a couple of L5639s and find a motherboard to stick them in?
  7. How does one add a fiber drop to a room where a workstation would be able to get to it? Do they have keystone type jacks for fiber? No idea on this one.

This is just the beginning. If you read all of that, thank you very much.

Sorry about the extremely long post, but this is my first big network project. I will be starting to gather parts in the next few months. It will be a bit of a slow build-out depending on whether a new job opportunity comes through. Either way, I will make sure to document everything. I will be adding details as I remember them.

Thank You
Roman
 

BlueLineSwinger

Active Member
Mar 11, 2013
Slow down. You're way over-speccing this project.

You don't need fiber. The only reason you'd need fiber is to counter EMF noise and/or span great distances. I'm betting your home has neither.

Don't put the rack/patch panel near an electrical load center (i.e., the breaker box) if at all possible, to avoid EMF noise.

You don't need Cat6. It's expensive, harder to properly run and terminate, and Cat5e works just fine for 1Gb (or even 10Gb over short distances). No Ethernet speed actually requires Cat6: 1000BASE-T only needs Cat5e, and 10GBASE-T over a full 100 m run calls for Cat6a, which is the real next step up and even pricier. Stick to Cat5e. Do not buy shielded. Check Monoprice for components and cable.

You don't need 10Gb. What kind of loads will you possibly be bouncing around your network that led you to think such bandwidth was necessary? 10Gb may be a little useful in a couple places on your rack (e.g., between the VM host and NAS), but generally I doubt you'll see any real benefit over 1Gb unless you look really hard.
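
For a rough sense of scale, here's a quick back-of-the-envelope calc (the per-stream rates are just assumptions, swap in your own):

Code:
# Rough check: typical simultaneous home traffic vs. one 1 Gb/s link.
# Per-stream rates below are assumptions for illustration only.
GIGABIT_MBPS = 1000

loads_mbps = {
    "High-bitrate Blu-ray stream": 40,
    "IP camera (1080p H.264)": 8,
    "Big file copy off the NAS": 400,   # roughly one spinning disk's worth
}

total = sum(loads_mbps.values())
for name, rate in loads_mbps.items():
    print(f"{name:30s} {rate:4d} Mb/s")
print(f"{'Total':30s} {total:4d} Mb/s of {GIGABIT_MBPS} Mb/s on a single link")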

Your router seems fairly overpowered, unless maybe you're going to be running a load of plugins and such. Maybe an Atom-based unit would be a better choice.

There's almost no way you're going to be saturating your switch's backplane. You don't need a second for management.

Do you really need 12 cores for a VM host? Most home installs run short of RAM before CPU. What kind of guests do you expect to run, and how many? Most home installs will be fine with more guests than cores. I'd consider a hyperthreaded E3, or maybe even one of the new Atoms mentioned in other threads here. If you really want to go the 2xL5639 route, maybe this will work? And there's no need to RAID SSDs, unless you're referring to mirroring for the sake of redundancy.
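
To put the RAM-before-CPU point in numbers, here's a hypothetical allocation sketch (the guest mix and sizes are made up for illustration):

Code:
# Hypothetical VM host sizing: RAM usually runs out before cores do.
guests = {
    "pfSense":            {"vcpu": 1, "ram_gb": 2},
    "Plex":               {"vcpu": 2, "ram_gb": 4},
    "Camera capture":     {"vcpu": 2, "ram_gb": 4},
    "Web server":         {"vcpu": 1, "ram_gb": 2},
    "Win7 transcode box": {"vcpu": 4, "ram_gb": 8},
    "Test VMs (x2)":      {"vcpu": 4, "ram_gb": 12},
}

total_vcpu = sum(g["vcpu"] for g in guests.values())
total_ram  = sum(g["ram_gb"] for g in guests.values())
print(f"vCPUs allocated: {total_vcpu} (oversubscribing a 4-core/8-thread E3 is usually fine)")
print(f"RAM allocated:   {total_ram} GB (this is what actually hits a 32 GB E3 board's ceiling)")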
 

vegaman

Member
Sep 12, 2013
A switch that does 10GbE will raise the cost a lot - i.e. thousands instead of hundreds.
I don't see why you'd need it unless you're wanting to set up the NAS as a datastore for your VMs, but you mentioned you'll use SSDs for that. If you do want to go the SAN route I'd recommend considering Infiniband as well. Either way you can just directly connect them to save on an expensive switch.
I agree that fibre drops probably aren't warranted either. It's expensive to terminate, and if you really want (the option of) 10GbE it would be easier to run Cat6a or Cat7a. Both are considerably more expensive and harder to work with than Cat5e, though.
 

33_viper_33

Member
Aug 3, 2013
You are talking about an extremely expensive setup for a house. 10Gb is useful in a rack and maybe between your main computer and server. Consider direct connect as BlueLineSwinger said. Fiber between rooms may be nice in the future, but nothing really uses or calls for it yet. I would consider fiber between your rack and where the lines come off the pole and into your house if FIOS is, or is expected to become, available. If I had a chance to run my wiring again, I would consider a fiber line between my switch and office for switch-to-switch or cheap direct-connect card capability. But it’s expensive and is of little use in most home cases. I'm running Intel X540-T2 cards, which are expensive, for my server-to-media-PC connection. This is overkill but is nice since I too often use the media PC for downloads (I should use the VM, but laziness sometimes takes over). I have no experience with it, but from everything I read, I am considering InfiniBand for the rack since it is much faster and cheaper than 10GbE.

For your Ethernet runs, you are on the right path. Always overprovision! Use Cat5e or better for your phone lines since it gives you more options in the event you want to utilize them for different purposes. I wired my father’s house several years ago and wanted to put multiple drops per room. At the time, we couldn't see how much technology was going to gravitate towards Ethernet and Cat5. Now we both wish we had installed more drops per room. My house has 8 drops in the office, 8 drops in the media room (4 each on opposing walls for furniture reorganization), 2 extra lines running between both walls for HDMI over Cat5 for the projector, and a minimum of 2 lines per room plus one more for phone. This is currently overkill! However, I'm slowly finding uses for them. My goal was to only have one switch for the entire house to minimize power requirements. Cat5e is cheap. Power used for multiple switches will overtake the cost of cable after a couple years.

Centralize your patch panel and rack/servers if possible to make rewiring easier. Try to keep them away from power distribution panels to avoid EMI noise. I also run an SMX1500 UPS that shuts down the entire network after hours and during the day while I’m at work. It automatically turns on about 30 minutes before I normally get home for the day and at night for backups. Everything you are talking about running is going to make your electric bill skyrocket. Each enterprise-grade switch is going to take at least 50 watts. PoE will be even more (granted, it’s fewer power supplies per device…). That was one of my biggest mistakes when I got started: I was utilizing old hardware for everything because it was cheap. New hardware, virtualization, a smart UPS with a network interface, minimizing switches, and minimizing overall power supplies are all things that can keep the electric bill under control.
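
To put rough numbers on that (the wattages and electric rate below are assumptions, plug in your own):

Code:
# Rough 24/7 power cost estimate. Wattages and $/kWh are assumptions.
RATE_PER_KWH = 0.12  # USD, varies by region

devices_watts = {
    "48-port enterprise switch": 50,
    "Second small switch": 30,
    "NAS server, idling": 80,
    "VM host, idling": 70,
}

for name, watts in devices_watts.items():
    kwh_per_year = watts * 24 * 365 / 1000
    print(f"{name:28s} {watts:3d} W  ~${kwh_per_year * RATE_PER_KWH:6.2f}/year")

total_w = sum(devices_watts.values())
total_cost = total_w * 24 * 365 / 1000 * RATE_PER_KWH
print(f"{'Total':28s} {total_w:3d} W  ~${total_cost:6.2f}/year")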

Consider wireless, especially for multiple floors. Another thing I wish I had done was add cable for antennas on each floor instead of multiple access points and expensive directional antennas.

Just some things to consider as you move forward.
 

Roman2179

Member
Sep 23, 2013
Don't put the rack/patch panel near an electrical load center (i.e., the breaker box) if at all possible, to avoid EMF noise.
Definitely noted on not placing this near the breaker box; I will plan on putting it on the opposite wall of the room.

I am mainly looking to have a 10GbE hookup from the NAS to the network since it will be capturing the camera streams, streaming movies, and I'll more than likely be moving files around for backups and other random things. Would I be better off just aggregating a couple of 1GbE links instead? Basically I would need a switch with 48 1GbE ports plus 2 SFP+ 10GbE ports. The NAS and the VM host would really be the only things connected to the 10GbE ports. It looks like aggregating a few links from the NAS may be the cheaper solution.
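
Here's the rough math I'm working from (the per-stream rates are guesses on my part):

Code:
# Rough concurrent load on the NAS uplink; all rates are guesses.
cameras       = 8 * 8      # 8 cams at ~8 Mb/s each
movie_streams = 2 * 40     # two high-bitrate streams
backup_copy   = 600        # one big sequential copy, limited by the source disks

total_mbps = cameras + movie_streams + backup_copy
print(f"Concurrent NAS load: ~{total_mbps} Mb/s")
# A 2x 1GbE LACP bundle covers that, but any single flow (like the backup
# copy) is still capped at 1 Gb/s on one member link.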

Would it be best to just get something like an HP 1810-48G and a small PoE switch? What are some other switches that are comparable? I'm leaning towards HP because of their warranty, and people seem to be pretty happy with them. Any suggestions for a 12-port PoE switch?

The reason I am unsure about what I want to use for the VM host is because of two heavy VMs that I run on my ESXi box now. The ones that use a huge amount of CPU resources are Plex Media Server and the VM I use for transcoding movies. At this point I can't do both at the same time without it impacting the performance of each machine. Those two VMs are the reason I'm leaning more towards a dual L5639 system. Transcoding currently is painfully slow. I should note that I am transcoding Blu-rays and each movie is about 30 GB.

The VM host will have machines for:
  1. Access Point Software
  2. Camera Capture
  3. Plex
  4. Web Server
  5. Win 7 to transcode video
  6. Random VMs for testing

In regards to the router being extremely overpowered, noted. I will go with an Atom-based setup instead with 2-4 GB of RAM depending on price.

It looks like the difference between Cat5e and Cat6 is about $25 per box. I'll have to measure out how much I need and decide which one to go with from there. I'm guessing the connectors and patch panels for Cat5e are also a lot cheaper than for Cat6.
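
Quick math on quantities, using the drop counts from my first post and a guessed average run length:

Code:
# Cable quantity estimate; the 75 ft average run length is a guess.
cat_runs   = 24 + 8 + 3 + 7          # non-PoE + cameras + APs + phone drops
avg_run_ft = 75
needed_ft  = cat_runs * avg_run_ft
boxes      = -(-needed_ft // 1000)   # 1000 ft boxes, rounded up

print(f"{cat_runs} runs x {avg_run_ft} ft = {needed_ft} ft -> {boxes} boxes")
print(f"At ~$25 extra per box, Cat6 over Cat5e adds roughly ${boxes * 25} total")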

Eventually, I do see myself getting a C6100 setup. I do IT development work for my job, and the further I get into my career, the more I need to run VMs for testing and proofs of concept, some of which prefer clustered setups. It's just easier to set up small proofs of concept at home as opposed to trying to get the infrastructure team to spin up a couple of boxes and then getting extremely limited permissions to them. At that point would I just be better off looking for a Voltaire 4036 and setting up InfiniBand between everything?

Sorry about all of the questions, but I want to put together a shopping list so that I can start hunting these things down as they pop up.

Thank You

Roman
 

33_viper_33

Member
Aug 3, 2013
Some folks out there are hell-bent on Cat6 ratings for 10GbE. Unless you are doing long runs or have a lot of EMI, Cat5e is sufficient. I did Cat6 cable with Cat5e patch panels and keystones in my house since I managed to get Cat6 for less than Cat5e. It is a PITA to work with compared to Cat5, but it’s better shielded.

Link aggregation will be a far cheaper way to go and will be plenty for your cameras and peripherals. Me personally, I have a need for speed! If money isn’t an issue, I would go fiber from switch to server so I have 10GbE. It would also allow for 10GbE to your secondary server. Honestly, for your case, I would just do link aggregation to the main server and 10GbE or InfiniBand direct connect to the secondary servers due to the cost of the switch. This won’t future-proof you as well, though.
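
One caveat with link aggregation: the switch hashes each flow onto one member link, so a single big transfer still tops out at 1Gb even in a multi-link bundle. A toy sketch of the idea (not real LACP code, just the concept):

Code:
# Toy illustration of per-flow hashing in a 2-link aggregate: every packet of
# a given src/dst pair lands on the same 1 Gb link, so one big transfer never
# goes faster than a single link.
def pick_link(src_ip: str, dst_ip: str, n_links: int = 2) -> int:
    return hash((src_ip, dst_ip)) % n_links

flows = [("10.0.0.10", "10.0.0.50"),   # backup copy
         ("10.0.0.11", "10.0.0.50"),   # Plex stream
         ("10.0.0.12", "10.0.0.50")]   # camera recorder

for src, dst in flows:
    print(f"{src} -> {dst}  uses link {pick_link(src, dst)}")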

If you are running a server 24/7 for cameras (I’m assuming security), why not run pfSense or the like in a VM? It would save on hardware cost as well as the electric bill.

C6100s are awesome but power hungry and noisy. It’s not something I would want to run 24/7 since it's overkill for most home situations. But for your test bed, it’s hard to beat. I think my server will be a single Xeon E5-2630 v2, which will handle VMs including pfSense, Windows 7 (downloads/test bed), Windows 2012 Essentials (storage, AD, my movies, web host, etc.), OpenIndiana (ZFS and iSCSI), and Ubuntu (test bed mainly). I’m just waiting on power figures for the v2 processors.

I know I’ve already said this, but I just wanted to reiterate… For home use, most people forget to consider power, noise, and heat. Fire up my current storage server (original Core 2 Quad 2.5GHz) with 11 disks and I’m idling at >130W and peaking at well over 300W. My server room gets quite toasty! This was not good when I was living in Alabama during the summer. My power bills were approaching $200 during the peak of summer.
 

Roman2179

Member
Sep 23, 2013
Me personally, I have a need for speed! If money isn’t an issue, I would go fiber from switch to server so I have 10gbe. It also would allow for 10gbe to your secondary server. Honestly for your case, I would just do link aggregation to the main server and 10gbe or infiniband direct connect to the secondary servers due to cost of switch. This won’t future proof you as well though.
...
Completely agreed on that. While I don't have a need for 10GbE right now, I will probably need it eventually. I would rather do a slightly future-proof build instead of having to re-buy things later.

I would probably never even try to push 10GbE over copper; SFP+ and fiber would be my choice for that. Most of the drops in the house will never even need 1Gb speeds. The only drops that will actually fully use 1Gb are the office drops and the ones in my bedroom.

I would like to have 10GbE within the rack, though. I figure I will need it eventually since my needs are growing, and I would rather not have to aggregate that many links. It just seems cleaner to have a single 10GbE fiber connection going from the switch to the NAS and the VM host. While I generally won't be using the NAS as a datastore, I'm sure there will be times when I will be doing just that, especially if I get a C6100. It would be nice to just have a couple of network boot targets so that I can boot up whatever image I need at the time. The C6100 would not be on at all times, only when I need to do any development or testing. It is way too loud and power hungry to be on all the time.

What switch are you using that has 10GbE?

Also, does anyone have any recommendations for PoE switches?

Thank You

Roman
 

Roman2179

Member
Sep 23, 2013
What does everyone think of using the HP 2530-24-PoE+ (J9779A) for APs and IP cams and getting either an HP 2530-48G (J9775A) or an HP 1810-48G (J9660A) for the rest of the traffic?
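
Here's the rough PoE budget I'm figuring on (per-device draws are guesses, I still need to check the actual datasheets):

Code:
# Rough PoE budget; per-device wattages are guesses pending real datasheets.
poe_loads = {
    "IP camera":  (8, 6),   # (count, watts each)
    "UAP-Pro AP": (3, 9),
}

ports   = sum(count for count, _ in poe_loads.values())
total_w = sum(count * watts for count, watts in poe_loads.values())
print(f"~{total_w} W of PoE across {ports} ports")
# Should sit well inside the total PoE budget of a 24-port PoE+ switch.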
 

nry

Active Member
Feb 22, 2013
I personally have a very similar setup to what you are proposing. I just have a 10GbE switch and a few more ESXi hosts.

Having done a fiber drop from one side of the house to the other, I would not recommend this to anyone considering it! As I don't have the correct tools for terminating fiber, I bought a 25m LC-LC cable for £6.99 off eBay and drilled far too many holes around the house to get access to run it, then spent hours running it very slowly so as not to damage the cable! Unless you have a very clear run with no difficult spots :) I had to run mine up a cavity wall, through the mess in the loft, then halfway down a cavity wall, through the floorboards, then down through the ceiling downstairs!
Was worth it to have something faster than 1GbE on my iMac though :)

Switch wise I narrowed down my options to the following:
Dell 5224
- 2x 10GbE SFP+ ports
- 15-20W idle! (had the exact reading in my post somewhere)
- Does have some noisy fans which run 50% of the time

Cisco SG500X
- 4x 10GbE SFP+ ports
- Not too sure on power costs

Basically the Cisco was double the price, so I went with the Dell.
Both come in 24-port and 48-port versions, and both do come with PoE support provided you buy the version with it...

Do you need layer 2 only, or do you need a switch that does routing too, since you mention you'll use this for testing as well?

Your proposed router is almost identical to what I have on 24/7; I just have the E3-1220 v2. I have a pfSense VM, downloading VMs, media-managing VMs and a fair few others, and can honestly say it runs perfectly. Very fast and very low power considering what it does.
I even have pfSense routing some 10GbE things currently and it seems to run very smoothly.

Your NAS setup sounds fine, also very similar to what I use (wish mine had the X9SCM board!!). Have you considered one RAID controller and a SAS expander?
 

Roman2179

Member
Sep 23, 2013
The Cisco SG500X-48P definitely has my interest. 48 GbE ports plus a couple of 10GbE ports would allow me to hang the ESXi host and the FreeNAS server on faster connections. And as a bonus it supports PoE, which saves me from needing a second switch. It's definitely expensive, but it seems like it would be a better choice to get one switch that will hold me over for a while instead of getting a bunch of smaller switches.

So I think I have settled on the following, but input is appreciated:

Firewall
  • Supermicro CSE-512L-200B Chassis
  • Supermicro X9SBAA-F (Atom)
  • Kingston KVR13LSE9/4 4GB SO-DIMM
  • USB, SATA DOM or small SSD for pfSense

FreeNAS Server
  • Supermicro X9SCM-F
  • Xeon E3-1230 v2
  • Kingston 4x8GB
  • IBM M1015 flashed to IT Mode
  • Intel RES2SV240 Expander
  • 4x2TB WD RE Drives for CCTV storage (RAID-Z)
  • 6-8x3TB WD Red for File Storage (RAID-Z2) (rough usable-capacity math for both pools after this list)
  • USB drive for FreeNAS Install
  • 10GbE Network Card (Mellanox ConnectX-3 VPI?)
  • Norco RPC-4224
  • Thermaltake TPG-650M GOLD 650W PSU
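
Rough usable capacity for those two pools (ignoring ZFS overhead and the TB/TiB gap, so real numbers will land a bit lower):

Code:
# Rough RAID-Z usable capacity; ignores ZFS metadata/slop and TB vs TiB.
def raidz_usable_tb(drives: int, size_tb: float, parity: int) -> float:
    return (drives - parity) * size_tb

print(f"CCTV pool, 4x 2TB RAID-Z  : ~{raidz_usable_tb(4, 2, 1):.0f} TB usable")
print(f"File pool, 8x 3TB RAID-Z2 : ~{raidz_usable_tb(8, 3, 2):.0f} TB usable")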

ESXi Host
  • Supermicro X9SRH-7TF-O
  • Xeon E5-2620 v2
  • 2x 16GB ECC/REG RAM to start
  • 2x128GB SSD RAID 1 ESXi Boot and Datastore
  • 4x180-240GB SSD RAID-10 VM Storage
  • Norco 2U of some sort

Network
  • Cisco SG500X-48P (any other ideas??)
  • 2-3x Ubiquiti UAP-Pro
  • 8x IP Cam (Still deciding on which)

Dell 2420 Rack Enclosure
APC UPS(Haven't decided on which yet)
Switched PDU(Haven't decided on which yet)

Recommendations for 10GbE SFP+ cards, IP CCTV cams, UPS, and PDU are welcome; I have not decided which to go with. Would the Mellanox ConnectX-3 VPI be good? Seems like they would be good cards and would allow me to use 10GbE fiber for now and go to 40Gb InfiniBand later on, or am I wrong here?

I will probably run one pre-terminated fiber drop to the office and make use of the 10GbE connection there.

The M1015 and 6x2TB WD Reds will go into another NAS that I will keep at my sister's condo as an offsite backup for important files. By important I mean pictures; I don't really care about much else. I'm still hashing out the details of what I want there, but the Avoton platform seems pretty tempting. Plus I could run ESXi on it and make it serve as a basic VM host along with file storage.

I realize that this is kind of overkill, but I will be using this for work as well. Eventually I will more than likely add a C6100 or other VM hosts. I generally do need the ability to spin up a few VMs for proof-of-concept installs before I ask the infrastructure teams to spin up some VMs for me. It's much easier to get my testing out of the way in an environment in which I have full control. I would like to get a C6100 in order to do some clustered installs, since a lot of software that I use for work likes to be set up in clusters instead of single boxes, but that's a future upgrade. Maybe I'll do InfiniBand at that point too, who knows.

EDIT: It just occurred to me, if I am going to go with 10GbE for the NAS and the ESXi host, do I really need to have SSD storage in the ESXi host? Would I be better off getting a few more or faster HDDs instead of the SSDs and using those for all VM storage? I could probably get 8 HDDs for the price of those 6 SSDs.
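
Here's the back-of-the-envelope throughput comparison I'm using to think about this (all per-device numbers are ballpark guesses):

Code:
# Ballpark sequential throughput; per-device figures are guesses.
hdd_seq_mb_s = 150    # one 7200 rpm disk
ssd_seq_mb_s = 450    # one SATA SSD
ten_gbe_mb_s = 1100   # ~10 Gb/s after overhead

print(f"8 HDDs striped    : ~{8 * hdd_seq_mb_s} MB/s sequential")
print(f"4 SSDs in RAID-10 : ~{2 * ssd_seq_mb_s} MB/s sequential (two mirrored pairs)")
print(f"10GbE link        : ~{ten_gbe_mb_s} MB/s ceiling")
# Sequentially it's roughly a wash over 10GbE, but VM traffic is mostly
# random I/O, where the SSDs keep a big lead over spinning disks.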

EDIT2: So it looks like VMware removed the limits for RAM and CPUs on ESXi. Is that correct? If that's the case, would a dual-CPU motherboard be a good idea, to have an upgrade path later on? If I use the NAS for VM storage then I could definitely go with a much cheaper board since I don't need the RAID controller. Along the same lines, if I'm getting Mellanox cards, then there's no point in getting a motherboard with 10GbE over copper built in either.

Ugh, I need to stop looking at parts, things are starting to escalate haha
 