Was going to go dedicated NAS + Server but now thinking virtual


Pr3dict

Member
Apr 26, 2016
41
0
6
33
Hi all,

I have been going back and forth trying to research this area of computing, and it has been eye-opening as well as tiring.

I currently have a Windows machine running an i7-4790 that is my NAS, media server, and a client pc for media playback.


I wanted to split the duties of this machine and create a dedicated NAS, with the media server and client on the same machine, but after speaking with a friend of mine who virtualizes everything, I decided to go down the path that many follow here.

BUT THEN! I was just about to post the question of what kind of hardware I should look for to use my 4790, and I saw this thread: Building home virtualization server, looking for quality LGA1150 motherboard

Which recommends having things separate again.

What do you all recommend? I am having a hard time finding the right hardware info for this build and I would surely like to get the ball rolling as having everything the way it is now is not working too well.
 

gea

Well-Known Member
Dec 31, 2010
3,161
1,195
113
DE
The main problem when you add many services to a base OS:
- after a crash, you must recover all services
- on any update, you must take care of all services
- a crash of any service affects all services

This is why I prefer to:
- use a bare-metal hypervisor like ESXi, which is more a firmware than an OS
- use a minimalistic storage VM as a filer and as a datastore for other VMs
- create a VM for each relevant service on its best platform, be it BSD, OSX, Linux, Solaris or Windows

See my approach for such a system:
http://www.napp-it.org/doc/downloads/napp-in-one.pdf
 

ttabbal

Active Member
Mar 10, 2016
747
207
43
47
Ideally, keep your existing Windows machine as a client/workstation. Now add a server for NAS and other duties. The first thing you should decide is what you want it to do. What applications and services do you want to provide? How much storage? How many drives and what level of protection from HDD failure? Budget?

gea's napp-it is a good solution. There's also FreeNAS with jails/plugins for extra services; it also supports full VMs, so it's a bit like combining the NAS and the hypervisor. There are pros and cons to both styles, so you have to decide for yourself. Note that server OSes tend to be pickier about hardware. ESXi, for example, can't see the network ports on my client machine. However, it works great if you get "server grade" hardware, which generally means Intel network controllers and a few other brands/types of devices. The one exception is Linux-based stuff that uses a normal Linux distribution; Linux tends to support all sorts of oddball hardware that others don't, sometimes for good reasons.
 

Pr3dict

The setup I was thinking has a base of ESXi with a few things on top.

I currently use FlexRAID for my parity solution, but was looking to move over to Unraid.

Then have the VMs for the different services. Mostly it will be storing and serving up media to devices on my network. I will need to have a service to transcode, one to handle my IP cameras, and anything else that may arise. I'm not doing anything strenuous I would say but there are times when I have 3-4 1080p transcodes happening at once.

The reason I like the above solution is that if I want to try a new service, I can just spin up a VM and test it without having to power down all the other services.

Seeing as this looks like the ultimate solution, why do some people caution against having a parity/drivepool solution in a VM?
 

ttabbal

Some software doesn't handle being in a VM as well as we would like. FreeNAS, for example, commonly recommends against it. However, Solaris doesn't seem to mind at all. I'm a huge fan of ZFS, so that's what I use. It's free, enterprise proven, and fast. I don't know much about running Unraid and similar setups in a VM, hopefully someone else can help fill that gap.

For 4 1080p transcodes, you're going to need a lot of CPU. You might want to check into forums for Plex or similar setups for good ideas of how much. Last time I looked into it, you needed good single-core performance as transcoders are often not optimized for parallel setups.
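For a rough sanity check, a figure commonly cited in the Plex community is about 2,000 PassMark points per simultaneous 1080p transcode. The constant and helper below are illustrative assumptions, not official guidance:

```python
# Rough transcode capacity estimate using the community rule of thumb
# of ~2000 PassMark points per simultaneous 1080p transcode.
# Both the constant and this helper are illustrative assumptions.
PASSMARK_PER_1080P = 2000

def max_transcodes(cpu_passmark: int) -> int:
    """Estimate how many simultaneous 1080p transcodes a CPU can sustain."""
    return cpu_passmark // PASSMARK_PER_1080P

# The i7-4790 scores a little over 10,000 on PassMark
print(max_transcodes(10000))  # → 5, so 3-4 streams should have headroom
```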
 

Pr3dict

I have an i7-4790 that I'd like to use; it has a PassMark score of a little over 10,000. PassMark - Intel Core i7-4790 @ 3.60GHz - Price performance comparison

My issue has been finding a rackmount server chassis and motherboard that will support this type of hardware. Everything I see on Newegg and the like is consumer-grade cases with a regular power supply. I don't really understand the whole SAS expander and backplane realm of products. I have been trying to research what makes a good one vs. a bad one, and I can't seem to hit the nail on the head. That's where I'm hoping you all come in.

Also, I have researched ZFS; the issue being that I would need an ECC-capable CPU, and that isn't something I have unless I want to use the i3-4130 I have lying around. That won't do very nicely as a transcoding CPU.
 

ttabbal

ZFS doesn't require ECC, but it is very highly recommended. Note that even though they don't say it, the reasons it's recommended apply to all software based RAID style setups. Any time you might write invalid data to one of the drives, it's a big issue. It's also a pretty rare issue, and ZFS will recover from it better than most other setups can. That said, I do use ECC RAM.
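To see why ZFS recovers from corruption better than most setups, here is a minimal Python sketch of the idea behind its end-to-end checksums. The names and the error handling are simplified assumptions for illustration, not ZFS internals:

```python
import hashlib

def write_block(data: bytes) -> tuple[bytes, str]:
    # ZFS stores a checksum of every block separately from the block itself
    return data, hashlib.sha256(data).hexdigest()

def read_block(data: bytes, checksum: str) -> bytes:
    # On every read the checksum is verified; a mismatch means silent
    # corruption, and ZFS would then try a redundant copy (mirror/raidz)
    if hashlib.sha256(data).hexdigest() != checksum:
        raise IOError("checksum mismatch: corrupt block")
    return data

block, cksum = write_block(b"media file chunk")
corrupted = block[:-1] + b"X"  # simulate a bit flip on disk or in RAM
try:
    read_block(corrupted, cksum)
except IOError as err:
    print(err)  # corruption is detected instead of silently returned
```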

As for rackmount cases, backplanes, expanders, etc... how many drives do you want to be able to use? That's a good place to start. And keep in mind, server gear tends to be loud compared to normal systems. If you want to use a big Supermicro chassis, you will probably not want it in your bedroom unless you like the sound of computer fans. :) The upside is, my big Supermicro case can hold 24 3.5" drives and the large SSI or eATX motherboards.

One option is to sell the i7 and get a new Xeon and motherboard... Check out natex.us. Some good gear for reasonable prices.
 

Pr3dict

Also, doesn't ZFS require that you use all the same-size drives? I have four 3TB WD Red drives and four 4TB WD Red drives sitting in my shopping cart :). That being said, I wanted a server capable of holding 16 drives, but if I need to go down to 12 to give myself better options, that will be fine.

I could always do that (sell the i7), but then the question becomes: why? The PassMark score is good, and I can't imagine using more power than it provides for anything I would need to do.
 

ttabbal

Selling the i7 and buying a Xeon was only if you want ECC. The CPU you have is plenty powerful.

ZFS uses the smallest drive in a vdev as the base size. So if you add a 3TB and a 4TB drive to a mirror, you get 3TB of space.

That said, you can stripe vdevs in the pool. So, as one example, you can make raidz1 vdevs from each set of 4 drives. That gets you 12TB and 9TB usable space from the 4TB and 3TB groups respectively. If they are both in the same pool, you get 21TB usable in a single pool with I/O spread out between the two vdevs. In that configuration, you can lose one 3TB and one 4TB drive with no data loss. However, should you lose one more of either, you lose it all. Unraid and similar play various tricks to give more flexibility in drive choice, but that comes with some risk as well.
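The arithmetic above can be sketched quickly. This is a toy calculation that ignores ZFS metadata overhead and the TB vs. TiB distinction:

```python
def raidz1_usable_tb(drive_sizes_tb):
    """raidz1 keeps one drive's worth of parity, sized by the smallest drive."""
    return (len(drive_sizes_tb) - 1) * min(drive_sizes_tb)

# Two raidz1 vdevs striped into one pool
vdevs = [
    [3, 3, 3, 3],  # four 3TB WD Reds
    [4, 4, 4, 4],  # four 4TB WD Reds
]
per_vdev = [raidz1_usable_tb(v) for v in vdevs]
print(per_vdev)       # → [9, 12]
print(sum(per_vdev))  # → 21 TB usable across the striped pool
```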

You can use a pair of LSI2008 controllers for 16 drives. Those are very well supported by everything.
 

Pr3dict

I appreciate the help so far. I am taking notes and making bookmarks!

I've been looking at old Supermicro servers on eBay for the chassis, but what should I be looking for in a backplane and PSU? Also, what would you recommend for a motherboard? I'm looking for at least two network adapters and support for the drives. Should I be using those controllers rather than the onboard controllers of the motherboard?
 

ttabbal

I can't really suggest motherboards for an i7; I don't do much with them.

The controllers are nice in an ESXi setup, as you can pass the whole controller through to the VM. This limits issues from the hypervisor and increases performance. So for motherboards, make sure they support VT-d, though I would think most anything that can take an i7 would.

The onboard ports can then be used for the host system boot drive, vm storage, etc..

There are a few backplane options. Avoid SAS1 units; they have issues with drives >2TB. SAS2 units are fine, as are TQ backplanes that provide a passthrough connection. The passthrough connection is just 1-to-1, so you connect one cable per drive; these tend to be a little cheaper and easier to find. SAS2 expander backplanes tend to add about $100 depending on the seller, but they let you run one SAS cable to the controller, so you would only need one HBA. Performance could be lower, but not enough to notice with magnetic drives.
 

Pr3dict

You are definitely explaining this very well for me. I will post a build soon, after I gather what I think I need.

Last question: what are your thoughts on PCIe Ethernet adapters?
 

ttabbal

Looks like a decent setup to me. The only question I would have is transcoding; I don't do any real-time transcoding, so I don't have a good idea what it needs. I would think it would work, though.
 

C@mM!

New Member
Jan 18, 2016
23
1
3
Canberra
If you're looking at Hyper-V rather than ESXi/Xen/KVM, you won't be able to pass through a storage controller.

I've had success, though, with an SR-IOV HBA (LSI 93xx's, for example) running Solaris+ZFS in this scenario.
 

Pr3dict

I didn't even realize the processor was released so long ago (2010). It has a decent PassMark score, but the i7 can do hardware decoding via Quick Sync, so that would probably be faster. If I were to take out the motherboard and put in a different one with the i7, or even a modern Xeon, could the backplane and SAS controllers + expanders that the deal above comes with still work?

I am really looking for a chassis, backplane, power supply, and controllers that will work with the 4TB drives I want to put in. Then I can figure out the motherboard + CPU combo that I will go with.

It's not an easy 1, 2, 3 solution though...

@C@mM! - After my research I think ESXi/vSphere is the way I want to go.
 
Apr 13, 2016
56
7
8
54
Texas
I've been in a similar boat, but have started down the path of building my NAS box. All the pieces were finally delivered on Monday, so I'm hoping to get to assembling it when time allows. What I've got is:

Motherboard: Supermicro X10SDV-4C-7TP4F Embedded Processor
RAM: 2x Samsung M393A4K40BB0-CPB DDR4
Chassis: Supermicro CSE-743TQ-865B-SQ Full Tower

I went this route to get the 8-bay 3.5" hot-plug SAS backplane. Boot drives are Samsung 2.5" 256GB EVO Pros, the OS will be Ubuntu 16.04, and I'll be using ZFS as the data-store file system.