[WIP] Coming to the dark side (consultation is appreciated)

foongkev

New Member
Oct 11, 2013
10
0
0
Sydney
Hi All,

Been lurking for about 2 weeks and I think STH is great. It has given a non-IT-industry person some great learning and information on servers and virtualisation. I'd say that in personal computing I'm proficient, but this is 'relatively new' territory for me.

Background
I've been using a raspberry pi as an extremely budget file server solution. The external HDD started failing, and now I'm planning to build a home lab/server so that I can have something more stable and start various smaller projects through VMs.

Objective
To build a new multi-purpose home server

Setup
I've attached my (half complete) project document which outlines in detail my plan. For those that can't be bothered reading but are willing to give me feedback, here's a quick summary:
  • E3 1230V3
  • SuperMicro X10SLM-F
  • 16GB Crucial 1600MHz UDIMM ECC
  • Samsung 128GB 840 Pro SSD
  • 3 X WD Red 3TB
  • Fractal Design Define Mini

Here's a schematic picture of how I want to implement:



Questions
1. Are there any red flags with my hardware and setup?
2. Am I overengineering the setup? I'm planning a 'testbed' purely for when I want to tweak, change, or install updates
3. Sufficient allocation of memory and storage on the VMs?
4. Are my proposed backup solutions suitable? (clone USB stick which boots ESXi, vMA tool for VMs and ZFS snapshots for my critical data)

If you're willing to read my project documentation (half complete), here it is; I'm hoping to get some feedback on it.
ProjectGaia.docx

Much appreciated.
 

TangoWhiskey9

Active Member
Jun 28, 2013
402
59
28
How did you connect a SATA drive to the Raspberry Pi?

That is a fine build. Question(s):
1. How are you going to pass disks through to Nexenta? Are you doing raw disk mapping or are you going to try using VT-d? In which case you would want an LSI controller?
2. With 9TB of storage I would probably allocate a minimum of 8GB to Nexenta
3. You should look at the all-in-one threads on here. Might give you an idea or two.
4. Where/how is backup happening?
5. Did you think about getting one of those new Intel C2750 boards? Seems like they would easily handle this and give more NICs to play with using less power.
6. You should take that docx and just make a build log/ documentation here. You can get feedback constantly on the build.
7. NICE DIAGRAM! Shows you have thought this out.
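On the raw disk mapping option in question 1: from the ESXi shell an RDM can be created with vmkfstools and then attached to the storage VM. A rough sketch (the device ID and datastore path here are placeholders, not from this build):

```shell
# List the physical disks to find the device IDs (t10.ATA____... style names)
ls /vmfs/devices/disks/

# Create a physical-compatibility RDM pointer file for one of the WD Reds
# (device ID and datastore path below are placeholders)
vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD30EFRX_PLACEHOLDER \
    /vmfs/volumes/datastore1/nexenta/wdred1-rdm.vmdk

# Then attach wdred1-rdm.vmdk to the Nexenta VM as an "existing disk" in vSphere
```

With VT-d you would instead pass the whole disk controller through, so the guest talks to the disks directly; RDM keeps things simpler if you don't have a dedicated controller to pass through.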
 

foongkev

New Member
Oct 11, 2013
10
0
0
Sydney
How did you connect a SATA drive to the Raspberry Pi?
It wasn't a SATA drive. It was just a 'normal' external hard disk connected via the USB port. It served its purpose at the time. Surprisingly, it only lasted just over a year. It's definitely a hardware failure, as connecting the drive to my desktop and performing read/write operations would at times produce errors.

That is a fine build. Question(s):
1. How are you going to pass disks through to Nexenta? Are you doing raw disk mapping or are you going to try using VT-d? In which case you would want a LSI controller?
Thanks. I think I've splurged a bit, but I want something that I can use for a 'long time' and a lot of stuff is really hard to find and really expensive in Aus.

I guess this is the biggest gap in my project which I hadn't thought of, and it shows how limited my knowledge is around virtualisation (I've only been playing with VirtualBox in the last 1.5 weeks). I guess what you're saying is I need a way for NexentaStor to 'see' the 3 actual disks, so it can set up and manage the ZFS pool. I had only thought ahead in terms of the Torrent Client VM accessing the NexentaStor 'share' via NFS (if that makes sense). Guess I'll have to research raw disk mapping and VT-d, and the pros and cons. Preferably I'd take a solution which wouldn't cost me any more money (I've already bought almost all the components). The E3 1230V3 has VT-d, so if I 'use that' I guess I don't need a controller? Any suggestions?

With 9TB of storage I would probably allocate a minimum of 8GB to Nexenta
Yes, I believe I've documented that I would allocate 8GB. In future, as I add to the pool and 8GB becomes insufficient, I plan to add another SSD as L2ARC.

3. You should look at the all-in one threads on here. might give you an idea or two.
Looked at a few, haven't had a heap of time but I'll browse further. As mentioned above I've already ordered the components, but obviously I'm nowhere near implementing the full build, and hopefully I'll pick up further ideas. Thanks

4. Where/how is backup happening?
This is I think the main part of my project planning which I haven't really thought through yet (I was thinking it will be the last part of the puzzle).
For ESXi, I guess I can just make replica USB sticks with the host configuration (really the easiest of all).
For the VMs, I think snapshots (VMDK files???) saved to an external HDD every week/month should suffice. Might be a manual process (until I have funds to build a backup server?)
For the critical ZFS filesystem/volume, I think taking a ZFS snapshot and saving it in an external HDD every week/month (I hope I got the concept of ZFS snapshots fine)
For non-critical data, I haven't planned for it and at this point it's an accepted risk. My current knowledge is that if I have 3TB worth of data in there, the snapshot would be of equivalent size and I would need equivalent-sized storage for the backup. Making multiple snapshots would exponentially increase this requirement. Unless I have my idea of backups and the methodology all wrong? Keen to hear how you all approach this, especially those of you with insane amounts of storage in your home labs.
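On snapshot sizing: ZFS snapshots are copy-on-write, so a new snapshot is nearly free and only grows as data changes afterwards. It's a full `zfs send` stream that is roughly dataset-sized; incremental sends only carry the deltas. A rough sketch of a weekly routine (the pool/dataset names are made up):

```shell
# Weekly snapshot of the critical dataset (pool/dataset names are placeholders)
zfs snapshot tank/critical@2013-10-20

# First full backup to a pool living on the external HDD
zfs send tank/critical@2013-10-20 | zfs receive backup/critical

# Later weeks: incremental send -- only blocks changed since the last snapshot
zfs snapshot tank/critical@2013-10-27
zfs send -i tank/critical@2013-10-20 tank/critical@2013-10-27 \
    | zfs receive backup/critical
```

So the external drive only needs to hold one full copy plus the accumulated changes, not one full copy per snapshot.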

5. Did you think about getting one of those new Intel C2750 boards? Seems like they would easily handle this and give more NICS to play with using less power.
Honestly I haven't. Reading the review here, it seems like a great option. I wanted something into which I could put a Haswell Xeon for scalability, should I start increasing the number of VMs and services I run on the machine. I presume the power consumption in terms of money wouldn't make a significant difference? A bit too late anyway, as I've already ordered my parts.

6. You should take that docx and just make a build log/ documentation here. You can get feedback constantly on the build.
Agree. When it's not 6:20am and I can find some time I'll try to do that, along with further documenting and planning out the rest of the project/build. I hope to get as much feedback as possible, so I can ensure a smooth 'implementation' once I get all the parts in.

7. NICE DIAGRAM! Shows you have thought this out.
Thanks. I used Lucidchart (https://www.lucidchart.com). It's free and syncs with Google Drive. There's still so much more to add to the diagram as it's still a WIP.

Once again, thanks. This feedback has made me think a lot more about the project and about ensuring I have all my bases covered. Further updates and information to come when possible.
 

foongkev

New Member
Oct 11, 2013
10
0
0
Sydney
2013/10/16 UPDATE
Haven't updated this thread/log until now as I've been doing a lot of research, learning more about virtualisation and my hardware compatibility. I've ordered all the components and have already received half of them. The only component delaying my last purchase order is the Fractal case.

Anyhow, I've been playing around with my setup diagram to include more information:

Notes
  • I've been researching pfsense and I think it's great. Currently testing to see if it can work on my old Dell Inspiron Mini 9, but so far testing suggests it's unlikely.
  • I've changed my web hosting VM to Debian, to prevent having to manage multiple distros.
  • Considering SnapRAID as my backup method. Still open to ideas and researching

Open action items / Steps:
  • Confirming backup solution / plan
  • Undergoing a test implementation run using VirtualBox on my workstation
  • Documenting my project is ongoing; implementation steps and recording all the config settings will be my main activity until all the components arrive (ETA this weekend)

Open issues for feedback:
  • I'm debating using NexentaStor vs OmniOS/OpenIndiana with Napp-it. All of them seem like forks of OpenSolaris with no guarantee of longer-term free support and continued development. Perhaps the free edition of Solaris could be an option. Thoughts?
  • Whilst researching ESXi compatibility with the new X10 boards, the main issue to tackle is support for the Intel i217LM & Intel i210AT NIC controllers. It seems that barely any OS / hypervisor includes 'native' support for them, and I would have to install drivers manually. For ESXi that shouldn't be an issue (following Patrick's guide). However, this presented a concern for the various guest OSes that I would install. I started to get paranoid, and I didn't want to recompile kernels as that's out of my league. Long story short: I think I realised that with hypervisors like ESXi, when I set up the VMs the hypervisor creates a virtual network adapter, so this shouldn't be an issue? So I understand the network data flow as "Debian -> virtual network adapter (using a driver like e1000) -> ESXi host -> physical i210AT NIC -> ethernet cable". Is this correct, so I shouldn't be worried?
  • Debating whether to plug internet modem directly into the server/host, and running pfsense as a VM. What are the impacts and things to consider? Is running it on its own physical machine better?
  • Have yet to decide how to manage the 2 NIC ports, e.g. load balancing, or 1 dedicated for Internet/WAN traffic and another purely for the local network. Need to also research and plan how to configure this.
 
Last edited:

MiniKnight

Well-Known Member
Mar 30, 2012
3,041
944
113
NYC
Open issues for feedback:
  • I'm debating using NexentaStor vs OmniOS/OpenIndiana with Napp-it. All of them seem like forks of OpenSolaris with no guarantee of longer-term free support and continued development. Perhaps the free edition of Solaris could be an option. Thoughts?
  • Whilst researching ESXi compatibility with the new X10 boards, the main issue to tackle is support for the Intel i217LM & Intel i210AT NIC controllers. It seems that barely any OS / hypervisor includes 'native' support for them, and I would have to install drivers manually. For ESXi that shouldn't be an issue (following Patrick's guide). However, this presented a concern for the various guest OSes that I would install. I started to get paranoid, and I didn't want to recompile kernels as that's out of my league. Long story short: I think I realised that with hypervisors like ESXi, when I set up the VMs the hypervisor creates a virtual network adapter, so this shouldn't be an issue? So I understand the network data flow as "Debian -> virtual network adapter (using a driver like e1000) -> ESXi host -> physical i210AT NIC -> ethernet cable". Is this correct, so I shouldn't be worried?
  • Debating whether to plug internet modem directly into the server/host, and running pfsense as a VM. What are the impacts and things to consider? Is running it on its own physical machine better?
  • Have yet to decide how to manage the 2 NIC ports, e.g. load balancing, or 1 dedicated for Internet/WAN traffic and another purely for the local network. Need to also research and plan how to configure this.
You at least seem to be organized

Here's an idea:
1. Follow the guide for Napp-it in ESXi 5.5 (super easy): http://www.servethehome.com/omnios-napp-it-zfs-applianc-vmware-esxi-minutes/ Just be done with it. Running it in a VM makes everything much easier since you abstract the hardware.
2. Just use the guide you linked. The VM guests only see the VMware virtual hardware. If you use the OmniOS napp-it image, the virtual machine drivers are already installed.
3. The main issue with running pfsense on the server is that you will need to keep the server on to maintain your internet connection. If it is going to be always on, no need to worry. You may want to keep a cheap second-hand router around just in case something breaks, or in case you need to take the server apart one day and still have internet access.
4. Easy to do either way. The big question is going to be whether you run your router + firewall on the server.
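One way to convince yourself of point 2 is to check, from inside a guest, which driver the virtual NIC is actually using (a sketch, assuming a Debian guest whose interface is eth0):

```shell
# Inside the Debian guest: the NIC seen here is VMware's virtual adapter,
# not the host's physical i210AT/i217LM
lspci | grep -i ethernet   # shows the emulated/paravirtual device
ethtool -i eth0            # reports "driver: e1000" or "driver: vmxnet3"
```

Whatever NIC the host has, the guest never needs a driver for it; only ESXi itself does.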
 

foongkev

New Member
Oct 11, 2013
10
0
0
Sydney
Thanks Patrick

Latest update is I've been building this for the past 9 hours, and I'm almost there.
I'm having issues creating the zpool; it seems to throw a Predictive Analysis error. Doing an iostat -en shows drive c1t0d0 with 5 hard errors (ahh!). iostat -e gives more detail: it was "Device Not Ready :5". Not sure what that meant. Tried doing a soft reboot of OmniOS through vSphere, reran the tests and no errors. Weird!
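For reference, the commands involved look roughly like this (the pool layout and disk IDs are placeholders for my actual ones):

```shell
# Per-device error summary on OmniOS (soft/hard/transport error counts)
iostat -en

# Extended per-device error detail, including messages like "Device Not Ready"
iostat -E

# Create the pool once the disks test clean
# (layout and disk IDs are placeholders -- take yours from `format` or iostat)
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
zpool status tank
```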

Anyway more updates to come soon. I have to be up in 3 hours for work.
 

foongkev

New Member
Oct 11, 2013
10
0
0
Sydney
*** Latest update ***

Now that I have the 'core' services up and running, it's time to recover as much data as I can from the failing external HDD I previously used on my cheap home server.

I made a rookie mistake of enabling DirectPath I/O passthrough on the USB host controller for the guest OS VM to access the drive. For some reason I couldn't access the USB HDD, so I learnt about the normal USB passthrough method instead. Tried removing the DirectPath I/O passthrough settings, rebooted ESXi and... DirectPath I/O is still enabled for the USB controller. What?

After some troubleshooting I realised that the USB host controller I passed through is the same controller holding the USB stick that ESXi is installed on. Following some forums online still proved problematic. Even resetting ESXi to the default system config wouldn't work. After a few hours I decided to do a hard reinstall of ESXi. Hours wasted on a small change. Well, it's my first time learning about ESXi and virtualisation, and the best way to learn is to make mistakes. Luckily with my setup everything is 'segregated', so I still retain all my datastores and information.

Now that I'm back online, I'm running ddrescue to recover as much information as possible. This should take a very long time since it's over USB2 (wonky USB3 issues with ESXi, apparently?).
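For anyone curious, the invocation looks roughly like this (device and paths are placeholders); the logfile is the key part, since it records which sectors have been read and lets ddrescue resume after an interruption:

```shell
# First pass: grab everything that reads easily, skip the bad areas (-n)
ddrescue -n /dev/sdb1 /mnt/recovery/sdb1.img /mnt/recovery/sdb1.log

# Second pass: go back and retry the bad areas up to 3 times (-r3);
# same logfile, so it picks up exactly where the first pass left off
ddrescue -r3 /dev/sdb1 /mnt/recovery/sdb1.img /mnt/recovery/sdb1.log
```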

Once I'm done recovering and migrating the data and do further tweakings to the setup I'll definitely post a Post Implementation Report with what I've learnt.
 

TallGraham

Member
Apr 28, 2013
143
23
18
Hastings, England
This looks really nice.

How are you finding the SeaSonic fanless power supply? Does it increase the heat in the case very much?

I have just ordered the 520W version to go in my 4U X-Case RM 420 Pro case. Should hopefully be delivered next week with my CPU and cooler
 

foongkev

New Member
Oct 11, 2013
10
0
0
Sydney
*** Update***
ddrescue is chugging along so slowly. I've plugged the external Seagate 3TB HDD into the USB 3.0 port and it's still chugging along slowly. The fastest speed I witnessed was about 35MB/s. I know I'm only 'halfway' through as I have 1.5TB of data on it. It's now way past the 24-hour mark:

Buffer/device I/O errors are slowing it down a lot. There have been a few occasions where the I/O errors were so bad that the HDD just 'crashes': it's no longer 'recognised', disappears from the listing as /dev/sdb1, and I have to do a soft reboot of the VM to resume. Sad to see an HDD last only a year, but I guess that's what happens when you use an external USB HDD as your 'seedbox' and file server.

This looks really nice.

How are you finding the SeaSonic fanless power supply? Does it increase the heat in the case very much?

I have just ordered the 520W version to go in my 4U X-Case RM 420 Pro case. Should hopefully be delivered next week with my CPU and cooler
Thanks. Once I have more time alongside the PIR I'll take nicer pictures with my DSLR. Those pictures were taken with my S4.

I'm loving the SeaSonic fanless power supply. Especially as this host machine sits in my living room, along with the Fractal Design Define Mini there is virtually no sound from the machine. If you read some reviews with pics you'll notice the Define Mini is 'heavily insulated', with virtually no airflow (compared to my Antec Nine Hundred). Here are the readings from vSphere for your benefit.



This is whilst ddrescue is running and putting a decent load on my machine. I'm very impressed, considering that some reviews say the PSU can reach temps of 70, 80 and even 90 degrees Celsius under load. The Seagate drive that I'm recovering data from is significantly hotter than the PSU or CPU (so warm/hot that I got concerned and removed the external casing to assist with cooling. It has been chugging along nonstop for more than 24 hours).

You will definitely not regret paying the premium for a SeaSonic fanless PSU.

P.S. I'm still new to full server builds and chassis. I've just been daydreaming and window shopping for server chassis with 20+ hot-swap bays (I know, I just bought this setup!!!!) and your case looks nice. How much did you get it for? Almost anything non-retail-consumer in Australia is so hard to source and ridiculously expensive. Case in point: my Supermicro mobo, which I paid $300 for.
 
Last edited:

TallGraham

Member
Apr 28, 2013
143
23
18
Hastings, England
Wow! The temps look pretty good. Thanks so much for posting all these details.

I am in the UK so we get some pretty good suppliers here. Like you, I spent ages looking at cases. In the past I have bought a big tower case and then fitted the 5-in-3 type bays you can get, but the fans on these are usually very loud. I opted for the X-Case RM-420 Pro because the reviews of their cheaper cases said the fans were like jet engines in terms of noise.

To cut a long story short, it was about £530. Which is a lot of money, until you add up all the other bits separately for a normal case with hot-swap caddies and quiet fans etc.