Advice on low power home server

Kihltech

New Member
Jan 28, 2014
4
0
1
I'm stuck in my own thoughts when it comes to my next home server.

I'm running out of space in the current system, and the setup feels old and inefficient.
  • APC SmartUPS 1500 (bought used, probably oversized for the task, noisy when the fans run every now and then. Efficiency? I have no idea...)
  • Desktop chassis (good noise proofing) with desktop components (Core i5 3330, 16GB RAM)
  • HW RAID, LSI 9261-8i
  • Storage
    • 2x3TB RAID1
    • 2x3TB RAID1
  • ESXi installed to USB drive
  • VMs:
    • pfsense
    • 4x Ubuntu Servers
      • "beach head" into the LAN
      • photography storage server
    • media server with MySQL for Kodi, and Transmission
    • a test server (Python, web development, random testing)

This would be something like version 4.0 of a setup that has evolved from multiple desktop machines, to a single desktop with virtualization, to HBA passthrough for mdadm RAID, and finally HW RAID. Great fun, but a pain to navigate the jungle of NIC and VT-d support for ESXi. Also, power draw has increased with each revision.

What I really like about this is the isolation between the VMs. It feels good to have pfSense rather than trying to set up iptables myself on a native Ubuntu server or similar.

Sure, long intro... The solution above feels old school when I read about all the new exciting stuff out there: ZFS, containers.

Right now I'm thinking of going for a Xeon for ECC support, native Ubuntu Server, and KVM for pfSense. Storage-wise I'm thinking ZFS with 2x 8TB in a mirror for future-proof, safe storage, and 4x 3TB (reusing the old ones to begin with) in RAIDZ + an SSD cache for better responsiveness. I guess I can run the Kodi MySQL in one container and Transmission in another. But then I have no idea how to isolate photos from media without moving them into VMs and losing some of the nice ZFS possibilities. MySQL and Transmission would need access to the media files, but I don't want to put all the eggs in one basket on the host. Or do you think that's the way to go?
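For reference, a rough sketch of what that pool layout could look like on Linux, with separate datasets to keep photos and media apart without separate VMs (pool names and device paths below are placeholders, not a tested recipe):

```shell
# Two pools, as described above: a 2x8TB mirror and a 4x3TB RAIDZ.
# ashift=12 aligns writes to 4K sectors; device paths are placeholders.
zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-8tb-1 /dev/disk/by-id/ata-8tb-2
zpool create -o ashift=12 media raidz \
    /dev/disk/by-id/ata-3tb-1 /dev/disk/by-id/ata-3tb-2 \
    /dev/disk/by-id/ata-3tb-3 /dev/disk/by-id/ata-3tb-4

# Add an SSD as an L2ARC read cache to the RAIDZ pool:
zpool add media cache /dev/disk/by-id/ata-ssd-1

# Separate datasets isolate photos from media on the same host;
# each gets its own snapshots, quotas and share permissions:
zfs create tank/photos
zfs create media/video
</imports>
```

Containers can then be given access to only the dataset they need (e.g. Transmission sees media/video but never tank/photos), which recovers much of the isolation without moving the data into VMs.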

Please share your ideas!

My priorities would be:
  1. Low noise
  2. Simplicity of the solution
  3. Learn new technology (ZFS, KVM, containers, etc)
  4. Low power
  5. Cost
 

whitey

Moderator
Jun 30, 2014
2,774
869
113
39
Budget (is this greenfield/net-new?), rackmount or tower, qty of hosts/servers, AIO (all-in-one) OK?

A single Xeon-D sounds like a good fit w/ the proper chassis, running possibly Proxmox or native KVM if you have the chops :-D Sounds like you may.
 

Kihltech

New Member
Jan 28, 2014
4
0
1
Budget (is this greenfield/net-new?), rackmount or tower, qty of hosts/servers, AIO (all-in-one) OK?

A single Xeon-D sounds like a good fit w/ the proper chassis, running possibly Proxmox or native KVM if you have the chops :-D Sounds like you may.
Yes, aiming for an AIO. Tower, yes. Budget, good question... I'm cost-conscious. I'll pay for quality for sure, but efficiency is important.

A Xeon-D sounds suitable with some 16 or 32GB of RAM.

I realize I could have been clearer about what I'm struggling with - I don't know what to do for the layer on top of the hardware...
  • Ubuntu Server with ZFS + KVM virtual pfSense + native/host storage for Samba sharing + native services (MySQL, Transmission)
  • Ubuntu Server with ZFS + KVM with VMs = [pfSense, storage (lose ZFS?), services]
  • Ubuntu Server with ZFS + KVM virtual pfSense + Docker/LXC/LXD for services?
  • Proxmox (need to learn more for sure!) with ZFS + VMs? (refreshing, and exactly in line with what I wanted to hear from the thread!)
  • ESXi 6.x + HW RAID for VMs + HBA passthrough for ZFS in a storage VM (please kill this thought for me once and for all!)
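For concreteness, the third option might be sketched roughly like this (ISO path, bridge names, image names and mount paths are all assumptions, not a tested recipe):

```shell
# pfSense as a KVM guest with two bridged NICs (WAN/LAN);
# the ISO path and bridge names are placeholders.
virt-install --name pfsense --memory 2048 --vcpus 2 \
    --cdrom /var/lib/libvirt/images/pfSense.iso \
    --disk size=16 --osinfo freebsd13.2 \
    --network bridge=br-wan --network bridge=br-lan

# Services in LXD containers instead of full VMs:
lxc launch ubuntu:22.04 transmission

# Expose only the media dataset to the container, nothing else:
lxc config device add transmission media disk \
    source=/tank/media path=/srv/media
```

The bind-mounted disk device is what lets Transmission (and a Kodi MySQL container) reach the media files while the photo datasets stay invisible to them.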
 

whitey

Moderator
Jun 30, 2014
2,774
869
113
39
I'll keep it short: pick a hypervisor, ZFS AIO w/ an HBA (NOT a hw RAID ctrl) passed thru, VMs on the hypervisor, SMB/NFS mounts from the VMs to the ZFS AIO stg appliance... DONE

Mobile, so intentionally keeping it short n' sweet

EDIT: Back home now. As for memory on a Xeon-D, it may hurt, but I'd start w/ two 32GB modules; that way you can slap two more in and be at max config if needed. Small upfront DIMM cost hike going from 16 to 32, but you won't have to pull/re-sell DIMMs either. There are some SWEET Xeon-D boards w/ HBA and 10G SFP+ networking if you're into that sort of thing, something like this:

Supermicro X10SDV-7TP4F Embedded Processor

or maybe this:

Supermicro X10SDV-4C-7TP4F Embedded Processor

Cheaper options in the $500-600 range I think as well.

Check here.

Super Micro Computer, Inc. Products - Server Systems based on Intel® Xeon®-D processors

Chassis you'll have to get others to weigh in on, as I am a rackmount chassis guy; I'd opt for something w/ at least 8 hot-swap bays or cages of some sort.
 
Last edited:

Kihltech

New Member
Jan 28, 2014
4
0
1
I'll keep it short: pick a hypervisor, ZFS AIO w/ an HBA (NOT a hw RAID ctrl) passed thru, VMs on the hypervisor, SMB/NFS mounts from the VMs to the ZFS AIO stg appliance... DONE

Mobile, so intentionally keeping it short n' sweet

EDIT: Back home now. As for memory on a Xeon-D, it may hurt, but I'd start w/ two 32GB modules; that way you can slap two more in and be at max config if needed. Small upfront DIMM cost hike going from 16 to 32, but you won't have to pull/re-sell DIMMs either. There are some SWEET Xeon-D boards w/ HBA and 10G SFP+ networking if you're into that sort of thing, something like this:

Supermicro X10SDV-7TP4F Embedded Processor

or maybe this:

Supermicro X10SDV-4C-7TP4F Embedded Processor

Cheaper options in the $500-600 range I think as well.

Check here.

Super Micro Computer, Inc. Products - Server Systems based on Intel® Xeon®-D processors

Chassis you'll have to get others to weigh in on, as I am a rackmount chassis guy; I'd opt for something w/ at least 8 hot-swap bays or cages of some sort.
Thank you very much whitey, great information!

Previously I had thought of 10G as overkill for a home setup, but seeing this as a 5-year solution, perhaps I should reconsider. I assume it's possible to disable the module completely to save power in the beginning.

Back to the OS side of the challenge. When I boil the solution down to its essentials, it's pretty much storage and firewall. Compared to your suggestion, what would you say are the drawbacks of going:
  • Ubuntu Server on the metal
    • ZFS through the onboard controller
    • KVM for pfSense
(The other services I've mentioned are less relevant and not performance- or storage-critical; they can be handled with KVM, Docker, or natively on the host.)
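For the KVM-for-pfSense piece of that layout, the host would typically own the NICs and hand them to the VM as bridges. A minimal netplan sketch, where the interface names, bridge names and address are placeholders:

```shell
# Write a netplan config defining a WAN and a LAN bridge
# (eno1/eno2 are placeholder NIC names):
cat > /etc/netplan/01-bridges.yaml <<'EOF'
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bridges:
    br-wan:
      interfaces: [eno1]
    br-lan:
      interfaces: [eno2]
      addresses: [192.168.1.2/24]
EOF
netplan apply
```

One trade-off worth noting with this bare-metal layout: the host itself sits next to the firewall VM on these bridges, so its own exposure has to be managed, whereas in a hypervisor-first design the storage appliance is just another guest.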
 

vl1969

Active Member
Feb 5, 2014
611
69
28
, zfs aio w/ hba NOT hw raid ctrl passed thru, vm's on hypervisor, smb/nfs mounts from vms to zfs aio stg appliance...DONE
May I ask what this means? Too many abbreviations; I can't decipher them.
 

nk215

Active Member
Oct 6, 2015
322
98
28
47
He means:

Don't use a hardware RAID card with PCI pass-thru to your ZFS VM; use an HBA card instead. The ZFS VM then serves the NFS/SMB shares back to the other VMs on the hypervisor.
 

vl1969

Active Member
Feb 5, 2014
611
69
28
He means:

Don't use a hardware RAID card with PCI pass-thru to your ZFS VM; use an HBA card instead. The ZFS VM then serves the NFS/SMB shares back to the other VMs on the hypervisor.
ok,

I am actually trying to figure out how to build a Proxmox server that would serve 2 roles:
a Proxmox VM server
and a host-based file server,
so I don't have to pass through anything at all.

The plan so far:
1. Install Proxmox on a ZFS RAID-1 of 2 SSDs. (HOST)
1.1 Make a second ZFS RAID-1 pool with 2x 1TB drives for all VM storage needs.
2. Build out a BTRFS RAID-10 pool with all my data drives (3 or 4x 3TB + 3 or 4x 2TB).
3. Load Webmin on the HOST and set up an NFS server on it.
3.1 Create/export a folder on the BTRFS pool via NFS on the HOST.
This would be a single folder on the pool named DATA with all the actual structure inside,
or maybe I will set up and export several folders like Media, Data, Backup, etc.
4. Load a VM with FreeNAS or OMV and mount the NFS shares there.
5. Manage access to the shares from within the VM, e.g. create Samba shares;
maybe load ownCloud/Nextcloud on it, etc.
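Steps 2-3 of that plan boil down to something like the following (device names, the subnet and all paths are placeholders):

```shell
# Step 2: BTRFS RAID-10 across the data drives (placeholder devices)
mkfs.btrfs -L data -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkdir -p /mnt/data
mount LABEL=data /mnt/data

# Steps 3/3.1: export a single DATA folder from the HOST over NFS
mkdir -p /mnt/data/DATA
echo '/mnt/data/DATA 192.168.1.0/24(rw,no_subtree_check,no_root_squash)' >> /etc/exports
exportfs -ra
```

A FreeNAS/OMV VM could then mount the host's /mnt/data/DATA export and re-share it over Samba, per steps 4-5.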
 

Kihltech

New Member
Jan 28, 2014
4
0
1
He means:

Don't use a hardware raid card with PCI pass-thru to your ZFS VM. Use a HBA card instead. The ZFS VM also serves the nfs/smb shares back to the VMs on the hypervisor.
From my understanding, using a hardware RAID controller for ZFS is not recommended in general, not only when passing it through to a VM.


ok,

I am actually trying to figure out how to build a Proxmox server that would serve 2 roles:
a Proxmox VM server
and a host-based file server,
so I don't have to pass through anything at all.

The plan so far:
1. Install Proxmox on a ZFS RAID-1 of 2 SSDs. (HOST)
1.1 Make a second ZFS RAID-1 pool with 2x 1TB drives for all VM storage needs.
2. Build out a BTRFS RAID-10 pool with all my data drives (3 or 4x 3TB + 3 or 4x 2TB).
3. Load Webmin on the HOST and set up an NFS server on it.
3.1 Create/export a folder on the BTRFS pool via NFS on the HOST.
This would be a single folder on the pool named DATA with all the actual structure inside,
or maybe I will set up and export several folders like Media, Data, Backup, etc.
4. Load a VM with FreeNAS or OMV and mount the NFS shares there.
5. Manage access to the shares from within the VM, e.g. create Samba shares;
maybe load ownCloud/Nextcloud on it, etc.
Interesting setup. Can you elaborate a little on the BTRFS choice? What is the advantage of doing that compared to setting up a ZFS pool of mirrored vdevs? It sounds complicated, with that many layers, compared to just exposing the storage from the host.
 

vl1969

Active Member
Feb 5, 2014
611
69
28
Interesting setup. Can you elaborate a little on the BTRFS choice? What is the advantage of doing that compared to setting up a ZFS pool of mirrored vdevs? It sounds complicated, with that many layers, compared to just exposing the storage from the host.

Well, that is kind of what I am trying to figure out.
I want to use BTRFS because A. I know how to work with it, and B. I have a mix of drives that don't exactly pair up. ZFS needs matched drives; I can set up pools, but only with same-size drives,
and not all my drives pair up, but I still want to use them.

C. ZFS requires lots of RAM. I only have 50GB of RAM in total, so I'm not sure how that will play with ZFS,
and I simply do not understand ZFS; I would need lots of help and time.

Now, the setup I am thinking over, as described, is so that I don't have to pass through disks or controllers into a VM. I'm not discounting that off the bat, just thinking of a way not to do it.

It's either NFS shares from the HOST, where the actual data and access are managed by a VM like FreeNAS or OMV, or passing the controller into the VM for full management. That would work, since I don't have a cluster or anything like it, so no live migration is expected; but passing through might also prevent VM backups, so I'm just looking over the other options.

Also, exposing the storage from the host is what I want, but I also want a GUI to manage it.
Proxmox does not have the tools, and even Webmin is not overly useful, but it might do.
 

TLN

Active Member
Feb 26, 2016
448
63
28
31
If you're OK with 16 gigs of memory and don't need workstation performance from your desktop, I'd suggest the HP MicroServer Gen8.
I'm running ESXi 6.0 with a Xeon E3-1230 v2 and 16 gigs of memory.
A Radeon HD 8490 is passed through to a Win 7 VM.
4 drives, an SSD for the VM store, and an SD card to boot from.
 

nk215

Active Member
Oct 6, 2015
322
98
28
47
Using FreeNAS just for the GUI/interface is not the greatest choice. I recommend using Xpenology for that instead; it has a great interface and, very importantly, mobile apps.

One of my test setups is somewhat similar: BTRFS, ZFS and ext4 managed by the Xpenology GUI. There's an overhead cost, but it's not too bad.

I don't even remember how many times I've changed RAID configs (from 1 to 10 to 4 to 6, etc.) on BTRFS. I've added drives, removed drives, etc. without any issue. I also use offline dedupe on a BTRFS backup share. That way, I do a full backup (not that much data) to the target every day and dedupe it at night to save space. It makes restoring very easy. Offline dedupe doesn't need much memory.
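The reshaping described above maps to commands like these (the mount point and devices are placeholders, and duperemove is one common offline dedupe tool, assumed here rather than confirmed by the poster):

```shell
# Grow the pool and convert its RAID profile online:
btrfs device add /dev/sdf /mnt/pool
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/pool

# Shrink: BTRFS migrates data off the removed device automatically
btrfs device remove /dev/sdd /mnt/pool

# Offline (out-of-band) dedupe of a backup share:
duperemove -rd /mnt/pool/backup
```

This online add/remove/convert flexibility with mismatched drives is exactly what ZFS vdevs don't offer, which is the trade-off being weighed in this thread.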

My BTRFS is from a Rockstor VM. It's an all-flash array. There's zero chance I could afford all-flash with ZFS. Deals come in on all different drive sizes, one at a time or a few at a time. Basically whatever I have on hand, from an X25-E 64GB to S3500s to S3700s, and a few 240GB consumer drives to 480GB to 1TB drives. I remove drives from the pool when I need them somewhere else.

As with all arrays, backup is key. The speed at which I can back up and restore is also important. With BTRFS, I have a very good chance of building a second all-flash backup array from the SSDs I'll collect over the next many months.

As far as HW is concerned, get more PCI slots than you think you'll need.
 