Mixing Metal and Virtualization Confusion


RobertFontaine

Active Member
Everyone else is moving to the cloud... and I am building a hardware lab.
Everyone else is aggregating servers... and I am building more servers.

Maybe at some point I'll start consolidating them back into one box, but right now I'm still a bit confused about what actually makes sense in the dungeon. It seems like the NAS distros are trying to be routers, router appliances are trying to be authentication servers, virtualization servers are trying to be NASes... and so on.

My question of the moment: does it make sense to have the NAS reside on the virtualization server (i.e. a virtualized FreeNAS), or does it make better sense, even in a lab, to separate file services from virtualization?

In terms of authentication services, for primary/secondary... I've been thinking of using a CentOS VM with Samba4 as the "primary" AD server and FreeNAS as the secondary.
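To make that concrete, the Samba4 side would be roughly the sketch below (not tested; the realm, domain, and admin password are placeholders, and note the stock CentOS Samba packages may not ship the AD DC role, so this assumes a build that does):

Code:
# Rough sketch of provisioning a Samba4 AD DC on the CentOS VM.
# Realm, domain, and password below are placeholders.
sudo samba-tool domain provision \
    --use-rfc2307 \
    --realm=LAB.EXAMPLE.LOCAL \
    --domain=LAB \
    --server-role=dc \
    --dns-backend=SAMBA_INTERNAL \
    --adminpass='ChangeMe123!'

# Service name varies by distro/build (samba vs. samba-ad-dc).
sudo systemctl enable --now samba

FreeNAS I'd probably just join to that domain through its Active Directory directory service rather than trying to make it a true second DC; I haven't verified it can act as one.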

In terms of ZFS, I'm muddled between managing the actual pools from Proxmox or from FreeNAS. If FreeNAS is virtualized under Proxmox then maybe it makes better sense to do it on Proxmox; if FreeNAS has its own metal then FreeNAS makes a bit more sense.
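For the "manage the pools from Proxmox" option, my understanding is it boils down to something like the sketch below on the Proxmox host (not tested; "tank", the storage ID, and the disk IDs are placeholders):

Code:
# Create the pool on the Proxmox host itself (pool name and disks are placeholders).
zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# Register it with Proxmox so VM disks and containers can land on it.
pvesm add zfspool tank-vm --pool tank --content images,rootdir

# Confirm it shows up as storage.
pvesm status

Whereas if FreeNAS were virtualized and owned the disks, the usual approach I've read about is passing the HBA through to the FreeNAS VM, which is pretty much the opposite of the above.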

Sorry if this is a bit confused, but I am.
 

T_Minus

Build. Break. Fix. Repeat
If you're going to tinker with it and play around with it, then IMHO it should be non-disruptive to your core services... i.e. routing and storage.

This is why I moved my home storage and live-backup storage to their own hardware, 100% separate from any lab/testing/playing hardware, and it's also why my routing/core network (pfSense) is on its own hardware too.

I'm much more comfortable having my family's data on hardware I'm not playing around with, experimenting on, or adding/swapping drives in every month, and the same goes for the backup and the network. I'm in the process of migrating my home storage and home backup storage from powerful hardware to something lower-power, both CPU-wise and electrically.

This will let me have separate storage for the lab/test systems, for testing software, storage platforms, performance, etc. With persistent, not-often-touched, dedicated storage hardware I can easily back up the lab/test machines, or load ISOs and other software onto them, without having to make sure I restart the lab storage VM, reconfigure it after changes, and so on.

This is also likely to provide greater uptime for your network and storage, for yourself and your family ;) which for some is super important.

Now, if you're building a family all-in-one and a separate lab/storage setup, then do whatever you're comfortable with for the family all-in-one... for me, I'll always keep my network stuff separate, and even then I usually have a second unit on hand, pre-configured and ready to drop in :)
 

PigLover

Moderator
If you're going to tinker with it and play around with it, then IMHO it should be non-disruptive to your core services... i.e. routing and storage.
+1 for this. Especially true if you have other people in your household who "just want the internet to work" and don't understand your hobby.

Router in its own box - because if you screw with it nobody's happy.
Simple switch for "house services" (WiFi, home entertainment, etc).
Other stuff that you want working all the time goes on this switch too, not on the "toy stack" (for me this includes the NVR server for my cameras and my home automation).
Primary file server (especially if you have movies ripped and the family likes them to be available).

Then - with its own switch - whatever you use for the "lab". Tear it down, build it up, rebuild it, load software, whatever - and the family stays happy. You don't have to run around the house checking who is watching Netflix or gaming before you break things.
 

K D

Well-Known Member
Agree with both. Anything for the home is completely separate from the lab. Even if I completely power down the lab, everything from media streaming to security cams should keep working unaffected.

With regards to converged storage vs. a separate appliance: it's your lab. Do what you feel like doing :)
 

T_Minus

Build. Break. Fix. Repeat
Live and learn and adapt :) That's what I'm doing now ;)
Oh, and don't work when you're tired... spent 10 minutes wondering why the IPMI IP wasn't found... turns out I'd better plug the cable in ;) LOL!
 

Evan

Well-Known Member
Don't we all have too much stuff :D
Things we bought and even used for a while, then decided we needed something 'better', sometimes only to move back to the original :)

If you have some critical services you can always run them on something small and really low-power like a NUC.

I do currently have my router/firewall separate, but it needs an upgrade, and as much as I like the idea of virtualizing it on the main cluster or system, part of me is saying a separate device is much better.
The problem is that finding a suitable cheap firewall is hard; the ASA 5506-X, for example, is so slow. I think I may give the Fortinet 60E a try - not as cheap, but it seems to offer a bit more performance.
 

K D

Well-Known Member
Ah!!! The trusty old NUC. I've had this 3rd or 4th gen i5 NUC that has been repurposed so many times it's probably in the middle of an identity crisis. And then there's my Skull Canyon NUC. Booted once. Installed Windows 10. Never touched it after that.

And in the middle of writing this I realized I have 2 NVMe drives in it that I have a better use for. Thanks @Evan.
 

T_Minus

Build. Break. Fix. Repeat
Woke up to work on this thing again... turns out I couldn't get into the BIOS because I hadn't plugged the USB keyboard in... ha ha, rough night last night LOL!!
 

whitey

Moderator
I must be 'livin' on the edge', as I am more than confident virtualizing both storage and routers, and trust me... I maintain extremely high uptime and only in very rare cases get chastised for something being down. If you configure your systems for HA that actually works, and thoroughly test/validate it, both storage and routers can certainly be run like this in a home lab. My pfSense floats to whatever host it needs to during maintenance, since I have my WAN connection VLAN set up for each host in my 3-node vSphere cluster. Same thing with Ceph storage: I can survive node failures there. Only my FreeNAS AIO ZFS storage is a SPOF (same as a physical ZFS box unless you are HA-configured using RSF etc.), and it next to never goes down; I also have another AIO ZFS appliance on another node, so if I REALLY need to take the primary down I simply sVMotion workloads off the primary to the secondary AIO ZFS box.
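The only per-host prep for the floating pfSense is making sure the WAN port group exists on every host it might land on; roughly the sketch below on each ESXi host (I actually did mine through the vSphere client, and the vSwitch name and VLAN ID here are placeholders):

Code:
# Create a WAN port group on the standard vSwitch and tag it with the WAN VLAN.
# vSwitch0 and VLAN 100 are placeholders for whatever your uplink uses.
esxcli network vswitch standard portgroup add \
    --portgroup-name=WAN --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set \
    --portgroup-name=WAN --vlan-id=100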

Now for business I'd be more inclined to 'maybe' dedicate HW to each function :-D

That's just me and my 2 cents. To each his own!
 

RobertFontaine

Active Member
Woke up to work on this thing again... turns out I couldn't get into the BIOS because I hadn't plugged the USB keyboard in... ha ha, rough night last night LOL!!
I'm reimaging a NUC today... 1. Figure out how to PXE boot through pfSense... 2. Install an Ubuntu desktop VM and FOG. 3. ...

I am tired of imaging my test machines with USB sticks, but setting up an imaging server for the first time is definitely more involved than Rufus.
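For anyone curious, the rough plan is below; the pfSense part is just its DHCP network-booting options, and the FOG part is the project's standard installer (I haven't actually run through this yet, and the IP is a placeholder):

Code:
# pfSense side (GUI, not CLI): Services > DHCP Server > Network Booting.
# Point "Next Server" at the FOG VM (e.g. 192.168.1.50 - placeholder) and set
# the boot filenames FOG's docs list (undionly.kpxe for BIOS, ipxe.efi for UEFI).

# FOG side, on the Ubuntu VM (installer lives in the project repo):
git clone https://github.com/FOGProject/fogproject.git
cd fogproject/bin
sudo ./installfog.sh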
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
@RobertFontaine I broke down today and did the updates and Proxmox install all via iKVM and IPMIView; not sure why I waited so long to learn the one-minute process it took to figure out where to point and click, haha... :) Needless to say I'm hooked! Time to get some clarification from SM on their various software packages ;) I also want to find out: if a MAC is already on a license for, say, SUM, does that mean I can just 'try it' and if it works it's already licensed, or do they make us re-license the SM software if we're not the original users? I don't think I really need to update the BIOS after the initial update and going to production, but their power management software may be worth it depending on what it does... we'll see :)

An imaging server for bare-metal images and deployment? I'd be interested in a thread on that specifically, if you want to start one and update it with what you did and what worked :) I did PXE on my old Synology just to 'do it', but haven't done or tried it since ;)
 

K D

Well-Known Member
I'm reimaging a NUC today... 1. Figure out how to PXE boot through pfSense... 2. Install an Ubuntu desktop VM and FOG. 3. ...

I am tired of imaging my test machines with USB sticks, but setting up an imaging server for the first time is definitely more involved than Rufus.
Would you be able to post the steps you took to set this up? Maybe in a separate thread?
 

CJRoss

Member
Another vote for physical router and storage. One less layer to worry about.

One benefit of virtualizing everything else is that you can share resources and reduce your power and HVAC bills.

As for why every distro is trying to do everything, that's unfortunately what people are asking for: they don't want to buy more or better hardware.