Help me on where to start.


Oldhome7

Member
Feb 9, 2020
Let me preface this by saying that I know there is more efficient hardware out there; I just happened to acquire most of this for free over the years. This is what I have, and I figure it can be a learning experience to see if I want to continue with this, and also a way to teach my littles, since they're getting interested in computer science. So, a beginner's home lab of sorts. Now onto the show, as they say.

I'll start with the main objectives. First and foremost, I'm looking to move everybody off of our main Plex/media server box and onto their own systems. Second, I want to keep most of the processing in the rack, except maybe GPU usage for gaming, as I don't think that will translate very well. Third, I want to learn, and then teach them, how to do the more advanced stuff; nothing in particular, just in general. As an example of that third point, I have an old Cisco 48-port 100Mb switch. I know it's "slow," but I've tinkered with it enough to know that I can aggregate ports, set up VLANs, QoS, etc., and some of that is lackluster or just plain missing on commodity hardware.

Now onto the current setup and available hardware.

Main box is a Supermicro SC836 with an X10 board, dual E5-2660 v3s, and 128GB of RAM, running W10 Pro for Workstations. It handles our Plex server, CAD modeling, a little crypto mining, the occasional gaming, and about five user profiles that mostly just do school work and browse the web.

Second box is a Dell R510 that is our pfSense box, running on bare metal (I think that's the correct term). So far I like it there, but I've thought about adding a Silicom 6-port 1GbE card or two and having it do more routing functions.

The last of the currently in-use hardware is a Linksys WRT3200ACM set to AP mode for my wireless clients, plus a Linksys SE2800 8-port gigabit dumb switch.

Now onto the other available hardware that's not in use currently.

We'll start with the Cisco Catalyst 48-port 100Mb switch. I can't remember the model number right now, but like I said above, I'm envisioning using it as a learning platform for more advanced networking settings.

Next, I've got a pair of PowerEdge 1750 1U servers that the previous owner was running some file server OS on, directly connected with a wide, thin (SCSI) cable to the next pieces.

I've got two 10-bay SCSI enclosures and 24 36GB 10K drives. I envision these as a test bed, more or less, to demonstrate the differences between RAID configurations.

My final piece is an HP c3000 blade enclosure with 3x BL460c blades, each with dual quad-core E5440 Xeons and 16GB of RAM. I'm hoping to cluster them together somehow; I saw a post somewhere about Plex transcoding on a two-machine cluster, and I'd like to do something with all of them leveraged together.

Now onto the part I'm extra lost about. I'd like to use the blade cluster preferably, or if not that, then the 1750s, to serve up whatever connection is needed for everybody's individual machines; specs on those are up in the air. The research I've done has thrown around terms like RDP, PCoIP, VMs, hypervisors, PXE, etc. I have very limited experience with RDP, and I've used VMs to play old W95 games, but that's about it. I'd also like the individual machines to be able to use local resources, as I stated in the beginning, so they can run games or something while still having their "environment" on a server for what I assume would be easier management and/or (re)deployment if needed.

This will all be housed in a 47u rack away from everything, if that makes any difference.

Sorry if that's all a bit wordy, possibly incorrect, and all over the place. Like I said, this is a whole new world and I don't know diddly about it.

Thanks for any pointers, help, anything really.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I think the reason I, and likely others, haven't replied is that we're not clear on what you're asking.

Can you use all that hardware in a lab to learn, and for Plex? You bet! And that E5 v3 system isn't something that's old and slow.

The Xeon E5440s are very slow/old though ;) but probably OK to learn on. You won't be running ESXi + VMs on those though, at least not much of anything.

I wouldn't use the E5440s for Plex. For lab / learning / testing, sure.
 

Oldhome7

Member
Feb 9, 2020
The E5 system is currently my Plex server. I know it's not slow, and I didn't plan on changing that one; I'm just trying to get the users off of it.

I'm just trying to cluster the blades together (that'd be 6x E5440 and 48GB of RAM) in some way, and possibly use those to serve up the VMs or whatever.

The rest of the stuff, I'm just trying to build a home learning/teaching lab.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Unless electricity is near free, the c3000 blade enclosure + E5440s are going to cost way more than they're worth to run.

1 of those E5-2660v3 > 6x of those E5440 :(

If you want to learn on the E5440s and power them on/off as needed, I don't think that's an issue, but I'm also not sure which modern hypervisor will work with that generation of CPU. Someone else will need to comment, or you can just download Proxmox and ESXi and try it out?
 

TLN

Active Member
Feb 26, 2016
I've read diagonally, but overall all those E5440 systems, 100Mb switches, and 36GB drives are going to the trash, IMO.
An SFF box with a Ryzen 5000 and a 1TB SSD would be way more efficient and the preferred option, IMHO.
 

Oldhome7

Member
Feb 9, 2020
Like I said from the beginning, I know this is inefficient hardware. Why does everybody ignore the part where I acknowledged that? I don't know why everybody automatically hops on and has to say that somebody needs something more efficient. This is just so I can learn and see if it's something I want to continue with; if I decide to continue, then I'd look at more efficient options. For now, this is what I'm working with.

Proxmox's system requirements just say Intel EMT64 with the VT CPU flag, which the E5440 does have. I also see there is a cluster manager, though it seems to be more of an availability/management thing and not something that combines the nodes for more performance.
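A minimal sketch of that distinction, assuming the proxmoxer Python library and hypothetical hostnames/credentials: the cluster API mostly reports node status and lets you place or migrate VMs between nodes, but each VM still runs on a single node's CPUs at any one time.

```python
# Minimal sketch using the proxmoxer library (hypothetical host/credentials).
# A Proxmox cluster manages and migrates VMs across nodes; it does not pool
# the blades' CPUs into one big machine.
from proxmoxer import ProxmoxAPI

px = ProxmoxAPI("blade1.example.lan", user="root@pam",
                password="changeme", verify_ssl=False)

# Every node in the cluster with its current status and CPU/memory load.
for node in px.nodes.get():
    print(node["node"], node["status"], node.get("cpu"), node.get("mem"))

# Every VM and the single node it is currently running on.
for vm in px.cluster.resources.get(type="vm"):
    print(vm["vmid"], vm.get("name"), "on", vm["node"])
```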

I can't download it and try it out right now; we're finishing up our new house and I'm just pre-planning everything for when we're in there.

Let me reiterate for those in the back: I know it's old hardware, and I know it's less efficient than newer offerings. I just want to learn before jumping down this rabbit hole we've all seemed to fall into, and I also want to use it as a teaching experience for my kids. Can we get past the part about it being inefficient, junk hardware now? If it absolutely won't work on the available hardware, that's another story, but as I saw in Proxmox's requirements, this "junk" is still supported, and I'm definitely not averse to using older software to LEARN with. Things can always be upgraded down the road.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
I think they are alluding to the fact that the electricity cost to run them may be more than just buying more modern hardware outright.
 

TLN

Active Member
Feb 26, 2016
And the fact that you never said what your end goal with all of that is.
If you wanna rack all of it, you probably need a full rack, switch, PDU, etc. If you go smaller, you can get a small server or NAS that can easily be hidden away in your house and only requires two cables: network and power.

But yeah, can you learn using this stuff? Easy. Most of the hypervisors and OSes will run just fine
 

Oldhome7

Member
Feb 9, 2020
I think they are alluding to the fact that the electricity cost to run them may be more than just buying more modern hardware outright.
Let me preface this by saying that I know there is more efficient hardware out there; I just happened to acquire most of this for free over the years. This is what I have, and I figure it can be a learning experience to see if I want to continue with this, and also a way to teach my littles, since they're getting interested in computer science. So, a beginner's home lab of sorts. Now onto the show, as they say.
I get that, but I started this out with saying that I knew people were going to jump straight on that. I shouldn't have to mention that I have zero electricity cost in my home with my solar array.

And fact that you never said what's your end goal with all that.
If you wanna rack all that you probably need a full rack, swtich, PDU and etc. If you go smaller, you can get small server or NAS that can be easily hidden away in your house and only require two cables network and power.

But yeah, can you learn using this stuff? Easy. Most of the hypervisors and OSes will run just fine
I did mention that I have a 47U rack and the Catalyst switch, that I might add more ports to my pfSense box, and that I have a PDU in the 47U rack.

I also mentioned what I'd like to do with all the hardware, but again, I don't know how to do what I'm wanting, or if it's even possible. I've never set up a hypervisor or VM host, or networked everything like this. That was the point of this post. I know it's a massive post, but I think I covered all my end goals in there.
 

RTM

Well-Known Member
Jan 26, 2014
There are many details in the original post; here I have tried to break it down a little to make it more manageable :)

Workloads:
- Plex server
- CAD modeling
- Crypto mining
- Gaming
- School work etc.

Hardware in use ATM
- Supermicro Xeon v3 (running all workloads)
- Dell pfSense box
- Linksys AP
- Linksys unmanaged switch

Hardware available
- 2x Dell PowerEdge 1750s with disk shelves
- HP blade server with 3x blades

What you want to achieve:
- Plex workload on HP blade servers and/or Dell 1750s
- Gaming on HP blade and/or Dell 1750s
- TBH I gave up here, it is quite fuzzy: do you want all workloads on the HP and/or Dell servers?

In any case, I believe you will probably find it difficult to do gaming and CAD inside virtualized machines.
Unless you are, as your example suggested, using them to play old games, you will want to have good support for 3D graphics.
I am not saying it can't be done, just that you need to figure it out.

As for running Plex virtualized: from what I can read (very briefly), clustering Plex across two machines will not make it transcode twice as fast, but rather lets it do two transcodes at the same time.
This means that before you do anything, you need to figure out whether the hardware is fast enough for a single transcode. I am not a Plex expert, but I assume you can test that relatively simply with a basic install on that hardware.

In general, I suggest you start breaking it all down into smaller pieces.
Doing clustered Plex AND virtualized gaming/CAD WHILE learning virtualization is a bit much all at once; start smaller :)

And yes I agree, the hardware is antiquated, but sure you could probably learn something
 

Oldhome7

Member
Feb 9, 2020
So I suck at formatting I guess lol. You're almost spot on. The only change is that I plan on keeping Plex/CAD/Mining on the E5 box and pfSense on the "big" Dell R510 box.

The rest is the learning setup.

The virtualization, school work, and browsing I was hoping to put on the blade cluster. If there were a way to allow a local GPU to be used for gaming on the client side, that would be a bonus. I plan on building out not-so-thin clients for them to connect with, but I came across PXE and was hoping that could be integrated so I could have more centralized storage and not need any locally.

The "little" Dell 1750 boxes were planning on more of a test bench type thing to experiment on.

The Cisco is also for learning and experimenting.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
It's a LOT of information + a lot of not-so-clear questions + it appears you haven't attempted anything yourself yet, so no one can tell you what step to take next... I think if you break it down, install Proxmox and see if it works, report back, set up another system on Proxmox, report back, and ask questions when you're stuck, you'll get much more specific answers that help you.
 

TLN

Active Member
Feb 26, 2016
I did mention that I have a 47U rack and the Catalyst switch, that I might add more ports to my pfSense box, and that I have a PDU in the 47U rack.

I also mentioned what I'd like to do with all the hardware, but again, I don't know how to do what I'm wanting, or if it's even possible. I've never set up a hypervisor or VM host, or networked everything like this. That was the point of this post. I know it's a massive post, but I think I covered all my end goals in there.
Yeah, you mentioned a 100Mb switch. Modern motherboards come with 1 or 2.5Gbps NICs, and some people (myself included) have 10Gbps networks.
Things like that will ruin your experience. Most of us get enterprise equipment because it's cheaper, faster, and/or more reliable than consumer stuff, which is not the case here.

In any case, I believe you will probably find it difficult to do gaming and CAD inside virtualized machines.
Unless you are, as your example suggested, using them to play old games, you will want to have good support for 3D graphics.
Actually, I've been using a virtual desktop with passed-through graphics for quite some time for all the things, including gaming. It has worked flawlessly (NVIDIA passthrough with ESXi). Currently my HTPC is set up this way.
 

Oldhome7

Member
Feb 9, 2020
It's a LOT of information + a lot of not-so-clear questions + it appears you haven't attempted anything yourself yet, so no one can tell you what step to take next... I think if you break it down, install Proxmox and see if it works, report back, set up another system on Proxmox, report back, and ask questions when you're stuck, you'll get much more specific answers that help you.
I don't know what to do next or where to start. I know nothing about hypervisors except names like Proxmox, ESXi, Hyper-V, etc.; I don't know which ones can do what, and there's a lot of verbiage thrown around in the stuff I've been reading that I don't quite understand yet. I think that's why I tried to cram so much information in, in hopes that some of it would make sense of what I'm trying to do.

Yeah, you mentioned a 100Mb switch. Modern motherboards come with 1 or 2.5Gbps NICs, and some people (myself included) have 10Gbps networks.
Things like that will ruin your experience. Most of us get enterprise equipment because it's cheaper, faster, and/or more reliable than consumer stuff, which is not the case here.
Yes, I know it's slow; that's why it isn't planned for production use. That's left to the pfSense box and the Linksys dumb gigabit switch for now. I just know it's a fully manageable switch that I can play around with to learn settings like port aggregation, QoS, VLANs, and such. That way, if/when I do find a good deal and decide to upgrade, I'll know what I'm doing in it. For reference, the switch is a WS-C2948G.

Anything 100 mbit/Fast Ethernet only has one place: e-waste.
Or for learning and teaching. I'm pretty sure the settings and configuration generally don't care whether it's 100Mb or 1000+Mb, except for QoS and rate limiters, obviously.
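As an illustration of that kind of switch practice, here is a minimal sketch using the netmiko Python library with a hypothetical management IP, credentials, and port name; it assumes an IOS-style CLI, and the exact commands would differ if this particular switch runs CatOS.

```python
# Minimal sketch: push a VLAN and an access-port assignment to a managed
# Cisco switch with netmiko. The IP, credentials, and interface name are
# hypothetical, and an IOS-style CLI is assumed.
from netmiko import ConnectHandler

switch = ConnectHandler(
    device_type="cisco_ios",
    host="192.168.1.2",      # hypothetical management IP
    username="admin",
    password="changeme",
)

# Create VLAN 20 and drop one access port into it.
config = [
    "vlan 20",
    "name LAB",
    "interface FastEthernet0/10",
    "switchport mode access",
    "switchport access vlan 20",
]
print(switch.send_config_set(config))
switch.disconnect()
```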
 

RTM

Well-Known Member
Jan 26, 2014
Actually, I've been using a virtual desktop with passed-through graphics for quite some time for all the things, including gaming. It has worked flawlessly (NVIDIA passthrough with ESXi). Currently my HTPC is set up this way.
You did not include the last sentence in that paragraph, making it sound like I said it can't be done; I do not appreciate that. (I am not offended though, so let's leave it there.)

I have not heard of any solutions where a local GPU can be used to accelerate graphics on a remote VM.
As per my understanding, in VDI setups you will have a GPU on the server to do the acceleration.

With a local VM you can pass through a GPU in various ways, but that was not what the OP was getting at (unless he wants to make the blade server a local machine o_O ).
 

Oldhome7

Member
Feb 9, 2020
I fell into the rabbit hole, guys. I fell real bad.

So after more research, I think I'm going to go with Proxmox on the blade setup, with the blades I have in an HA cluster and RPi or similar tiny clients around the house for everybody. I'm kinda stuck with no GPU, since those blades don't have that kind of room, even if I did upgrade to more efficient Gen 9 blades with E5 v3s in them.

One of the 1U Dells will be connected to the SCSI enclosures, only because it has the appropriate ports to connect to them, and it will be used as storage for the VMs and for trying out RAID/ZFS, since I don't have much experience with those.
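A minimal sketch of what that RAID/ZFS comparison could look like, assuming a ZFS-capable OS on that box and hypothetical /dev/sdb through /dev/sde device names; it just wraps zpool commands to stand up and tear down two different layouts.

```python
# Minimal sketch (hypothetical device names) for comparing two RAID-style
# layouts with ZFS on the drive-shelf box: striped mirrors vs. raidz1.
# Run as root on a host with ZFS installed; illustrative, not a recipe.
import subprocess

def zpool(*args):
    """Run a zpool command and echo it first."""
    cmd = ["zpool", *args]
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Layout 1: two mirrored pairs striped together (better IOPS, 50% capacity).
zpool("create", "labmirror",
      "mirror", "/dev/sdb", "/dev/sdc",
      "mirror", "/dev/sdd", "/dev/sde")
zpool("status", "labmirror")
zpool("destroy", "labmirror")

# Layout 2: one raidz1 vdev (single-disk redundancy, more usable capacity).
zpool("create", "labraidz",
      "raidz1", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde")
zpool("status", "labraidz")
```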

The other Dell I may drop into the Proxmox cluster and add my single-slot WX4100 GPU to it, to give at least a little acceleration to the VMs; not full-on gaming, but something.

The Cisco will still be used as stated before, to play with the advanced settings, and because the blade setup has about a million Ethernet ports and I know the Cisco at least supports aggregation.

I've also considered falling even deeper and possibly going for CCNA certification, but that's a whole other topic.

EDIT: I think I'm an idiot too; the 2948G switch is a gigabit switch, at least the front panel says 10/100/1000 anyway. I don't know why I had it stuck in my head that it was a Fast Ethernet switch... guess that makes it a little less obsolete.
 

WANg

Well-Known Member
Jun 10, 2018
New York, NY
I have not heard of any solutions where a local GPU can be used to accelerate graphics on a remote VM.
As per my understanding, in VDI setups you will have a GPU on the server to do the acceleration.
You can/there is, but the acceleration is not what you might think. Citrix XenDesktop has an SSL tunnel that flings HTML5 video decoding (say, playing back video in VLC) from the host hypervisor to the GPU in the Xen viewer, so the task of decoding the screen is done on the thin client running the session. Also, both RemoteFX and HP RGS encode the video stream from the VM to a given thin client and use the local H.264 decoder hardware on the receiving end to render the results. For OpenGL/DX12 it's still a server-side GPU, but in terms of decoding the display stream coming off the VM host, it's very much done by the GPU on the client side.
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
My advice is first to separate out your "production" needs (file server, Plex, VDI) from "lab" experiments (CCNA, utilization of existing hardware). You can fill your needs without feeling obligated to utilize all your hardware, and then you can play around with the remaining gear for experiments that won't affect the kids' homework.

If the X10D box feels sluggish, is it because of Plex, or something the VDIs are doing? Or perhaps it's iowait due to also being the file server? If that's the case, the simplest thing is to split out file-serving duties to a dedicated NAS -- it doesn't have to be beefy, LGA1150 would be fine and super cheap.

If the load is mostly due to Plex, are you transcoding? The cardinal rule of Plex is to direct-play as much as possible. Even 4k remuxes stream just fine across gigabit. Ensure subtitles are not causing transcoding. If you do have remote users who need transcoding, isolate them on a separate library with only 1080p, which is much easier to transcode and avoids HDR tonemapping issues. If you have more than 3-4 such users simultaneously transcoding, pick up a $125 SFF desktop with a recent CPU, like the HP 290 -- the Celeron is no powerhouse, but its QSV can easily do 20x transcodes from 1080p. TMM (uSFF) form factor is also fine, just a little more expensive. This does require Plex Pass. If you exceed 20 simultaneous remote users, add a second QSV box and split up the users. Serverbuilds has detailed guides on this.
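As a rough way to sanity-check that kind of QSV throughput outside of Plex, here is a minimal sketch with a hypothetical sample file name; it assumes an ffmpeg build with Intel Quick Sync support and simply times a 1080p hardware transcode.

```python
# Minimal sketch: time a 1080p Quick Sync (QSV) transcode with ffmpeg to get
# a feel for hardware transcode speed outside of Plex. Assumes ffmpeg was
# built with QSV support; sample1080p.mkv is a hypothetical test file.
import subprocess
import time

cmd = [
    "ffmpeg", "-y",
    "-hwaccel", "qsv", "-c:v", "h264_qsv",   # decode on the iGPU
    "-i", "sample1080p.mkv",
    "-c:v", "h264_qsv", "-b:v", "4M",        # re-encode on the iGPU
    "-c:a", "copy",
    "out.mkv",
]

start = time.time()
subprocess.run(cmd, check=True)
print(f"Transcode took {time.time() - start:.1f} s")
```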

Once you've minimized/offloaded the Plex workload, the VDIs should run just fine on the E5v3s with 128GB of RAM; they'd probably run smoother there than on the ancient Penryn blades. You can still use the blades to play around with Proxmox, k8s, etc., and power the chassis down when not in use. I, too, have ancient switches at home, but I have no illusions about running production workload on them.
 

Oldhome7

Member
Feb 9, 2020
Alright guys, sorry for the delay in replying; we finally got moved into our new home and just now got everything sort of set up.

Sean Ho, the X10D box doesn't do VDIs, but it does pretty much everything else. It's Plex, it's the computer that everybody uses, and it's my main mining rig. Even then, it's really not sluggish. I mostly just want to move the kids and wife off of it so that it can sit and do its thing with the Plex, mining, and a little bit of CAD usage.

I want the VDIs and stuff on the old blades or 1U Dells, mostly as an "evaluation" type thing. Then I figure, if I like the way it all goes, with the Proxmox clustering I can just start taking the old stuff offline as I add in newer, more efficient hardware.