Let me preface this by saying that I know there's more efficient hardware out there; I just happened to acquire most of this for free over the years. This is what I have, and I figure it can be a learning experience to see if I want to continue, and also a way to teach my littles as they're getting interested in computer science. So, a beginner's home lab of sorts. Now onto the show, as they say.
I'll start with the main objectives. First and foremost, I'm looking to move everybody off of our main Plex/media server box and onto their own systems. Second, I want to keep most of the processing in the rack, except maybe GPU work for gaming, since I don't think that will translate very well. And third, I want to learn, and in turn teach them, the more advanced stuff; nothing in particular, just in general. As an example of that third point: I have an old Cisco 48-port 100 Mb switch. I know it's "slow," but I've tinkered with it enough to know I can aggregate ports, set up VLANs, QoS, etc., some of which is lackluster or just plain missing on commodity hardware.
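For anyone curious what that tinkering looks like, the VLAN and port-aggregation features on most Catalyst switches are configured roughly like this (a sketch only; exact syntax varies by model and IOS version, and the VLAN number, name, and port numbers here are placeholders, not my actual setup):

```
! Hypothetical example: create a VLAN and put an access port in it
vlan 20
 name kids-lan
!
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 20
!
! Bundle two ports into an EtherChannel (LACP) for extra bandwidth
interface range FastEthernet0/47 - 48
 channel-group 1 mode active
```

Even on a 100 Mb switch, aggregating a couple of ports like this is a nice hands-on way to show how link aggregation works before spending money on gigabit managed gear.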
Now onto the current setup and available hardware.
The main box is a Supermicro SC836 with an X10 dual-socket board, two E5-2660 v3s, and 128 GB of RAM, running W10 Pro for Workstations. It handles our Plex server, CAD modeling, a little crypto mining, the occasional gaming session, and about 5 different user profiles that mostly just do schoolwork and browse the web.
The second box is a Dell R510 that is our pfSense box, running on bare metal (I think that's the correct term). So far I like it there, but I've thought about adding a Silicom 6-port 1 Gb card or two and having it do more routing functions.
The last of the currently in-use hardware is a Linksys WRT3200ACM set to AP mode for my wireless clients, plus a Linksys SE2800 8-port gigabit dumb switch.
Now onto the other available hardware that's not in use currently.
We'll start with the Cisco Catalyst 48-port 100 Mb switch. I can't remember the model number right now, but like I said above, I'm envisioning using it as a learning platform for more advanced networking.
Next, I've got a pair of PowerEdge 1750 1U servers that the previous owner was running some file-server OS on, directly connected by a wide, thin cable (SCSI, presumably) to the next pieces.
I've got two 10-bay SCSI enclosures and 24 36 GB 10k drives. I envision these as a test bed, more or less, to demonstrate the differences between RAID configurations.
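For the RAID demo idea, the core lesson is the capacity/redundancy trade-off, which you can show on paper before touching a controller. A quick illustrative helper I put together (my own sketch, not tied to any particular RAID controller; the drive count and size match one loaded enclosure of those 36 GB drives):

```python
# Rough usable-capacity / fault-tolerance comparison for common RAID levels.
# n and drive_gb are whatever you load into the enclosure, e.g. 10 x 36 GB.

def raid_summary(level, n, drive_gb):
    """Return (usable_gb, guaranteed_drive_failures_survived) for a RAID level."""
    if level == 0:                     # striping: all capacity, no redundancy
        return n * drive_gb, 0
    if level == 1:                     # mirrored pairs: half the capacity
        return (n // 2) * drive_gb, 1
    if level == 5:                     # single parity: lose one drive's worth
        return (n - 1) * drive_gb, 1
    if level == 6:                     # double parity: lose two drives' worth
        return (n - 2) * drive_gb, 2
    if level == 10:                    # striped mirrors: half the capacity
        return (n // 2) * drive_gb, 1  # 1 guaranteed; more if in different mirrors
    raise ValueError(f"unhandled RAID level: {level}")

for level in (0, 1, 5, 6, 10):
    usable, failures = raid_summary(level, n=10, drive_gb=36)
    print(f"RAID {level:>2}: {usable:4d} GB usable, survives {failures} failure(s)")
```

Running it for 10 of the 36 GB drives makes the trade-off obvious at a glance: RAID 0 gives all 360 GB with zero safety, RAID 6 gives 288 GB and survives any two failures, and so on.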
I think the final piece is an HP c3000 blade enclosure with 3x BL460c blades, each with dual quad-core E5440 Xeons and 16 GB of RAM. I'm hoping to cluster them together somehow; I saw a post somewhere about Plex transcoding on a two-machine cluster, and I'd like to do something that leverages all of them together.
Now onto the part I'm extra lost about, I'd like to use the blade cluster preferably, if not that than the 1750s, to serve up whatever connection is needed for everybody's individual machines, specs on those are up in the air. The research I've done has thrown terms like RDP, PCoIP, VMs, hypervisors, PXE, etc. I have very limited experience with RDP and I've used VMs to play old W95 games but that's about it. I'd also like to be able to have the individual machines be able to use local resources possibly, as I stated in the beginning, so they can run games or something while still having their "environment" on a server for, what I would assume to be, easier management and/or (re)deployment if needed.
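Of those terms, PXE turned out to be less scary than it sounds: a client's NIC asks DHCP for a boot file and pulls it over TFTP. A minimal dnsmasq config sketch of that (the interface name, subnet, and paths are placeholder assumptions, and in my case pfSense would likely stay the DHCP server with dnsmasq only doing proxy/TFTP duty):

```
# Hypothetical dnsmasq snippet: hand out PXE boot info on the LAN.
interface=eth0
dhcp-range=192.168.1.100,192.168.1.200,12h
enable-tftp
tftp-root=/srv/tftp          # where pxelinux.0 and boot images live
dhcp-boot=pxelinux.0         # legacy BIOS clients fetch this over TFTP
```

That's the piece that would let the kids' machines boot an image served from the rack, which seems like the starting point for the "environment lives on a server" idea.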
This will all be housed in a 47u rack away from everything, if that makes any difference.
Sorry if that's all a bit wordy, possibly incorrect, and all over the place; like I said, this is a whole new world and I don't know diddly about it.
Thanks for any pointers, help, anything really.