Suggestions for completing server build for home lab


thecoffeeguy

Member
Mar 10, 2016
Hey folks.
Hopefully this is a pretty easy question.

The short version is, my company has said they would pay for parts to finish building out my server lab. My initial thought was "awesome," and my next thought was: what else could I add? I immediately thought about asking here.

First, I have this mobo:
SUPERMICRO MBD-X10SRL-F

this CPU:
Intel Xeon E5-2603 v3 Haswell 1.6 GHz LGA 2011-3 85W


I currently have 64GB of memory in it and just dropped in a T-350 quad NIC as well (BTW, I'm running ESXi on it).


My thought was to definitely upgrade the memory and get to 128GB. One question: can I go to 96GB and then to 128, to break out the costs?

Lastly, what else could I add?

Other items I am thinking about... storage? More, faster storage?

Hoping to build a NAS server maybe this year as well.


Appreciate the help and recommendations!

Cheers,

TCG
 

pyro_

Active Member
Oct 4, 2013
First question would be: what are you planning on doing with it? That answer would drive what you need for it.


 

thecoffeeguy

Member
Mar 10, 2016
Good point.
Most of what I will be doing is lab/work stuff (VM work, devops, some scripting). I would like to add some networking testing in here too, hence why I added the quad NIC.

I would also like to add some more local storage (planning to have quite a bit; I would prefer SSD drives for speed).
 

whitey

Moderator
Jun 30, 2014
10G networking (super cheap these days), an SSD pool (Intel S3700/HUSSL/HUSMM drives), NVMe (Intel P3600/P3700)... the sky's the limit if the company is paying for it :-D

I have a 3-node cluster using X9SRL-F mobos with 128GB of memory each and could not be happier with them.

Pssst, if you put that gear in a proper chassis you could have a sweet AIO (all-in-one) config, meaning a hypervisor of your choice plus a storage platform, with that storage shared back out to the hypervisor :-D
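To make the "shared back out to the hypervisor" part concrete, here is a rough pyvmomi sketch of mounting a storage VM's NFS export back into ESXi as a datastore. The host names, export path, and credentials are placeholders, not anything from this build, and it assumes a lab host with a self-signed cert.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab host with a self-signed certificate, so skip verification.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.lab.local", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Grab the (single) host managed by this connection.
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]

    # Mount the storage VM's NFS export as a datastore named "ssd-pool".
    spec = vim.host.NasVolume.Specification(
        remoteHost="freenas.lab.local",   # the storage VM / NAS
        remotePath="/mnt/tank/vmstore",   # its NFS export
        localPath="ssd-pool",             # datastore name as seen by ESXi
        accessMode="readWrite")
    host.configManager.datastoreSystem.CreateNasDatastore(spec)
finally:
    Disconnect(si)

The same thing can also be done directly on the host with esxcli storage nfs add if you would rather not script it.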
 

thecoffeeguy

Member
Mar 10, 2016
Those are some fantastic ideas.
My company is totally cool with it as long as I break up the costs over a few pay periods. I can build as I go.

I was thinking of getting to 128GB of memory first, since that seems pretty reasonable (I have 4 DIMM slots left).
Any issue with going from 64 to 96, then eventually 128?

The SSD stuff is the next piece for me (I should have plenty of slots for cards as needed for more SATA connectors). Eventually, I want to set up a FreeNAS box as well.

For my case, I have a 3U Norco case with lots of room inside. :)
 

thecoffeeguy

Member
Mar 10, 2016
Selfish bump here (expenses are due Wednesday).

Initial plan: upgrade memory from 64 to 96GB for now, then next month grab the remaining 32, which gets me to 128.

One other piece I realize I need: a switch, ideally 16+ ports (PoE capability would be nice, as I have some WiFi access points running on PoE injectors right now).

Maybe also pick up a storage card for this box to add more SATA connectors (HBA)... thoughts there?

Much obliged, everyone.
 

thecoffeeguy

Member
Mar 10, 2016
Is a new CPU totally out of the question?

Poor thing doesn't even have hyper threading, you're going to make it cry with all that power and nowhere to really use it.
New CPU is totally in play.

Per my boss, as long as I can break out the expenses into smaller numbers instead of one huge expense, I can do whatever I need/want.

I would like to compile a list of stuff to buy and then just start buying it.

Maybe in this order:

Memory
CPU
HBA for storage
Networking


What else...

Thx!
 

fractal

Active Member
Jun 7, 2016
I have been doing a bit of that myself lately. I was helping the ops team automate the deployment of cloud servers so spent a lot of time spinning up VMs, deploying services and tearing them down. I did this to the "real cloud" for about a half a day before I thought I was going to die of old age before we were done. So I modified the scripts to go to a local VM and did my testing here.

The first thing I discovered is I needed more memory.

The second thing I discovered is I needed even more memory.

The third thing I discovered is I did not have enough memory.

Then I got around to replacing the spinning rust with SSDs.

I eventually got around to replacing the high core count, high power processor with a lower power processor. The cooling system for it was making too much noise and I didn't need it to test devops scripts.

FWIW, I have two boxes dedicated to this testing. One is running NAS4Free and the other a single LGA2011 socket with 128G running ESXi. They are connected with a single 10G DAC for the data plane. They share the same 1G network for the control plane as the rest of my lab network. I would never, ever, ever do an AIO for testing cloud deployment automation. Everything outside my lab is separate compute and storage with a network in between. I would feel foolish to eliminate a critical part of the equation in my simulation.
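For anyone picturing that split, here is a rough Python/pyvmomi sketch of the pattern: the same deployment script picks its target from an environment variable, and the "lab" target just powers up a scratch VM on the local ESXi host instead of touching the real cloud. The host name, VM name, and credentials are made up for illustration.

import os
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_vm(content, name):
    # Walk the inventory and return the named VM, or None if it does not exist.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    return next((vm for vm in view.view if vm.name == name), None)

if os.environ.get("DEPLOY_TARGET", "lab") == "lab":
    ctx = ssl._create_unverified_context()   # self-signed cert on the lab host
    si = SmartConnect(host="esxi.lab.local", user="root",
                      pwd="password", sslContext=ctx)
    try:
        vm = find_vm(si.RetrieveContent(), "devops-scratch-01")
        if vm and vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
            vm.PowerOnVM_Task()   # spin it up, run the deploy against it, tear down later
    finally:
        Disconnect(si)
else:
    pass  # here is where the script would push to the real cloud instead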
 

Evan

Well-Known Member
Jan 6, 2016
I was going to say: although it's a special interest of mine to make my home lab as quiet, small, and low-power as possible (compared to my shared lab at work, which is multiple racks of equipment in a DC), you may not need to go overkill on this. A full OpenStack setup can be virtualised in 32GB of RAM, but as @fractal has mentioned: more RAM, more RAM. If you are thinking about a full Cloud Foundry install, then get as much memory as you can afford.

For reference, my pure test lab is a nice compact Xeon D-1540 (8-core) with 128GB of RAM and over 4TB of SATA SSD. (Reasonably cheap, quiet enough for sure, low power consumption, small.)

I am about to replace my production home NAS and servers, though, with a couple of new C3000 or Xeon-D boxes. Again the target is small, low power, reliable, and hopefully not too expensive systems.
 

thecoffeeguy

Member
Mar 10, 2016
Very cool @fractal, and very similar to the stuff I will be working on. Great info to have.

That said, it kind of falls in line with what I was thinking: get to 128GB ASAP. Any problem going from 64 to 96, then to 128? Or do I need to fill in the remaining DIMM slots to even out the memory, so to speak?

I do have another box with a Supermicro board. I would like to populate that with some SSDs for shared storage/testing capabilities. Maybe add at least one more SSD to the ESXi host itself, just to have.

Any suggestions on HBAs to look at to add more SATA connectors?

Much appreciated!
 

Marsh

Moderator
May 12, 2013
@thecoffeeguy

Start working on deployment now, then find out how much RAM you need.
An extra 64GB of server ECC RDIMMs is relatively cheap.

The best bang (IOPS) per $$ is Fusion-io (research the forum for the $240 1.2TB deals; I paid $200 each).
Check out the Fusion-io benchmarks in this forum as well.
My opinion is that there are no better deals out there. You will get insane IOPS for very little $$.
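For reference, the IOPS figures in those threads typically come from fio. Below is a rough sketch of a 4k random-read run driven from Python; the device path is a placeholder, so point it at your own card and run it as root.

import json
import subprocess

# 4k random reads, 60 seconds, queue depth 32 x 4 jobs; read-only, so non-destructive.
cmd = [
    "fio", "--name=randread", "--filename=/dev/fioa",  # placeholder Fusion-io device node
    "--ioengine=libaio", "--direct=1",
    "--rw=randread", "--bs=4k", "--iodepth=32", "--numjobs=4",
    "--runtime=60", "--time_based", "--group_reporting",
    "--output-format=json",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
iops = json.loads(result.stdout)["jobs"][0]["read"]["iops"]
print(f"4k random read: {iops:,.0f} IOPS")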
 

thecoffeeguy

Member
Mar 10, 2016
Awesome, I will do that.
Thank you so much!