Living the dream, building our own server room from scratch!


legen

Active Member
Mar 6, 2013
213
39
28
Sweden
This is a build log for our project where we are building our own server room. The two of us doing this are me, Legen, and my friend, registered here on STH as Krazos.

We are 25 and 20 years old respectively and are based in central Sweden. We have both been running home servers and the like in our closets for a long time. We met by accident, and now we have finally decided to really do this. Why, you might ask? Well, because it's darn challenging and a lot of fun!

I decided to create this thread to share what we learn, show our progress and results, and get feedback from you guys so we can avoid the worst traps.

The server room
The room we have access to is an old storage room/bomb shelter. We have air cooling and ventilation but will need a better cooling solution in the future. We have a dedicated 1 Gbit/s internet connection with the possibility of up to 10 Gbit/s (but that's nasty expensive).

Goals
We want to expand our hobby project and try to build something that feels like the real deal. Our goal is to build a solid IT infrastructure on which we can deploy almost whatever we want. Our idea is to begin by hosting some small game servers, web pages, etc.; nothing major. We see this as a great opportunity to learn more and educate ourselves. We try to keep costs low by using as much open-source and free software as possible.

We will carry out this project in small iterative steps. In each step we will focus on one key part of the IT infrastructure and improve it; once it has been improved we will move on to the next. By doing this we will slowly, month by month, see the server room becoming more complete.

Step 1
We started out with two Dell C6100 machines (we each owned one when we started this), plus a couple of older Dells: an 1850, a 1950 and a 2950 with 6 TB of older hard drives. For networking we use two Netgear GS724T gigabit switches. We installed a UPS and better ventilation.
We began by running trial ESXi hypervisors on each C6100 node. In each node we had one SSD and one HDD, based on dba's great tip.
We use the 2950 for backups. The 1850s are used for pfSense firewalls; we plan to set up CARP between them.

Step 2
  • Migrate to either Proxmox or Citrix XenServer as the hypervisor; VMware is just too expensive (see the API sketch below).
  • Build a SAN for shared storage used by all C6100 machines. Probably one SSD pool to begin with.
  • Make a 10 Gbit/s connection between the SAN and the switch. Use LACP from each C6100 machine to the switch.
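If we do end up on Proxmox, the nice part is that everything is scriptable over its REST API on port 8006. Below is a minimal sketch (not our actual tooling) of listing nodes and VMs; the hostname and credentials are placeholders, and the CSRF token is only needed for write calls.

```python
import requests

# Placeholder host and credentials for a hypothetical Proxmox VE cluster.
HOST = "https://pve01.example.lan:8006"
USER = "root@pam"
PASSWORD = "secret"

# Authenticate: Proxmox hands back a ticket (cookie) and a CSRF token.
resp = requests.post(
    f"{HOST}/api2/json/access/ticket",
    data={"username": USER, "password": PASSWORD},
    verify=False,  # lab box with a self-signed certificate
)
resp.raise_for_status()
auth = resp.json()["data"]
cookies = {"PVEAuthCookie": auth["ticket"]}
# auth["CSRFPreventionToken"] would be sent as a header for POST/PUT/DELETE calls.

# List the cluster nodes and the QEMU VMs running on each one.
nodes = requests.get(f"{HOST}/api2/json/nodes", cookies=cookies, verify=False)
for node in nodes.json()["data"]:
    name = node["node"]
    vms = requests.get(
        f"{HOST}/api2/json/nodes/{name}/qemu", cookies=cookies, verify=False
    )
    for vm in vms.json()["data"]:
        print(f'{name}: VM {vm["vmid"]} ({vm.get("name", "?")}) is {vm["status"]}')
```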

Step 3
  • Expand the SAN with a secondary, slower storage pool based on WD Red drives. Possibly add one more SSD pool.
  • Buy 2 more C6100 machines.

Step 4
  • Build a high-speed network, > 10 Gbit/s.
  • Build another SAN mirroring the existing one, for HA.

Check the end of this thread for updates; I will no longer keep the main post updated with the latest information :)
 

legen

Active Member
Mar 6, 2013
213
39
28
Sweden
Pictures
Of course we have some pictures, oldest first.


The racks were to be thrown away but we got them instead.



 

Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
Great story and great project. I was just typing "we need pictures" when I saw you posted again. Fantastic job guys!
 

legen

Active Member
Mar 6, 2013
213
39
28
Sweden
Great story and great project. I was just typing "we need pictures" when I saw you posted again. Fantastic job guys!
Thanks :). I'm unsure how best to maintain this thread with the newest information. Maybe I should have done some *RESERVED* posts right away :rolleyes:
 

Lost-Benji

Member
Jan 21, 2013
424
23
18
The arse end of the planet
Well, here is an idea from the question that came to mind while I was reading: what do you use it all for, and why?
Sounds silly, but all I can see so far is some newer gear and plenty of older gear that is not friendly to the power bill. I would like to see some justification.



A 2950 II for FreeNAS... just a little past overkill. (Loved the comment on the working BBU.)
The corrugated aluminum looks good, but the noise in there wouldn't be a real fun time.
 

Mike

Member
May 29, 2012
482
16
18
EU
You are doing hosting and have the storage thought out pretty well, I guess. How about the software? How will you keep your configurations consistent, manage the servers, and monitor them and the network? And how are you on security? Let's get this discussion started.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,514
5,807
113
Let me know if you need help. Also, if you are interested, we could probably turn this into a main site post.
 

legen

Active Member
Mar 6, 2013
213
39
28
Sweden
Home from work and going to try to answer your questions :)
Well, here is an idea from the question that came to mind while I was reading: what do you use it all for, and why?
Sounds silly, but all I can see so far is some newer gear and plenty of older gear that is not friendly to the power bill. I would like to see some justification.



A 2950 II for FreeNAS... just a little past overkill. (Loved the comment on the working BBU.)
The corrugated aluminum looks good, but the noise in there wouldn't be a real fun time.
I added our goals and usage to the thread start, but among other things this is just a really exciting project for us :)
As for the power question: we have a great deal with the owner of the facility, who is related to us, so we won't have to pay for electricity for the foreseeable future. This gives us a great opportunity to work using our agile/iterative approach.
The noise is a little high, yes. We have tried to shield off part of the room where we will have some workstations, to be used when we are working on-site.

You are doing hosting and have the storage thought out pretty well, I guess. How about the software? How will you keep your configurations consistent, manage the servers, and monitor them and the network? And how are you on security? Let's get this discussion started.
Ah, software. I guess I didn't go into that. Our current plan:
  • Zabbix - monitoring of VMs etc.; currently up and running (see the API sketch below)
  • Snort - using switch mirror ports to monitor all internal network traffic; to be done
  • OpenLDAP - for authentication (Linux, Windows, OpenVPN etc.); currently up and running
  • pfSense - used for remote access via OpenVPN; currently up and running
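Since Zabbix is the part that is already running, here is a minimal sketch of the kind of availability check we can script against its JSON-RPC API. The URL and credentials are placeholders, and note that newer Zabbix releases renamed the login parameter from "user" to "username".

```python
import json
import urllib.request

# Placeholder URL for a hypothetical Zabbix frontend.
ZABBIX_URL = "http://zabbix.example.lan/zabbix/api_jsonrpc.php"

def zabbix_call(method, params, auth=None):
    """Do one JSON-RPC 2.0 call against the Zabbix API and return its result."""
    payload = json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "auth": auth,
        "id": 1,
    }).encode("utf-8")
    req = urllib.request.Request(
        ZABBIX_URL,
        data=payload,
        headers={"Content-Type": "application/json-rpc"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# Log in ("user" on older releases, "username" on newer ones).
token = zabbix_call("user.login", {"user": "Admin", "password": "zabbix"})

# List monitored hosts and whether the Zabbix agent on each is reachable.
hosts = zabbix_call(
    "host.get",
    {"output": ["host", "available"], "selectInterfaces": ["ip"]},
    auth=token,
)
for h in hosts:
    state = "available" if h["available"] == "1" else "unreachable/unknown"
    print(h["host"], h["interfaces"][0]["ip"], state)
```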
We aim to secure each server with iptables to limit the attack surface, possibly even SELinux for web servers and publicly exposed VMs. We have VMs in groups separated by VLANs, with a pfSense box acting as the bridge between them. We will lock down one "core" VLAN where access will be very restricted, while publicly accessible VMs run in another VLAN. Using the pfSense box we can control what traffic is allowed to flow from the public zone to the core zone.

We have not yet looked into software for keeping configurations consistent. Might ask for tips when we get there!

When we get closer to step 2 and start migrating over to Xen/Proxmox, we will probably do a whole iteration over our current software configurations.


Let me know if you need help. Also, if you are interested, we could probably turn this into a main site post.
I suspect we will ask many questions over the coming months. A main site post sounds fun, but I might want to wait until we are at least at step 2. By then I hope we will have a little more to show people :)


SAN parts
So we are currently looking into the SAN and have decided on the following setup, based on this thread.
  • Motherboard - X8DTH-6F
  • CPU - 2x L5639
  • RAM - 32 or 64 GB ECC
  • DISK - 6 or 8 SSD Samsung Pro 512 GB
  • Chassis - ?
As stated in that thread, I hope to be able to show some interesting benchmarks using different settings for dedup and compression.
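As a rough idea of what I mean, here is a minimal sketch of a sequential-write comparison between datasets created with different properties. The mount points are made up, and for the published numbers we will probably use a proper tool like fio or dd instead.

```python
import os
import time

# Hypothetical mount points for ZFS datasets created beforehand with
# different compression/dedup settings (names are made up).
DATASETS = {
    "compression=lz4, dedup=off": "/tank/bench-lz4",
    "compression=off, dedup=on": "/tank/bench-dedup",
}
SIZE_MB = 1024  # data written per run
# 1 MiB of repetitive text, so compression and dedup have something to chew on.
CHUNK = (b"servethehome " * 81920)[:1024 * 1024]

def sequential_write(path, size_mb):
    """Write size_mb of data, fsync it, and return the throughput in MB/s."""
    target = os.path.join(path, "bench.tmp")
    start = time.time()
    with open(target, "wb") as f:
        for _ in range(size_mb):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.time() - start
    os.remove(target)
    return size_mb / elapsed

for label, path in DATASETS.items():
    print(f"{label:28s} {sequential_write(path, SIZE_MB):8.1f} MB/s")
```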

Our current concern is the chassis. We are discussing whether we should start with a simpler/cheaper Norco 4220 or go with a more expensive chassis right away.
 

legen

Active Member
Mar 6, 2013
213
39
28
Sweden
Cool Bananas, can you get me a deal on my power bill?
If you move to Sweden I can put in a good word for you ;)!

We have a complete proposal for the SAN build in step 2 now.
  • Motherboard - X8DTH-6F
  • CPU - 2x L5639
  • RAM - 32 GB ECC
  • DISK - 8 X Samsung PRO 512 GB SSD
  • Chassis - Supermicro SC213A-R740LPB or CSE-219A-R920LPB

We will begin by using the built-in LSI 2008 controller for all the SSD drives. I need to find some CPU coolers but have not looked into that yet; I know Supermicro has some.

We chose this to be ready for step 3, in which we plan to add the following.
For another SSD Pool:
- Add another LSI 2008-based RAID card (e.g. LSI 9211-8i or IBM M1015).
- Add 8 more SSD drives to create a secondary SSD pool.

For an HDD pool:
- Add an LSI 2008-based RAID card with external ports.
- Buy a JBOD chassis for up to 16 or 36 3.5'' drives.

Of course I need to look deeper into the parts and details when we get closer to step 3 :)

Questions
1. Does one simply buy 2x SFF-8087 to SFF-8087 cables to connect the motherboard ports to the backplane?

2. Will the backplane limit my SSD throughput?

The backplane is 6 Gbit/s with 4x SFF-8087 connectors. The motherboard offers PCIe 2.0 x8, i.e. 32 Gbit/s. The controller offers 8x 6 Gbit/s ports. Each SSD does ~4.1 Gbit/s for reads and writes.

Will the backplane limit my SSD throughput here? If I understand this correctly, each backplane port connects to 4 bays; that's a total of roughly 4 x 4.1 ≈ 16.4 Gbit/s from my SSD drives. Does one backplane SFF-8087 port give me 24 Gbit/s or 6 Gbit/s? (I put a quick calculation sketch after these questions.)

3. Is the difference between the SC213A and the SC219A only the peripheral bay and the PSU?
From what I can see they are otherwise identical.

4. We plan to use all 16 2.5'' bays for SSDs. Any advice on where to mount a drive for the operating system?
I guess one could swap the 5.25'' peripheral bay for something we can mount OS drives in.
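And a quick back-of-the-envelope check of the numbers in question 2 (sequential best-case figures only): if one SFF-8087 connector really is four 6 Gbit/s lanes, then four SSDs fit comfortably behind one port, while all eight together come close to the PCIe 2.0 x8 link of the onboard controller.

```python
# Rough sequential best-case figures from the question above.
lane_gbit = 6.0               # one SAS2/SATA III lane
lanes_per_sff8087 = 4         # one SFF-8087 connector bundles four lanes
ssd_gbit = 4.1                # ~520 MB/s per SSD
ssds_per_port = 4             # each backplane SFF-8087 feeds four bays

port_capacity = lanes_per_sff8087 * lane_gbit  # 24 Gbit/s per connector
four_ssds = ssds_per_port * ssd_gbit           # ~16.4 Gbit/s
pcie2_x8 = 8 * 4.0                             # ~32 Gbit/s usable (PCIe 2.0, 8b/10b)
eight_ssds = 8 * ssd_gbit                      # ~32.8 Gbit/s

print(f"One SFF-8087 connector:  {port_capacity:5.1f} Gbit/s")
print(f"Four SSDs behind it:     {four_ssds:5.1f} Gbit/s")
print(f"PCIe 2.0 x8 uplink:      {pcie2_x8:5.1f} Gbit/s")
print(f"All eight SSDs combined: {eight_ssds:5.1f} Gbit/s")
```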
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,514
5,807
113
It should give you 24 Gbps, assuming it is a 6 Gbps-capable expander and you are using all 6 Gbps devices. One SFF-8087 is four ports/links at 6.0 Gbps each, so 4 x 6 = 24 Gbps.
 

legen

Active Member
Mar 6, 2013
213
39
28
Sweden
It should give you 24 Gbps, assuming it is a 6 Gbps-capable expander and you are using all 6 Gbps devices. One SFF-8087 is four ports/links at 6.0 Gbps each, so 4 x 6 = 24 Gbps.
I was hoping so, thanks for the confirmation!
 

Hugovsky

New Member
Jan 27, 2014
1
0
1
Just registered to follow this thread. Great work and great site/forum. Looking forward to seeing where this goes.
 

Lost-Benji

Member
Jan 21, 2013
424
23
18
The arse end of the planet
Questions
1. Does one simply buy 2x SFF-8087 to SFF-8087 cables to connect the motherboard ports to the backplane?
2. Will the backplane limit my SSD throughput?
3. Is the difference between the SC213A and the SC219A only the peripheral bay and the PSU?
4. We plan to use all 16 2.5'' bays for SSDs. Any advice on where to mount a drive for the operating system?
  1. Reverse breakout cables (SATA to SAS) - don't get caught out with the normal cables, which are forward breakout (SAS to SATA): Free Ship Norco C SFF8087 4s Discrete TO SFF 8087 Reverse Breakout Cable | eBay
  2. No, you have a SAS2/SATA III connection direct to each drive. The only backplanes that are a limit are those that can't handle SAS2/SATA III or are "expander" backplanes. The controller will be the choke point if it can only hold 8 drives (2x SAS ports); then expansion is needed.
  3. I don't own and haven't used these chassis, so I can't say, sorry.
  4. Velcro or tape inside.
 

legen

Active Member
Mar 6, 2013
213
39
28
Sweden
Just registered to follow this thread. Great work and great site/forum. Looking forward to seeing where this goes.
Thanks!

  1. Reverse breakout cables (SATA to SAS) - don't get caught out with the normal cables, which are forward breakout (SAS to SATA): Free Ship Norco C SFF8087 4s Discrete TO SFF 8087 Reverse Breakout Cable | eBay
  2. No, you have a SAS2/SATA III connection direct to each drive. The only backplanes that are a limit are those that can't handle SAS2/SATA III or are "expander" backplanes. The controller will be the choke point if it can only hold 8 drives (2x SAS ports); then expansion is needed.
  3. I don't own and haven't used these chassis, so I can't say, sorry.
  4. Velcro or tape inside.
Thanks for the answers; I would definitely have gotten the wrong cable. Adding some more answers to my own questions.

3. There are two major differences between the CSE-213A-R740LPB and the CSE-219A-R920LPB.
First, the PSU specification differs (740 W versus 920 W). Second, the size differs: the 219A chassis is "deeper" and made for motherboards with 24 DIMM slots.

Thus we will go with the 213A model since we don't need the bigger size or the additional wattage.

4. One can apparently use a 5.25'' to 2x 2.5'' adapter like the CSE-M14T.

But we won't go down that road. Instead we will be running OmniOS from two fast USB 3.0 drives (mirrored rpool); see napp-it to go.
 

Mike

Member
May 29, 2012
482
16
18
EU
Are there PCI backing plates on NICs and various controllers with 2.5" mounting, by any chance? Would be neat.
 

Lost-Benji

Member
Jan 21, 2013
424
23
18
The arse end of the planet
Hmmm, not sure what you mean. Could you clarify?
Yeah, sure. Abstract has an idea with an actual removable bay holder. What I was thinking was a little more ghetto: nothing more than a slotted or mesh-style PCI slot blanking plate, with the SSD screwed directly to it and the SATA/power connectors facing up.

Are there PCI backing plates on NICs and various controllers with 2.5" mounting, by any chance? Would be neat.
Abstract I think has answered your question.
Here is an example of a PCI holder for an SSD drive. No, it would be hard to fit that on top of an existing PCI controller card.
Nicely spotted.


https://www.google.com.au/search?q=...esiAfwnIDoAQ&ved=0CAcQ_AUoAQ&biw=2560&bih=972