[HELP] colocation setup help needed


uberguru

Member
Jun 7, 2013
319
18
18
Hi everyone,

I will be starting my new colocation experience in a few weeks. I have never colocated a server before, so I'm kind of a noobie when it comes to setting up the networking and all, and I just need you guys' help and guidance.


I will be getting 3U of colocation space, and I will be given 2 power outlets, 10TB of transfer on a 1Gbps uplink, and 3 IPv4 addresses.

What I want to do is get the Dell PowerEdge C6100, a 2U chassis with 4 server nodes, and then a 1U switch, making 3U of colocation space in total.

Now, as I said, this is my first time and I do not have much networking experience beyond my house: a basic router with a wireless configuration.


I need help with achieving the following.


I want to power on all 4 nodes of the 2U server and connect each node with 2 Ethernet ports: one for the data connection and the second for the IPMI remote management interface, so 4 x 2 = 8 Ethernet ports on the switch in total. I also want the server nodes to be able to communicate with one another.

What type of switch do I need? I get a 1Gbps uplink, and I would like a switch with all gigabit (10/100/1000Mbps) ports, which seems like a no-brainer whether the server nodes ever hit 1Gbps or not.
Also, will this setup be OK, given that I get 2 power outlets and would plug the server into one and the switch into the other?

Also, what tutorial or YouTube video should I look at to learn how to set this up and be able to run my web/database/storage/backup servers from it?


Thanks for the help in advance!
 

Mike

Member
May 29, 2012
482
16
18
EU
Depending on where you're going, a simple L2 switch may not be allowed for this setup. Please check.
As for the setup: if you're pushing 10TB of data for a website and its backends, no YouTube clip is going to suffice. More info is needed.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Hi there - wow you have a lot to consider!

I haven't had the time to write up what STH did, but one major aspect is having a firewall-type device in front of everything. When we moved the forums over to the colocation, we noticed within a few hours that a botnet was attempting around 50 registrations per second against the site. We rate limited it by IP, then watched the attack, after an hour or so, fan out to multiple nodes and IP addresses and grow rapidly. We ended up using two pfSense nodes, but I had also bought a Fortinet 60C, which would have been fine and has low power consumption.
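To give a sense of what the rate limiting actually does: it is just counting recent hits per source address. This is not our code, only a toy sketch of the idea:

Code:
# Toy per-IP rate limiter: allow at most LIMIT requests per source IP per WINDOW seconds.
# In practice the firewall/proxy (pfSense, etc.) does this for you; this only shows the concept.
import time
from collections import defaultdict, deque

LIMIT = 5        # e.g. at most 5 registration attempts...
WINDOW = 60.0    # ...per 60 seconds, per IP

recent = defaultdict(deque)  # ip -> timestamps of its recent requests

def allow(ip, now=None):
    now = time.time() if now is None else now
    q = recent[ip]
    while q and now - q[0] > WINDOW:  # drop timestamps that fell out of the window
        q.popleft()
    if len(q) >= LIMIT:
        return False                  # over the limit: block/drop this request
    q.append(now)
    return True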

http://www.servethehome.com/colocation-architecture-servethehome-2013/

That will have a major impact on your networking; otherwise you are basically exposing the nodes to the outside. You will likely want one internal and one external network. Also, in terms of switch size, you will likely want a minimum of a 16-port switch with 4 nodes, because across the chassis you will have 8x data NICs and 4x IPMI NICs. You'll want the IPMI NICs on their own network, and you'll want that network behind a VPN.

BTW: How much power are you getting with the colo?

Just as a note: I don't even admin the STH infrastructure. I installed it and do minor patches/web application stuff, but I let an experienced admin, Steven @ Rack911, do the setups. He mostly does emergency support, though, so server setup isn't going to be his highest priority.
 

uberguru

Member
Jun 7, 2013
319
18
18
Mike said:
Depending on where you're going, a simple L2 switch may not be allowed for this setup. Please check.
As for the setup: if you're pushing 10TB of data for a website and its backends, no YouTube clip is going to suffice. More info is needed.
Well, what info do you need? I can provide it.
 

uberguru

Member
Jun 7, 2013
319
18
18
Patrick said:
...one major aspect is having a firewall-type device in front of everything. ... You will likely want one internal and one external network. ... BTW: How much power are you getting with the colo?
I am getting 2 amps (460 watts). Also, you mentioned a firewall and a VPN, but looking at your setup I do not see any hardware firewall. I also thought the colocation facility's network was already behind some sort of firewall, or is that absent with colocation?

One other question, if you will excuse me; remember, as I said, I do not know much about server networking.
You mentioned internal and external networks. Can you please explain that part a little more: why do I need an internal and an external network, what is the difference, and what do I get from it?

Thanks
 

nitrobass24

Moderator
Dec 26, 2010
1,087
131
63
TX
Do you have access to the back of the rack? Could you mount something on both sides?
You also mentioned you have two outlets. Do you mean that you have two power feeds, A & B? Or do you really only have two NEMA 5-15P outlets to plug a switch and server into?

You may need to do something similar to Patrick. Run a hypervisor on one or all of your nodes, run pfSense or Astaro in a VM, and segment your network with VLANs.

With 2A across 2 outlets and a C6100, here is what I would do:
Switch: HP 1910-24g or similar
Server: C6100
Hypervisor: XenServer Free or Proxmox

On two of the nodes, run pfSense or Astaro VMs in HA. That way, if a node fails you don't lose remote access to your box.
 

Mike

Member
May 29, 2012
482
16
18
EU
nitrobass24 said:
On two of the nodes, run pfSense or Astaro VMs in HA. That way, if a node fails you don't lose remote access to your box.
Giving up half your nodes for a firewall/router seems a bit hardcore. I would run 1 physical node tops and run a secondary one virtualized. In the case of a BSD router like pfSense, use ordinary e1000 NICs, not virtio. I would even sneak a Pi into an HDD bay to save a node if virtual is no option, haha.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Yeah, the STH colo is way overbuilt.

You might be able to stick a Juniper or Fortinet above the C6100.
 

nitrobass24

Moderator
Dec 26, 2010
1,087
131
63
TX
Mike said:
Giving up half your nodes for a firewall/router seems a bit hardcore. I would run 1 physical node tops and run a secondary one virtualized. In the case of a BSD router like pfSense, use ordinary e1000 NICs, not virtio. I would even sneak a Pi into an HDD bay to save a node if virtual is no option, haha.
You're using virtualization, so you're not giving up any nodes.
 

uberguru

Member
Jun 7, 2013
319
18
18
nitrobass24 said:
You also mentioned you have two outlets. Do you mean that you have two power feeds, A & B? Or do you really only have two NEMA 5-15P outlets to plug a switch and server into? ... On two of the nodes, run pfSense or Astaro VMs in HA. That way, if a node fails you don't lose remote access to your box.

Well, the 2 outlets means two outlets to plug the server and switch into. Just like if you rent 1U you get 1 outlet, and with 2 servers you get 2 outlets. So to answer that question: I have 2 outlets (not A+B feeds).

What I want to do here is run Linux KVM virtualization on all 4 nodes and create VMs under each node. Will that still let me run pfSense or Astaro or whatever firewall? Also, how would I lose remote access if I have IPMI?

Yes, I want to set up something like STH did with Fiberhub colocation, but in my case I have 3U and will only have a 1U switch and a 2U server.
Also, please understand that I do not understand networking. If I get a 1Gbps uplink and connect it to the switch, will I be able to connect the data NICs and IPMI ports of all 4 nodes to that switch and manage both the network and IPMI remote access? And if I get a gigabit switch, will each node connect at 1Gbps as well, even though I know I am on a shared 1Gbps uplink to the switch? Say I am lucky and get 250Mbps on one node and 150Mbps on each of the other 3 nodes: is that possible, just theoretically, since the ports are 1Gbps?

Also, I have heard about internal and external networks; can someone please explain what those are and what features or advantages they give?


Thanks.
 

uberguru

Member
Jun 7, 2013
319
18
18
Patrick said:
Yeah, the STH colo is way overbuilt.

You might be able to stick a Juniper or Fortinet above the C6100.
Still waiting on your reply to my last post: what do you mean by internal and external networks? Also, I do not see any hardware firewall in your setup.
 

uberguru

Member
Jun 7, 2013
319
18
18
Internal - Your LAN
External - Internet
So I can just have one switch: connect the 1Gbps uplink I am given to the switch, and then connect the 4 nodes of the single 2U server to it. Is the internal network just connecting the servers to each other via their Ethernet ports/NICs?

If not, how do I create the LAN? I assume the 1Gbps uplink provided by the datacenter is the WAN?
 

nitrobass24

Moderator
Dec 26, 2010
1,087
131
63
TX
Just think of it like your home connection.

You have the internet from your cable/DSL modem --> router --> home network.


In this case you don't have space for both a router and a switch, so you will set up VLANs on the switch, route the outside traffic to your firewall VM, then route the inside traffic back to the switch on a separate VLAN that the rest of your network connects to.

It will be basically identical to the STH setup, except your firewalls will be VMs instead of bare metal.
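If it helps to make that concrete, here is a rough sketch of the kind of addressing/VLAN plan I mean. Every number below is made up purely for illustration; use whatever your colo and firewall actually hand you:

Code:
# Hypothetical plan: one public-facing (WAN) VLAN, one private LAN VLAN for the
# nodes/VMs, and a separate IPMI VLAN that is only reachable over the VPN.
import ipaddress

wan_vlan = 10
public_ips = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]  # stand-ins for your colo-assigned IPs

lan_vlan = 20
lan_net = ipaddress.ip_network("192.168.20.0/24")

ipmi_vlan = 30
ipmi_net = ipaddress.ip_network("192.168.30.0/24")

# The firewall VM gets one leg in each network and routes between them.
firewall = {
    "wan": public_ips[0],
    "lan": str(next(lan_net.hosts())),    # 192.168.20.1
    "ipmi": str(next(ipmi_net.hosts())),  # 192.168.30.1
}

# Hand out LAN and IPMI addresses to the four C6100 nodes.
lan_hosts = list(lan_net.hosts())
ipmi_hosts = list(ipmi_net.hosts())
for i in range(4):
    print(f"node{i + 1}: LAN VLAN {lan_vlan} -> {lan_hosts[i + 1]}, "
          f"IPMI VLAN {ipmi_vlan} -> {ipmi_hosts[i + 1]}")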
 

Mike

Member
May 29, 2012
482
16
18
EU
There's nothing wrong with a firewall/router setup, but if you are not up to it, there is also nothing wrong with having all nodes WAN-facing. Hook up each IPMI port to the 2nd NIC of the next node and manage it from node + 1 on a private subnet. You have got to get a /29 for the 4 nodes, though.
KVM is a very good move.
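In case the /29 notation is new to you: the number after the slash is how many bits of the address make up the network prefix, which fixes how many addresses the block holds. A quick Python check, purely for illustration:

Code:
import ipaddress

# /29 leaves 3 host bits -> 8 addresses, 6 usable once network and broadcast are excluded.
small = ipaddress.ip_network("192.0.2.0/29")   # documentation range, just an example
# /24 leaves 8 host bits -> 256 addresses, 254 usable.
big = ipaddress.ip_network("192.0.2.0/24")

print(small.num_addresses, len(list(small.hosts())))  # 8 6
print(big.num_addresses, len(list(big.hosts())))      # 256 254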
 

uberguru

Member
Jun 7, 2013
319
18
18
nitrobass24 said:
...you will set up VLANs on the switch, route the outside traffic to your firewall VM, then route the inside traffic back to the switch on a separate VLAN that the rest of your network connects to. ... It will be basically identical to the STH setup, except your firewalls will be VMs instead of bare metal.
When I get a VPS, I never hear about needing firewall hardware; when I get a dedicated server, I never hear about needing firewall hardware. So my question is: why do I need this firewall, since it has been talked about so much as if it is really a must-have? Can someone please explain this to me? Isn't the datacenter already using a firewall? Or is it because I am colocating that I need a firewall?

Please understand I am a noob when it comes to server networking, so a good detailed explanation will be appreciated.

Thanks.
 

uberguru

Member
Jun 7, 2013
319
18
18
Mike said:
There's nothing wrong with a firewall/router setup, but if you are not up to it, there is also nothing wrong with having all nodes WAN-facing. Hook up each IPMI port to the 2nd NIC of the next node and manage it from node + 1 on a private subnet. You have got to get a /29 for the 4 nodes, though.
KVM is a very good move.
I have heard of /24 and /29 and that sort of thing with IP addresses. What exactly are those? To be honest, I have no idea. All I know is IPv4 and IPv6 addresses, and I have never used IPv6 beyond knowing a little about it.
 

nitrobass24

Moderator
Dec 26, 2010
1,087
131
63
TX
Well, honestly, at a minimum you will need a NAT router/firewall just to separate the WAN from the LAN. I am guessing that with 3U the colo is only giving you 1 IP address, so while you could just connect them all up, only one would have an IP address.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
You probably need a management IP to VPN into, and you need to protect yourself. You would not get very far in a real datacenter without E&O insurance in the millions should your machine get hacked and cause harm/damage to other residents of the colo.

It is good practice to run some sort of security appliance, software or hardware, to keep the nasties away and to protect yourself from others and others from yourself. ;)
 

Mike

Member
May 29, 2012
482
16
18
EU
Since most of those firewalls, hardware appliance or software, run some sort of Linux/netfilter combination, I would say it is unnecessarily complex to add one in this case, if in any case. If you would get hacked without one, you would also get hacked with one in place. The only use I can see for them is some means of load balancing or translation when you do not have enough IPs.
I have honestly never seen people mount a firewall for every other server they colocate. Also, your ISP is only worried about blocking at the edge of their network. If you're causing trouble they'll just pull the plug and tell you to come pick your chit up.
 