I have been a longtime STH forum user. I love to play with computer, server, and network equipment. It is an expensive hobby, especially if you visit the STH forum regularly, but compared to other hobbies it is not too bad.
In recent months I have bought a lot of equipment, and I finally got a chance to put it all together when everybody was ordered to work from home. Here is my build process.
Equipment
- Rack
- HP 22U 10622 G2 Rack
- Bought from local seller
- Server 1
- Dual Xeon E5-2670
- SuperMicro CSE-846 Chassis
- SuperMicro X9DI-F mother board
- SuperMicro AOC-STGN-i1S 10Gb SFP+ card
- 128 GB LRDIMM
- SuperMicro PWS-920P-SQ Power supply
- SuperMicro MCP-220-84603-0N for internal SSD
- 10 x 3TB SAS hard drives
- 600GB Intel S3500 SSD
- LSI SAS2008 RAID card
- Server 2
- Lenovo TS440
- Intel® Xeon® CPU E3-1245 v3 @ 3.40GHz
- 32GB RAM
- 240GB Intel 730SSD
- Mellanox ConnectX-2 10Gb SFP+ card
- 8 x 3.5 Hot swap bay
- 4 x 2.5 Hot swap bay
- UPS
- APC Smart-UPS RM 750VA
- APC AP9619
- TP-Link HS110 smart plug to measure the power consumption from the wall
- Switch
- Aruba S2500-48P
Noise control
SuperMicro 846
- Power supply
- The power supply that originally comes with the CSE-846 chassis is screaming loud, even when the system is powered off.
- Switching to the 920W SQ version makes a significant difference.
- At least it is quiet when powered off.
- Fan
- Replaced the fan wall with 3 120mm case fans zip-tied together.
- Replaced the 2 rear 80mm fans with 3 80mm fans.
- Added two 120mm fans, one on top of each passive CPU cooler.
The noise is better, but now the main noise comes from the ten 7200RPM SAS hard drives. I don't think there is any way to control the noise of the drives themselves; I may replace them with lower-RPM drives later.
Lenovo TS440 Server
No changes were made to this server; the noise is at an acceptable level.
Switch
The original fan that comes with the Aruba S2500 screams loudly at boot, but it gets a lot quieter after a while. @ViciousXUSMC made some great videos about the fan modes and the initial config. If you want to mod the switch to be dead silent, follow his video here.
Software
I initially wanted to play with the VMware (ESXi 7) ecosystem, but I had trouble even getting it working.
Since I have access to Windows Server 2019 licenses, both servers were installed with Windows Server 2019.
Switch Configuration
Below are quotes from ViciousXUSMC's post; it has been really helpful to me.
Setup MTU / Jumbo Frames
HUGE thanks to ViciousXUSMC for getting me going. Here is some text, added to his comments, that might help a newbie like me enable all four 10Gb ports.
These steps use a standard Ethernet cable for the configuration.
There is a menu button on the front that can be used to launch a quick setup GUI mode. This kicks off a DHCP service and a 10 minute timer.
Connect your machine to the switch (using a regular port on the front, not the rear management port) and let your machine pull an IP from the switch's DHCP, or manually give yourself an IP between 172.16.0.1 and 172.16.0.253. (Do not otherwise connect your machine to your network.)
The switch itself will be at 172.16.0.254.
Use a web browser to go to: 172.16.0.254
do quick setup, mainly set passwords:
1) admin/admin123 (these are aruba defaults)
2) enable password: enable
3) set DHCP option to allow your router to do it
push setup
After it pushes, connect ethernet cables as you normally would to your network.
Remove your manually set IP if you did that step on your machine.
Use web browser to go to: 192.168.0.252 (or whatever your router/DHCP assigned to the switch)
login: admin/admin123
Note the yellow stacking icons on two of the 10Gb interfaces.
Use PuTTY to connect to the switch IP (192.168.0.252) over SSH.
admin / admin123
type: enable
password: enable
Then follow with the commands:
Code:
delete stacking interface stack 1/2
delete stacking interface stack 1/3
From the PuTTY command terminal, set up the MTU for each 10Gb SFP port (replace <interface> and <name> with your own values):
Code:
configure terminal
interface <interface>
description <name>
mtu 9216
10Gb Connectivity Test
Downloaded iperf3 on both servers.
By default, the Windows Server 2019 firewall blocks all ICMP messages, so you can't even ping the server. This got me for a while; I thought I had made a mistake in the switch configuration.
So to run the test, I temporarily disabled the firewall on both servers.
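As a less drastic alternative (a sketch of what you could do instead of turning the firewall off; the rule display names are my own labels), you could open just what the test needs from an elevated PowerShell prompt on each server:

```powershell
# Allow inbound ICMPv4 echo requests so ping works again
New-NetFirewallRule -DisplayName "Allow ICMPv4 Echo" -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow

# Allow the default iperf3 server port (TCP 5201)
New-NetFirewallRule -DisplayName "Allow iperf3" -Protocol TCP -LocalPort 5201 -Direction Inbound -Action Allow
```

Afterward, `Remove-NetFirewallRule -DisplayName "Allow iperf3"` (and likewise for the ICMP rule) cleans them up.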
On server 1 run
Code:
iperf3 -s
On server 2 run
Code:
.\iperf3.exe -c 192.168.1.166 -P 10
The initial result shows about 7Gbit/s.
That is still far from 10Gbit/s. It is not enough to just change the switch MTU; the MTU on the network cards also needs to be changed.
Once the MTU on both cards is set to 9014, the bandwidth is almost 10Gbit/s.
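For reference, the NIC MTU change can also be done from PowerShell instead of the adapter GUI. This is a sketch: the adapter name "Ethernet" is a placeholder, and the exact "Jumbo Packet" value string depends on the driver (Intel drivers typically use "9014 Bytes"):

```powershell
# Check the current jumbo frame setting on all adapters
Get-NetAdapterAdvancedProperty -Name "*" -DisplayName "Jumbo Packet"

# Set it on a specific adapter (name and value string are driver-dependent)
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
```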
Finally, I am able to see the 10Gbit/s speed.
Copying a large file from the server 1 SSD to the server 2 SSD (a hard drive would cap out at ~100MB/s):
1.13GB/s
I am very happy to see that number. I plan to connect my Synology DS1517+ to the switch so that everything can benefit from the 10Gbit bandwidth.
Concerns
The biggest concerns are noise and power consumption.
Even after the modifications, the noise of the rack is still pretty high; wife acceptance is almost at its limit. I had to unplug the SAS hard drives to reduce the noise. There is really no other place in my house with AC to hide this rack, since it is so big. For this reason, my next house needs a dedicated server room with AC so that I don't have to worry about the noise.
Power Consumption
When everything is booting up, the HS110 reports almost 700 watts at the wall, and the 750VA UPS cannot keep up with that draw. Once everything is up, the power draw settles at about 400W. That is still a lot of electricity: in Texas electricity costs about 10 cents per kWh, which works out to roughly $350 per year. It might make sense to invest in newer, more power-efficient hardware; the electricity savings would pay it back in a few years.
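The arithmetic behind that estimate, assuming the 400W steady-state draw and the roughly 10 cents/kWh rate above:

```powershell
$watts      = 400      # steady-state draw reported by the HS110
$ratePerKwh = 0.10     # approximate Texas rate

$annualKwh  = $watts / 1000 * 24 * 365    # ~3504 kWh per year
$annualCost = $annualKwh * $ratePerKwh    # ~$350 per year
```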
Weight
With everything installed, the rack has become very heavy. Due to the layout of my house, I have to put it on the second floor, and I have started to worry about the weight of the whole rack. According to some building codes, the minimum design live load for a second floor is 40 pounds per square foot. I think the rack weighs less than 1000 pounds right now, so it should be OK.
Conclusion
Overall, I am happy with what I have so far, other than the noise and power consumption. Since I am on Windows Server 2019, I plan to play with Hyper-V virtual machines and Docker containers. It seems like a lot of fun ahead.
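Getting started with that on Server 2019 should only take a couple of roles; a sketch (the Hyper-V role needs a reboot, and Windows containers still need a separate Docker runtime install on top of the feature):

```powershell
# Enable the Hyper-V role plus management tools (reboots the server)
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# Enable the Containers feature for Windows containers
Install-WindowsFeature -Name Containers
```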
Thanks for reading!