50+TB NAS Build, vSphere Cluster and Network Overhaul


PigLover

Moderator
Jan 26, 2011
Good point! Pretty sure it only covered about 5mm of the fan though. Worth checking, as changing the 120mm fans is a huge job; pretty sure the motherboard has to come out! :eek:

Also need to check the BIOS isn't running them at a lower RPM; I don't think lm-sensors picked up the fan speeds.

Do have a feeling, though, that the fans simply don't have enough power to pull air over those drives.
I've got the Norco version of that same case, running it with quiet fans - Nexus fans rather than your Sharkoons, which by spec actually move even less air. The 120mm midplane fans and two low-speed 80s for exhaust are more than adequate to cool it, especially with that E3-1245 CPU, which is a very cool runner.

But not if you block off the exhaust...

Another note about that case. The Norco/Xcase midplane design is not really optimal. What you want with midplane fans is to create negative pressure in the front compartment (to draw outside air in through the backplane, cooling the drives) and a positive pressure environment in the rear compartment (to push hot air out the back). The rear fans just create an "assist" to this airflow - the primary driver is the midplane.

The problem is that the Norco/Xcase midplane has so many cable penetration holes (and pretty big ones, at that) that air from the (positive pressure) rear compartment gets drawn back through into the (negative pressure) front. This circulation tends to balance the pressure between the two sides and makes the midplane fans almost useless. You end up depending on the rear fans to do most of the work... which they probably can't keep up with.

I notice from your photos that all the extra cable openings in the midplane are wide open. Open up the case and close off all the places air can flow easily between compartments. Use foam, tape, whatever your material of choice is. But get the back part as close to sealed off from the front as possible, making it so that the only way air can travel is through the fans. You'll be shocked at how much more efficient those slow, quiet fans become.
 

nry

Active Member
Feb 22, 2013
I measured it, and I am blocking 7mm of the top of the fans, as well as the metal mesh above the PCI slots.

Knew it was a bad idea putting that there! Need to move into my new place so I can get it all in my 42U rack :D

Figure it should be easier to manage everything in this (panels are hiding under something else in the garage!)

 

Jeggs101

Well-Known Member
Dec 29, 2010
I'd think about cabling off to the side. That will give more airflow out of the rear of the machines.
 

nry

Active Member
Feb 22, 2013
Not really an option at the moment, as the whole stack only just fits through the doorway where it currently lives.
 

TallGraham

Member
Apr 28, 2013
Hastings, England
Wow! and Double Wow! :cool:

I joined this forum basically just to say how impressed I am with what you are doing.

Initially I came across the site because I am doing almost exactly the same thing as you. I am trying to consolidate all my IT gear and want to run multiple hypervisor nodes. Over the last 18 months I have been selling off all my bulky old kit. Like you, I was running a huge tower case, an Antec 1200, with the 5-into-3 hard disk converters. Also like you, I want to drastically lower the power consumption as well. The funniest bit is you even have the same power monitor plug as me :D

I will no doubt have loads of questions regarding the Xcase cases. After spending weeks looking for tower cases that will take the 2.5" drive hot-swap trays, I had kind of given up. Then I saw your build with the Xcase one and a light bulb came on. I don't have a rack, but I could use small rack cases to do what I want to achieve. They can always stack on top of each other on my desk.

I'll have to get all my parts together and start a build diary here as well I think.

Great work again on what you have done though. Gives me renewed hope for my build now.
 

nry

Active Member
Feb 22, 2013
Thanks! I think there are some other setups on the forum which you need to check out too, such as PigLover's setup, which I am very jealous of.

One bit of advice I would offer: if it's cheaper to do it one way, think twice, as it will probably backfire somewhere along the line! That has bitten me many times with this kit.

Don't know if you have noticed the Dell C6100 mentioned on this site quite a few times; these would be perfect for ESXi I think, but I can't find them anywhere in the UK.

Found a Supermicro SYS-6026TT-HTRF for £745 as a barebones, which looks similar, but I'm guessing the final price would end up around £1,200 with CPU and memory!

Look forward to seeing your build thread :)
 

nry

Active Member
Feb 22, 2013
As suggested, I removed the PDU from the back of the NAS server and placed the tower of computers in the middle of the hall with plenty of surrounding room!

Heating has been off and I would guess room temperature is just a little below 20°C.

Drive temperatures at idle are still the same.
Areca CPU temp is 69°C.
Xeon CPU temps are a little lower...
CPU temperatures: 0: 38°C, 1: 40°C, 2: 37°C, 3: 41°C

Placing my hand at the back of the case when running, I can feel some air being shifted, but nothing substantial.

Think these fans are pretty poor! Probably doesn't help that I have 7x 7200rpm drives in there.
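
If anyone wants to pull the same numbers, this is roughly how I grab them (a sketch: the chip names sensors prints vary by board, and drives hanging off the Areca may need smartctl's -d areca,N selector rather than a plain /dev/sdX):

Code:
# motherboard/CPU sensors (after a one-off sensors-detect run)
root@nas:~# sensors
# per-drive temperature via SMART (usually attribute 194)
root@nas:~# smartctl -A /dev/sda | grep -i temperature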
 

PigLover

Moderator
Jan 26, 2011
Do try to close up the extra gaps in the midplane before you go changing out the fans. I think you'll be surprised how much difference it makes. With all the extra cable penetrations, the midplane fans just end up circulating air between the front and rear compartments and back again. Seal the gaps (close up the extra cable penetrations) and the air has to move in the front and out the back...

Even if you decide just to change out the fans you probably still want to do this.
 

nry

Active Member
Feb 22, 2013
Will do, thanks for this.

I think the right-hand gap is so full of wires that there is little more I can do to block it up.
For the middle gap, I will see what some electrical tape does.

Really do feel it needs some faster fans, though.

Onto another system....

Node 0

I have had Node 0 running for a few days now, and besides the tiny fan on the X540 card it's pretty silent. The downfall to this is that it does run quite warm; under load, the CPU cooler manages to keep the CPU at a maximum recorded temperature of 58°C.

The top of the case above the CPU gets too hot to touch, so I figured I would try to get some airflow from the front of the case to the back.

I found the Airen RedWings Extreme 40HH for just under £5 on eBay and figured I would give it a go. It advertised 9,000rpm(!) at 23dBA, which I thought was ridiculous, but I still went ahead.

At 12V my BIOS reports just under 7,000rpm; the airflow on it puts some huge desk fans I have owned to shame, but as expected it does sound like a jet engine!

Using a Zalman Fan Mate at around 7-8V it still shifts some air with very low noise. Going to try to set the BIOS up to keep them quiet at all times, as the issue is that there is no proper air movement in the case, so I only need a little airflow to aid the cooling.
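
If the BIOS can't hold the fans down on its own, the pwmconfig/fancontrol tools that ship alongside lm-sensors are another option (a sketch: the PWM-to-sensor mappings written to /etc/fancontrol are board-specific, so pwmconfig has to be run interactively first; the node0 prompt is just illustrative):

Code:
# detect controllable PWM outputs and generate /etc/fancontrol
root@node0:~# pwmconfig
# start the daemon that applies the generated fan curve
root@node0:~# service fancontrol start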
 

TallGraham

Member
Apr 28, 2013
Hastings, England
Thanks for the advice. The "buy cheap, buy twice" trap has caught me out many a time before, sadly.

I wanted to ask you about your NAS case. I see you are having issues with the cooling and the fans you put in there. I was looking at the similar case on the Xcase website. They do a shortened 550mm version that would fit nicely on my desk. The 660mm one will still fit, but room will be tight. How do you find the drive trays etc. in yours? Do they rattle at all? Xcase do a pro version which looks really nice, but it is a heck of a lot more expensive and comes only in the 660mm length.
 

nry

Active Member
Feb 22, 2013
I have the 550mm version, as my original build was to fit in a small space. If I were to buy again I would get the full length! Changing the fans or removing the mid panel requires the motherboard to be removed (depending on the size of the motherboard, obviously). Generally I found that once everything was installed, there was just that little bit of room missing for adjusting cables etc.
Had a quick read about the pro versions of their cases, and reviews seem to say the build quality is better. To be honest, the build quality of my case is one of the best I have ever seen. The only downfall I would say is that they don't come with any instructions for what all the little additional brackets do.

If I ever expand to more than 24 drives I am pretty sure I will simply buy the longer version of my case and keep the 550mm one as the 'empty' chassis with a SAS expander.
 

nry

Active Member
Feb 22, 2013
Spent most of the week recovering from a steroid injection for my carpal tunnel, so I have done next to nothing. But I figured I would try to silence one of the fans on the X540 NIC in Node 0.

Put a Scythe 40mm fan on it, the same as the one I put on my Chenbro expander. It seems to run cooler than before, measured using a Dallas 1-wire temperature sensor attached to a Raspberry Pi, and it's also pretty quiet now :)

I think I am going to buy 3x Scythe Mini Kaze Ultra fans to blow cold air from the front of the case over the board, as the air currently just isn't moving at all.
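
For anyone curious how the 1-wire readings are taken: the Dallas sensors just show up in sysfs on the Pi (a sketch: the 28-* ID below assumes a DS18B20-type sensor on the default GPIO4 data pin, and the t= value is in millidegrees C):

Code:
# load the 1-wire bus and thermal sensor drivers
root@raspberrypi:~# modprobe w1-gpio
root@raspberrypi:~# modprobe w1-therm
# read the sensor; t=38250 would mean 38.25°C
root@raspberrypi:~# cat /sys/bus/w1/devices/28-*/w1_slave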

Also did a quick speed test on the X540s, linked directly with a 30m Cat6 cable:

Code:
root@nas:~# iperf -c 192.168.0.1 -t 600
------------------------------------------------------------
Client connecting to 192.168.0.1, TCP port 5001
TCP window size: 22.9 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.2 port 56414 connected with 192.168.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-600.0 sec   567 GBytes  8.11 Gbits/sec
567GB in 10 minutes, not too bad :O
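
For completeness, the other end just runs iperf in server mode for this test (standard iperf usage; the hostname is illustrative):

Code:
root@node0:~# iperf -s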
 

nry

Active Member
Feb 22, 2013
NAS

Well, I'm having a fun Friday night :laugh:

In an attempt to battle the unacceptable temperatures, I replaced the 3x 120mm fans and the 2x 80mm fans at the back.

First off, I rounded up all my spare 120mm fans and found a total of 11! Got a 12V power source and went through them, finding the ones with the highest airflow at acceptable noise levels.

Found the original 3 (which sounded like jet planes) and some Akasa ones which were next to silent; why I ever bought those golf-ball ones for the NAS I don't know.

Eventually settled on 2x Xinruilian fans and 1x Scythe fan, which shift a fair amount of air at 12V but don't make my ears bleed.



Then on to the rear fans, which are 80mm. I gave up rounding all of these up, as I must have twenty-odd. Eventually pulled some out of my old media PC; they are silent at 7V, but at 12V they shift a load of air while remaining fairly quiet.



Now the before → after temperatures (°C):

CPU: 55 → 38
5K3000 drives: 46 → 33
7K3000 drives: 60 → 40
Areca CPU: 69 → 60

I am much happier now :)
 

nry

Active Member
Feb 22, 2013
Basic read speeds using hdparm return roughly 640MB/s on my RAID6 volume of 8x Hitachi 5K3000 3TB disks. I thought I would have seen a little more than this, so I tried without the expander - same result. Oh well, I guess it will do for media/backups!

SAS Expander:
Code:
root@nas:~# hdparm -t /dev/sdc

/dev/sdc:
 Timing buffered disk reads: 1920 MB in  3.00 seconds = 639.09 MB/sec
root@nas:~# hdparm -t /dev/sdc

/dev/sdc:
 Timing buffered disk reads: 1942 MB in  3.00 seconds = 647.25 MB/sec
root@nas:~# hdparm -t /dev/sdc

/dev/sdc:
 Timing buffered disk reads: 1944 MB in  3.00 seconds = 647.84 MB/sec
Direct:
Code:
/dev/sda:
 Timing buffered disk reads: 1932 MB in  3.00 seconds = 643.85 MB/sec
root@nas:~# hdparm -t /dev/sda

/dev/sda:
 Timing buffered disk reads: 1896 MB in  3.00 seconds = 631.81 MB/sec
root@nas:~# hdparm -t /dev/sda

/dev/sda:
 Timing buffered disk reads: 1910 MB in  3.00 seconds = 636.39 MB/sec
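
As a sanity check on hdparm's buffered figure, a direct sequential read with dd (bypassing the page cache) should land in the same ballpark - a sketch, pointed at whichever device node the volume sits on:

Code:
# read 10GB straight off the array with O_DIRECT
root@nas:~# dd if=/dev/sdc of=/dev/null bs=1M count=10000 iflag=direct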
 

nry

Active Member
Feb 22, 2013
Remote backup

Over this weekend I found the odd half hour here and there (between spending far too much time in the sun :D) to finish off the remote server.

I removed the 3ware RAID card and opted for some dirt-cheap PCI sil3114 cards that I had in a box. Seeing as this machine will probably have 1TB disks added in the future, it made more sense to go down the software RAID route.

Final spec:
  • Aerocool strike case
  • Asus M3N78-VM Motherboard
  • AMD 5050e CPU
  • 4x Random 1GB DDR2
  • OCZ 550w PSU
  • 2x RackMax 5in3 HDD bays
  • 20GB SATA Boot drive (old xbox drive)
  • 8x mixed 1TB drives in a RaidZ2 array (pool creation sketched below)
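
For reference, creating the raidz2 pool is a one-liner (a sketch: the pool name matches the /zfs mountpoint used below, but the sdX names are placeholders and /dev/disk/by-id names would be safer in practice):

Code:
# 8-drive raidz2 pool, mounted at /zfs by default
root@aero:~# zpool create zfs raidz2 sdb sdc sdd sde sdf sdg sdh sdi
root@aero:~# zpool status zfs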

Here is my attempt at cabling the 10 SATA cables to the onboard controller and 2 PCI controllers.



From the front all lit up with 8x 1TB drives



And a little performance test...

Writes… nothing amazing, but it should do the job.

Code:
root@aero:/zfs# dd if=/dev/zero of=test.img bs=1024k count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 281.608 s, 37.2 MB/s
Reads… should also be fine

Code:
root@aero:/zfs# dd if=test.img of=/dev/null
20480000+0 records in
20480000+0 records out
10485760000 bytes (10 GB) copied, 175.015 s, 59.9 MB/s
Still need to test random reads as well.
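
When I get to the random reads, something like fio should do the job (a sketch, assuming fio is installed; the file size and runtime are arbitrary):

Code:
# 4K random reads against the pool for 60 seconds
root@aero:/zfs# fio --name=randread --directory=/zfs --rw=randread --bs=4k --size=2G --runtime=60 --time_based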

Boot-up time: 1min 15s
Power consumption powered off: 3W
Power consumption idle: 95W

Next I need to look at building the Raspberry Pi with OpenVPN and a relay for power control of this box!
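
For the OpenVPN piece, a minimal routed server config on the Pi is the usual starting point (a sketch built from the standard sample-config directives; the certificate paths are placeholders for whatever easy-rsa generates):

Code:
# write a minimal routed server config
root@raspberrypi:~# cat > /etc/openvpn/server.conf <<'EOF'
port 1194
proto udp
dev tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
dh /etc/openvpn/dh2048.pem
server 10.8.0.0 255.255.255.0
keepalive 10 120
persist-key
persist-tun
EOF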
 

Patrick

Administrator
Staff member
Dec 21, 2010
Very cool! I'm certainly interested in the OpenVPN setup.

BTW, why 1TB drives? Also, what is the max power consumption?
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
Are these read and write speeds with mdadm or SnapRAID? I would expect either one to provide better read and write speeds than this. Never mind, I just re-read your last post: you are using PCI cards, so those are the bottleneck. This is a pretty nice overall setup :)
 

nry

Active Member
Feb 22, 2013
Will post some details on the OpenVPN setup when I get round to it. Not 100% sure how that side is going to work just yet.

Even using the PCI cards, I would have thought I'd get 80MB/s reads, but it's more than fast enough at the current speed.

All but the case of the remote node was built using parts just knocking around in boxes, hence the 1TB drives. I did have these in my primary server, but I don't really need them there.
Power consumption with the drives idle but spun up was 95W.

Think this is more than acceptable; only the Raspberry Pi will be on 24/7.
The plan is then to use a relay to power the server up when required. Should do just fine :)
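
The relay side should be straightforward from the Pi's GPIO header. Something along these lines is the plan (a sketch using the sysfs GPIO interface; GPIO 17 and the one-second pulse are placeholders until the relay is actually wired up):

Code:
# pulse the relay briefly to trigger the server's power switch
root@raspberrypi:~# echo 17 > /sys/class/gpio/export
root@raspberrypi:~# echo out > /sys/class/gpio/gpio17/direction
root@raspberrypi:~# echo 1 > /sys/class/gpio/gpio17/value
root@raspberrypi:~# sleep 1
root@raspberrypi:~# echo 0 > /sys/class/gpio/gpio17/value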
 

nry

Active Member
Feb 22, 2013
Haven't got round to checking that just yet. I'm sure I did a quick test once and it was around 200W idle.

Plan on doing a full power test of everything this week as it's almost finished now.