50+TB NAS Build, vSphere Cluster and Network Overhaul


nry

Active Member
Posted this over at AVForums and figured I would post my build log here too; some of the text I've written below has probably changed over the last few days. It's not quite just a storage build, but this seemed like the most appropriate place to put it.

Thought I would post a log of my ongoing build process here, as some users may find it interesting and others may give me some useful pointers on doing things correctly :) Some of this is getting quite complex very quickly!

My home network serves typical home uses such as WiFi, streaming HD video and computer backups. But as a self-employed software developer working from home, I am also constantly deploying multiple VMs for testing and backing up endless amounts of data, hence the slightly over-the-top kit I have. It also serves as a great platform to learn on!

First some history!

This whole media streaming obsession started around 2007 when I got my first external HDD, a whole 500GB! It was connected to my laptop, which connected to my 32" TV. As a student at the time, I found it pretty impressive!
Won't embed the image, but here is the beauty!

I very quickly outgrew the 500GB external drive, and around the end of 2008 I started to investigate other ways to store my media; this was also around the time I became self-employed. So using a few parts I had in boxes, some old parts my dad had, and £30-odd for a case, I had my first file server!
It only ran some version of Windows with basic network shares for my laptop and media PC, but it did the job perfectly. In total I had around 3TB by the end of 2008.

The only image I have of it, unfinished (I don't think it was ever finished!)

Moving on to early 2010, I had filled this little file server up, as I was now storing Blu-ray rips, DVD rips, music and my work backups. I bought a new case, a PSU, some HDD caddies and some new disks, which I seem to remember totalled around 7-8TB, ending up with what I thought was a file server that would last me at least 5 years!

Pretty impressed with my cabling job here!




Moving on to 2012, I had around 9TB of storage and was running out pretty quickly; as I was now working for myself full time, the amount of data I had to store had grown very fast.
I decided to buy roughly the following:
  • Xcase 24 Bay SAS Case
  • Xeon E3 1245
  • Asus P8B-WS
  • 16GB DDR3 RAM
  • Adaptec 5805 8 port RAID card
  • 8x 3TB Hitachi drives

Set up as RAID6 this gave me plenty of storage space for some time; a quick capacity check is below. I had also bought a SAS expander so I could use my existing drives, but it didn't work with my Adaptec controller, so those drives were just left in the case unplugged.
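For anyone wondering about the numbers: RAID6 gives up two drives' worth of capacity to parity, no matter how wide the array is. A quick Python sanity check (drive sizes are the marketing decimal TB, not what the OS reports):

Code:
def raid6_usable_tb(drives, size_tb):
    # RAID6 keeps two parity blocks per stripe, so two drives' worth
    # of capacity is always lost, regardless of array width
    if drives < 4:
        raise ValueError("RAID6 needs at least 4 drives")
    return (drives - 2) * size_tb

usable = raid6_usable_tb(8, 3.0)   # the 8x 3TB Hitachis
tib = usable * 1e12 / 2**40        # convert decimal TB to the TiB the OS shows
print(f"{usable:.0f}TB raw usable, ~{tib:.1f}TiB formatted")

Which works out at 18TB raw, or roughly 16.4TiB once formatted.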



I also ended up purchasing the following as my first ESXi host, which is still in use today 24/7
  • Xcase RM206 Short 2U case
  • Xeon E3 1245
  • Asus P8B-WS
  • 32GB DDR3 RAM
  • 240GB OCZ Agility SSD

It can be seen here in the case above my file server; the PC above that is one of my media PCs.




And what's happened in 2013?

Back in January I bought an Areca 1680ix-24 RAID controller so I could utilise all the HDD bays of my case, along with 8 more 3TB Hitachi drives I had purchased.
In the meantime I set up the newer 3TB drives in RAID10 in my 24/7 ESXi box, in a new case with my other Areca 1882i-8 controller, plus some standalone disks to hold all my data while migrating from the Adaptec controller to the Areca 1680.



But unfortunately the Areca 1680 controller didn't work and had to be sent back, which has left me in a complete mess, with a server running 24/7 consuming about 200w idle! So now I have an Areca 1882ix-24 on order; I figure this should last me at least 10 years, seeing as it supports 6Gb/s drives and has an external SAS connector, so I can expand into another 24 bay case if I ever need to.

The plan?

To be honest I am still not 100% sure what to do. I would like my 24/7 kit to be as low power as possible, ideally under 100w, unlike the 300w+ it draws currently!
I would also like to expand my vSphere cluster dramatically from its current single host to multiple hosts.

With my new RAID controller I plan on having 16x 3TB drives and 8x 1TB drives in one system, but for some reason I decided it would be a good idea to use an old E6750 Core 2 Duo and 8GB of DDR2 RAM as the host system, so I could use the motherboard as another ESXi host.
Looking at it now, this motherboard probably can't handle the data throughput from the RAID controller plus my planned 10GbE network for the ESXi hosts!
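Some rough numbers on why I'm doubtful, assuming (I haven't checked the board manual) the NIC would end up in a PCIe 1.x x4 slot:

Code:
# Back-of-envelope: can an old Core 2 era board feed a 10GbE NIC?
# Assumes the spare slot is PCIe 1.x x4 -- a guess, check the manual.
PCIE1_LANE_MBS = 250      # ~250MB/s per PCIe 1.x lane after 8b/10b encoding
TEN_GBE_MBS = 10000 / 8   # 1250MB/s line rate

slot_ceiling = 4 * PCIE1_LANE_MBS
print(f"PCIe 1.x x4 ceiling: {slot_ceiling}MB/s")
print(f"10GbE line rate:     {TEN_GBE_MBS:.0f}MB/s")

So the slot tops out below what the NIC alone can do, before the RAID controller's traffic even gets a look in.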
 

nry

Active Member
Figured I would post a complete list of the kit I have to work with here; I need to somehow build the following up into something as useful as possible! Bear in mind most of this is currently built into PCs acting as servers and media PCs around the house.

I only built this list so I knew what I had to work with.

CPUs
  • Xeon E3 1245 (x2)
  • Xeon E3 1200v2
  • FX 8150
  • E6600
  • E6750
  • AMD 5050e
  • AMD x2 something (x2)

Motherboards
  • Asus P8B-WS (x2)
  • Supermicro X9SCM-F
  • Asus P5Q-E
  • Gigabyte GA990FXA-UD3
  • Zotac 9300ITX
  • 2x Gigabyte AM2 Motherboards
  • MSI AM2 Motherboard

RAM
  • 8x Corsair DDR3 8GB
  • 2x Corsair DDR3 LP 8GB
  • 4x Corsair DDR3 4GB
  • 6x Kingston LP DDR2 2GB
  • 1x Random DDR2 2GB
  • 12x Random DDR2 1GB

PSUs
  • 1x Corsair HX650
  • 3x Tagan TG480
  • 1x Tagan TG420
  • 1x Random 300w
  • 1x picoPSU 150w + brick
  • 1x Random 550w
  • OCZ 550w

HDDs
  • 9x Hitachi 5K3000 3TB
  • 8x Hitachi 7K3000 3TB
  • 9x Mixed 1TB
  • 1x 640GB
  • 1x 500GB
  • 1x Seagate 2TB
  • 1x 120GB OCZ Agility SSD
  • 1x 240GB OCZ Agility 3 SSD
  • 2x 60GB OCZ Agility 3 SSD
  • 7x Various 2.5" HDDs

RAID Controllers
  • Areca 1882ix-24 (on order)
  • Areca 1882i-8
  • 3ware 9690SA-8i

Cases
  • Xcase 206 LP Short 2U
  • Xcase 206 HS 2U
  • Xcase 204 Short 2U (mATX only)
  • Xcase 424s
  • Xcase 100S Short 1U
  • Antec ISK150 ITX
  • Generic Medion mATX
  • Generic ATX Case x2

Graphics cards
  • Asus HD5450
  • Asus HD4350
  • Asus HD3450
  • XFI 5770
  • Random PCI Graphics card

NICs
  • 2x Intel/Dell dual gigabit PCIe
  • 1x Intel gigabit PCI
  • 3x Intel/Dell X540-T2 Dual 10GbE

Networking Kit
  • 2x HP ProCurve 1800-24G
  • 1x HP ProCurve 1810-8G
  • 1x Netgear 5 port gigabit
  • 2x Ubiquiti UniFi Access Points
  • Avocent Switchview IP KVM

UPS/Power
  • APC SmartUPS 2200
  • APC Switched PDU AP9761
  • APC Switched PDU AP7901 (I think)

As you can tell, I have clearly spent far too much money on all of the above over the years, and I want to rebuild everything as cheaply as possible! Obviously, to make a very low power 24/7 server I may need to spend a little :god:
 

nry

Active Member
Power Consumption

I wanted to investigate the power usage of my equipment, and as my original power meter (some £10 one off eBay) seemed to fluctuate by 10-90w on just about everything, I did some investigating and found a so-called UK version of the popular Kill-A-Watt meter sold in the US. It's sold at Maplin and other places for about £20: Plug-In Mains Power and Energy Monitor : Power and Energy Monitors : Maplin Electronics

I first tested my core network switch, the 24 port HP Procurve 1800

17w idle!



And the 8 port HP ProCurve 1810

3w idle



Makes me wonder if I should somehow run the 8 port as the core and only power up the 24 port when required, using the APC PDU; a scripting sketch is below.
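If I go that route, the switched APC PDUs can be driven over SNMP. Something like this should do it (untested sketch: the address, outlet number and "private" community string are placeholders for my setup, and it assumes the net-snmp tools are installed):

Code:
import subprocess

PDU_HOST = "192.168.1.50"  # placeholder address for the PDU
# PowerNet-MIB sPDUOutletCtl values: 1 = outletOn, 2 = outletOff
OUTLET_CTL = "1.3.6.1.4.1.318.1.1.4.4.2.1.3"

def set_outlet(outlet, on):
    # shells out to net-snmp's snmpset; SNMP v1 write access must be
    # enabled on the PDU with a community of "private"
    subprocess.run(
        ["snmpset", "-v1", "-c", "private", PDU_HOST,
         f"{OUTLET_CTL}.{outlet}", "i", "1" if on else "2"],
        check=True)

set_outlet(3, True)  # bring the 24 port switch up only when needed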


RAID Storage

Went ahead and bought a Chenbro CK23601 Expander and BBU for my Areca RAID controller.

£410 later I have the following!

Guessing that fan is going to be very noisy!



Also have some other toys for what will hopefully be a low power server; updates on that when I get round to building it.

As mentioned previously I have a RAID array set up on my current Areca controller. As I would be moving the drives around on the ports, I am guessing my RAID10 array would end up degraded, so last night I set up a RAID10 array on my 3ware controller and transferred everything across. Got to love 10GbE :D

Generic

Stripped down the current setup. Slightly unplanned downtime, but as no one should be using the VMs this week, I figured it was a fairly safe time to do the majority of this upgrade.

First off, this is the majority of my core network cables, plus some spares. I never knew I had so many :O



Power

As this needs to be a low power setup when running 24/7, and fairly low power when in full use, I need to work out what each piece of equipment consumes.

Testing my APC switched 16A PDU: with one port on it consumes 5w, and with all ports on it consumes around 15w!



My APC Smart-UPS 2200 is fairly old and probably quite inefficient, but these things cost a fortune to replace, and I got this one for free if I remember correctly.
Fully charged with no load it consumes 41w, although I have read that as you apply load to the UPS its own overhead drops. Charging it uses about 150w. The snippet below puts these idle figures into annual running costs.
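To put these idle numbers into money (at a guessed 13p/kWh tariff; adjust for your actual rate):

Code:
TARIFF = 0.13  # GBP per kWh -- a guess, check your bill

def annual_cost(watts):
    # watts -> kWh over a year -> pounds
    return watts * 24 * 365 / 1000 * TARIFF

for name, watts in [("UPS idle", 41), ("PDU, all ports", 15),
                    ("24 port switch", 17), ("current server", 200)]:
    print(f"{name:>14}: {watts:4}w = about £{annual_cost(watts):.0f}/year")

The 200w server alone works out at about £228 a year, which is why sub-100w is the target.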



And some pictures of the unit, which also has a network management card

 

nry

Active Member
NAS Build

Stripped all the HDDs out of the enclosure, as well as the old motherboard, so I could redo the SAS cables and fit the additional ones required.

No caddies



All SAS cables in and quiet fans in place; the 3ware controller is still in here, as the SAS connectors can be a pain to remove.
The SAS connectors plugged into the backplane are very squashed! In fact I broke one of the SAS sockets, but managed to resolder it back in place.





First powered up the NAS with the bare essentials and 5 fans; idle it consumes just 31w, which is pretty impressive, as I always thought that with the fairly high powered CPU and fans it would be around the 50w mark.



Areca 1882i-8 controller



All of my 3.5" disks, plus a few of my SSDs and 2.5" drives

 

nry

Active Member
NAS Build

I managed to pick up 3 new 10GbE cards off eBay at a fraction of their retail price.
Unfortunately they only came with low profile brackets.



Solution: get the full height bracket from a cheap dual port Intel card and file it down so it fits this card :) The black tape on the PCIe pins is there due to an issue I have seen where systems won't show all the installed memory with the card fitted.
Covering pins 5 and 6 (the SMBus lines) seems to have solved the problem.



Cable nightmare :god:

 

nry

Active Member
ESXi Node 1

No real progress pictures of this one as it is pretty much the same as the server in my first post, minus the drives.

Full spec:
Case: Xcase 206 HS
Mobo: Asus P8B-WS
PSU: Tagan TG480-U01
CPU: Xeon E3-1245
RAM: 4x Corsair Vengeance 8GB DDR3
NIC1: Intel X540-T2
NIC2: Intel 1000/PT
Boot Disk: Kingston 16GB USB2
OS: ESXi 5.1
Power consumption idle: TBC

And tidied the cables up a little and added a 10GbE NIC



 

Jeggs101

Well-Known Member
Where does that tower go?

Awesome pics and writeup. Gave me an idea for sure!
 

nry

Active Member
Feb 22, 2013
312
61
28
You referring to the power strip? That will live inside a 42U rack I have.
 

nry

Active Member
Thanks :)

I'm currently investigating how to keep the iSCSI traffic separate from general traffic using VLANs, though I'm not sure my layer 2 switches support the tagging side. The ESXi half is straightforward; see the sketch below.
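Assuming a standard vSwitch, tagging the iSCSI port group onto its own VLAN from a management box would look something like this (sketch only: the host, port group name and VLAN ID are placeholders, and the physical switch ports would then need to carry that VLAN as tagged 802.1Q):

Code:
import subprocess

HOST = "root@esxi-node1"  # placeholder host
PORTGROUP = "iSCSI"       # placeholder port group name
VLAN_ID = 100             # placeholder VLAN ID

def esxcli(*args):
    # run esxcli on the ESXi host over SSH
    subprocess.run(["ssh", HOST, "esxcli"] + list(args), check=True)

esxcli("network", "vswitch", "standard", "portgroup", "set",
       "--portgroup-name", PORTGROUP, "--vlan-id", str(VLAN_ID))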

Should be some more updates today
 

nry

Active Member
Awesome :)
Not too sure it deserves the main page though

Not much progress yet today as I had to do some real work instead of playing around with this stuff.

DIY Rack

As this is all being set up in a temporary place (it will all live in a 42U rack when I move) I have no access to the back. Plus the equipment is located in one of the worst possible places, so I wanted an easy way of moving it in and out.

Solution? Go to Screwfix down the road and buy some castor wheels for £16



Chopped up some spare wood from the loft boards; I had thought about using some 12U rack strips I have, but the end result would have been 5cm too wide.



Then 30 mins later you get the following...



With the APC UPS on, which weighs almost 38kg!!!

 

nry

Active Member
NAS Build

Pretty much finished the NAS now, besides building up all the arrays and some final software tweaks.

The fan on the Chenbro expander has been replaced with a Scythe 40mm one; hope it holds up to the job. The expander could do with a better heatsink really.



Cables as tidy as I care to get them





On the makeshift rack, 8 drives are currently connected to the 3ware controller...

 

nry

Active Member
In the process of copying data from the 3ware array to my new Areca RAID6 array.

Thanks to 10GbE I can easily hit 250MB/s transfer. I think it could be faster, but it's probably limited by the protocol I am using and the motherboard; a rough comparison against the wire speed is below.
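For context (the overhead figure is a rough guess):

Code:
line_rate = 10000 / 8         # 10GbE raw line rate: 1250MB/s
practical = line_rate * 0.95  # assume ~5% lost to TCP/IP framing (a guess)
observed = 250

print(f"Practical 10GbE ceiling: ~{practical:.0f}MB/s")
print(f"Observed:                 {observed}MB/s "
      f"({observed / practical:.0%} of the ceiling)")

Nowhere near wire speed, so the bottleneck is the file sharing protocol, the disks, or the motherboard, not the network itself.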

What the setup currently looks like




ESXi Node 0 Build

This will be my server that runs 24/7, so I've tried to make it as low powered as possible.



Full planned spec:

  • Xcase 100S Short 1U
  • Supermicro X9SCM-F
  • Xeon E3 1200v2
  • 16GB DDR3 Corsair
  • picoPSU 120w + 150w brick
  • OCZ 240GB Agility 3 SSD
  • Hitachi 5K3000 3TB drive

Looking a little lost inside the 1U case, using a Tagan PSU for testing

 

nry

Active Member
ESXi Node 5

Not really doing the node builds in any particular order, as some kit is in use while preparing data etc.

Full spec:
Case: Xcase 204 Short 2U
Mobo: Gigabyte GA-MA74GM-S2H
PSU: Tagan TG480-U01
CPU: AMD AM2 5050e
RAM: 2x 2GB DDR2 (one module is missing in the photos below)
Boot Disk: iSCSI on NAS
OS: ESXi 5.1
Power consumption idle: TBC

Case is fairly small!



About 20 cable ties later... we have some improvement



All installed

 

nry

Active Member
ESXi Node 2 / Media PC

This will be my primary media PC, dual booted via iPXE from an iSCSI LUN on the file server. It has an AMD FX-8150 CPU with a TDP of 125w, which is nearly impossible to cool in a 2U case, as the choice of low profile coolers for AM3+ sockets is very limited.

Start of the build...
The Xcase 206 LP case, fitted with an LG Blu-ray writer and Tagan PSU



My prototype cardboard shroud, to see if I could cool the CPU with the Scythe Shuriken Rev.B cooler. Having the fan on top of the cooler as intended leaves about 4mm of clearance between the top of the fan and the case, as the cooler is 64mm tall!

I have considered the Scythe Big Shuriken 2 Rev.B, which is 58mm tall and supports all TDPs, whereas the current one is I think rated for 95w (no wonder it doesn't cool this chip). That would leave about 12mm of clearance to the top of the case, and I'm not sure even that would be enough.

 

PigLover

Moderator
12mm clearance above the fan on the Shuriken should be plenty to draw in air to blow down through the cooler (I do use Shuriken/Big Shuriken on several of my builds - and was very sad the day their USA distributor arm closed shop).

I'd be more worried about the air intake on the PSU being blocked in that front-mounted upside-down configuration. I have the Norco version of that same case and was never completely comfortable with the PSU mount.
 

nry

Active Member
I have two possible options here:

1) Buy the new version of the Scythe cooler for £30+ delivery

2) Swap the CPU with a friend who has an FX-6300 with its stock cooler, as that is a 95w TDP processor; total cost £0, plus I would probably get a pint out of it :D

If the stock cooler fits I may just go for it, as CPU power isn't a huge issue for what I am using the cluster for; it's more about available memory!
 

nry

Active Member
ESXi Node 6

Finished another node.

Full spec:
Case: Medion mATX
Mobo: MSI mATX
PSU: Mercury 300w
CPU: AMD AM2 x2 240
RAM: 4GB as 4x 1GB DDR2 (upgrade to 8GB possible)
Boot Disk: iSCSI on NAS
OS: ESXi 5.1
Power consumption idle: TBC



 

nry

Active Member
PigLover said:
I'd be more worried about the air intake on the PSU being blocked in that front-mounted upside-down configuration. I have the Norco version of that same case and was never completely comfortable with the PSU mount.
Only just realised I didn't reply to this. With the Tagan PSUs I did an experiment to see if reversing the direction of the internal fans caused any issues. All seems fine :)
I strictly follow the rule of cold air in through the front of the cases and hot air out the back.