Possible Upgrade?


Fusionhost

New Member
May 24, 2013
Hi,

I am a brand new member, so "hello everyone".

This is my current setup:

12TB File Server: Fractal Design Array R2, Intel D510MO (Atom D510 / Intel NM10 chipset), 4GB DDR2 800MHz. This server runs UnRAID.

I love my setup, mainly because it's extremely low-powered! But file transfers are painfully slow and it doesn't handle multitasking well... So I am thinking of upgrading and using ESXi to combine systems into one unit (unsure what systems these will be, though, lol). My major bugbear is power-hungry systems, especially as electricity is not cheap in the UK... So I want the build to draw as little power as possible!

My proposed new setup:

Case: Norco 4224 - Need to find the 120mm fan plate
Motherboard: SUPERMICRO MBD-X9SCM-F-O
RAM: 2x 8GB KVR1333D3E9SK2/8G
CPU: Intel Xeon E3-1230
Power Supply: ?
SATA Expansion Card(s): 3x IBM M1015 - Purchased: £261.37 New
Cables:

Total Cost:

Hard Drives I already have:

Parity Drive: 2TB
Data Drives: 17x either 1TB, 1.5TB or 2TB
Cache Drive: 1x 500GB 2.5"
Total Drive Capacity: Unknown at present as it depends on the state of the hard drives I have

Although I am thinking of swapping them all for 3TB WD Reds, as their power consumption is tiny!

Drive            Read/Write   Idle   Sleep/Standby
WD Red 2TB/3TB   4.4W         4.1W   0.6W
WD Green 2TB     4.5W         2.5W   0.7W
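
To put rough numbers on the swap, here's a back-of-envelope sketch in Python. The idle wattages come from the specs above; the ~£0.14/kWh tariff, the 18-drive count (17 data + parity) and the drives-sitting-idle assumption are just my guesses:

Code:
    # Back-of-envelope annual electricity cost for the drive array.
    # Idle wattages are the manufacturer figures quoted above; the
    # electricity price and drive count are assumptions, not measurements.
    KWH_PRICE_GBP = 0.14        # assumed typical UK tariff
    HOURS_PER_YEAR = 24 * 365
    N_DRIVES = 18               # 17 data drives + 1 parity (my setup)

    def annual_cost_gbp(idle_watts):
        """Cost of N_DRIVES drives idling all year at the given per-drive draw."""
        kwh = idle_watts * N_DRIVES * HOURS_PER_YEAR / 1000
        return kwh * KWH_PRICE_GBP

    for name, idle_w in [("WD Red, idle 4.1W", 4.1), ("WD Green, idle 2.5W", 2.5)]:
        print(f"{name}: ~£{annual_cost_gbp(idle_w):.0f}/year for {N_DRIVES} drives")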

The primary uses of the new server will be:

- Computer backups (rsync) - UnRAID
- Web server (to test my websites before they go live) - UnRAID
- Virtual machine (so I can use Windows at university as it is all Macs) - ESXi
- Film, TV, Music Storage - UnRAID
- XBMC server/streaming in my flat - UnRAID
- Run the usual addons to source content for XBMC (Media Center) - UnRAID
- And be able to run a Home Automation system - ESXi/UnRAID

Do you think I actually need this? What would you do/change? Any ideas on how I can work out the potential power consumption of my proposed build, as this will dictate what actually happens?
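
For what it's worth, my rough starting point has been to just sum per-component idle estimates; every figure in this sketch is a guess from datasheets and reviews, not a measurement:

Code:
    # Very rough idle-power tally for the proposed build. All figures are
    # estimates; real idle draw depends heavily on OS power management
    # and drive spin-down, so treat this as an upper-ish bound on idle.
    components_idle_w = {
        "Xeon E3-1230 + X9SCM-F board": 35,
        "2x 8GB DDR3 ECC": 6,
        "3x IBM M1015": 30,
        "18 drives idle @ ~4W each": 72,
        "Fans": 10,
    }
    dc_total = sum(components_idle_w.values())
    wall_total = dc_total / 0.85  # assume ~85% PSU efficiency at low load
    print(f"Estimated idle: ~{dc_total} W DC, ~{wall_total:.0f} W at the wall")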

Any other advice/tips etc?
 

Mike

Member
May 29, 2012
EU
That E3-1230 (get the v2, or wait for the v3) will probably run just as low on power as the Atom when running the same OS. VMware's power-saving options probably still suck, though.
 

Lost-Benji

Member
Jan 21, 2013
The arse end of the planet
Fusionhost said:
> Hi, I am a brand new member, so "hello everyone".
Hello to you too, sir.
Fusionhost said:
> This is my current setup:
> 12TB File Server: Fractal Design Array R2, Intel D510MO (Atom D510 / Intel NM10 chipset), 4GB DDR2 800MHz. This server runs UnRAID.
> I love my setup, mainly because it's extremely low-powered! But file transfers are painfully slow and it doesn't handle multitasking well... So I want the build to draw as little power as possible!
That Atom has extremely little processing power, so the results will be less than exciting.
Fusionhost said:
> My proposed new setup:
> 1. Case: Norco 4224 - Need to find the 120mm fan plate
> 2. Motherboard: SUPERMICRO MBD-X9SCM-F-O
> 3. RAM: 2x 8GB KVR1333D3E9SK2/8G
> 4. CPU: Intel Xeon E3-1230
> 5. Power Supply: ?
> 6. SATA Expansion Card(s): 3x IBM M1015 - Purchased: £261.37 New
> 7. Cables:
1. The 120mm fan plate idea, as others may be aware, is one I find to be a little in WOFTAM (waste of time and money) territory. Good, quiet 80mm fans can give better airflow than 120mm fans while keeping the noise to a dull roar that won't embarrass a Boeing 747. Ensure the case has the yellow backplanes, not the troubled green ones.
2. I haven't used the board, but see below regarding the PCI-E slots.
3. 8GB DIMMs are a good choice, allowing the maximum 32GB later on if you need it.
4. As already commented, the v2 is the best option currently: a lot less power use due to being more efficient (ARK | Intel® Xeon® Processor E3-1230 v2 (8M Cache, 3.30 GHz)). Note: 20 PCI-E lanes ONLY.
5. A good-quality 650W unit will see a good service life while keeping efficiency up.
6. The M1015s are still awesome options; however, I would suggest maybe just one, plus an expander or two. Each M1015 can address up to 64 drives (via expanders) when flashed to LSI IT firmware. Something like the Chenbro expanders will also give you an external-facing SFF-8088 socket for expansion later on.
7. The best eBay can supply, lol.

You commented on claims of M1015s not working with the v2 CPUs. I would be very much inclined not to pay a lot of attention to those claims, as others have probably made the same mistake you are about to. Take note of the hints I dropped above regarding PCI-E lanes. As most should be aware, PCI-E lanes now come directly from the s2011/s1155 CPUs rather than from the northbridge chipsets of earlier designs. The s1155 CPUs, however, can only supply 20 lanes. The SM board you are looking at is in direct violation of this, and the figures are telling you porkies: you can't have 2x x8 and 2x x4 (24 lanes). Some board manufacturers will drive some lanes from the southbridge chipset, but these are slower and add headaches for other devices that are I/O-intensive.
Hence a good reason for suggesting just one or maybe two M1015s flashed to LSI. Put the two M1015s in the two x4 slots, and place a 10GbE NIC or a quad-GbE NIC in one of the x8 slots.
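
To put numbers on that lane budget (a trivial sanity check; the slot widths are the board's advertised figures and the 20-lane limit is from ARK):

Code:
    # Sanity check: advertised slot widths vs. lanes available from the CPU.
    CPU_LANES = 20          # s1155 Xeon E3: 20 PCI-E lanes from the CPU (per ARK)
    slots = [8, 8, 4, 4]    # X9SCM-F advertised electrical widths
    print(sum(slots), "lanes advertised vs", CPU_LANES, "from the CPU")
    # 24 > 20, so some lanes must hang off the chipset's slower shared uplink.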

Fusionhost said:
> The primary uses of the new server will be:
> - Computer backups (rsync) - UnRAID
> - Web server (to test my websites before they go live) - UnRAID
> - Virtual machine (so I can use Windows at university as it is all Macs) - ESXi
> - Film, TV, Music Storage - UnRAID
> - XBMC server/streaming in my flat - UnRAID
> - Run the usual addons to source content for XBMC (Media Center) - UnRAID
> - And be able to run a Home Automation system - ESXi/UnRAID
> Do you think I actually need this? What would you do/change? Any other advice/tips etc?
Personally, I would be inclined to run Windows 8 Pro as the bare-metal host and tick the Hyper-V box under Windows Features. Use Win8 for Storage Spaces and shares; you can then pass arrays through to any VMs you may wish to run, or serve native shares on the network. Pools can be tweaked to see good write speeds and silly read speeds that are more than enough to keep a home GbE network flooded.
The CPU has Hyper-Threading and supports virtualization; add plenty of RAM and you're on a winner.

Just my 2 cents.
 

Lost-Benji

Member
Jan 21, 2013
The arse end of the planet
The two golden rules with hardware: don't use exotic gear, and don't use overly expensive gear.

If you aim for something you can source easily that is gentle on the wallet, you're on a winner. The current v2 s1155 chips are very efficient; they run virtualization and can take a reasonable amount of RAM.

Aim for the Norco 4224 (get the ones with yellow backplanes) and place a system in the back end of it. The PSU should be a solid 750-850W unit with a good, solid design. I pick up a PSU and feel its weight; you will usually find the better units are very heavy. PSUs with long (5+ year) warranties usually hit the mark. The system in the rear should have a decent number of PCI-E slots; sadly, s1155 is rather limited in this department, while s1366 or s2011 will allow a lot more options. The reason for the slots is to let the system grow later.
As for HBA/RAID cards, the choice is yours, but choose wisely. If you are going to run hardware RAID, then the option of using cheap drives has nearly died. If you want to run ZFS or Windows Storage Spaces, then HBAs are the winner. Most cards will give you either SFF-8087 or SFF-8088 connectors, for internal or external connectivity respectively. Most people would aim for something like the M1015 for local drives, but that means you would need three just for the local drives, wasting PCI-E slots. This is where you would use a single M1015 and feed it into a 36-port Chenbro SAS expander (24 ports for drives, the rest for linking) to hold all the local drives, while it also gives you an SFF-8088 socket for external expansion (DAS options).
The other PCI-E slots can then hold better things, like quad-port GbE NICs or 10GbE/InfiniBand options for networking e-peen joy. If the system is to do media-serving duties, adding an NVIDIA video card doesn't hurt, allowing GPU processing for transcoding or media format conversions.

Now, cooling: be very wise here. 24 enterprise drives need more power to run and generate loads more heat; if you want to keep them cool, you will need ballsy fans (lots of noise) to keep the airflow up. If you aim for cheaper storage with green drives or other low-power options, you can replace the 4x 80mm stock fans with quieter options like the Noctuas or Arctics. The PSU is where most rack-case users fail: domestic ATX supplies use big, slow fans to move air, and they DO NOT HAVE ENOUGH TRACTIVE GRUNT (static pressure). You either need to look for PSUs like the bigger Antecs, which have an end-fed 80mm fan that can pull against the partial vacuum that WILL be found in the case, or mod the PSU to speed the fan up or replace the fan with one that has an external lead to connect to a motherboard header.
I usually disconnect the fan from its normal internal connection and tap it into the 12V rail, so it runs at full speed and moves enough air to keep things cool. If you are not savvy with electronics and mains power, get an expert to do the job for you; it will cost less than a six-pack of beer.

The last thing I usually do is line the larger inside surfaces with the foam-rubber padding usually found in motherboard boxes. This helps control some of the fan noise.