take my money and build it!


Patrick

Administrator
Staff member
Dec 21, 2010
Ok. Only question is RAID 0 from a data protection standpoint.
 

sotech

Member
Jul 13, 2011
Australia
Ok. Only question is RAID 0 from a data protection standpoint.
Good point, I had misread that as RAID1... Redundancy FTW there, unless restoring from backup isn't an issue :)

That's another thing - have you got a backup plan for the VMs and the bulk data?
 

mobilenvidia

Moderator
Sep 25, 2011
New Zealand
The M1015 doesn't support RAID6; it supports RAID5 (with a key), but that is so slow you could write the binary code by hand faster.
So the only real redundancy options are RAID1 and 10, which the M1015 kicks butt at
 

_Adrian_

Member
Jun 25, 2012
Leduc, AB
The M1015 doesn't support RAID6; it supports RAID5 (with a key), but that is so slow you could write the binary code by hand faster.
So the only real redundancy options are RAID1 and 10, which the M1015 kicks butt at
Check out the HP Smart Array P800 or P400 (I even have 2 spares with cables and BBWC)

I attached the user manual below:
http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01609699/c01609699.pdf

And on a side note from HP...
Smart Array P400, P400i, And P800 Controllers - Maximum Logical Drive Size Capability Has Been Increased From 2.2 Terabytes (TB) To 8 Zettabytes (ZB)
ISSUE:
Recent Quickspecs changes suggest that the maximum logical drive size for the Smart Array P400, P400i, and P800 controllers can now exceed 2.2 Terabytes (TB). However, no theoretical limit is defined.
SOLUTION:
The theoretical limit is listed in the firmware documentation for the respective controllers. At a theoretical maximum logical drive capability of 8 Zettabytes (ZB), no possible combination of storage system and hard drives can exceed the maximum logical drive size. The feature was added in firmware v2.08.
Have fun :)
 

wookienz

Member
Apr 2, 2012
I edited my post for mistakes... SSDs will be in RAID1 for VMs, 3x M1015s for an eventual 24 drives, 2x 80mm fans (I thought the back of the box had three).

HDDs will be in RAIDZ2 (ZFS), 6 drives per pool, eventually 4 pools. 24 HDDs @ 2TB = 48TB raw, minus 2 x 4 x 2TB of parity = 32TB usable capacity at the end of the day (2 drives of redundancy per pool).
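For anyone checking the numbers, here's a minimal sketch of that capacity maths (it ignores the TB vs TiB difference and ZFS metadata overhead; the figures are just the ones from this post):

# Rough RAIDZ2 capacity check using the numbers above (2TB drives,
# 6 drives per pool, 2 drives of redundancy per RAIDZ2 vdev, 4 pools).
drive_tb = 2
drives_per_pool = 6
parity_per_pool = 2
pools = 4

raw_tb = drive_tb * drives_per_pool * pools                          # 48 TB raw
usable_tb = drive_tb * (drives_per_pool - parity_per_pool) * pools   # 32 TB usable
print(f"raw: {raw_tb} TB, usable: {usable_tb} TB")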
 

mobilenvidia

Moderator
Sep 25, 2011
New Zealand
With the M1015s, ZFS is your best bet for both speed and redundancy.

RAID1 for SSDs is, to me, a waste, as you gain very little.
The TRIM issue is also doubled with RAID1; it's probably best to use the Intel controller for this, as it should allow TRIM commands to pass through RAID shortly (if not already)
 

mobilenvidia

Moderator
Sep 25, 2011
New Zealand
Just keep in mind that SSDs bring issues that HDDs don't have.
Garbage collection is one of them; TRIM commands attempt to get around these issues.

Best to Google it and see the exact ins and outs of the subject.
 

mobilenvidia

Moderator
Sep 25, 2011
New Zealand
You could get one SSD for VMs and one for snapshots

or
2x in RAID 0 for VMs
1x for snapshots

I think running the VMs on RAID0 will make them nice and snappy
 

wookienz

Member
Apr 2, 2012
Unless there are huge TRIM issues with two in RAID1, then... I'm using 2x SSDs anyway for snapshots and VM storage - same SSDs, same $, but less hassle if in RAID1. Is this TRIM issue a biggie?
 

sotech

Member
Jul 13, 2011
Australia
Unless there are huge TRIM issues with two in RAID1, then... I'm using 2x SSDs anyway for snapshots and VM storage - same SSDs, same $, but less hassle if in RAID1. Is this TRIM issue a biggie?
We ran two SSDs in RAID1 hanging off an M1015 for ~12 months with no dramas - we didn't miss the lack of TRIM, and they performed extremely well long-term. It was only for one ZFS VM, though, and all the other VMs were stored inside a ZFS pool and shared back to ESXi via NFS... so performance wasn't our primary goal; redundancy came first and performance second. We may have noticed a difference were we running all of our VMs on the SSD array, maybe not. I appreciate the responsiveness of the SSD vs. a mechanical ESXi datastore and would happily go that way again.

I would take the redundancy at the cost of TRIM if restoring from backup would be a painful process - I'm not always in the office and if the main SSD died in the server I'd hate to be days away from being back and trying to talk someone else through replacing it and restoring from backup.
 

wookienz

Member
Apr 2, 2012
I did some Googling, and yes, it appears it is a problem - so much so that it could slow the drive down and defeat the purpose of the SSD. I think one SSD is the way to go, with snapshots going to the ZFS pool. Saves me some $ anyway!
 

Patrick

Administrator
Staff member
Dec 21, 2010
Double-check the usage scenarios at which folks see the write wall. There is a difference between consumer drives being hammered 24/7 and lower-utilization arrays. LSI does not enable TRIM on its RAID arrays, from what I understand.
 

RimBlock

Active Member
Sep 18, 2011
Singapore
It seems lots of build suggestions are flying around with very few questions about usage.

From the initial post, the current system uses a Q9450 and 4GB of RAM. The reason for upgrading was lack of storage; lack of processing power was not mentioned.

  1. Have you measured your current CPU usage?
  2. Have you measured your current disk bandwidth usage?
  3. Have you measured your current network bandwidth usage?

With those three key metrics you have a starting point.

From there you can start working out:
  • What is the expected storage capacity requirement over the next year?
  • What is the expected network usage growth over the next year?
  • What else are you looking to do with this server, and what are the projected CPU requirements over the next year?
  • How many years is this upgrade required to sustain operations?

With these estimates we have an end-of-year target to match, and we can multiply by the years of expected service for a very rough guide to where you need this machine to be over its lifetime.
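To make that concrete, here's a minimal sketch of that sizing arithmetic; the baseline and growth figures below are made-up placeholders, not measurements from this thread:

# Capacity forecast: measured baseline + expected growth, multiplied out
# over the expected service life. All numbers are example values only.
current_tb = 4.0           # storage in use today (placeholder)
growth_tb_per_year = 6.0   # expected additional storage per year (placeholder)
service_years = 3          # how long the upgrade must last (placeholder)

target_tb = current_tb + growth_tb_per_year * service_years
print(f"Plan for roughly {target_tb:.0f} TB of usable capacity over {service_years} years")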

From what I have read so far... my initial suggestion would be:
  • Supermicro SC846BE26-R920B Chassis inc dual SAS expander.
  • Intel S1200BTLR motherboard (ESXi Certified).
  • Intel AXXRMS2LL040 4 port mezzanine SAS controller.
  • Intel AXXRMM4 IPMI KVMoIP remote access module with dedicated NIC.
  • 32GB Kingston Unbuffered ECC ram (4x8GB).
  • Intel ET dual port NICs x2
  • Intel RS2WC080 (basically an IBM M1015).
  • Intel E3-1230 (4 cores, 8 threads).
  • 2xSSD for datastore.
  • XX HDD for storage.

The Supermicro chassis has dual expanders, so you only need two SFF-8087 connectors to hook up the 24 hot-swap drives. With mechanical drives, which tend to burst at around 150MB/s (SATA mechanical), you can have 4 per channel. Two SFF-8087 connectors give 8 channels, and 8*4 = a max of 32 drives before hitting bandwidth contention, so in reality 24 drives will be fine. The chassis also has the dual redundant PSUs you mentioned wanting.
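A quick sketch of that contention maths, under the same assumptions as the paragraph above (~150MB/s bursts per mechanical drive, 6Gb/s SAS lanes at roughly 600MB/s usable, 4 lanes per SFF-8087 connector):

# SAS uplink bandwidth check for the dual-expander backplane described above.
lane_mb_s = 600        # ~usable throughput of one 6Gb/s SAS lane (assumption)
drive_mb_s = 150       # burst rate of one SATA mechanical drive (assumption)
lanes = 4 * 2          # 4 lanes per SFF-8087 connector, 2 connectors

drives_per_lane = lane_mb_s // drive_mb_s      # 4 drives per lane
max_drives = drives_per_lane * lanes           # 32 drives before contention
print(f"Up to {max_drives} drives before the expander uplink is the bottleneck")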

The Intel S1200BTL is ESXi certified and has dual NICs; one is supported for the ESXi management network and both are supported for ESXi networking. The board also has a mezzanine slot to add a 4-port SAS controller without taking a PCIe slot. This 4-port controller can be used for connecting the 2 SSD drives for the VM datastores and will handle RAID1 fine. The board also has 3x PCIe x4 (x8 mechanical) and 1x PCIe x8 (x16 mechanical). This means that if you are doing passthrough you could use LSI1068-based cards (the LSI 3081E-R, for example), which are dirt cheap, as they will only run in the PCIe v2 x4 slots but are x8 cards (they need an x8 mechanical slot). Using those cards will cause bandwidth contention sooner as you add more drives, though, so you really should work out how likely that is to impact the server based on how the disks are used concurrently.

The two dual port ET NICs will handle your 2x2 network redundancy and each card can be passed to a separate VM if desired.

The Intel AXXRMM4 will give you KVM over IP, so you can control your server from a remote machine - from boot-up, through BIOS options, to anything after.

The Intel E3-1230 is the sweet spot in the E3 range, IMO - low cost, pretty powerful, 4 cores and hyperthreading. Depending on your forecast of requirements you can up the specs if you need to.

The brunt of the damage, financially, will come from the cost of the chassis; the built-in dual expander and redundant platinum PSUs really add to it. Having had a Norco 4020 for a few years, I would not get another - the backplane has developed faults and the warranty support where I am is terrible.

If you have some answers to the two sets of questions above then maybe this build can be fine-tuned further. I also appreciate your requirements may have changed since the first post, but trying to at least get somewhere close to your budget, this should do. My own ESXi server runs on a setup like this (Norco chassis though :( ) and only shows load when recombining newsgroup articles (par2). The rest of the time my VMs (Linux and WHS 2011) quite happily run on the E3-1230. Anything more would be overkill for me.

RB
 

wookienz

Member
Apr 2, 2012
OK... it's been a long time, but this project is almost done.

I have the M1015 cards in hand (x3) and the Norco 4224 case.

The rest of my proposed list is:

ASUS P9X79 Pro motherboard
Intel i7-3930K
Seasonic 860 PSU
32GB of RAM, exact kit yet to be determined
Intel 330 SSD 120GB x1, with snapshots to a ZFS volume
Corsair H100 cooler
Noctua fans, 120mm and 80mm
120mm fan bracket


Questions:
Do I need a specific 32GB quad-channel kit, or will any 16GB kit x2 do (1600MHz)?
The M1015 cards are going to be used for ZFS - do they need to be flashed?
The i7-3930K does not support VT-d - is this going to be an issue for an ESXi server, passing the RAID cards through to the napp-it OS and back again as a datastore?
I was planning to use WD Caviar Green 2TB drives for this 24x7 NAS - bad idea?


Anything I have missed?

Cheers as always.