Take my money and build it!


wookienz

Member
Apr 2, 2012
98
4
8
Quick intro - at present I have a Dell T3400 running multiple VMs and a Synology 1511+ NAS in RAID 6. The NAS is running Plex Media Server. I am running out of space on the NAS and thought I would scale up. Initially I was going to build a file server and connect it to the ESXi box, but why not have both in one box and save some power and space (not really, once I add in 24 HDDs!).

So the final build must be an ESXi server and storage box in one.

The present ESXi host is a Dell T3400 with a quad-core Intel Core 2 Quad Q9450 and 4GB of RAM, running ESXi 4.

Budget: $3000 USD.

My initial thought is a Norco 24-bay chassis with SATA HDDs behind a hardware RAID card. I am happy to sacrifice some HDDs to keep the cost down.

Build’s Name: Massive Online Media Center (MOMC)
Operating System/ Storage Platform: ESXi 4
CPU: Xeon? Can I get dual?
Motherboard: dual CPU if possible? Will have to check the ESXi licence on this as well
Chassis: Norco 24-bay
Drives: SATA; non-server-grade is fine
RAM: as much as I can get
Add-in Cards: minimum 2 NICs, prefer 4 for 2x2 failover (onboard or PCI)
Power Supply: Dual redundant if possible
Other Bits:

Usage Profile: Media Storage, VM storage.


Software: I have no idea how to present the RAID sets to the ESXi host - can't be that hard!

I have no idea about the underlying filesystem either, i.e. ZFS etc. I run Linux on all machines at home, so I am very comfortable on the command line.

If you can dream it, I can buy it!

Thanks for your help.


WookieNZ
 

wookienz

Member
Apr 2, 2012
98
4
8
Some thoughts - more to put them down somewhere so they don't fall out...


ESXi will live on a RAID 1 set; the rest of the drives will be put into something like RAID 60. Open to suggestions.
 

wookienz

Member
Apr 2, 2012
98
4
8
Some research later, i.e. reading reviews by Patrick, I think I might fall into the AMD camp to get into dual-CPU action.

Presently I am thinking:

Supermicro H8DG6-F board
Opteron 6128 x 2
16GB ECC RAM
IBM ServeRAID M1015 x 3 (24 HDDs) - if I am correct, each controller will handle 8 HDDs; I plan to RAID 6 each set of 8 and hopefully then stripe the 3 sets. Is this a possible scenario? I assume ZFS sits on top if I go with an ESXi 5 + OI + napp-it build.

Still the Norco 4224 case
PSU TBA


How is it shaping up so far?
 

sotech

Member
Jul 13, 2011
305
1
18
Australia
I've not looked into the free ESXi licensing with more than one CPU, so I can't comment there; all of our systems are single-CPU only. Anyone else?

What we have done in our chassis is flash the M1015s to IT mode and hook three of them up to the backplanes of the Norco chassis, allowing you to connect 24 drives in total. These controllers are passed through to the OpenIndiana + napp-it virtual machine, and each set of drives forms a raidz2 vdev (two disks of redundancy in the virtual device, the same number of redundant disks as RAID 6). You could then have all three vdevs in a single pool, striping your data across all three vdevs. Performance would be nice. :p
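For anyone wanting to picture it, a pool along those lines would be created from the OI command line roughly like this (a minimal sketch - the pool name and Solaris-style disk IDs are placeholders, and napp-it will do the equivalent for you through its web GUI):

Code:
# three 8-disk raidz2 vdevs striped into one pool (disk IDs are examples only)
zpool create tank \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
  raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 \
  raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0
zpool status tank

With two parity disks per vdev you get 18 of the 24 disks' worth of usable capacity (before ZFS overhead), and writes are striped across all three vdevs.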

You need an additional controller capable of RAID 1, made available to ESXi, to store your OI + napp-it VM on. Additional virtual machines can then be stored within the OI VM's pool and shared back to ESXi via an NFS share.
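Roughly, the share-back looks like this (a sketch only - the dataset name, IP address and datastore label are made up, and the esxcli syntax is from ESXi 5.x, so check it against your version; the same mount can be done from the vSphere client):

Code:
# on the OI VM: create a dataset for VM storage and share it over NFS
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore

# on the ESXi host: mount that share as a datastore (192.168.1.50 = example OI VM address)
esxcli storage nfs add --host=192.168.1.50 --share=/tank/vmstore --volume-name=vmstore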

We have had quite a few good experiences with Seasonic PSUs in our servers; the most recent build we did in a Norco uses a Seasonic X-560 power supply, and it powers an S2011 CPU/motherboard and 24 green drives just fine, though it lacks a second 8-pin CPU connector, so be wary of that when selecting a PSU. The 760W model does have one, though. The Gold rating is something we were keen on (or Platinum, as it's available now) to try and minimise running costs and environmental impact.





 

wookienz

Member
Apr 2, 2012
98
4
8
Actually, the free ESXi licence won't fly on a dual-CPU board, so back to single CPU. May as well go back to Intel with the cheaper board but more expensive CPU.

Supermicro - X8SIL-F
CPU - Intel Xeon UP X3470
PSU - Seasonic X-750 Gold
HBA - IBM M1015 x 3
 

sotech

Member
Jul 13, 2011
305
1
18
Australia
The X3470 is almost two generations old now - the Sandy Bridge Xeons came after it, and the Ivy Bridge Xeons can't be far away, with the launch due ~this month. There are plenty of solid motherboards with a newer socket... or a Socket 2011 option for 32GB of RAM to max out the ESXi 5.0 license using 4GB DIMMs.

Edit: We went with S2011 for a couple of reasons; the 40 PCIe 3.0 lanes and 8 DIMM slots were pretty significant factors for us, as much for future-proofing as for their usefulness now.
 

wookienz

Member
Apr 2, 2012
98
4
8
OK, great stuff, I'll look into it. I chose those two from reviews here and elsewhere... but as you correctly state, there's no point buying into the low end of the market.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,518
5,820
113
I think the X34xx series is a harder sell bought new. Then again, you can use RDIMMs and get up to 6 slots, so there are a lot of options there. Idle power consumption wise, the SB E3s are not THAT much better. If you are building a VMware + OpenIndiana storage platform, the X34xx series is pretty darn good, since you can hit 32GB inexpensively, with server boards you are not using on-die GPUs anyway, and AES-NI does not work in OI (and there are issues with the SB chips in Solaris 11 as well).

I am excited for IB, but with server boards I don't mind going 1-2 generations older if I know it is going to be rock solid. If you can afford LGA 2011, the extra memory slots/capacity plus tons of PCIe 3.0 connectivity are really good.
 

wookienz

Member
Apr 2, 2012
98
4
8
here is the updated server parts list:


Motherboard: MBD-X9SRA Intel® C602 chipset Single Socket R (LGA 2011)
CPU: Intel Core i7-3820 (3.6 GHz, 10 MB cache, LGA 2011)
RAM: 4GB DDR3-1333 ECC Registered memory x 8

Chassis: CSE-846TQ-R900B 4U Rackmount chassis (Black)
Power Supply: 900W (1 + 1) Redundant AC to DC power supply w/ PFC

I would like IPMI, but none of the LGA 2011 boards have enough x8 PCIe slots for the 3x M1015s I would need. At $300 total for the three cards, should I just buy a single HBA for 24 drives?

Should I be concerned about using non-server mobos? I.e. I could get an ASUS board that fits nicely, but is it up to spec?

Comments appreciated.
 

sotech

Member
Jul 13, 2011
305
1
18
Australia
I'd be interested to see numbers from anyone with regard to the performance of an M1015 in an x4 slot vs. an x8 slot - particularly for mechanical drives, where it's less likely to make a difference. If anyone can actually show whether or not it makes a difference, that may sway your decision. I keep meaning to do some proper testing of that but haven't had the spare time.

I've been extremely happy with the Asus workstation boards for our last two server builds; the P9X79 WS has been great for the past couple of months and the P8B WS was good for the >12 months (I think) prior to that.

The i7-3820 technically doesn't support ECC RAM according to the Intel spec sheet - though there are reports out there of people getting ECC RAM working with other current Intel chips which supposedly can't use it, so it may not be as black and white as that. I prefer Xeon processors, which are guaranteed to support it. I'm hanging out for the E5-1650 to be released, which is supposed to be not much more expensive yet six-core and similarly clocked. I was told by one of the Australian suppliers that they'd be released this week, but I've not seen any evidence of that happening yet. We're running an i7-3820 in our server for now until that CPU comes out - though it has been in there a lot longer than I was anticipating.


 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
I did a rough test of PCIe 2.0 x4 versus x8 this March and the results were not that bad.

My tests of the LSI SAS2008 controller (IBM M1015, LSI 9200-8, etc.) with IT firmware (no hardware RAID) in a data warehouse environment using SSD drives show that each controller can push 1440MB/s worth of reads in an x8 or x16 slot on an Asus KGPE-D16 motherboard. In an x4 slot on the same motherboard I saw 1120MB/s - about 80% as fast. These were database tests, not raw disk benchmarks, so look at the relative throughput, not the absolute numbers; a benchmark test would have shown bigger numbers.
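If anyone wants to do a rough at-home version of that comparison, something along these lines (a crude sketch - /dev/sdb..sdi are placeholder names for the eight drives on the card under test, and parallel dd reads only give a ballpark figure, not a proper benchmark like the database tests above) will show the aggregate sequential read rate through one controller:

Code:
# read from every drive behind the controller in parallel and add up the reported rates
for d in sdb sdc sdd sde sdf sdg sdh sdi; do
  dd if=/dev/$d of=/dev/null bs=1M count=4096 &
done
wait
# then move the card to the other slot (x4 vs x8) and repeat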


 

Patrick

Administrator
Staff member
Dec 21, 2010
12,518
5,820
113
dba: How did you have those disks configured? Were they RAID 0 or 10 on the LSI controller, in software, or set up as independent disks?
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
From the point of view of the controller they were JBOD - thus the IT firmware. I was using RAID, but implemented in software - specifically Oracle ASM RAID1E. Some of my earlier tests were RAID10 on the card, which showed similar performance overall, which isn't surprising since RAID10 is pretty "easy" for a card to handle.

That said, the RAID level almost certainly does not matter to the x4/x8/x16 discussion, since it's the card that eventually limits throughput. All of the current-generation LSI RAID controllers appear to be bandwidth limited significantly below the theoretical 4GB/s throughput of a PCIe 2.0 x8 slot. Whether it's the single-core 2008 controller or the dual-core 800MHz 2208, none of the cards are able to push more than about 2.5GB/s. Even stranger is the fact that *all* of the cards seem to be able to push right around 2.5GB/s in JBOD, RAID0, or RAID10. In my testing, the 9200-8e and the newer 9205-8e had *exactly* the same maximum throughput, for example, and my numbers were surprisingly close to the maximum throughput quoted for the 9285-8e in a benchmark published by LSI. Buying a dual-core LSI RAID card gets you fast RAID5/6 - which is a good thing - but strangely does not buy you any more throughput at the simpler RAID levels.

By the way, each LSI card can keep up with only five SSD drives before becoming a bottleneck. I attach only 5 SSD drives to each RAID card for this reason, which means that I am wasting a great many disk ports. I am looking forward to the dual-controller x16 LSI 9202-16e in a few months, and also to a future LSI PCIe 3.0-based card, both of which should provide more throughput - and I always need more.

 

wookienz

Member
Apr 2, 2012
98
4
8
So for us minions... is an M1015 in an x4 slot, with SATA drives hanging off it in a ZFS RAID 6-type config, going to be a problem?
 

sotech

Member
Jul 13, 2011
305
1
18
Australia
Based on the numbers above, I'd say you'd be unable to hit the x4 limitation (~80% of x8 throughput) using mechanical SATA drives, i.e. not SSDs.
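As a rough sanity check (assuming something like 130MB/s sustained sequential per drive, which is generous for green SATA drives - an assumption, not a measured figure): 8 drives x ~130MB/s ≈ 1040MB/s, which is still under the ~1120MB/s dba measured through an x4 slot, and real mixed workloads on spinning disks will sit well below that.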

By the way, if you're after M1015s there's a good link via mobilenvidia in the "Good Deals" section - they're 4 for $150 at the moment from the eBay seller linked.