Upcoming build for Win Server 2012 file/media storage server


9jack9

New Member
Aug 18, 2012
27
5
3
Hello everyone,

I'm looking to upgrade my HP MediaSmart EX490 platform. I already have the 5-bay attachment and I'm using nothing but 2TB drives, and I'm still running out of space. Recently it's been acting up, and I've decided it's time for a replacement.

The world of server hardware is not my area of expertise, so before I pulled the trigger on a build, I thought I would post here what I was looking at, and have you guys tear it all apart :)

Anyway with that:

Build’s Name: WHS Replacement
Operating System/ Storage Platform: Windows Server 2012
CPU: Intel Core i3-2120T
Motherboard: SUPERMICRO MBD-X9SCM-F-O
Chassis: Norco 4224
Drives: Variety including WD Green, Red, Samsung, Seagate and Hitachi
RAM: 8GB Kingston DDR3 1333 ECC ( KVR1333D3Q8R9S/8G)
Add-in Cards: (up to 3x as needed) AOC-SAS2LP-MV8
Power Supply: Rosewill HIVE Series HIVE-750 750W (I'm not partial to this)
Other Bits:

Usage Profile: This box will be used as a media storage platform, providing redundancy across all drives. In addition, I will be using this device to handle media library management, things of that nature.

I'm basically looking to replace my HP MediaSmart Server with something that is more upgradable, more reliable, and more versatile. I plan on utilizing the Storage Spaces feature of Windows Server. I'd like to keep the power usage down, as this platform will be in a residential area, running around the clock (or mostly around the clock). There will be no more than 2 users accessing data simultaneously; however, I would like to be able to transfer data as rapidly as possible.

Any insights would be greatly appreciated!
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,073
974
113
NYC
Looks like a sound plan. The only question is why those particular controllers. I think the Marvell controllers have good user feedback in Windows, but outside of that there is not good support.
 

9jack9

Those controllers offered 8 SATA connections... really, that was about it. I'll take another look at the Marvell situation and see if anything else catches my attention.
 

ehorn

Active Member
Jun 21, 2012
342
52
28
Hello 9jack9,

IMHO, you cannot go wrong with that MB for your intended use.

One possibility for storage controllers/expansion is to get an IBM M1015 controller (or even a few) and/or an Intel expander. You can find the M1015 for under $100.00. Also, there are deals to be found on eBay for Intel 24-port expanders (Intel RES2SV240) for right around $200.00.

If you shopped around, I suspect you could acquire an M1015 and Intel expander (plus some 8087 cables) for right around $300.00, and it would get you to 24 internal drives with solid and capable gear. Multiple M1015's can be had for a cheaper total price, but you would be using up more slots on the MB, which might be a consideration for future expansion (for instance, if you want to add more NICs for bonding/teaming, SMB 3 Multichannel, etc.).

PSU: Don't know much about that PSU so I cannot comment. I have always had reliable performance from Seasonic-branded ATX PSUs. Whichever PSU you choose, if you can afford it, a Gold/Platinum-rated PSU will definitely help to reduce power bills.

Chassis: You can find some good deals on eBay if you shop around. The Norco is priced very competitively for a new 24-bay chassis, but there are deals around if you are interested in alternatives. For example, here is a very nice 16-bay for less than half the price of the Norco:

http://www.ebay.com/itm/Chenbro-RM3...ultDomain_0&hash=item257467bcfe#ht_1893wt_888

In this case, if you went 16 bays, you could simply pick up two M1015's, which would also save money over the controller/expander route.

You might also notice that chassis includes the PSU (and a darn good one at that), but these are "server grade" components and, as such, they will likely be noisier than custom-selected "quiet" components for a living space. Depending on where the box will live, noise is certainly something to consider when building a 24/7 box. But people can and do get creative with modding to reduce noise.

Nevertheless, there are many such deals around which would allow some of that hard-earned money to be redeployed into other components (if that was a chosen route).

Best wishes.

peace,
 

9jack9

Thanks for the detailed reply!

There must be something fundamental I'm missing, but how exactly do the M1015 and Intel expander work together? If I'm looking to just connect 24 drives in a JBOD configuration (and let Windows handle the Storage Spaces component of it all), wouldn't I be able to get by with just the Intel expander?

I'd probably go the NORCO 4224 route, as I have nine 2TB drives from the WHS platform that I would use here, and a number of smaller drives lying around that I would just toss into the pool... then again, using two of the 16-bay units would give me 32 bays for less than the cost of one NORCO unit. Something I will have to put some more thought into.

I do have a rack inside a closet (with a window that opens, how awesome is that?!), so while noise isn't terribly important (I can close the closet door), I would like to at least attempt to minimize it. I know NORCO offers an optional 120mm fan wall, which I would likely take advantage of.

Could you elaborate on the need for the M1015 when using the JBOD setup?

Thanks,
Ogi
 

ehorn

yvw,

A simple way to think of an expander is that that is all it does: it expands the number of devices the controller can "talk" to. Expanders merely facilitate connecting more devices than the controller has ports for. The M1015 is the controller; there is much written about this great host bus adapter on this site.

A JBOD is usually just a chassis with drives, power, and some sort of expander configuration (cables and/or expanders). The storage controller(s) typically resides in a separate chassis (which houses the MB/CPU/memory, etc.: the brains) and is connected to the JBOD using SFF-8088 (or denser variant) cables.



P.S... There is much written about the Norco chassis on this site (and others). Lots of folks use them, and it is simple to get them "quiet".
Nice "server room"... a closet with a window is a great feature. :)

Best wishes.
 

9jack9

Thanks for the diagram. Following up on the expander/controller: I'm going to assume that the expander cannot "expand" the controller that is embedded on the motherboard, hence why the M1015 is used.

I suppose the next question is: using the i3 CPU (a non-Xeon), is there anything I'm giving up that should make me reconsider my CPU choice?

.... here I thought I had the whole chassis thing settled, and now I got thinking to do!
 

ehorn

Yes, that is one (among several) of the benefits of the M1015: device expansion. IIRC, the M1015 supports up to 256 devices.

Some motherboards do support expanders (typically specialized higher-end server boards with very good controllers integrated on board), but the typical Marvell/Intel controllers that you see on most MBs do not support expanders.

IMHO, I think the 2120T would be just fine for your needs (a 24/7, platter-based storage server). Others may have better input regarding CPU choices. In the end it comes down to your intended use.

Have a great evening and have fun considering your next storage platform... :)

Best wishes.
 

9jack9

In the interest of having a good understanding of what I'm about to undertake, allow me to rehash the connectivity as I best understand it.

The M1015 has 2 SFF-8087 connections on it. I would use an SFF-8087 cable to connect one of the ports on the M1015 to the expander card. I would then use standard SATA cables to connect the expander card to the backplane of the NORCO case (should I go with that one). (EDIT: I just saw that the NORCO case takes SFF-8087 connections into the backplane.) I have no doubt this would be obvious once the hardware is in front of me.

Thanks for your help and patience :)
 

cactus

Moderator
Jan 25, 2011
830
75
28
CA
Just to chime in on that chassis. I bought my RM31616 from the same seller. I love the case, but that PSU is older tech, and any <=3U chassis limits your replacement options. The EMACS is about 65% efficient and, as mentioned, LOUD. I have replaced it with a 1U Supermicro 80+ Platinum rated PSU (US$150) that has a PWM fan that makes little noise when idling. The other noise problem is the four 80mm PWM fans, which get less loud, but are still nothing you would want in an office/living room.

For my storage server I currently use a Pentium G630. The G630 has also served desktop duty, including 1080p playback under Linux, so I would think the i3 would suit your needs well.
 

ehorn

... The M1015 has 2 SFF-8087 connections on them. I would then use the SFF-8087 cable to connect one of the ports on the M1015 to the expander card.
That is correct...

I would use then standard SATA cables to connect the expander card to the backplane of the NORCO case (should I go with that one).
The Intel expander utilizes SFF-8087 connectors, but the backplane side depends on the chassis (I don't know the config offhand). Either you would use SFF-8087 (expander) <--> SFF-8087 (backplane), or SFF-8087 (expander) <--> SATA "forward" breakout cable (backplane).

Hope that helps.

Thanks for the additional insight cactus.
 

9jack9

Thanks for the follow-up. That's all the questions I have. I guess now there is no reason not to start bidding for stuff on eBay and hope it is all ready to go once Win Server 2012 is released.
 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
One point worth noting is the bandwidth available.

A single M1015 is PCIe 2.0 x8 = 4GB/s. On the drive side it has 2 links x 4 channels maxing out at 600MB/s per channel = 4.8GB/s.

Now if you hook one link to an expander, you have 5 x 4 channels of drives funnelling through 4 channels to the controller: 20 drives sharing a 2.4GB/s link, or a max of 120MB/s per drive (all drives running, etc.), which is lower than the max throughput of a decent mechanical drive, let alone any SSD.

How do you get around this, assuming you are looking at mechanical hard drives? Attach 2x links to the expander, so you now have 8x 600MB/s (4.8GB/s) shared between 16 drives (the 4 remaining links on the expander going to drives) = 300MB/s max each (all drives running), which will also cover SATA II SSDs, and then add a second M1015 to connect the remaining 8 drives.

If bottlenecking the drives is not an issue then the original plan will connect all the drives. If you want top speed and the ability to run SATA II SSDs at full speed then the option above would be better. Of course the figures will be skewed by the number of drives you actually use :D.
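As a sanity check, the per-drive figures above are just the uplink bandwidth divided by the number of drives behind it. A minimal sketch (Python), assuming the theoretical 600MB/s-per-channel SAS2 figure used above and a 6-connector (24-port) expander:

```python
# Back-of-envelope check of the per-drive numbers above.
# Assumes an M1015 (two SFF-8087 links, 4 channels each, 600 MB/s
# per SAS2 channel, theoretical) and a 24-port / 6-connector expander.

CHANNEL_MBPS = 600  # theoretical SAS2 / SATA III per-channel rate

def per_drive_mbps(uplink_channels: int, drives: int) -> float:
    """Uplink bandwidth shared evenly across all active drives."""
    return uplink_channels * CHANNEL_MBPS / drives

# Single link to the expander: 5 remaining connectors x 4 = 20 drives
# behind one 4-channel (2400 MB/s) uplink.
print(per_drive_mbps(4, 20))   # 120.0

# Dual link: 8 uplink channels, 4 connectors x 4 = 16 drives remaining.
print(per_drive_mbps(8, 16))   # 300.0
```

These are best-case numbers with every drive streaming at once; real-world throughput will be lower.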

Also remember that the motherboard (nice board by the way, usually my first choice for client builds) has two network ports, so with link aggregation (LACP) and a compatible switch you will still be limited to around 250MB/s output from the box on a good day with a following wind, regardless of your disk-to-motherboard bandwidth. 12.5MB/s (100Mbit) will handle streaming MKV files of around 8GB or less, depending on actual encoding, but may start showing frame drops beyond that. A single GbE link will handle multiple uncompressed Blu-ray ISO streams. For home use you are unlikely to need more, and if the main use is streaming then you may not need full speed from your drives anyway.
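The streaming side of that math can be checked the same way: the sustained rate a file needs is just its size over its runtime. A rough sketch (Python; the 8GB file size and 2-hour runtime are illustrative assumptions, not figures from a specific encode):

```python
# Average sustained rate needed to stream a file in real time,
# compared with theoretical network-link capacity.

def stream_rate_mbps(size_gb: float, runtime_min: float) -> float:
    """Average MB/s needed: file size divided by playback time."""
    return size_gb * 1024 / (runtime_min * 60)

# An 8 GB MKV over an assumed 2-hour runtime:
rate = stream_rate_mbps(8, 120)
print(round(rate, 2))          # 1.14 (MB/s average)

# 100 Mbit ~= 12.5 MB/s and GbE ~= 125 MB/s, so on average even
# 100 Mbit has headroom; frame drops come from bitrate peaks.
print(rate < 12.5)             # True
```

The average understates peak bitrate, which is why 100Mbit can still drop frames on high-bitrate scenes even when the mean rate fits comfortably.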

RB
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,519
5,821
113
Also, if you have not ordered parts yet, there are new Ivy Bridge Core i3's out as of this week. If you already did, or got a deal on the Sandy Bridge version, I would not worry about it.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Consider buying three used IBM M1015 SAS/SATA controllers instead of the AOC-SAS2LP-MV8 devices. Pay no more than $80 each for these. While you are on eBay, buy six SFF-8087 to SFF-8087 cables; pay no more than $10 each. You'll then connect each of the six total SFF-8087 ports on the three M1015 cards to an SFF-8087 port on the chassis backplane, and you'll be fully wired for up to 24 drives. Re-flash the IBM cards with LSI firmware for good measure - see earlier responses for details.

The advantage of using the M1015 cards is that they are inexpensive, reliable, and well supported. Using three of them will provide more speed than you'll ever need and can increase reliability depending on how you configure RAID. Cabling will be easy and relatively cheap as well if you go the eBay route.
The downside is that you are left with only one unused PCIe slot, and you'll be using maybe 10 more watts than if you'd chosen a single M1015 card along with an expander card.
 

Patrick

I have been going a lot more towards the M1015 (or other) HBA route lately. I do think it makes a lot of sense.

Other advantages of HBAs:
1. You can mix 3.0Gbps and 6.0Gbps drives without dropping everything to 3.0Gbps.
2. You have a somewhat simpler topology, since you are going System -> HBA -> Drive instead of System -> HBA -> Expander -> Drive.

Other advantages of SAS expanders:
1. Much easier and less expensive to pull off for external JBOD/SAS expander storage.
2. Only one controller to manage.
3. Fewer PCIe slots used.
 

BigWorm

New Member
Sep 3, 2012
28
0
1
Consider buying three used IBM M1015 SAS/SATA controllers instead of the AOC-SAS2LP-MV8 devices. Pay no more than $80 each for these. While you are on eBay, buy six SFF-8087 to SFF-8087 cables - pay no more than $10 each.
There has not been a single sale of one of these at $80 in a while. :) Value has gone up on these guys.
 

RimBlock

A couple of potential issues with the three-card route:

The third card will run at half its max speed, as it is an x8 card in an x4 slot. Possibly not a big deal, but something to be aware of.

The drives will be split over three controllers rather than two. I have not tested whether multiple M1015s can combine drives into a single array, or whether you are limited to arrays using drives on only a single controller. I will probably test this out tonight or tomorrow now that my SAS cables have arrived.

You could get the 36-port Intel expander and then dual-link one M1015 to it, with the expander connecting to the 24 drives. That would give you 200MB/s max per drive and allow you to create a single 24-drive array if you really wanted to go that way.

RB