Trying to build 15 Drive NAS


Tango2

New Member
Jul 8, 2012
6
0
1
I have a server that is currently running FreeNAS. It is currently running as a VM on an ESXi5 server, but I'm looking for the best way to upgrade this with more storage. I would like this server to be capable of supporting up to 15 disks. I'm considering separating the ESXi and the file server, but not 100% decided yet.

I'm having trouble figuring out the best approach to meeting my needs. The server will be used for file/media storage. I currently have a couple of PERC 5/i cards that I was going to use, but I found problems with them and consumer drives due to TLER. I've recently read a little about SAS controller cards (HBAs) such as the IBM M1015, and it seems this could be an option. I'm thinking that two M1015s would be the optimal setup (I could be completely wrong here, though). If I'm thinking correctly, then the problem I'm having is finding a motherboard that will support two of these cards.

I think I can figure out the rest, but suggestions are welcome.
 

sotech

Member
Jul 13, 2011
305
1
18
Australia
What hardware does your current server have? If you have a motherboard with two free PCI-E 2.0 slots that are x8 physical and at least x4 electrical, you have enough for two M1015 cards with spinning disks attached (a few SSDs may bottleneck on the x4; 8 HDDs won't).
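A quick back-of-the-envelope check of the claim above, using rough, commonly quoted figures (roughly 500 MB/s usable per PCIe 2.0 lane after 8b/10b overhead, and a generous 150 MB/s sequential rate for a 2012-era 7200rpm disk), not measurements:

```python
# Sanity check: will 8 spinning disks saturate a PCI-E 2.0 x4 electrical link?
# Figures are rough, commonly quoted numbers, not measurements.

PCIE2_PER_LANE_MBPS = 500   # approx. usable MB/s per PCIe 2.0 lane (after 8b/10b overhead)
HDD_SEQ_MBPS = 150          # generous sequential rate for a 2012-era 7200rpm HDD

def link_bandwidth(lanes):
    """Approximate usable bandwidth (MB/s) of a PCIe 2.0 link with the given electrical lane count."""
    return lanes * PCIE2_PER_LANE_MBPS

x4_budget = link_bandwidth(4)   # x4 electrical link budget
eight_hdds = 8 * HDD_SEQ_MBPS   # aggregate of 8 HDDs reading sequentially at once

print(x4_budget, eight_hdds, eight_hdds < x4_budget)
```

Even with all 8 disks streaming sequentially at once, the aggregate stays comfortably under what an x4 electrical link can carry.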
 

Tango2

New Member
Jul 8, 2012
6
0
1
I'm currently using the GA-MA770T-UD3P motherboard as my ESXi server with the FreeNAS VM running as a guest. That board only has one PCIe x16 slot, which I currently have the PERC5/i board installed in. Honestly, the PCIe stuff gets a little confusing to me as to what will work and what won't.

For now, and the foreseeable future, I'm sticking with spinning disks.

I may never reach my 15 drive max, but if I upgrade hardware I do want to make sure it will support it in case I do.
 

Tango2

New Member
Jul 8, 2012
6
0
1
I'd like to keep the budget between $300-500 (or less). I already have much of the hardware I would need. I already have drives, NICs, power supply and case, so most of the cost would be in the motherboard/memory/controller card.

I don't need an overpowered machine, so I don't want to go overkill here, but I would like to make sure that I'm future-proof for a couple years.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Personally: an X9SCL+-F or X9SCM-F plus a Xeon E3-1240 V2 and 16-32GB of DDR3 and you would be set, especially adding an IBM M1015. One card gives you 12-drive total capacity. A second card gives you 20 drives.
 

Mike

Member
May 29, 2012
482
16
18
EU
Except that with an 1155 board by Supermicro you need to use ECC UDIMMs, which would consume half of his budget - unless you went with the 4GB sticks, which would mean you are maxed out at 16GB. Food for thought!

Great platform though.
 

ehorn

Active Member
Jun 21, 2012
342
52
28
...Honestly, the PCIe stuff gets a little confusing to me as to what will work and what won't...


Hello Tango2,

PCI-e can be thought of like highway lanes... more lanes, more traffic.

There are x16, x8, x4, and x1 slots available. A 16-lane slot can pack four times more cars onto it than a 4-lane slot.

Now, when relating this to what slots are available on a motherboard, there are two concepts to keep straight: "physical" and "electrical".

Physical means the slot can physically accept a particular size of card. Electrical means how many of the lanes are actually usable.

Let's look at the expansion slots on an AM3 motherboard as an example:

MSI 890GXM-G65
http://www.newegg.com/Product/Product.aspx?Item=N82E16813130269

This board is spec'd with the following PCI-e expansion slots: "PCI Express 2.0 x16 - 2 (x16/x0 or x8/x8)"

Let's examine this.

The first part, "PCI Express 2.0 x16 - 2", tells us that the board has two x16 slots. This describes the physical characteristics of the board: two physical x16-lane slots (easy to confirm by looking at the board).

Now let's look at the second piece of information, "(x16/x0 or x8/x8)". This tells us that if you populate just one of those two slots, it will operate at the full x16 speed; but if you populate both slots, each will operate with only 8 lanes. This describes the electrical characteristics of this motherboard.

So when we examine a hardware solution, we need to know what our needs are and match those needs to the physical/electrical specifications of the board.
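The "(x16/x0 or x8/x8)" behavior above can be sketched as a toy function; this is purely illustrative (not a real API), just showing how electrical lanes depend on how many of the two physical x16 slots are populated:

```python
# Toy model of the MSI board's "(x16/x0 or x8/x8)" spec:
# the electrical lane width each card gets depends on how many
# of the two physical x16 slots are populated.

def electrical_lanes(cards_installed):
    """Lanes per card on a board spec'd '(x16/x0 or x8/x8)'."""
    if cards_installed == 1:
        return [16]      # a lone card runs at the full x16
    if cards_installed == 2:
        return [8, 8]    # both slots drop to x8 electrically
    raise ValueError("this board has only two physical x16 slots")

print(electrical_lanes(1))   # one card: full x16
print(electrical_lanes(2))   # two cards: x8 each
```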

I am not sure if this is useful information or stuff you already know/understand. But I hope it helps add clarity to the PCI-e stuff.

peace,
 

ehorn

Active Member
Jun 21, 2012
342
52
28
...
MSI 890GXM-G65
http://www.newegg.com/Product/Product.aspx?Item=N82E16813130269

This board is spec'd with the following PCI-e expansion slots: "PCI Express 2.0 x16 - 2 (x16/x0 or x8/x8)"
So this board would allow you to reuse your CPU/memory. It has 5 x SATA 6Gb/s onboard. I am not sure of the compatibility of the onboard NIC with hypervisors, though. But it might be a budget-minded choice to just upgrade your motherboard to allow more available PCI-e slots to be used however you need - either more I/O, more network, or more <fill in the blank>.

As an example, you could get this board for $80.00, purchase an M1015 for around another $80.00 (or cheaper), and add some SAS cables, getting you to 13-drive capability at a total budget of around or under $200.00.

I would ask myself whether I wanted to save a little longer to reach the budget required for a very nice platform such as Patrick suggested, or just reuse my stuff and add a bit of expansion, as in this suggestion.

HTH.

peace,
 

Tango2

New Member
Jul 8, 2012
6
0
1
Thanks all for the replies. Really glad I stumbled across this forum.

The socket 1155 mobo seems a little overkill for what I am doing. The reason I think so, is because I'm currently using an AMD Phenom II x4 945 processor in my ESXi server. Normal usage of the 4 Windows servers puts the processor at about 200MHz and a hair under 2% usage. With the FreeNAS server, it bumps it up a little, but not much. Correct me if I'm wrong in my logic here.

If I can re-utilize some of my hardware from my current build, I'm considering trying to build a really cheap server, possibly with an integrated CPU, that is capable of about 16GB of RAM to run my ESXi on. The trick here is going to be compatibility. If I can find a small form factor case/mobo combo that works with ESXi and can support 16GB of RAM, it should do more than what I need - especially if I move the FreeNAS box to a physical machine. That will free up 4GB of allocated RAM for other machines.

The MSI 890GXM-G65 option is sounding really good right now, as I would really be building two machines at once here. I plan to do some more research, and would love to hear more recommendations on how best to tackle this project. I've had numerous little problems with the current build and would like something a little more reliable. I think getting rid of the RAID cards would help with that, as would moving the file server over to its own hardware and just hosting my VMs on it via iSCSI.

BTW, the explanation of the PCIe stuff did help. The main problem I was having was the physical vs electrical and trying to figure out what was what on the mobo specs. Your breakdown of that helps a lot. Thanks!

Edit: One possible issue with this would be adding NICs for increased throughput. I guess I could get a dual or quad gigabit card for the one additional PCIe slot. Those shoot up in price pretty quickly, but I guess if I needed it, the option would be there.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Except that with an 1155 board by Supermicro you need to use ECC Udimms which would consume half of his budget, unless you went with the 4gb sticks which would mean you are maxed out at 16. Food for thought!

Great platform though.
If I am going to recommend something, I am of course going to recommend ECC. I am a fan of lowering risk of errors in storage systems.
 

Tango2

New Member
Jul 8, 2012
6
0
1
I can't agree more, and in any production environment I wouldn't think twice about it. I'm willing to take a risk here for the cheaper investment, BUT I'm fully aware of the issues that could come up because of it. Although I'm not ruling out using a server-class board with ECC RAM just yet, the cost benefits seem to outweigh the risks of going the other route.
 

neocronomican

New Member
Mar 10, 2011
6
0
1
What hardware does your current server have? If you have a motherboard which has two free PCI-E 2.0 x8 physical and x4 electrical slots you have enough for two M1015 cards that have spinning disks attached (a few SSDs may bottleneck on the x4, 8 HDDs won't).
Do you mean that the M1015 can work in an x4 electrical slot?
 

cactus

Moderator
Jan 25, 2011
830
75
28
CA
do you mean that M1015 can work in 4x electrical?
Any PCIe card will work in any PCIe slot as long as it physically fits. I had an x4 Intel dual NIC running in an x1 port I cut the back out of. You are just limited to the throughput given by the available electrical lanes; an x8 card in an x4 electrical slot will give you the throughput of PCIe x4.
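In other words, the link trains at the narrower of the card's width and the slot's electrical width. A rough sketch using approximate PCIe 2.0 per-lane figures (~500 MB/s usable per lane):

```python
# A card negotiates down to the slot's electrical width, so usable
# throughput is set by the smaller of the two. Approximate PCIe 2.0 numbers.

PCIE2_PER_LANE_MBPS = 500   # approx. usable MB/s per PCIe 2.0 lane

def negotiated_bandwidth(card_lanes, slot_electrical_lanes):
    """Approximate usable MB/s once the link trains at the narrower width."""
    return min(card_lanes, slot_electrical_lanes) * PCIE2_PER_LANE_MBPS

print(negotiated_bandwidth(8, 4))   # x8 M1015 in an x4 electrical slot
print(negotiated_bandwidth(4, 1))   # x4 NIC in a cut-open x1 slot
```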
 

Tango2

New Member
Jul 8, 2012
6
0
1
My current motherboard only has one PCIe x16 physical slot; the others are x1 slots. I could install one M1015, but it would have to go into the same slot that I'm using for my PERC board now. That could possibly work until I upgrade, but I'd like the option to have two M1015s (or similar) installed, allowing for up to 15 drives.