One Server to rule them all


Cipher

Member
Aug 8, 2014
Ok, I've decided to consolidate my three current systems (Server, Workstation "Server", Synology 8-Bay NAS) into a new single server. This system will be responsible for the following primary tasks:

-Hyper-V Host - for Hyper-V development VMs
-Docker Container Server
-Media Server using Plex and/or JRiver (set up in a Hyper-V VM or container)
-pfSense Server (set up in a Hyper-V VM)

To build this server, I'm looking at the following parts list:

1) Chassis
-I picked up a Supermicro SC846E16-R1200B 24 Bay chassis last year so this will be used to store all the components.

2) CPU
-Based on the current prices of the Intel Xeon E5-2670, I've decided that these Xeons will be the starting point of this build.
-Estimated eBay Price - 2 x $60 = $120

3) Motherboard
-I'm going with a dual-CPU Supermicro motherboard, given their built-in features and support for my chassis. In addition, I really want one with as many high-speed PCI-E 3.0 lanes as possible, so I've "narrowed" it down to the following Supermicro boards, which offer 3 or 4 PCI-E 3.0 x16 slots:
-X9DRi-LN4F+
-X9DR3-LN4F+
-X9DRi-F
-X9DR3-F
-X9DR7-LN4F
-Estimated eBay Price - $400-$450

4) Memory
-My current server with 96GB RAM isn't cutting it, so I'm looking at 192GB for this build: there will be many times I need to run 3-4 development server VMs needing 30-40GB RAM each (the Microsoft CRM/AX/GP/SQL Server VMs for my largest clients require this kind of memory footprint). A quick sizing check follows this list.
-Estimated eBay Price - $400-600

5) Hard Drives
-I currently have the following drives:
-12 x Western Digital 3TB RED SATA drives (Media Storage, File Backups)
-4 x Hitachi 400GB SSD SAS drives (VM storage)
-2 x Intel DC S3700 400GB SSD SATA drives (VM storage)
-1 x SanDisk Extreme Pro 480GB SSD SATA drive (OS - Windows Server 2016)
-I'll be adding 5 more drives later (SATA & SAS) to fill up all 24 drive bays

6) HBAs
-To support all 24 bays in the chassis, I'm looking at picking up three of the Dell H310 cards.
-Estimated eBay Price - 3 x $50 = $150

7) Cables
-Not sure what type I need, but I know I'll need 6 cables to connect the 6 ports on the three HBA cards.
-Estimated eBay Price - ?
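
As a quick sanity check on the 192GB target, the back-of-the-envelope math works out as follows (a minimal sketch in Python; the per-VM figures come from item 4, while the pfSense and host/Docker reserves are my own guesses):

Code:
# Back-of-the-envelope RAM sizing (all figures in GB).
dev_vms = 4            # worst case: 4 concurrent development VMs
ram_per_dev_vm = 40    # CRM/AX/GP/SQL Server stacks at the high end
pfsense_vm = 4         # assumed footprint for the pfSense VM
host_and_docker = 16   # assumed reserve for the Hyper-V host + containers

needed = dev_vms * ram_per_dev_vm + pfsense_vm + host_and_docker
print(f"Estimated peak usage: {needed} GB of 192 GB")  # 180 GB of 192 GB

That leaves only a modest cushion, so 192GB doesn't feel like overkill.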

Given these components I do have a list of questions:

1) All the Supermicro boards support the following memory types: 1866/1600/1333/1066 ECC DDR3 SDRAM. For my usage, would I notice a big difference between these memory speeds? (Rough theoretical numbers are sketched after this list.)
2) Given these memory types, is there currently a sweet spot for pricing on eBay for one of these speeds? What are most people using with their E5-2670 builds?
3) Some of the Supermicro boards I listed include the Intel i350 networking chip and 4 Ethernet ports. Are there any functional/performance differences between using this built-in option versus a separate 4-port i350 network card?
4) Is there anything I should be worried about when connecting the 3 HBA cards/chassis backplane to different types of hard drives (SATA spinners, SATA SSD, SAS SSD)?
5) Any recommendations for cables to connect the HBAs to my chassis backplane?
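
For reference on question 1, the raw theoretical numbers work out as below (a rough sketch in Python; it assumes the E5-2670's quad-channel memory controller and a 64-bit channel, and as far as I know these CPUs officially top out at DDR3-1600, so 1866 DIMMs would likely downclock anyway). What I can't judge is whether my VM workloads would actually notice:

Code:
# Theoretical peak memory bandwidth per socket: MT/s x 8 bytes x 4 channels.
for mts in (1066, 1333, 1600, 1866):
    gb_s = mts * 8 * 4 / 1000
    print(f"DDR3-{mts}: ~{gb_s:.1f} GB/s per socket")
# Prints ~34.1, ~42.7, ~51.2 and ~59.7 GB/s respectively.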
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
1- Go 1866 if you can, though there's usually a price premium at that speed. 1600 and 1333 are usually equally common and affordable. FWIW I use some of all speeds in various servers; all work fine for me...

2- See #1.

3- Shouldn't be.

4- What type of backplane? Some don't like to mix SAS and SATA. I have some interposers I got for mixing on the same expander backplane, but have yet to try them.

5- OEM cables (Adaptec, LSI/Avago, Supermicro) are usually all high-end Amphenol -- I go with those.
 

andrewbedia

Well-Known Member
Jan 11, 2013
RE:#4 @ T_Minus
It sounds like the backplane isn't an expander since he needs three H310s (24 ports), so it shouldn't matter?
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
RE:#4 @ T_Minus
It sounds like the backplane isn't an expander since he needs three H310s (24 ports), so it shouldn't matter?
I'm not assuming that's the case ;) That's why I asked. Some people think that to run X drives they need X HBAs -- but then they look and find out they have an expander backplane.

Also, I'm not sure if you're saying cable quality shouldn't matter because it's not an expander? If so, cable quality always matters. Either way, I still wouldn't personally mix SAS and SATA on the same channel, just to be 'extra' safe... with that many available connections, I think keeping the 4 SAS drives on their own won't be hard to accomplish.
 

Cipher

Member
Aug 8, 2014
4- What type of backplane? Some don't like to mix SAS and SATA. I have some interposers I got for mixing on the same expander backplane, but have yet to try them.
Thanks for the replies, T_Minus. The backplane on this chassis is the BPN-SAS2-846EL1.

I'm not assuming that's the case ;) That's why I asked. Some people think that to run X drives they need X HBAs -- but then they look and find out they have an expander backplane.
It sounds like I'm missing something here, as I thought the backplane expander was meant for linking to other similar chassis.
 
Apr 13, 2016
Isn't that still proscribed?
AFAIK there are issues due to the SATA Tunneling Protocol (STP), which can lead to nasty misbehavior.
This is one that is debated a lot - but you'll always be best served by staying with SAS. There is a guy I know in the storage industry who is fond of saying that SATA is one letter short of Satan. Current expanders support a feature called end device frame buffering (EDFB; LSI calls it DataBolt), which aggregates multiple slower links (think 6Gb/s SATA) to make better use of the 12Gb/s SAS link back to the IOC/RAID controller. (Assuming that the expander and HBA/RAID card are 12Gb/s.) This is obviously a complex mechanism, so I'm sure there is more potential for issues when using SATA behind a 12Gb/s expander, simply due to the greater complexity.
 

markarr

Active Member
Oct 31, 2013
Thanks for the replies, T_Minus. The backplane on this chassis is the BPN-SAS2-846EL1.



It sounds like I'm missing something here, as I thought the backplane expander was meant for linking to other similar chassis.
Since you have a SAS2 expander backplane, you only need one HBA. You can only hook up one unless you have dual-port SAS disks, but that gets into MPIO and HA SAS, which adds layers. The expander will "expand" the 4 channels from the HBA into the 24 on the backplane; it also has the ability to link to another expander for daisy-chaining.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Since you have a SAS2 expander backplane, you only need one HBA. You can only hook up one unless you have dual-port SAS disks, but that gets into MPIO and HA SAS, which adds layers. The expander will "expand" the 4 channels from the HBA into the 24 on the backplane; it also has the ability to link to another expander for daisy-chaining.
Yep.


You could always get 2 though, and keep one on standby (not installed) as a spare :)
 

Cipher

Member
Aug 8, 2014
Thanks, markarr/T_Minus. I thought each port on the 2-port H310 cards supported 4 drives, which would mean 8 drives per card and 24 drives total with 3 cards. However, it sounds like a single card can control more than 8. Besides not fully understanding the role of the expander, I think my confusion may have come from another post on the forum that said the following in regards to these H310 cards:

Buy three of these for $150 total and run an entire 24 disk enclosure with 1:1 port mapping.

Is there an advantage to using 3 cards, as described above, versus only one? If not, then I'll gladly just pick up the single card as it means a bit less money to spend, more free PCI-E slots, and fewer watts being used by the server.
 

markarr

Active Member
Oct 31, 2013
Thanks, markarr/T_Minus. I thought each port on the 2-port H310 cards supported 4 drives, which would mean 8 drives per card and 24 drives total with 3 cards. However, it sounds like a single card can control more than 8. Besides not fully understanding the role of the expander, I think my confusion may have come from another post on the forum that said the following in regards to these H310 cards:

Buy three of these for $150 total and run an entire 24 disk enclosure with 1:1 port mapping.

Is there an advantage to using 3 cards, as described above, versus only one? If not, then I'll gladly just pick up the single card as it means a bit less money to spend, more free PCI-E slots, and fewer watts being used by the server.
With the backplane you have, you don't have a choice: you can't do 1:1 with it, so you can only use one card. If you were running all SSDs you could hit the max throughput of the 4 x 6Gb/s lanes, but you would likely never hit it with spinning disks. So with all SSDs, 1:1 port mapping would be beneficial; otherwise not.
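
To put rough numbers on that (the per-drive throughput figures below are ballpark assumptions, not measurements):

Code:
# Usable bandwidth of one x4 SAS2 (SFF-8087) link into the expander:
# 6 Gb/s per lane with 8b/10b encoding = 600 MB/s per lane, 4 lanes wide.
link = 600 * 4   # ~2400 MB/s shared by everything behind the expander

hdd_seq = 150    # ballpark sequential MB/s per 3TB WD Red (assumption)
ssd_seq = 500    # ballpark sequential MB/s per SATA SSD (assumption)

print(f"24 HDDs flat out: {24 * hdd_seq} MB/s vs {link} MB/s link")  # 3600 vs 2400
print(f"24 SSDs flat out: {24 * ssd_seq} MB/s vs {link} MB/s link")  # 12000 vs 2400

Real workloads rarely push every spindle sequentially at once, so the single link is normally fine for spinners; an all-SSD array would oversubscribe it many times over, which is where 1:1 mapping pays off.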
 

vl1969

Active Member
Feb 5, 2014
Thanks, markarr/T_Minus. I thought each port on the 2-port H310 cards supported 4 drives, which would mean 8 drives per card and 24 drives total with 3 cards. However, it sounds like a single card can control more than 8. Besides not fully understanding the role of the expander, I think my confusion may have come from another post on the forum that said the following in regards to these H310 cards:

Buy three of these for $150 total and run an entire 24 disk enclosure with 1:1 port mapping.

Is there an advantage to using 3 cards, as described above, versus only one? If not, then I'll gladly just pick up the single card as it means a bit less money to spend, more free PCI-E slots, and fewer watts being used by the server.
It is not about advantages, but about the way you connect the drives.
What the quote meant is that you can remove the expander backplane if you have one (or skip it if you don't) and simply route cables from the card to the drives directly, using 1-to-4 breakout cables as I think they're called: 4 drives per port, so 8 drives per card, directly, without a port expander. In fact these cards can each support up to 32 devices in JBOD mode, as per the Dell specs:

Connectors: Two x4 internal mini-SAS SFF-8087
Maximum number of physical devices:
-Non-RAID: 32
-RAID 0: 16 per volume
-RAID 1: 2 per volume plus hot spare
-RAID 5: 16 per volume
-RAID 10: 16 per volume
-RAID 50: 16 per volume

So theoretically, if you had a 32-drive backplane, you could just connect both ports to it and one card would run them all. If you do not have a case with an expander backplane, you can also get the same result by adding a separate SAS expander card and using that - which is exactly what your backplane does.
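
The arithmetic for the two topologies (a trivial sketch; the 32-device limit is from the Dell specs quoted above):

Code:
# Direct attach: each SFF-8087 port fans out to 4 drives via a breakout cable.
cards, ports_per_card, drives_per_port = 3, 2, 4
print("1:1 direct attach:", cards * ports_per_card * drives_per_port, "drives")  # 24

# Expander: one x4 port from a single H310 reaches all 24 bays, and the card
# itself can address up to 32 non-RAID devices, so one card covers the chassis.
print("Single H310 + SAS2-846EL1 expander: 24 drives")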
 

wildchild

Active Member
Feb 4, 2014
Thanks, markarr/T_Minus. I thought each port on the 2-port H310 cards supported 4 drives, which would mean 8 drives per card and 24 drives total with 3 cards. However, it sounds like a single card can control more than 8. Besides not fully understanding the role of the expander, I think my confusion may have come from another post on the forum that said the following in regards to these H310 cards:

Buy three of these for $150 total and run an entire 24 disk enclosure with 1:1 port mapping.

Is there an advantage to using 3 cards, as described above, versus only one? If not, then I'll gladly just pick up the single card as it means a bit less money to spend, more free PCI-E slots, and fewer watts being used by the server.
Bandwidth, but that only becomes interesting when using SSDs.
 

Cipher

Member
Aug 8, 2014
markarr, vl1969, wildchild - thanks for the explanation on the backplane vs 1:1 port mapping with cards. Since my build will be a mix of SATA and SAS drives, and I plan to use the backplane, it looks like I will only need a single HBA card.

HBA
Given their popularity in this forum and many others, and their ability to be flashed to LSI firmware, the cards I was looking at are the Dell H310, Dell H200 and IBM M1015. Since I'm not interested in RAID, only in speed/performance and JBOD support, would any of these cards be recommended over the others, or would I be good with any one of the three?

Cables
From reading the Supermicro SC846 chassis manual, it looks like I'll need an iPass (mini-SAS) to iPass (mini-SAS) cable to link the HBA card and the SAS2-846EL backplane. Supermicro lists the following three parts, and I'm leaning towards the 15-inch version just so I have the most leeway inside the case:

Part #: CBL-0108L-02 Length: 39 cm (15 inches)
Part #: CBL-0109L-02 Length: 22 cm (9 inches)
Part #: CBL-0110L-02 Length: 18 cm (7 inches)

RAM
At this point, rather than sell it, I think I'm going to keep my existing 96GB of Hynix 8GB DDR3-1333 PC3L-10600R RAM and buy another 96GB to get to the 192GB needed for this build. I would have preferred 16GB sticks, and possibly faster RAM, but the price increase doesn't seem worthwhile. As an aside, I can't believe I bought my 96GB of RAM for over $600 two years ago, and I can now get the exact same 96GB order for less than $250.
 

Cipher

Member
Aug 8, 2014
Ok, I've finished ordering my parts for this build and ended up picking up the following:

-Supermicro X9DR3-LN4F+ Motherboard
-2 x Intel Xeon E5-2670
-12 x 8GB (96 GB) PC3-10600 DDR3-1333 MHz RAM
-Dell H310 HBA
-Supermicro iPass (Mini-SAS) to iPass (Mini-SAS) Cable (CBL-0108L-02)
-6 x Supermicro 3.5 to 2.5 caddy adaptor (MCP-220-00043-0N)

The only items left to order are coolers for the CPUs.

I should have everything in 10-14 days at which time I can begin assembly and then move onto software installs/configuration. I'm sure more questions will follow during this period.