Scalable Xeon SAS3 Storage Server


K D

Well-Known Member

Chassis - CSE-216BE2C-R920LPB with BPN-SAS3-216A BackPlane and Rear 2 Bay Drives add on
CPU - 2.1GHz Xeon Silver 4110 8-Core
Motherboard - SuperMicro X11SPL-F
Cooler - Dynatron 2U Active cooler with Noctua 60mm fan
RAM - 128GB 2133MHz DDR4 ECC Reg

I have been wanting to build a storage server with the CSE-216 chassis for a while, and with the new Scalable Xeons now available, I decided to build a system just for VM storage.

The chassis was a hot deal that I scored recently. It came with a SAS3-EL2 backplane and 2 920SQ PSUs. I decided to replace the SAS3 expander backplane with a direct-attach backplane to get the maximum possible throughput, based on advice in the forum. Coincidentally, I found one on eBay that was within my budget.

I had 3 SAS2 HBAs that I planned to use, but since I already had 4 HUSMM1680ASS200 drives, I decided to go with SAS3 HBAs and acquired a lot of 3 LSI 9341-8i cards.

I had ordered a Dynatron 2U active cooler that was supposed to be delivered last week but got delayed. I wanted to work on this build this weekend, so I ordered a Supermicro cooler from WiredZone. Coincidentally, both reached me on Friday. I decided to go with the Dynatron for this build; the Supermicro will go up for sale or be kept in reserve for a future build.

I do not have any SFF-8643 cables on hand and have not ordered them yet; I wanted to build everything first and then order cables of the right length. Also, the PDB has only six 4-pin Molex power connectors and does not have one free to power the rear 2-bay drive cage, so I need a Berg-to-Molex cable to pull power from one of the floppy/CD-ROM drive connectors.

After several days of waiting for the parts, I finally put everything together and started burning in the system today.

Once I get the cables and complete the build in the next few days, I need to decide on the pool layouts. I have in hand:
Intel DC S3700 400GB - 16
HUSMM1680ASS200 800GB - 4 (4 more incoming)
Intel P3700 AIC - 4

I just finished taking measurements for the cables and racked it. I am running mprime today to stress the CPU, and I'll let it idle for the rest of the week until I get the cables.

Here are some pictures of the build as it is now.

[Build photos 1.jpg - 5.jpg attached]

Updates

Update 1 - Initial Power Consumption and Temperatures

Update 2 - Initial Benchmarks with 10 DC S3700 SSDs
 

T_Minus

Build. Break. Fix. Repeat

I'm concerned you didn't factor in actually allowing high-performance access to this from your VMs.
 

K D

Well-Known Member

I'm concerned you didn't factor in actually allowing high-performance access to this from your VMs.
Quite Likely :D

I forgot to mention that I also have a Mellanox ConnectX-3 dual-port NIC to provide connectivity.
 

K D

Well-Known Member

Update 1 - Initial Power Consumption and Temperatures

Midplane fans set to a constant 10% duty cycle (~2000 rpm)
CPU fan set to a constant 50% duty cycle (~1500 rpm)

After running mprime for 2 hours:
Max Power Consumption - 132W
Max CPU Temp - 52°C

Idle:
Avg Power Consumption - 86W
Avg CPU Temp - 29°C

With 3 SAS3 HBAs, 1 Mellanox CX3, 1 Samsung PM961, 3 Nidec UltraFlo 0.6A midplane fans and 1 Noctua CPU fan, I think the system is extremely efficient in its power utilization. Will see how it changes once I add the drives.

This is an 85W TDP CPU. The Dynatron cooler with a Noctua 60mm PWM fan is able to keep the temps at ~50°C at full load for 2 hours straight, and that is while forcing the fan to run at 50% duty cycle. I think this setup should be able to handle higher-TDP CPUs without issues. The stock fan on the Dynatron cooler is reasonably quiet and moves a ton of air; subjectively I'll say at least 3-4 times more than the Noctua at full speed, and the noise is bearable even at full speed. Currently the midplane Nidecs drown out any other noise even at a very low speed. Eventually I plan to replace them.
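
For reference, here is a minimal sketch of one way to pin fan duty cycles like the above from the OS. It assumes a Supermicro X11 board where the usual ipmitool raw fan commands work (0x30 0x45 for fan mode, 0x30 0x70 0x66 for per-zone duty); the zone mapping in the comments is an assumption and worth verifying on your own board.

Code:
import subprocess

def ipmi_raw(*args):
    # Thin wrapper around "ipmitool raw ..." against the local BMC.
    subprocess.run(["ipmitool", "raw", *args], check=True)

# Fan mode "Full" so the BMC stops overriding manual duty cycles.
ipmi_raw("0x30", "0x45", "0x01", "0x01")
# Zone 0 (assumed CPU fan header) -> 50% duty cycle
ipmi_raw("0x30", "0x70", "0x66", "0x01", "0x00", "50")
# Zone 1 (assumed peripheral/midplane fans) -> 10% duty cycle
ipmi_raw("0x30", "0x70", "0x66", "0x01", "0x01", "10")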
 

gigatexal

I'm here to learn

I've never really done benchmarks before. Doing some reading to figure it out. Any pointers?
StorageReview has a great writeup on their use of fio. The 70/30 benchmark is a key one for me, as it is a pretty good estimate of what database performance would be in SQL Server. Just, if you could, use an 8K option, as the pages in SQL Server are 8KB in size.
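
In case it saves someone a search, below is a minimal sketch of that 70/30 mix at 8K driven through fio from Python. The target path, size, runtime and job count are placeholders, and the psync engine is just the portable choice.

Code:
import subprocess

# 70% read / 30% write random mix at 8K, roughly matching SQL Server's page size.
cmd = [
    "fio",
    "--name=randrw-70-30-8k",
    "--filename=/tank/fio-test",   # placeholder path on the pool under test
    "--size=10G",
    "--rw=randrw", "--rwmixread=70",
    "--bs=8k",
    "--ioengine=psync",            # portable; libaio on Linux if preferred
    "--direct=1",                  # drop this if the ZFS dataset rejects O_DIRECT
    "--numjobs=8",
    "--runtime=120", "--time_based",
    "--group_reporting",
]
subprocess.run(cmd, check=True)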
 

T_Minus

Build. Break. Fix. Repeat

Quite Likely :D

I forgot to mention that I also have a Mellanox ConnectX-3 dual-port NIC to provide connectivity.
Which will let you use 1 port to almost capacity in your leftover x4 slot, leaving ~50% read capacity available assuming a RAID10/pool-of-mirrors setup and going by 24x S3700 400GB only... 12 x 400MB/s = 4.8GB/s, which doesn't account for a number of those being much higher-performing SAS3 drives, and doesn't include any NVMe performance in there at all.

Technically you are not going to have enough PCIe slots to utilize the full performance of the setup; obviously you can utilize full capacity though ;)

With that said it will be fun to see your benchmarks on server vs. over network :) :)

Looking forward to your sharing/results.
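
For anyone wanting the arithmetic behind that estimate, here is a quick sketch. The ~400MB/s per-drive figure and treating one ConnectX-3 port as 40GbE are assumptions, not measurements.

Code:
# Back-of-the-envelope: 24x S3700 in a pool of mirrors vs one 40GbE port.
# Assumptions: ~400 MB/s per drive, writes land on both sides of each mirror,
# reads can be served from all drives, port treated as 40 Gb/s line rate.
DRIVES = 24
PER_DRIVE_MB_S = 400
PORT_GBIT = 40

mirror_vdevs = DRIVES // 2
write_gbit = mirror_vdevs * PER_DRIVE_MB_S * 8 / 1000   # 12 * 400 MB/s -> ~38 Gb/s
read_gbit = DRIVES * PER_DRIVE_MB_S * 8 / 1000          # ~77 Gb/s

print(f"write ~{write_gbit:.1f} Gb/s, read ~{read_gbit:.1f} Gb/s, one port = {PORT_GBIT} Gb/s")
print(f"read capacity the single port cannot carry: ~{1 - PORT_GBIT / read_gbit:.0%}")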
 

K D

Well-Known Member

Which will let you use 1 port to almost capacity in your leftover x4 slot
I'm not sure I follow. The board is an X11SPL-F, which has 6 PCIe x8 slots from the CPU and 1 PCIe x4 from the PCH. I am using 3 for the HBAs and 1 for the CX3. That still leaves 2 x8 and 1 x4 free.

What am I missing here?
 

niekbergboer

Active Member

What OS are you running? I'm asking since I've been eyeing these Supermicro X11 SP mainboards, but I see that they have an AST2500 IPMI system, which isn't supported until Linux 4.13, whereas I'd like to use it on Proxmox VE 5.0 (which runs 4.10).

Any experience there?
 

T_Minus

Build. Break. Fix. Repeat

I'm not sure I follow. The board is an X11SPL-F, which has 6 PCIe x8 slots from the CPU and 1 PCIe x4 from the PCH. I am using 3 for the HBAs and 1 for the CX3. That still leaves 2 x8 and 1 x4 free.

What am I missing here?
I'm simply going by what you said you're using...

================
Once I get the cables and complete the build in the next few days, I need to decide on the pool layouts. I have in hand:
Intel DC S3700 400GB - 16
HUSMM1680ASS200 800GB - 4 (4 more incoming)
Intel P3700 AIC - 4
==================

How are you going to use the AIC P3700 NVMe drives without using PCIe slots?

24x SSD on 3x PCIe slots
4x P3700 AICs on 4x PCIe slots

= 100% full

Based on: X11SPL-F | Motherboards | Products - Super Micro Computer, Inc.
7 Total Slots available.


Thus my concern about having no room for networking, and even if going with 3x P3700 you're stuck with the free x4 for limited networking... ?
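
Spelling the slot math out, as a quick sketch (slot count taken from the X11SPL-F spec page linked above):

Code:
# X11SPL-F: 6x PCIe 3.0 x8 (CPU) + 1x PCIe 3.0 x4 (PCH) = 7 slots total.
TOTAL_SLOTS = 7
cards = {"SAS3 HBA": 3, "P3700 AIC": 4, "ConnectX-3 NIC": 1}

used = sum(cards.values())
print(f"{used} cards requested for {TOTAL_SLOTS} slots ->",
      "does not fit" if used > TOTAL_SLOTS else "fits")
# 8 cards for 7 slots -> does not fit.
# Dropping one P3700 frees exactly one slot, and the NIC then ends up in the
# x4 (PCH) slot, which is what caps the network side.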
 

K D

Well-Known Member

Got it. I was just stating that these are the drives I have at hand and need to figure out a way to use them. I may not use all the NVMe drives in this build. As you have rightly pointed out, I have only 3 slots left and will probably just use 2.
 

K D

Well-Known Member

What OS are you running? I'm asking since I've been eyeing these Supermicro X11 SP mainboards, but I see that they have an AST2500 IPMI system, which isn't supported until Linux 4.13, whereas I'd like to use it on Proxmox VE 5.0 (which runs 4.10).

Any experience there?
I don't have any experience with Proxmox. But isn't the whole point of IPMI out of band access?
 

Evan

Well-Known Member

I don't have any experience with Proxmox. But isn't the whole point of IPMI out of band access?
But the OS can change settings and read info through it.
I would expect that it not working is not a huge problem, actually.
 

T_Minus

Build. Break. Fix. Repeat

Got it. I was just stating that these are the drives I have at hand and need to figure out a way to use them. I may not use all the NVMe drives in this build. As you have rightly pointed out, I have only 3 slots left and will probably just use 2.
Gotcha, I thought you meant those were all the drives you had for this system and had not figured out how to configure them yet... i.e. RAIDZ2/3, pool of mirrors, mirrored, etc...

Are you saying you may try the P3700s as SLOG or L2ARC too, or will all drives be allocated to capacity?

I would put the HUSMM1680ASS200 800GB x8 in their own pool myself; the question is whether you put 4 on each of 2 HBAs and call it good, or split them 3 - 2 - 3. That may be an interesting test... a quick one too in an SC216: just swap the drive positions, re-create the pool, and test again :) I think DBA said before that 6 SAS2 SSDs would max out a SAS2 HBA, but I'm not sure how SAS3 HBAs and SAS3 SSDs fare these days... May just have to add more work to my testing before I sell some stuff, ha ha!
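
Some napkin math on where a single HBA would top out, purely as a sketch; the ~1.1GB/s per-HUSMM figure and the usable SAS/PCIe bandwidth numbers are assumptions:

Code:
# Rough check on when one SAS3 HBA becomes the bottleneck for HUSMM1680s.
# Assumptions: ~1.1 GB/s sequential per drive, 8x 12Gb/s SAS lanes per HBA
# (~9.6 GB/s raw), PCIe 3.0 x8 ~7.9 GB/s usable - take the lower as the cap.
PER_DRIVE_GB_S = 1.1
HBA_CAP_GB_S = min(9.6, 7.9)

for drives_on_hba in (2, 3, 4):            # the 3-2-3 vs 4-4 splits
    demand = drives_on_hba * PER_DRIVE_GB_S
    verdict = "fits" if demand <= HBA_CAP_GB_S else "saturates"
    print(f"{drives_on_hba} drives on one HBA: ~{demand:.1f} GB/s ({verdict})")
# Even 4 drives (~4.4 GB/s) stay well under one HBA's cap, so 4-4 vs 3-2-3
# should matter more for mixed/queued IO than for raw sequential bandwidth.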
 

K D

Well-Known Member

I will try out those options and post the results. I ordered cables from the Supermicro eStore; hopefully they will be delivered by Friday.

Hit a snag and I won't be getting the 4 HUSMM drives now :(. Have to keep looking out for good deals.
 

K D

Well-Known Member

Update 2 - Preliminary Benchmarks
I finally installed the cables and 10 of the DC S3700 400GB drives. Installed ESXi (free) and set up a napp-it VM. I still have some work to do. For now, I have just added a single 16GB RAM stick; I need to move things around and get the rest of the RAM installed. The HBAs are at P10 firmware; I will need to update them to P14, which is the latest.
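
As an aside, a minimal sketch of checking the firmware phase before flashing is below. It assumes the 9341s are running IT-mode (9300-class) firmware, where sas3flash is the right tool; cards still on MegaRAID firmware would be handled with storcli instead.

Code:
import subprocess

# List the SAS3 controllers and their firmware versions before flashing.
# Assumes IT-mode firmware; the phase shows in the FW version field
# (14.00.xx.xx corresponds to P14).
result = subprocess.run(["sas3flash", "-listall"],
                        capture_output=True, text=True, check=True)
print(result.stdout)

# The flash itself (image name below is a placeholder) is per controller, e.g.:
#   sas3flash -c 0 -f SAS9300_8i_IT.bin
# repeated with -c 1 and -c 2 for the other two HBAs.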

Here are the preliminary numbers as-is with 10 S3700 SATA SSDs.

Screenshots attached: pool config, DD bench (iterations 1 and 2), IOzone, Bonnie++, and FileBench results.

The SSDs are spread across the 3 HBAs (4-3-3).

Any feedback will be much appreciated.
 

Deci

Active Member

Try testing with 2x RAIDZ2 instead of mirrors. All mirrors will give the best IOPS but are the least space efficient; typically with an SSD pool you can take the performance hit of RAIDZ2 for better space efficiency without too great a penalty to IOPS.
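
To put rough numbers on that trade-off, here is a small sketch for the 10 S3700s currently in the test pool (raw capacity only, ignoring ZFS overhead and the usual free-space headroom):

Code:
# Usable capacity vs vdev count (a rough IOPS proxy) for the 10x 400GB S3700s.
SIZE_GB = 400

layouts = {
    "5x 2-way mirrors":   {"vdevs": 5, "data_drives_per_vdev": 1},
    "2x RAIDZ2 (5-wide)": {"vdevs": 2, "data_drives_per_vdev": 3},  # 5 drives, 2 parity
}

for name, lay in layouts.items():
    usable_gb = lay["vdevs"] * lay["data_drives_per_vdev"] * SIZE_GB
    print(f"{name}: ~{usable_gb} GB raw usable, {lay['vdevs']} vdevs of random IOPS")
# Mirrors: ~2000 GB over 5 vdevs; 2x RAIDZ2: ~2400 GB over 2 vdevs.
# More vdevs generally means more random IOPS; RAIDZ2 trades some of that
# (plus parity-write overhead) for the extra space.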