Supermicro SuperStorage SSG-5029P-E1CTR12L 2U Storage Server Review


Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Um, am I missing something? This is a storage node and the performance is being tested by seeing how fast it can compile a kernel? I was expecting IOPS numbers etc., maybe a fio run or Iometer.
We were using non-SM-approved drives. But it is disk and 10Gbase-T, so not exactly the configuration for performance storage in 2017.
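For reference, the sort of run being asked about would look roughly like this (purely illustrative - the device path, queue depth, and job count are placeholders, not what we tested with):

fio --name=randread --filename=/dev/sdX --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

That reports 4K random read IOPS; swapping --rw to randwrite or randrw covers the other cases.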
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
The MB/CPU are a bit overkill for a 12-spinny-drive server. Seems like a C3k-based board would be more apropos here.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
The MB/CPU are a bit overkill for a 12-spinny-drive server. Seems like a C3k-based board would be more apropos here.
I think the other side is that you can attach disk shelves plus 3x NVMe without touching the expansion slots. Supermicro requested the 6132 be used, so we augmented with the Silver and Bronze. The Bronze does not use that much power and is in line with the low/midrange C3000 in terms of performance and pricing.

Also, if you want to use SAS, the C3000 is disadvantaged.
 

gigatexal

I'm here to learn
Nov 25, 2012
2,913
607
113
Portland, Oregon
alexandarnarayan.com
We were using non-SM-approved drives. But it is disk and 10Gbase-T, so not exactly the configuration for performance storage in 2017.
Yeah, this is nitpicky for sure, but if that was the case then why include benchmarks that effectively test the various chips tried? I just don't see how it applies to the Supermicro hardware here.

In any case, it received high marks in the first paragraph, as it would have been the foundation of the ZFS back end at STH had it been around.
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
@Patrick Any chance you could take a picture of the "2x 2.5″ U.2 hot swap NVMe rear bays" from the inside, and also the cabling?
This is IMHO one of the more interesting parts, but I can't find anything useful for the SKU (MCP-220-82619-0N).
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
@Patrick Any chance you could take a picture of the "2x 2.5″ U.2 hot swap NVMe rear bays" from the inside, and also the cabling?
This is IMHO one of the more interesting parts, but I can't find anything useful for the SKU (MCP-220-82619-0N).
Let me see if I can take a quick snapshot or if I have one. It is in the data center right now.
 

fmatthew5876

Member
Mar 20, 2017
80
18
8
38
Looks like it would have been perfect for my needs, and only 2U!

Patrick, can you tell us anything about how loud this configuration is?
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
Thanks.
No hurry, I'm just interested, as it would be a nice upgrade option for my 826B chassis (together with an AOC).
Assuming both are available at all ...
 

K D

Well-Known Member
Dec 24, 2016
1,439
320
83
30041
Unfortunately, I cannot. It has been in the data center from day 1.

Actually reminded me of @K D and Scalable Xeon SAS3 Storage Server
Conceptually the same. The X11SPH-nTPF board is still not widely available, and since I was using a 216 chassis I went with an X11SPL-F board, a non-expander backplane, and 3 HBAs.

I am liking the rear NVMe drive cage. That could be an option for when the larger 2.5" NVMe drives actually become affordable for me.
 

XeonLab

Member
Aug 14, 2016
40
13
8
In general, is there any performance difference with Skylake-SP when running 6 DIMMs (6CH x 1DPC) vs. 8 DIMMs (4CH x 1DPC + 2CH x 2DPC)?

@Patrick
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Oops, missed that @XeonLab - generally this is not a feature we have used too much. You can actually set the system to run at DDR4-2666 with all eight DIMMs populated. At that point, things like memory ranks come into play, but you are talking relatively small differences offset by gains in RAM capacity.
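Rough back-of-the-envelope, assuming all six channels stay populated and the DIMMs run at DDR4-2666 either way: 6 channels x 2666 MT/s x 8 bytes ≈ 128 GB/s theoretical peak per socket in both cases, so the 6 vs. 8 DIMM question is mostly about rank interleaving and channel balance rather than raw channel bandwidth.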

@K D great question. No idea. Maybe part number MCP-220-82619-0N from their page, but I think you may need more than just that.
 

bitrot

Member
Aug 7, 2017
95
25
8
I have a Supermicro 826 myself (SC826E16-R1200LPB being the exact model) which I have heavily modified: different backplane, different PSU, different fans, and a 2.5" SAS/SATA drive cage added at the back (MCP-220-82609-0N) plus the respective rear window (MCP-240-82608-0N).

So when I read this article, I, like many others it seems, was intrigued by the NVMe drive cage at the back of the chassis, thinking about replacing my SAS/SATA one with it, as I actually have two Intel DC P3700 1.6TB drives installed in my server. But besides the fact that it seems impossible to find over here, I thought about one aspect that tends to be overlooked when it comes to enterprise NVMe SSDs: they actually draw quite a bit of power and need proper cooling.

So unless you have truly excellent cooling (and hence very high noise levels with just 3x 80mm fans installed in the 826), putting two drives that consume more than 20W each when writing data into a small drive cage at the back of the chassis, which hardly gets any airflow unless your chassis fans run at Ferrari-like RPMs (definitely not the case for me), is probably not such a good idea.
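A quick way to sanity-check whether the cooling keeps up is to watch the drives' composite temperature while they are under sustained write load; with nvme-cli installed, something along these lines does the job (device name is just an example):

watch -n 5 nvme smart-log /dev/nvme0

The smart-log output includes the temperature plus the thermal-warning counters.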

I have an ATX board installed, the same Supermicro X11SPH-nCTF, and hence have quite a bit of space on the side of the board right next to the fans. As I have no intention of replacing the brand-new board with an E-ATX one anytime soon, I used some double-sided tape to 'install' my DC P3700s there, connected to the OCuLink ports of the board.

sth_nvme_ssds.jpg

I know, I know, you're thinking 'how ghetto'! But it actually works fine: both drives are properly cooled despite my slowed-down fans, and I don't really need them to be truly 'hot-plug' anyway, as they are used as redundant cache drives. Really love the Intel DC P3700s btw, ridiculously fast drives.
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
I also own 2x 826BE and am thinking of upgrading another three 826 chassis to 'BE' with the rear window and 2.5" drive cage.

For my nodes, noise is not really a concern, so I would prefer an NVMe-capable drive cage when upgrading, to be future-proof - if only there was any information available on how the cabling for NVMe is done, whether any additional cooling is necessary, or any other caveats.

BTW, is anyone running SAS SSDs in the non-NVMe rear cage?
 

bitrot

Member
Aug 7, 2017
95
25
8
Not me; I have two SATA SSDs installed there that actually run a bit hot for my taste. But they only get rather lightweight, more or less read-only use.
 

K D

Well-Known Member
Dec 24, 2016
1,439
320
83
30041
Don't the Intel SSDs have a max operating temperature of 70°C? The two S3700s I have in mine average around 22°C. They are the VM host datastore and there is always some activity on them. This is in a 216 chassis with the three stock fans replaced with some Noctuas. There are no drives installed in the front bays.