BPN-SAS3-826EL1 (SAS3 expander) questions (really a SAS3-826EL1-N4, with 4 NVMe bays)

james23

Active Member
So I just bought a 2U X10DRU-i+ system off eBay (actually a SuperServer 6028U-TNRT+) as one of the systems for my new home ESXi lab, and I found out it has an SM BPN-SAS3-826EL1-N4 backplane (SAS3 expander on 8 bays, NVMe on 4 bays = 12x 3.5" bays total).

SM BP manual is here: https://www.supermicro.com/manuals/other/BPN-SAS3-826EL1-N4.pdf

I swore off any expander backplanes after a few bad test rigs many years ago, and since then any SM server I have built or managed has used direct-attach BPs only (i.e. xxxA or xxxTQ backplanes).
My questions:

1 - Can anyone provide some info/experiences with SAS3 SuperMicro BP expanders? (i.e. which HBA/RAID card are you using, and how do you have the setup configured?)
(I'm a bit confused as to how the manual shows 2x SAS HD cables going to a single HBA/RAID card instead of just 1x.)

2 - Any experiences with this BP, or with others that have some slots NVMe and some behind an expander?


(I'm not new to any of this, but there is NO info on the web about this expander beyond the SM manual... I'm also new to using SM BP expanders specifically, and a bit new to non-PCIe-slot NVMe drives.)

I'm planning to use SSDs only on the BP, and would like RAID 1.
I'll be running ESXi 6.5u2, but this is all as a lab setup.

thanks!

EDIT (3 days after OP): I figured out from its IPMI that the system is actually a 6028U-E1CNRT+1 -SG007.
 

PigLover

Moderator
The 2nd connector would normally be used to daisy-chain to another expander. Otherwise it serves no real purpose - you don't need an expander for 8 drives if you bring 8 lanes to the backplane :).

If you want a direct-attach backplane w/out the NVMe I have a BPN-SAS-826A (3x 4-lane 8087 connectors for 12 drives) :). Kidding, of course - the one you have is much better.
 

james23

Active Member
PigLover said:
The 2nd connector would normally be used to daisy-chain to another expander. Otherwise it serves no real purpose - you don't need an expander for 8 drives if you bring 8 lanes to the backplane.

Thanks. I HATE to tell you all what I paid for this system, because it was a steal at the eBay list price (and after messaging some questions to the seller, he offered to take $150 off!)... OK, you twisted my arm: $550 shipped.

This is the eBay item (sold, it was just 1x):
SUPERMICRO SYS-6028U-TRTP+ X10DRU-i+ 2x LGA2011v3 E5-2600v3/v4 2U Server CTO | eBay

(For those reading this later, it's essentially this system, barebones:

Supermicro | Products | SuperServers | 2U | 6028U-TNRT+)

I do have one last question: are you sure I can uplink via just 1x SAS HD cable to this 8-bay backplane (and not 2x SAS HDs)? I'm hoping to use an Adaptec 8405 RAID card to access the 8x bays, and it only has 1x SAS HD port. (I will not be uplinking to other expanders nor anything beyond the link to the 8405.) Otherwise I'd need to get an 8805, which has 2x SAS HD ports.

The only reason I ask is that the only info I can find on this BP is the manual (see below), and the image/config there shows 2x SAS HD cables going to the HBA/RAID card (I know the image's context is expander uplinking, but I want to be sure):

[Image from the manual: two SAS HD cables running from the HBA/RAID card to the expander backplane]
 

PigLover

Moderator
It all turns on whether or not it really is an expander backplane. The manual you linked in your original post is for the EL1 model, which has an expander and yes - you can certainly connect a single 4-lane link and use all 8 drives.

The SuperMicro link you posted in your latest post has an "A" series backplane, which does not have an expander, and you'd have to connect both 4-lane SAS3 connectors to use all 8 drives.

The answer to your question comes from confirming the EXACT backplane model:

BPN-SAS3-826EL1-N4: yes, you can use a single cable.
BPN-SAS3-826A-N4: would require both.
 

james23

Active Member
PigLover said:
The answer to your question comes from confirming the EXACT backplane model: BPN-SAS3-826EL1-N4 - yes, you can use a single cable; BPN-SAS3-826A-N4 would require both.
Sorry for the confusion; I definitely do have the expander (EL1) version, not the direct-attach A:

mine: BPN-SAS3-826EL1-N4

(I put the system link in there only as a reference to the good price I got on a similarly spec'd system. I shouldn't have added that link, as it only added confusion.)

It's just such a weird backplane configuration (my EL1 + 4x NVMe) - why even offer an expander model when we are only dealing with 8x bays? (And most likely SSD disks, since the remaining 4x bays are NVMe, so clearly this system is for high disk IO.) There should only ever have existed a BPN-SAS3-826A-N4, not an EL1 version.

tks
 

PigLover

Moderator
Agree. It's weird. And unless there is something REALLY strange in place, pretty much useless. But I also know SuperMicro responds to customer specs, and sometimes a large buyer just knee-jerks requirements into an RFP (as in, somebody in their sourcing chain spec'd an expander on a large order, so SM delivered one).
 

james23

Active Member
Funny, that was my exact comment to the seller! ("Boy, this is one custom system SM did for some big client.") It took a few days of messages to the seller asking for more pictures (he only had 4 not-so-good pictures on the listing). I was even examining his eBay pictures at full resolution in Photoshop to try to figure out WHAT system this is! (Everything matched up to the SM system I list above, but in my CSI-style picture analysis something was still off with the backplane area and the cables going to his backplane.) Then he replied back: the PCB reads "BPN-SAS3-826EL1-N4"... EL1-N4?? Never heard of that at 2U... and there is little to no info on that BP anywhere.
Thanks for the help. I will post here or in a new thread with some info once I have this system running.

EDIT:
For others curious (as it took me quite some time and note keeping to gather this), here are the links I was using to try to figure out exactly which system this was (a common task when buying full systems on eBay):

Lists of all recent 2U SM systems:

Ultra Servers | Super Micro Computer, Inc.

2U SuperServer® Solutions | SuperServer | Products - Super Micro Computer, Inc.

And then SM's system naming decoder/conventions:

SuperServer with 3.5" HDD | Product Naming Conventions | Super Micro Computer, Inc.
 

_alex

Active Member
PigLover said:
Agree. It's weird. And unless there is something REALLY strange in place, pretty much useless.
How weird is it to link 2x 4 lanes to the backplane, and then daisy-chain from the expander to an external JBOD with a single x8 HBA?
Given fast flash in the NVMe bays and spinners in the SAS bays, this plus one or more 846/847 JBODs could serve a massive amount of storage quite nicely.
 

james23

Active Member
Interesting, that is a good point and use case. In that scenario, though, I wonder at which point you run into BW limits if you fill the 8x local bays with SAS/SATA SSDs and then, say, have an uplinked 12-bay enclosure (with either more SSDs or HDDs) - so for example, let's assume 16x SSDs and 4x HDDs.

So with SFF-8643 you get 4 lanes of 12 Gbit = 48 Gbit (6,000 MB/s) - theoretical.
x2 uplinks to this local expander = ~12,000 MB/s,

but only x1 (SFF-8643) uplink to your external enclosure.
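
(Here's a quick back-of-envelope sketch of those link numbers in Python - assuming SAS3 keeps 8b/10b line encoding, so usable payload is roughly 1,200 MB/s per lane rather than the raw 1,500 MB/s:)

Code:
# rough SAS3 link bandwidth per SFF-8643 connector (assumes 8b/10b line encoding)
LANE_GBIT = 12.0        # SAS3 line rate per lane, Gbit/s
ENCODING = 8.0 / 10.0   # 8b/10b: 10 bits on the wire carry 8 payload bits

def link_mb_per_s(lanes):
    """Return (raw, usable) MB/s for a link with the given lane count."""
    raw = lanes * LANE_GBIT * 1000 / 8
    return raw, raw * ENCODING

print(link_mb_per_s(4))   # one SFF-8643 uplink   -> (6000.0, 4800.0)
print(link_mb_per_s(8))   # two SFF-8643 uplinks  -> (12000.0, 9600.0)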

I don't know of many SAS/SATA SSDs that have enough sequential BW to saturate that with 8x or 16x units, but how do IOPS relate to the sequential BW capacity that SFF-8643 or SFF-8087 links are spec'd at?

i.e. is there some number of IOPS that will saturate a single SFF-8643 "cable" way before you hit the 6,000 MB/s capacity?
(assuming you have an infinitely powerful HBA or RAID card)


I've always wondered about this (and I know there are some users on STH that have lots of SSDs in an expander-based chassis/config).


(I'm aware my real BW is via the direct NVMe slots, which is why I grabbed 2x 2.5" P3700 1.6TB today from eBay for $410 each - however, AFAIK there is no "clean/easy" way to RAID NVMe currently.)

Maybe I was wrong in wishing it had come with a direct-attach BP (i.e. wanting a BPN-SAS3-826A-N4 instead of this EL1).
 

_alex

Active Member
You should have 2x 8643 for uplink (HBA) and 2x 8643 on the output side (JBOD / daisy-chaining / cascading - see Chapter 3 in the manual).
This means, if you use both ports towards the HBA and towards the JBOD, you have the full bandwidth of 8 SAS3 ports, maybe limited by PCIe 3.0 x8.
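
(Rough numbers on that PCIe limit - a small sketch, assuming 8b/10b encoding for SAS3 and 128b/130b for PCIe 3.0:)

Code:
# why a PCIe 3.0 x8 HBA can be the bottleneck for 8 lanes of SAS3
sas3_x8_usable = 8 * 12e9 / 8 * (8 / 10)     # 8 lanes * 12 Gbit/s, 8b/10b  -> ~9.6 GB/s
pcie3_x8 = 8 * 8e9 / 8 * (128 / 130)         # 8 lanes * 8 GT/s, 128b/130b  -> ~7.9 GB/s
print(sas3_x8_usable / 1e9, pcie3_x8 / 1e9)  # 9.6 vs ~7.88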

16x SSD + 4x HDD should, to my feeling, be 'OK' but a bit on the edge - but honestly, I'd set up something like 2-4x NVMe for caching (with whatever setup, ZFS, bcache - you name it) plus lots of spinners with this.
 

i386

Well-Known Member
james23 said:
I don't know of many SAS/SATA SSDs that have enough sequential BW to saturate that with 8x or 16x units, but how do IOPS relate to the sequential BW capacity that SFF-8643 or SFF-8087 links are spec'd at?
Bandwidth = port count * protocol bandwidth.
E.g.: 4 ports @ SAS3 = 4x 12 Gbit/s = 48 Gbit/s ≈ 4.8 GByte/s usable (after 8b/10b encoding).

4.8 GByte/s / 4 KB = 1,171,875 IOPS
4.8 GByte/s / 128 KB ("sequential workload" in Iometer) = 36,621 IOPS

If you want to saturate @ 4K with Intel S3520 SSDs (65k IOPS @ 4 KB) you will need 18 SSDs.
For 128 KB, about 10-11 SSDs (3.6k IOPS @ 128 KB each) will be enough.
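
(A quick Python sketch of that arithmetic, if anyone wants to plug in other block sizes or SSD specs - the S3520 figures above are just example numbers:)

Code:
# IOPS ceiling of a SAS3 x4 link vs. how many SSDs it takes to reach it
LINK_BYTES_PER_S = 4.8e9                    # ~4 lanes * 12 Gbit/s after 8b/10b encoding

def link_iops(block_bytes):
    return LINK_BYTES_PER_S / block_bytes

def ssds_to_saturate(block_bytes, iops_per_ssd):
    return link_iops(block_bytes) / iops_per_ssd

print(link_iops(4096))                      # ~1,171,875 IOPS @ 4k
print(link_iops(128 * 1024))                # ~36,621 IOPS @ 128k
print(ssds_to_saturate(4096, 65000))        # ~18 SSDs @ 4k random
print(ssds_to_saturate(128 * 1024, 3600))   # ~10-11 SSDs @ 128k sequential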

Edit:
SFF-8087 says nothing about bandwidth. It's just a specification for connectors.
Source: https://doc.xdevs.com/doc/Seagate/SFF-8087.PDF
 

james23

Active Member
Awesome info! Thanks.

I'll post some stats in the next few days, as I just got the system in today and will be stressing it for a week or two. I was surprised to find the seller had left an AOC-2308L-L8i (LSI SAS2 with 2x 8087 ports) in one of the slots, with 2x cables to the BP. For SAS3 testing against the BP, though, I also have an Adaptec 7- and 8-series to test with, but only 5x SSDs and 10x HDDs for now.
This machine will actually be my ESXi 6.5 test bed/lab, while I'm also now building a 4U SM X9 setup to run FreeNAS (and will have 1 or 2x 10Gb links between the two systems).

(And I have an NVMe P3700 2.5" 1.6TB that I'll also be putting in this X10 ESXi setup.)

For anyone wondering, this is an NVMe slot (which can also run SAS) vs. a SAS-only slot on my backplane (see above for the BP model):
[Image: NVMe-capable bay connector vs. SAS-only bay connector on the backplane]

EDIT: BTW, I finally figured out from its IPMI today that the system is a 6028U-E1CNRT+1 -SG007.
 

james23

Active Member
Here are a few initial FIO disk benches (mainly for stress testing and to see if things are working right) - see the PDF link for many more results.

It won't let me attach the PDF I printed from my notes (seems like a 1MB limit, maybe?), but if anyone is interested here is a link to that PDF; 2 of the pages have some of the stress test commands I use frequently when testing a new server. I'll post more benches later if people want (including from the 48-disk FreeNAS system I'm building at the same time as this one). If people are interested, maybe I'll make a new thread with the results. Thanks.

PDF link:
https://file.io/nSvH4r

[Image: screenshot of initial FIO benchmark results]