Supermicro SAS backplane, LSI 2208 and working with ZFS


Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
How much NVMe are you using in your lab?
I never know if people are speaking about production at work or at home labs.
Well, let's put it that way: my *lab* is 3 generations ahead of what my company uses (for production) and probably a generation ahead (in parts) of what our customers use - o/c they also have some fancy (expensive) stuff that I can't afford. The difference is mostly branded vs. cobbled together.

#subscribed !
#askthealternativetoo_howmuchisundeployed
Hey, unfair - that's insider knowledge :D

Now, a post detailing all the HW I (don't) use (or the (very long) path I took toward not yet reaching my goals) would be quite long, but to answer: my 'production boxes' (4) are still running vSAN with 900p/[4510|3600 2TB] drives.
The non-production boxes, or leftovers not yet put into a meaningful build, probably have triple the drives (not necessarily the capacity) (NVMe only) - as @svtkobra7 happened to know ;).
And let's better not talk about SSDs ;)
 

zecas

New Member
Dec 6, 2019
27
0
1
I'm planning on going SATA not only because of my requirements, but also because SSD/NVMe is just too expensive for the job. At least with some decent enterprise-level SSD/NVMe ... at some point I considered going with prosumer material, but I could end up with drives being excessively worn by the system, especially since I'm planning on going ZFS.

I can get a WD Ultrastar 1TB 3.5" SATA3 disk for 100€, so I guess I'm choosing to be on the safe side. This will be a production server for virtualization in a very small company.

At the moment this will be my first server build; I've only built desktop computers, and I've been doing so since waaaay back ... but a server is a different beast and things must be thought out differently. I'm also avoiding HP servers just because I hate the fact that I need a valid subscription to get a BIOS firmware update ... I've been there already, with a bug that was solved by a BIOS fix (the correction clearly listed in the release notes), but no subscription to download it. Very frustrating.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
If you can, consider used enterprise equipment ... it usually has tons of life left, might be cheaper than new con/prosumer stuff, and likely better performance to boot.
 

zecas

New Member
Dec 6, 2019
27
0
1
If you can, consider used enterprise equipment ... it usually has tons of life left, might be cheaper than new con/prosumer stuff, and likely better performance to boot.
I'm considering used enterprise equipment to build the server; everything is used/refurbished material. But for data storage, I'm still a bit reluctant to make that step.

That's why the WD Ultrastars will be brand new disks. I have the nagging feeling that I could get an enterprise SSD with too much usage on it and only find out a little too late. Maybe that's just a bad thought, but at the end of the day, everything can break as long as I still have my data.

How would you choose a refurbished SSD/NVMe disk? What indicators would you look for in refurbished material sold online to distinguish a good deal from a bad one?
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
SSDs and NVMe drives usually have a wear-level indicator that can be queried. Additionally, they can report the amount of data that has been written to them; together with the specification (max TBW) this should give a proper impression of how much life is left.
Add lifetime max temperature and you're all set, imho.
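
To make that concrete, a minimal Python sketch along these lines can pull the relevant counters (assuming smartmontools is installed; the attribute names below are examples and vary by vendor - NVMe reports "Percentage Used" and "Data Units Written", while SATA SSDs use vendor attributes like Media_Wearout_Indicator or Total_LBAs_Written - and the rated max TBW still has to come from the datasheet):
Code:
# Minimal sketch: pull wear/endurance counters from a drive via smartctl.
# Assumes smartmontools is installed; attribute names vary by vendor.
import re
import subprocess
import sys

PATTERNS = {
    "wear":    r"Percentage Used|Media_Wearout_Indicator|Wear_Leveling_Count",
    "written": r"Data Units Written|Total_LBAs_Written|Host_Writes",
    "temp":    r"Temperature",
}

def smart_report(device):
    # 'smartctl -a' prints the full SMART report for the device.
    return subprocess.run(["smartctl", "-a", device],
                          capture_output=True, text=True).stdout

report = smart_report(sys.argv[1])  # e.g. /dev/sda or /dev/nvme0
for name, pattern in PATTERNS.items():
    hits = [l.strip() for l in report.splitlines() if re.search(pattern, l)]
    print(f"{name}: {hits or 'not reported'}")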

One benefit of used HW is also that you can get spares and still stay within budget,
or if you get Intel drives, for example, they often have part of their 5-yr warranty left.
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
Look what I found loitering about in between one of my vdevs and slog - a RAID write "HOLE" ...



HOLEY MOLEY! ;)
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
11.3 RC1 was the direct and proximate cause, sir! I am innocent! ;)

Same FreeNAS $hit$how as always .... I keed I keed ...

=> I don't expect the actual upgrade from 11.2 to the 11.3 "RELEASE" train to go smoothly, so I'd of course expect issues when poking under the hood of a release candidate (given my luck). Seriously though, all of the following epically failed:
  1. Attempt to "upgrade" from 11.2U7 to 11.3RC1 via GUI;
  2. via CLI;
  3. and via installing with the ISO (as upgrade, not clean).
Which makes sense, I suppose, as there is something in my config that doesn't "map" from 11.2U7 (where it is stable - and not in one but in two instances, configured identically save for minor variances such as IP addresses or hostname "-01" vs. "-02", etc.) - but 11.2U7 config + 11.3RC1 = the land of unicorns. The only way I could actually get 11.3RC1 to boot was via a "clean" install to a wiped boot drive, but even then, upon import, = more unicorns. I have scoured the freenas-v1.db file as-is, rebuilt it from scratch to arrive at the same config, and compared the two against each other, in an attempt to figure out why, every time I change sw configuration - even with non-RC iterations such as ESXi 6.7U3 - I encounter such bad luck. My conclusion = about as on-topic as my post = I should have used the time instead to try NappIt (which I've been intending to do forever).

btw = I had some free time, so I was just looking for a little "taste" - flipping back to 11.2U7 until it drops, o/c.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Weird - upgraded to RC1 without any issues, but of course it was on a test box with basically no services running (and physical)
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
Weird - upgraded to RC1 without any issues, but of course it was on a test box with basically no services running (and physical)
Indeed - with the only item that might raise an eyebrow being my affinity for RaidZ 3 x 4 (compared to RaidZ2 6 x 2) - BUT I have two identical pools, replicated locally with only a 5-min variance, plus a full offsite backup. I'm prepared for those pesky RAID "HOLES" showing up, and thank goodness iX deployed a codebase that shows them in the pool topology when they choose to appear! ;) (I know I'm being stupid, I just find it hilarious it showed up like that)

LOL, I didn't have any services running either (I know what you meant), but you aren't going to take us down the "if you virtualize, you don't like your data" path, are you (comment re: physical)? It being 2020, I thought we left that behind in '18? ;)
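
For anyone keeping score on that tradeoff, rough back-of-envelope math (a sketch that ignores ZFS metadata/padding overhead; I'm reading "3 x 4" as 3 vdevs of 4 drives and "6 x 2" as 2 vdevs of 6, and the 10TB drive size is just illustrative):
Code:
# Rough comparison of two 12-drive layouts; ignores ZFS metadata/padding
# overhead and assumes equal-size drives (10 TB used as an example).
DRIVE_TB = 10

def usable(vdevs, drives_per_vdev, parity):
    # Each vdev loses 'parity' drives to parity; parity = number of
    # failures tolerated per vdev before data loss.
    return vdevs * (drives_per_vdev - parity) * DRIVE_TB

print("RaidZ1 3x4 (3 vdevs of 4):", usable(3, 4, 1),
      "TB usable; survives 1 failure per vdev")   # 90 TB
print("RaidZ2 6x2 (2 vdevs of 6):", usable(2, 6, 2),
      "TB usable; survives 2 failures per vdev")  # 80 TB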
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Physical due to the fact that my NVDIMMs are not recognized properly in ESXi, so I can't pass them through (or RDM them); I have to go physical to use them :/
 

SRussell

Active Member
Oct 7, 2019
327
152
43
US
#subscribed !
#askthealternativetoo_howmuchisundeployed

  • serious question - as you called out learning stuff (which I enjoy too) - why is it that they are always "home labs" and never referred to as a "home prod" environment, or at least something that implies some stability? Nothing about the term production implies usage by some Fortune 500 company with a $25M/yr IT CapEx budget that depends on said infra for $X million of revenue per day, etc.
    • rather it simply means the operational environment post-testing, i.e. dev => qa => prod
  • Further to my point, there are probably a number of individuals on this board that have more infra firepower than many Russell 3000 companies - and are every bit as knowledgeable if not more so than their CTOs/CIOs etc.
  • I hear "home lab" and I think of somebody studying for a Cisco certification of some sort (disclosure = I know nothing about this sort of thing) or just buying toys to have fun. Or someone getting creative with a soldering iron and hw and plumes of smoke rising into the air.
    • At one point in time, I only had one server, and its uptime was not so great as I was learning ... but then I started to find restoring all the way from the hypervisor back up to be too time-consuming, annoying, and inconvenient, so I moved to two[1] and really don't fiddle around too much anymore - but if I do, never with both at once, so as to ensure I'm always up.
    • (always wondered on that point btw - I know it is a silly question of sorts)
[1] I would have a friggin' full rack if I had space, but a DC ≠ a condo closet; heck, it is only 35.5" deep if memory serves, so I had to choose between horizontal and vertical mounting on the wall, otherwise I wouldn't be able to close the door.
I do have my lab segmented out. I have home 'production', but it is still a lab - it is always open to change. Then I have a set of NUCs that I blow away usually every week and test changes on before they are promoted to prod.
 

zecas

New Member
Dec 6, 2019
27
0
1
Hi,

Thank you for all the help and info so far - these forums have been a very nice find :).

motherboard: X9DRH-7TF
disk backplane: BPN-SAS2-826EL1
chassis: SSG-6027R-E1R12T
have same exact config x 2
2 x 12 x 10TB WDC WD100EMAZ-00WJTA0
2 RaidZ 3x4 pools, but if one server wasn't continuously replicating to the other, I'm not sure I would favor the increased risk of RaidZ
That mobo is my fav X9.
Is it possible to confirm which motherboard revision is required to support Ivy Bridge processors?

I'm planning on putting in 2x Xeon E5 2630v2 processors (Ivy Bridge); the board is revision 1.02 and I can't seem to find official confirmation for this. The Supermicro site states that BIOS version 3.0 or above is required, but I'm worried about the motherboard revision itself, since that can't be changed.
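
For what it's worth, once a board is on the bench the BIOS strings can at least be read back from a live Linux system - a quick sketch assuming dmidecode is available (whether baseboard-version actually reflects the PCB revision seems to vary by vendor):
Code:
# Read BIOS/board identification strings via dmidecode (needs root).
import subprocess

for key in ("bios-version", "bios-release-date", "baseboard-version"):
    out = subprocess.run(["dmidecode", "-s", key],
                         capture_output=True, text=True).stdout.strip()
    print(key, "=>", out)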

Many thanks.
 

itronin

Well-Known Member
Nov 24, 2018
1,234
794
113
Denver, Colorado
Hi,
I'm planning putting 2x Xeon E5 2630v2 processors (Ivy Bridge), the board is revision 1.02 and I can't seem to find an official confirmation for this info. Supermicro site states that bios version 3.0 or above is required, but I'm worried about the motherboard revision itself since that can't be changed.

Many thanks.
Have you seen this thread or this one? They have some clues and insight. You might also try contacting Supermicro support and see what they say.
 

zecas

New Member
Dec 6, 2019
27
0
1
have you seen this thread or this one? It has some clues and insight. You might also try contacting supermicro support and see what they say.
I checked the first link; the second one I had already seen.

It "looks" I will be safe with 1.02 revision, but just wanted to be absolutely sure from Supermicro or from someone around that may have the same mobo and using some ivy bridge processor.

Edit:

Well, it's not the exact same model, but I found a review of a similar board (X9DRH-7F instead of *-7TF) that states revision 1.02 supports Ivy Bridge:
Code:
https://youtu.be/zy99gZ27ru8?t=521
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada
Weird - upgraded to RC1 without any issues, but of course it was on a test box with basically no services running (and physical)
I tried it on 2 machines. As soon as I transfer files from production storage to them and put load on them, within a few minutes the machines reset. No overheating.
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada
No error in the log? Same behavior with RC2?
No log, just crashed. Haven't tried RC2, but same behavior on Beta1 and RC1.
My prod server is Solaris; I tried FreeNAS to see, but was never a fan of it. Decided to move to Linux now - CentOS 8, ZoL 0.8.2, SCST on the same hardware - no issue at all.
 

zecas

New Member
Dec 6, 2019
27
0
1
Hi again,

So I'm in the process of picking the missing parts for my build based on a Supermicro X9DRH-7TF motherboard. I'm missing memory and CPU, and thinking about a SAS controller (IT mode, without messing with the mobo's onboard one).


The choice of CPU should be a Xeon E5 2630v2 (SR1AM) 2.60GHz 6-Core LGA2011, and I've been watching some on eBay. From what I've found around the web, there could be some fakes around, so I'm looking into seller reputation and the actual pics of the CPUs, looking for anything suspicious (hoping the pics represent the actual product, but seller reputation also helps).

The CPU reference is SR1AM, and I find some made in MALAY and some in COSTA RICA. I've also found some being sold as matched pairs, which I assume simply means they were pulled from the same server. There should be no problem in matching distinct CPUs from distinct lots, even one from Malaysia and another from Costa Rica, am I correct?



  • I think he was joking - to me at least - I followed the implied purpose.
  • It is so easy though. My Schnauzer actually took over and flashed the 2nd one for me
  • FYI - you'll be up and running before you can screw 12 HDDs into the caddies
  • As an analogy, it's like not using the 10 Gb NICs on board, but having the infra in place to do so
  • If you screw up, there is even an lsi2208fixer.zip floating about that - well - fixes your 2208
  • And if you somehow managed to bork your controller (you won't), just buy an HBA (I'm a minimalist)
I was thinking of going for an adapter instead of flashing the onboard LSI 2208 controller. But searching for a good one is tricky. Since this controller will deal with my data, I need to find a solution I would be confident buying.

So I've found a "Genuine LSI 6Gbps SAS HBA LSI 9200-8i = (9211-8I) IT Mode ZFS FreeNAS unRAID" as follows:
Genuine LSI 6Gbps SAS HBA LSI 9200-8i = (9211-8I) IT Mode ZFS FreeNAS unRAID | eBay

Now ... something tells me not to buy something like that from Hong Kong, China ... but it seems to be a legit product from a seller with good reputation, judging from the description and assuming the pictures represent the product I would receive.

So they seem to have picked up an LSI SAS 9211-8i and flashed it with the firmware of an LSI 9200-8i. Looking further into the product spec sheets, both cards seem to share the same controller chip (LSI SAS 2008), but one provides RAID functionality and the other does not:

LSI SAS 9211-8i
- The LSI SAS 9211-8i HBA has onboard Flash memory for the firmware, and BIOS and NVSRAM for Integrated RAID support (RAID 0, RAID 1, RAID 10, and RAID 1E).
- Implements one LSI SAS 2008 eight-port 6Gb/s to PCIe 2.0 controller
- Supports Integrated RAID (RAID 0, RAID 1, RAID 10, and RAID 1E)

LSI SAS 9200-8i
- The LSI SAS 9200-8i HBA has onboard Flash memory for the firmware and BIOS.
- Implements one LSI SAS 2008 eight-port 6Gb/s to PCIe 2.0 controller
- The LSI documentation does not mention any RAID capabilities
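
If I do end up buying it, my plan would be to verify what actually arrives before trusting it with data - a sketch assuming LSI's sas2flash utility is installed (output format varies between versions, so checking for an "IT" marker is just a heuristic):
Code:
# Dump controller info via LSI's sas2flash and look for IT-mode firmware.
import subprocess

out = subprocess.run(["sas2flash", "-list"],
                     capture_output=True, text=True).stdout
print(out)
# IT firmware usually shows up in the product/version strings.
print("IT firmware marker found" if "IT" in out
      else "No IT marker - reflash/check before using with ZFS")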

Can anyone see something fishy with this product? Anything that would put you off buying it?

Again, this is my first server build, so I want to get it right, hence all these questions.


Thank you again.