Slightly oddball server build advice


Ouroboros

New Member
Jul 26, 2012
Hi folks!


At work we have an older Nexenta ZFS rig serving mostly iSCSI to ESXi blades. My boss is getting worried about it because we have no "support", the drives are getting old, and the people who built it made an ashift mistake when creating the zpool (they left the default ashift=9 rather than forcing ashift=12), so we are now effectively locked into 512-byte-sector drives.
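For reference, confirming where the existing pool stands is just something like this from the Nexenta shell (the pool name "tank" is a placeholder):

zdb -C tank | grep ashift    # ashift: 9 = 512-byte sectors, 12 = 4K sectors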

I am seeing hints that the boss might be okay with a new build, so I am trying to spec something up while stepping away from Nexenta to FreeNAS (and hopefully safely migrating the iSCSI zvols to a brand-new zpool, since simply exporting and re-importing the existing pool unfortunately carries the ashift setting with it...).

Does the following sound vaguely credible, both in a parts sense and in a "can I badger a Supermicro reseller to customize this far" sense?


The build basis is a Supermicro 6048R-E1CR60L, a top-loading 60-disk chassis that is normally sold only as a complete system.

Supermicro | Products | SuperStorage Servers | 4U | 6048R-E1CR60L

2x Xeon E5-2637 v4 3.5GHz 4 core
2x 64GB LRDIMM (1 per CPU) (allows a later LRDIMM build-out rather than locking us in with RDIMM)
add the optional NVMe U.2 drive cage (which connects to the CPU1 OCuLink ports, but doesn't that mean only 4x U.2 drives?)
add a SIOM network card (thinking the quad-port 10G SFP+ Intel X710-based one)

Here's where things get weird. Note that the system name contains "E1", which in Supermicro's naming scheme implies a single-path backplane. However, Supermicro's separate literature for the 30-disk backplanes lists them as BPN-SAS3-946SEL1/EL2, which implies a proper EL2 dual-path version of the backplane exists (the illustrations seem to confirm this).

https://www.supermicro.com.tw/manuals/other/BPN-SAS3-946SEL.pdf

So one potential configuration is the typical cascaded backplane setup. The illustrations imply that single-HBA dual path can be done using just the two connectors on the prebuilt system's mezzanine SAS HBA. But the system only has a single mezzanine slot for an HBA, and the HBA itself doesn't seem stackable, so I don't understand how they would do dual path in the prebuilt system. I guess they use two HBAs on other motherboards or SBB modules?

So the bright idea I had is to use the normal PCIe slots and fit two Supermicro AOC-S3216L-L16iT SAS3 HBAs, each of which has 4 miniSAS connectors, to dual-path both backplanes. Cables galore, though, and I suppose just the mezzanine card plus a lower-spec 2-connector HBA in one PCIe slot would cover the cascaded setup.

Super Micro Computer, Inc. - Products | Accessories | Add-on Cards | AOC-S3216L-L16iT

This all assumes I could badger the Supermicro reseller into fitting two BPN-SAS3-946SEL2 backplanes into the system; otherwise there are no dual-path options.
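If the reseller does fit the EL2 backplanes and both paths get cabled, my understanding is that FreeNAS/FreeBSD picks up the duplicate paths via GEOM multipath, so a quick sanity check after wiring would be roughly (device names are examples only):

camcontrol devlist     # each SAS drive should appear twice, once per expander path
gmultipath status      # should list a multipath/diskN device with two providers per drive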

The other weird part of the setup is to use the NVMe U.2 drive cage to host some Optane drives for the ZIL (SLOG), and round that out with an Amfeltec Squid PCIe carrier board hosting up to 4 M.2 drives for L2ARC in the last remaining conventional PCIe slot.

PCI Express Gen 3 Carrier Board for 4 M.2 SSD modules - Amfeltec
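Assuming the Optanes and the Squid-hosted M.2 drives all show up as NVMe devices (nvd0, nvd1, ... are placeholder FreeBSD device names and "tank" a placeholder pool), attaching them would be roughly:

zpool add tank log mirror nvd0 nvd1          # mirrored Optane SLOG
zpool add tank cache nvd2 nvd3 nvd4 nvd5     # M.2 drives as L2ARC, no redundancy needed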

HDD selection will probably be 4TB SAS drives to increase drive count and cut down on cost. The chassis also has 2x 2.5" SATA drive holders, which I would like to use for the OS ZFS syspool.


Crazy, or just entirely dependent on my reseller relationship?


PS -- Also, any tips for forklifting the Nexenta zvols backing iSCSI over to a new zpool on FreeNAS and configuring the iSCSI targets? I figure the easiest thing is to pull all the zpool HDDs from the old server and temporarily put them in the new server, import the zpool minus the old ZIL, copy the zvols over more or less manually, then string up the new iSCSI configuration. But theory is not reality, so...
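In case it helps frame the question, the rough sequence I have in mind is below; pool and zvol names are made up, and it assumes the old pool imports cleanly on the new box:

zfs snapshot oldpool/vm-lun1@migrate         # on the old Nexenta box, before pulling drives
zpool export oldpool
zpool import -m -o readonly=on oldpool       # on the new box; -m tolerates the absent old ZIL device
zpool create -o ashift=12 newpool raidz2 da0 da1 da2 da3 da4 da5
zfs send oldpool/vm-lun1@migrate | zfs receive newpool/vm-lun1
# ...then point a new FreeNAS iSCSI device extent at zvol/newpool/vm-lun1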
 

gea

Well-Known Member
Dec 31, 2010
DE
Why not stay with Illumos?

NexentaStor = Illumos (free Solaris fork) + support + web UI + some Nexenta extensions like RSF-1 (HA), SMB3 or VAAI,
with Solaris COMSTAR, the Solaris enterprise-grade iSCSI stack (Configuring Storage Devices With COMSTAR - Oracle Solaris Administration: Devices and File Systems)

If you do not want to renew the Nexenta license, the closest alternative is OmniOS, a "just enough storage OS" with proven stability.
OmniOS = Illumos + napp-it as web UI (free, open source), OmniOS Community Edition

OmniOS even plans a commercial support option in the future. It is fully compatible with Nexenta (just import the pool, or replicate the filesystems or zvols).


Setup of OmniOS:
http://www.napp-it.org/doc/downloads/setup_napp-it_os.pdf

For the HBA I would use the SAS 9305-16i Host Bus Adapter.

With the Optane as SLOG, you can use the 900P U.2 versions (the small 16/32GB Optane Memory modules would not meet the demands).
See the 4x U.2 adapter Supermicro AOC-SLG3-4E4T.
See (installation, performance): http://napp-it.org/doc/downloads/optane_slog_pool_performane.pdf

About the ashift=9:
I suppose the pool is not that huge if it's older.
Replicate the pool to a backup pool, recreate it with ashift=12, and replicate back.
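A minimal sketch of those three steps, with placeholder pool and disk names:

zfs snapshot -r tank@move
zfs send -R tank@move | zfs receive -F backup/tank     # everything onto the backup pool
zpool destroy tank
zpool create -o ashift=12 tank raidz2 disk1 disk2 ...  # recreate with ashift=12
zfs send -R backup/tank@move | zfs receive -F tank     # replicate back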
 

Linda Kateley

New Member
Apr 25, 2017
Minnesota
I was going to say what gea said. You should be able to import that pool. You can also import it into a Linux distro. FreeNAS isn't compatible with other ZFS distros.
 

Ouroboros

New Member
Jul 26, 2012
Thanks for the replies.


The nice part of this particular server build is that the motherboard already has OCuLink NVMe ports built in, so I don't need an additional AOC-SLG3-4E4T NVMe U.2 adapter card and can hook straight up to the U.2 NVMe drive cage once it is fitted.

Any particular reason why you are recommending an LSI 3226-based HBA (SAS 9305-16i) over an LSI 3216-based HBA?


The reason I want to move to FreeNAS is that we already use VAAI, sometimes heavily, via the VMware ESXi VMFS 6 unmap features to aggressively claw back space (we do a lot of hand unmapping). We used iSCSI rather than NFS for this environment because the Nexenta NAS-VAAI plugin for NFS wasn't available when this was built; plus we are cheap bastards and don't even use vCenter, so as a result we have all these shady free ESXi rigs pulling iSCSI for dev/QA datastores.
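For context, the "hand unmapping" is just the usual esxcli space reclaim run per datastore, along the lines of (the datastore name is a placeholder):

esxcli storage vmfs unmap -l DEV_QA_DATASTORE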

It still looks like OmniOS and other non-Nexenta Illumos distros do not have VAAI, because the Nexenta work still hasn't been upstreamed to illumos-gate. I could go to the trouble of disabling VAAI/ATS on each and every ESXi host/datastore before the migration so I don't shoot myself in the foot (we got burned by the Nexenta 3->4 upgrade that turned off VAAI, and that sucked...), which would let me use a non-VAAI ZFS platform, but the storage clawback is a nice thing to have.
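If it came to that, my understanding is the VAAI primitives are toggled per ESXi host (not per datastore) through the advanced settings, roughly:

esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0    # XCOPY/clone offload
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0    # block zeroing offload
esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0     # ATS locking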


I was under the impression that a ZFS pool, as long as the pool version isn't wacky, should be importable on any other ZFS system, especially for a very vanilla use case? What exactly is meant by FreeNAS not being compatible? Is this a case where the ZFS import would work but the zvol migration would be problematic, or does FreeNAS not know what to do with a zvol for backing recreated iSCSI LUNs?
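For what it's worth, the version worry seems checkable up front on the Nexenta box; something like this shows whether the pool is still a plain versioned pool or already has feature flags enabled that the destination would need to support:

zpool get version oldpool    # a number (e.g. 28) = legacy version; "-" = feature-flags pool
zpool upgrade                # lists pools not running the latest version/features
zpool status oldpool         # also warns if the pool could be upgraded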