Off-lease Dell C2100 for ZFS file storage (napp-it/ESXi/OmniOS) - advice would be great


crazyj

Member
Nov 19, 2015
Build’s Name: Old Horse (but better than nothing)
Operating System/ Storage Platform: ESXi 5.5 / OmniOS / napp-it
CPU: 2x Xeon E5645
Motherboard: Dell C2100 server mobo
Chassis: Dell C2100
Drives: 2x 3TB Seagates, 4x 500GB Seagates, maybe others
RAM: 48GB DDR3-1333 1.35V ECC
Add-in Cards: Dell H200 (have yet to flash it to IT mode; see the sketch at the end of this post)
Power Supply: 2x 750W
Other Bits:

Usage Profile: ZFS storage, VMs (Plex, and possibly play with others). Photo/video storage, document storage. Maybe play with ownCloud for my wife's laptop sync.

Other information… This is primarily a storage box. I want to play with Plex to see if I like it, and I'm serving out a couple of squeezeboxes. Nothing heavy duty.

I've searched a bit, and most folks doing these AIO setups put the VMs and the OmniOS boot volume on a pair of mirrored SSDs, with ESXi itself just on a USB stick (and a spare copy on hand).

2 Qs:

-Is it necessary to use SSDs for the mirrored OS/VMs? Will I be tremendously disappointed if I just boot from a couple of the 500GB drives? (I should be able to boot from the H200, no?)

-As I'm using ESXi over NFS, is an SSD SLOG device a must? Any way around that? How critical is an SLC SSD vs. an MLC device? After all, I don't think I'm going to break any benchmarks with this thing.
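For reference, the H200 cross-flash I still need to do roughly follows the usual LSI procedure. A sketch only - I'm assuming the standard sas2flash utility and the 9211-8i IT firmware/BIOS files (2118it.bin, mptsas2.rom) from LSI's download package:

Code:
# From a DOS/UEFI boot stick with the LSI tools and firmware files.
# Note the current SAS address first - it is printed by -listall.
sas2flash -listall

# Wipe the Dell firmware (advanced mode; -e 6 erases the flash regions).
sas2flash -o -e 6

# Flash the LSI 9211-8i IT firmware and (optionally) the boot BIOS.
sas2flash -o -f 2118it.bin -b mptsas2.rom

# Restore the original SAS address noted earlier (placeholder value).
sas2flash -o -sasadd 500605bxxxxxxxxx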
 

gea

Well-Known Member
Dec 31, 2010
DE
1.
Use a small 30GB+ SSD connected to onboard SATA for the ESXi install.
Use the same SSD as a local datastore for the napp-it/OmniOS storage VM.

If you care about downtime, you can clone the SSD to a second SSD with a tool
like Clonezilla or use a hardware RAID-1 enclosure (2x 2.5" disks in a 3.5" case). But you
only need a boot mirror if you have a quite complex boot system with services that
require configuration, and you should not do that anyway. Offer all such services from VMs
(BSD, OSX, Linux, Solaris, Windows) that sit on safe ZFS storage with snaps and backups.

Keep your base storage VM as simple as possible; unlike the VMs on ZFS, it lives on the "unsecure" VMFS datastore, so it is not as safe there.

ESXi itself is reinstalled in 10 minutes, as is the napp-it storage template.
You can simply re-import your VMs with a right-click on the .vmx file (ESXi filebrowser),
so there is no need for a napp-it boot mirror if you follow this suggestion.
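If you prefer the ESXi shell over the filebrowser, the same re-import can be done with vim-cmd. A sketch - the datastore and VM names are placeholders:

Code:
# List the currently registered VMs.
vim-cmd vmsvc/getallvms

# Register an existing VM from its .vmx file (example path).
vim-cmd solo/registervm /vmfs/volumes/datastore1/omnios/omnios.vmx

# Power it on, using the VM id returned by the register call.
vim-cmd vmsvc/power.on 1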

Pass your HBA through to the napp-it VM to give ZFS real disk access with its own drivers.
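A quick sanity check of the passthrough from both sides - device names on your system will differ:

Code:
# On the ESXi shell: find the H200 (an LSI SAS2008 chip).
esxcli hardware pci list | grep -i -A 2 LSI

# Inside the OmniOS VM after passthrough: the disks must show up as raw
# devices with their real model names, not as VMware virtual disks.
format < /dev/null
iostat -En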

2.
ZFS is a Copy-on-Write filesystem, which means it is crash resistant. A power loss can never
damage ZFS itself. For a pure filer you do not need to care about sync write.

This is different if you use transactions (databases) or put "old" filesystems on top, e.g. with VMs.
These are not crash resistant, which means that on a power loss during write the last writes
may be lost and/or the older filesystem may end up corrupted. This is why ZFS offers secure sync write.
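Sync behaviour is a per-filesystem ZFS property, so you can enforce it only where it matters. Filesystem names here are examples:

Code:
# Check the current setting (default "standard" honors sync requests).
zfs get sync tank/nfs-vms

# Force every write on the VM filesystem to be a sync write ...
zfs set sync=always tank/nfs-vms

# ... while a pure filer share can stay at the default.
zfs set sync=standard tank/media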

Sync write is a log of the last writes and is done either slowly on your data pool or faster
on a separate Slog device.

Such an Slog requires ultra-low latency, high write IOPS, endurance and powerloss protection.
Most use an Intel S3700 or P750 as a Slog. Best of all is a DRAM-based ZeusRAM.
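Adding or removing an Slog is a one-liner, so you can start without one, benchmark, and decide later. Pool and device names are examples (OmniOS disk names look like c1t5d0):

Code:
# Add a single SSD as Slog to the pool "tank".
zpool add tank log c1t5d0

# Or, safer, add a mirrored Slog pair.
zpool add tank log mirror c1t5d0 c1t6d0

# Verify, and remove it again if it does not help.
zpool status tank
zpool remove tank c1t5d0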
 

crazyj

Member
Nov 19, 2015
Anyone have hardware advice regarding ZFS performance going through the HBA to a SAS expander backplane? I have an opportunity to pick up a 1:1 backplane and am wondering if it's worth bothering. It's not like I'm going to get anywhere close to populating all 12 drive bays.
 

Patrick

Administrator
Staff member
Dec 21, 2010
I think there were a lot of these made. I would give it a shot with what you have before swapping the backplane. I would generally go 1:1 if I were buying new but since you already have that setup, may as well test it out. If you need to swap later, the fact it was a popular system means it will be easy to get a 1:1 later.
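One way to see what the H200 actually reports behind the backplane before deciding - assuming the card is flashed to LSI IT firmware; sas2ircu is LSI's utility for SAS2008-family cards and the controller index 0 is a guess:

Code:
# List the LSI SAS2 controllers and their indexes.
sas2ircu list

# Dump the topology: attached drives, enclosure/slot numbers, link speeds.
sas2ircu 0 display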
 

crazyj

Member
Nov 19, 2015
The 1:1 configuration wasn't nearly as popular as the expander version, which is why I'm a bit more interested in it. They do become available, but not frequently. I'm just seeing a number of posts saying that SATA disks behind expanders are generally not a great idea, as it complicates the setup and causes issues down the road.
 

Terry Kennedy

Well-Known Member
Jun 25, 2015
New York City
www.glaver.org
The 1:1 configuration wasn't nearly as popular as the expander version, which is why I'm a bit more interested in it.
I like the 1:1 backplanes as well. The Supermicro ones have an unpleasant "feature" of scattering the SFF-8087 connectors all over, requiring either different-length cables or stuffing a bunch of cable somewhere and messing with the airflow. I solved this with a custom set of 3M high-routability SAS cables.


I'm just seeing a number of posts saying that SATA disks behind expanders are generally not a great idea, as it complicates the setup and causes issues down the road.
The main issue with SATA and expanders is that it moves the SAT (SCSI/ATA Translation) layer from the controller (where the operating system at least has an idea what's happening) out to the expander. And expander firmware is updated much less frequently than controller firmware.

Mixing SATA and SAS on an expander is definitely a bad idea. If the SATA drive has a problem, the expander may stop talking to the SAS drives while it tries to deal with it.
 

manxam

Active Member
Jul 25, 2015
The one benefit I see of not using a backplane with an expander is that you're not limited to the generation of the backplane (SAS1/2/3). Regardless of the age of the server, you can use whatever size disks you want at whatever speed, assuming you have an HBA that supports what you're attempting.

It also allows running disks directly off the onboard SATA ports, assuming you have a mobo with enough of them...

The only downside is the additional cabling and the cost of additional HBAs (or external expanders like the Intel RES2SV240).