New build, need storage

Katfish

New Member
Aug 14, 2016
12
0
1
40
Build’s Name: TestBed
Operating System/ Storage Platform: ESXi 6
CPU: (2) E5-2670
Motherboard: Intel S2600CP2J
Chassis: Intel P4000M
Drives: TBD
RAM: 128GB
Add-in Cards:
Power Supply:
Other Bits:

Usage Profile: Will be a self-contained / AIO host for a testbed. Will be doing vulnerability testing and proof-of-concept work against various OSes and platforms.

Other information… I have an i7 w/ 32GB RAM for a daily driver.

I am assembling the above build for a testbed and need guidance on storage. I come from a traditional HW RAID mentality but am willing to give ZFS and others a try. I am looking to spend $500 (absolute max $700) on an HBA/controller plus drives. I have the 12-drive 2.5" and 12-drive 3.5" bays for the P4000M. Not too concerned about recoverability of the content, as it can easily be rebuilt. I shouldn't need more than 4TB.

Any suggestions?
 

voodooFX

Active Member
Jan 26, 2014
243
50
28
If your hypervisor is ESXi and you would like to use/test ZFS, you will need a dedicated HBA to pass through to your "storage VM".
That said, I'd guess an LSI 9211-8i + 2x SSDs + 5x hard drives should do the job.
 

Katfish

New Member
Aug 14, 2016
12
0
1
40
Thank you.

The two SSDs would be for the ESXi install as well as the datastore for the ZFS guest?

I see you mentioned 5x drives. What's the benefit of 5 vs. 6? And would there be an ideal RAID type with ZFS for this setup?
 

gea

Well-Known Member
Dec 31, 2010
2,535
856
113
DE
If you build a RAID-Z, usable capacity is highest when the number of data disks is a power of two, e.g. a RAID-Z1 from 3, 5, or 9 disks, or a RAID-Z2 from 4, 6, or 10 disks.

For a VM datastore, prefer SSD-only pools, e.g. a mirror, or a RAID-10 if you use disks.
I would use a pool from an SSD mirror for the VMs and a pool from disks for the rest.

For ESXi and the storage VM, use any disk/SSD > 60 GB, as you have a lot of RAM.
A mirror is not needed, as you should set up only a bare storage VM there, with the
services and other VMs on ZFS.

For the HBA, use one with an LSI/Avago 2008 or 3008 chipset, ideally with IT firmware, e.g. an
LSI 9207 (IT by default) or a cheap OEM card like a Dell H200 or IBM M1015 that you can
(must) reflash with the LSI 9211 IT firmware.
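The layout gea describes could be sketched roughly like this from inside the storage VM, assuming the HBA is passed through and using placeholder device names and pool names:

```shell
# SSD mirror pool for the VM datastore (device names are placeholders)
zpool create vmpool mirror /dev/sda /dev/sdb

# RAID-Z1 pool from 5 spinning disks for everything else
# (5 disks = 4 data disks, a power of two, per the sizing rule above)
zpool create tank raidz1 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Verify the pool layout
zpool status
```

This is a sketch, not exact commands for any one platform; napp-it/NexentaStor would normally build these pools through their own UI, and on real hardware you'd use stable /dev/disk/by-id names rather than sdX letters.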
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,003
4,987
113
If you really just want 4TB, I would get 2x 4TB drives and mirror them (RAID 1). Then look for an inexpensive SSD for L2ARC: $110 for the SSD and maybe $200-250 for the hard drives. The LSI SAS2008 is plenty for such a small array. Natex has a Labor Day thread where these are around $76 IIRC (on my phone now).

Total you are probably looking at $400 or so.

If you did not use ESXi and instead went with ZFS on Linux + KVM or something similar, you could just use the onboard controller. You'd see minimal power savings as well. ESXi is a bit less flexible but is well tested.
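Patrick's mirror-plus-L2ARC suggestion maps to two commands (a sketch; pool and device names are placeholders):

```shell
# Two 4TB drives mirrored (RAID 1 equivalent)
zpool create tank mirror /dev/sda /dev/sdb

# Add the inexpensive SSD as an L2ARC (read cache) vdev
zpool add tank cache /dev/sdc
```

Note that an L2ARC device is disposable: if it dies, the pool loses nothing but read-cache performance, which fits the "content can be rebuilt" usage profile here.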
 

rjoancea

New Member
Apr 28, 2015
20
2
3
35
I have the 12 drive 2.5" and 12 drive 3.5" bays for the P4000M. Not concerned too much about the recoverability of content as they can be easily rebuilt. Shouldn't need more than 4TB.

Any suggestions?
Slightly off-topic, as I don't think you're planning on hitting this limit, but you mentioned 12 2.5" and 12 3.5" bays... I'm not sure that config exists. Did you mean 8 3.5", or something else altogether?
 

Katfish

New Member
Aug 14, 2016
12
0
1
40
Thanks everyone.

@rjoancea - You are correct. I meant (8) bays and not 12.

ZFS has me quite intrigued. I stood up an openfiler instance 5+ years ago leveraging U320 drives and shared it to an ESX setup.

If I am understanding everything correctly, it would be...
(1) LSI 9211-8i - flash to IT mode
(2) SSD - mirrored for ESXi install and datastore for ZFS guest (napp-it / NexentaStor)
(2) 4TB SATA - mirrored for storage, presented through ZFS
(1) SSD - L2ARC

How would the above compare versus...
(1) HP P812 w/ 1GB FBWC
(4) 2TB SATA in RAID 10

I realize this is an apples-to-oranges comparison. I'm guessing the L2ARC would boost IOPS.

Thanks!
 

Keljian

Active Member
Sep 9, 2015
429
71
28
Melbourne Australia
I'm gonna throw this out there: for low-volume use, you don't need L2ARC; just make sure you have ample memory (>8 GB). I have 12 GB of RAM in my FreeNAS VM and achieve 550 MB/s speeds (8x 7200rpm drives in RAID-Z2).
 

katit

Member
Mar 18, 2015
369
18
18
50
Yep. Not sure what testing you guys are doing here :) For me, I built a FreeNAS server per their hardware specs (16GB RAM) and have 2x 4TB WD Reds in a ZFS mirror.

It can saturate a gigabit network and I don't see any performance issues. Everything works just fine (file sharing for the family).
I spent plenty of time on the FreeNAS boards, and L2ARC is only needed when you start pushing the box really hard and RAM can't be added.
 

Katfish

New Member
Aug 14, 2016
12
0
1
40
Is a pair of mirrored 4TB drives enough performance for a dozen VMs?

Or would 5x 2TB drives in a RAID-Z1 work?

And would there be a performance gain in using ZFS and an HBA for a mirrored setup vs. a HW controller with a 1GB cache?

Are there key specs for spinning drives in this scenario with ZFS? I know at the hardware level RPM and cache played a role.
 

katit

Member
Mar 18, 2015
369
18
18
50
Personally, I use FreeNAS for exactly this: file storage and sharing. The spinning drives in a mirror are only because they're cheaper, and only for file storage. They are OK for a gigabit network. But for hosting VMs? I guess it would work. I don't need much space for VMs, though, so all my VMs run off an SSD stripe (not a mirror) for max speed. And I back them up daily to the spinning disks :)
 

Katfish

New Member
Aug 14, 2016
12
0
1
40
I'm not looking to share files, just house VMs for an AIO solution.

I think I'll go with a 9211-8i, (2) S3500 600GB, (5) 2TB HGST SATA drives, and (1) SSD on sale.

I'll pass the S3500s through to ZFS for a mirror via the motherboard, and the 5 data drives via the (to-be-IT-flashed) 9211 in a RAID-Z1 setup, plus a single SSD for the ESXi install and the ZFS guest VM.

Any flaws with the above? The price tag should come out to $700.
 

katit

Member
Mar 18, 2015
369
18
18
50
OK. Explain: what OS is going to be on bare metal? ESXi?
And you're going to run FreeNAS as a guest?

I read it's a PITA to set up iSCSI on FreeNAS so that it gives good performance for VMs.
 

Katfish

New Member
Aug 14, 2016
12
0
1
40
So...

Since I am getting the 9211 and planning on using (5) SATA drives for a RAID-Z1 setup, the SSDs would have to hang off the motherboard?

I saw the S3710 400GB thread. Would it make sense to use one for the ZIL? Or stripe a pair, or RAID-Z1 three of them? This would be for the IO-heavy guest bits.
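For reference, adding an S3710 as a dedicated ZIL device (SLOG) to an existing pool is a one-liner; the pool name and device path below are placeholders:

```shell
# Add a single SLOG (separate ZIL) device to pool "tank"
zpool add tank log /dev/disk/by-id/ata-INTEL_SSDSC2BA400G4_EXAMPLE

# Mirrored SLOG variant (two devices):
# zpool add tank log mirror /dev/sdX /dev/sdY
```

Two caveats worth knowing: log vdevs can only be single or mirrored, never RAID-Z, and a SLOG only accelerates synchronous writes (e.g. NFS/iSCSI datastore traffic), so whether it helps depends on how the VMs write.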