Help new build with Supermicro Storage


minhneo

New Member
Oct 3, 2018
Hi All,

I need some advice about the hardware config for a production storage build.

1. Services: my company has about 300 employees
- Nextcloud: 16TB. I will build Nextcloud on CentOS instead of using a jail on FreeNAS; is that necessary?
- SQL Server for: CRM, accounting, company management, ...
- Voice server
- 3 dynamic websites with about 2,000-5,000 concurrent users (CCU)
- pfSense firewall
- 2 Active Directory servers with DNS and DHCP (primary/secondary)
- Monitoring server
- Some other services

2. Primary storage: 18k USD
- Supermicro SuperStorage Server 5049P-E1CTR36L - 36x SATA/SAS - LSI 3008 12G SAS - 8x DDR4 - 1200W redundant
- Intel® Xeon® Silver 4114 processor, 10 cores, 2.20GHz, 13.75MB cache (85W)
- 4 x 64GB PC4-21300 2666MHz DDR4 ECC Registered DIMM --> 256GB for ARC; can upgrade to 512GB, is that necessary?
- 2 x 1.0TB Samsung 970 PRO M.2 PCIe 3.0 x4 NVMe SSD --> RAID 0 for L2ARC; can upgrade to 2 x Optane 905P 960GB, is that necessary?
- Micron NVDIMM 16GB --> ZIL (SLOG)
- 3 x 1.0TB SAS 3.0 12.0Gb/s 7200RPM - 3.5" - Seagate Exos 7E8 Series (512n) --> RAID 1 pool for boot, 1 disk as spare
- 28 x 6.0TB SAS 3.0 12.0Gb/s 7200RPM - 3.5" - Seagate Exos 7E8 Series (512e) --> RAID 10 data pool (RAID 1 groups of 3 disks, RAID 0 across 8 RAID 1 groups), capacity (26 * 6)/3 =~ 52TB, 2 disks as spares
- 2 x Intel® 10-Gigabit Ethernet Converged Network Adapter X710-DA4 (4 x SFP+) --> direct connections to 4 ESXi hosts (iSCSI)
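As a sanity check on the quoted 52TB figure: the 26 non-spare disks do not split evenly into 3-way mirror groups, and the 8 full groups described would yield 48TB rather than 52TB. A short sketch of the arithmetic (the function name is mine):

```python
# Usable capacity of a pool of striped N-way mirrors ("RAID 10" in the
# post's terms). Each N-way mirror vdev contributes one disk's worth
# of capacity; leftover disks cannot form a full mirror group.
def mirror_pool_capacity(total_disks, spares, mirror_width, disk_tb):
    usable_disks = total_disks - spares
    vdevs, leftover = divmod(usable_disks, mirror_width)
    return vdevs * disk_tb, leftover

# 28 x 6TB disks, 2 spares, 3-way mirrors:
capacity_tb, leftover = mirror_pool_capacity(28, 2, 3, 6)
print(capacity_tb, leftover)  # 48TB usable, 2 disks left over
```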

3. Backup storage:
- Supermicro SuperStorage Server 5049P-E1CTR36L - 36x SATA/SAS - LSI 3008 12G SAS - 8x DDR4 - 1200W redundant
- Intel® Xeon® Silver 4110 processor, 8 cores, 2.10GHz, 11MB cache (85W)
- 4 x 16GB PC4-21300 2666MHz DDR4 ECC Registered DIMM
- 1 x 512GB Samsung 970 PRO M.2 PCIe 3.0 x4 NVMe SSD
- Micron NVDIMM 16GB --> ZIL (SLOG)
- 2 x 1.0TB SAS 3.0 12.0Gb/s 7200RPM - 3.5" - Seagate Exos 7E8 Series (512n) --> boot
- 10 x 12TB SAS 3.0 12.0Gb/s 7200RPM - 3.5" - Seagate Exos 7E8 Series (512e) --> RAID-Z2
- 2 x Intel® 10-Gigabit Ethernet Converged Network Adapter X710-DA4 (4 x SFP+) --> direct connections to 4 ESXi hosts (iSCSI)
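For reference, a 10-disk RAID-Z2 pool like this might be created along these lines on a ZFS system (a sketch only; the pool and device names are placeholders for the real ones):

```shell
# 10-wide RAID-Z2: two disks' worth of parity, roughly 8 x 12TB usable.
zpool create backup raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9
zpool status backup
```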

Can you give me some advice on optimizing the parameters, such as network, ZIL, ARC, L2ARC, ...?

Sorry, English is not my first language.

Many thanks!
 

dswartz

Active Member
Jul 14, 2011
L2ARC is most likely not useful (certainly not worth spending the $$$ on Optanes!). You might want to spend some of that $$$ on a redundant SLOG device: can you use 2 NVDIMMs instead? Rather than a spare for the boot pool, use that 3rd drive in the root pool (i.e. a 3-way mirror). I don't understand the data pool description: you say RAID 10, but are talking about RAID 1 and RAID 0? Also, what OS will run on the primary storage server?
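In zpool terms, those two suggestions might look roughly like this (a sketch; the pool and device names are placeholders, and it assumes the NVDIMMs are exposed as block devices):

```shell
# Mirrored SLOG from the two NVDIMMs instead of a single log device:
zpool add tank log mirror pmem0 pmem1

# Attach the would-be spare as a third leg of the existing boot mirror:
zpool attach boot-pool da0 da2
```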
 

Rand__

Well-Known Member
Mar 6, 2014
Ok, just to get it straight:
You are trying to get a new storage box for your company with a usage portfolio as in (1).
Your current plan is described in (2), with (3) as backup?

You then describe a ZFS-based setup but imply the use of RAID cards? Or are those supposed to be RAID-Z raids? I assume you are looking at FreeNAS as the storage OS?

Connection to 4 ESXi hosts (for the VMs from (1), I assume) via direct connection? Why not a switched setup? 10G has gotten cheap...

Then: you have a single spinner RAID 10 array serving very different access patterns, SQL databases (maybe not heavy use) and web pages (potentially lots of cached data unless you are highly dynamic); are you sure you will be able to satisfy most reads from cache?

This does not look sensible to me, to be honest...
 

minhneo

New Member
Oct 3, 2018
Replying to dswartz above:
I can use 2 NVDIMMs.
I will use a 3-way mirror for boot.
Data pool RAID 10: striped 3-way mirrors.
I would use FreeNAS, but it has problems with VMware backup and replication.
 

minhneo

New Member
Oct 3, 2018
Replying to Rand__ above:
Yes, (2) is the primary and (3) is the backup, via replication.

Can you suggest some models for me?

I think my data will live in the ARC and L2ARC caches for about a month before they fill up and get flushed; the dynamic web cache lives in memory.
 

Rand__

Well-Known Member
Mar 6, 2014
What model do you mean? For a non-RAID card? A switch? A possible setup?
 

Rand__

Well-Known Member
Mar 6, 2014
So 4 1:1 connections (mirrored but not shared).

Are you sure ESXi will provide vMotion/HA capability in that setup? I don't think it will, since those would be seen as 4 individual (multipathed) storage arrays.
Or are the 4 ESXi boxes not in a cluster setup?
 

minhneo

New Member
Oct 3, 2018
Replying to Rand__ above:
In my testing, FreeNAS only supports 1 path (not multipath) with iSCSI. Using the multipath option from vCenter, it is slower and has higher latency than a single path.
 

Rand__

Well-Known Member
Mar 6, 2014
You could try napp-it on a Solaris derivative as an alternative, unless you need FreeNAS/FreeBSD-specific features.
Or you could plan on a switch (or two, for redundancy); that would not provide multipath with FreeNAS, but at least it would enable building a proper cluster with HA/vMotion.
 

minhneo

New Member
Oct 3, 2018
I tested Solaris 11.4 with multipath: bandwidth nearly doubled and latency decreased roughly 3x (100% read, 0% write).